Basic Image Segmentation
Image segmentation (also known as semantic segmentation) is a method of classifying/labeling each pixel in an image as belonging to a type of object; instance segmentation takes this a step further by distinguishing individual instances of those objects.
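To make the per-pixel idea concrete, here is a minimal NumPy sketch (purely illustrative, not tied to PerceptiLabs) of what a semantic-segmentation label mask looks like:

```python
import numpy as np

# A semantic-segmentation label is a 2-D array of class IDs, one per pixel.
# Hypothetical 4x4 image where the top half is "sky" (class 0) and the
# bottom half is "tree" (class 1):
mask = np.array([
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 1, 1, 1],
])
print(mask.shape)  # (4, 4): the mask has the same spatial size as the image
```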
The following example shows a source image (left) of a scene and a segmented image (right) identifying the pixels belonging to the sky in the scene:
This tutorial describes how to perform image segmentation in PerceptiLabs using a UNet Component, by generating a model built around the Segmented Bob Ross Paintings dataset, available through PerceptiLabs' Data Wizard.
Note: this tutorial assumes a basic working knowledge of PerceptiLabs. If you have not already done so, we recommend you first complete the Basic Image Recognition tutorial.
The dataset consists of images of paintings by the famous painter Bob Ross, and their corresponding target/expected segmentation image masks. The following is an example of one of the painting images:
When PerceptiLabs creates the model, it generates a data.csv file that maps the painting images to their respective image masks. Below is a partial example of what this CSV data looks like:
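The column names below match those used later in the Data Wizard, but the file paths are hypothetical; a quick way to inspect the generated file is with pandas:

```python
import pandas as pd

# Load the generated data.csv and inspect the first few rows.
df = pd.read_csv("data.csv")
print(df.head())
# Illustrative output (actual file paths will differ):
#          images        masks
# 0  images/1.png  masks/1.png
# 1  images/2.png  masks/2.png
```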
The original dataset has nine segmentation classes (e.g., sky, tree). The model created by PerceptiLabs segments those classes and maps them to the numeric labels listed below:
sky: 0
tree: 1
grass: 2
earth;rock: 3
mountain;mount: 4
plant;flora;plant life: 5
water: 6
sea: 7
river: 8
The model sets up nine channels representing the nine classes, and assigns each pixel a 0 or 1 in each channel to indicate whether or not that pixel belongs to the corresponding class.
For more information about these classes, check out the labels.csv file included in the original dataset.
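As a rough sketch of this one-hot layout (NumPy shown for illustration; PerceptiLabs handles this preprocessing for you), a mask of per-pixel class IDs can be expanded into nine binary channels like so:

```python
import numpy as np

NUM_CLASSES = 9

# A mask of per-pixel class IDs (values 0 through 8, matching the list above).
mask = np.random.randint(0, NUM_CLASSES, size=(256, 256))

# One-hot encode: one binary channel per class, 1 where a pixel belongs
# to that class and 0 elsewhere.
one_hot = np.eye(NUM_CLASSES, dtype=np.uint8)[mask]

print(one_hot.shape)         # (256, 256, 9)
print(one_hot.sum(axis=-1))  # all ones: each pixel belongs to exactly one class
```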
Follow the steps below to have PerceptiLabs' Data Wizard generate a new model:
1) Click Create on the Overview screen:
2) Select Segment Images for the model type:
3) Scroll to the Segmented Bob Ross Paintings dataset, hover your mouse over it, and click Load:
4) Wait for the dataset to finish loading, and then click Create:
5) Ensure the images column is set to Input and type image, and the masks column to Target and type mask:
6) (Optional) Modify the Data partition settings below the column configurations.
7) Click Create in the Data Wizard. PerceptiLabs generates the model with a UNet Component and navigates to the Modeling Tool:
You can now customize and train the model in the Modeling Tool.
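For intuition about what the UNet Component builds, the following is a minimal Keras sketch of a U-Net-style encoder-decoder. It is an illustration only, not PerceptiLabs' exact implementation, and it assumes nine classes with a softmax output as configured above:

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 9  # sky, tree, grass, ... as listed earlier

def conv_block(x, filters):
    # Two 3x3 convolutions: the basic building block of a U-Net.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(128, 128, 3)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: downsample while increasing feature depth.
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck.
    b = conv_block(p2, 128)

    # Decoder: upsample and concatenate the matching encoder features
    # (the skip connections that give the U-Net its shape).
    u2 = layers.UpSampling2D()(b)
    u2 = layers.Concatenate()([u2, c2])
    c3 = conv_block(u2, 64)
    u1 = layers.UpSampling2D()(c3)
    u1 = layers.Concatenate()([u1, c1])
    c4 = conv_block(u1, 32)

    # Per-pixel class probabilities: softmax across the nine channels.
    outputs = layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(c4)
    return tf.keras.Model(inputs, outputs)

model = build_unet()
model.summary()
```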
Tips:
For binary segmentation (i.e., object vs. no object), consider the Dice loss, which is often more effective when the classes are unbalanced. In that case, make sure to set the Activation Function in the UNet to Sigmoid.
For multi-class segmentation, use Cross-Entropy loss instead, and set the Activation Function to Softmax. The sketch below shows how the loss and activation pair up.
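As an illustration, here is a common formulation of the Dice loss applied to sigmoid outputs. This is a hedged sketch in plain TensorFlow, not PerceptiLabs' internal implementation:

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1.0):
    # y_pred is expected to already lie in [0, 1] (e.g., after a sigmoid);
    # y_true is the binary ground-truth mask.
    y_true = tf.cast(y_true, y_pred.dtype)
    intersection = tf.reduce_sum(y_true * y_pred)
    totals = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    # The Dice coefficient measures overlap; subtracting it from 1 turns
    # it into a loss to minimize.
    return 1.0 - (2.0 * intersection + smooth) / (totals + smooth)

# Example: sigmoid outputs vs. random binary masks.
y_pred = tf.sigmoid(tf.random.normal((4, 128, 128, 1)))
y_true = tf.cast(tf.random.uniform((4, 128, 128, 1)) > 0.5, tf.float32)
print(dice_loss(y_true, y_pred).numpy())
```

For the multi-class case, tf.keras.losses.CategoricalCrossentropy over softmax outputs plays the analogous role.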
Tip: if you need to modify data settings (e.g., column pre-processing settings) after the model has been created, you can access them at any time by clicking on Data Settings in the Modeling Tool's toolbar.
You have now created a simple image segmentation model. The next steps are to learn more about what happens during training, and how to evaluate and deploy the model.