Basic Image Segmentation
Image segmentation (also known as semantic segmentation) is a method to classify/label each pixel in an image as belonging to a type of object, while instance segmentation takes this a step further by distinguishing the individual objects within those segmented classes.
The following example shows a source image (left) of a scene and a segmented image (right) identifying the pixels belonging to the sky in the scene:
Source image (left) and a segmentation image (right)
The dataset consists of images of paintings by the famous painter Bob Ross, and their corresponding target/expected segmentation image masks. The following is an example of one of the painting images:
When PerceptiLabs creates the model, it generates a data.csv file that maps the painting images to their respective image masks. Below is a partial example of what this CSV data looks like:
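The file names below are illustrative, not the actual dataset contents, but the generated data.csv follows this general shape: one column of painting image paths and one column of mask image paths.

```csv
images,masks
paintings/painting_001.png,masks/painting_001.png
paintings/painting_002.png,masks/painting_002.png
paintings/painting_003.png,masks/painting_003.png
```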
The original dataset has nine classes of segmentations (e.g., sky, tree, etc.). The model created by PerceptiLabs segments those classes and maps them to the numbers listed below:
- sky: 0
- tree: 1
- grass: 2
- earth;rock: 3
- mountain;mount: 4
- plant;flora;plant;life: 5
- water: 6
- sea: 7
- river: 8
The model sets up nine channels representing the nine classes, and assigns a 0 or 1 in each channel to indicate whether the class is active at that location in the image.
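This channel encoding is the standard one-hot representation of a segmentation mask. The following NumPy sketch (not PerceptiLabs' internal code) shows how a mask of class indices maps to nine binary channels:

```python
import numpy as np

NUM_CLASSES = 9  # sky, tree, grass, earth;rock, mountain;mount, plant;flora;plant;life, water, sea, river

def mask_to_channels(mask: np.ndarray, num_classes: int = NUM_CLASSES) -> np.ndarray:
    """Convert an (H, W) mask of class indices into an (H, W, num_classes)
    array of 0/1 channels, one channel per class."""
    return np.eye(num_classes, dtype=np.uint8)[mask]

# Tiny 2x2 mask: sky (0), tree (1), grass (2), water (6)
mask = np.array([[0, 1],
                 [2, 6]])
channels = mask_to_channels(mask)
print(channels.shape)   # (2, 2, 9)
print(channels[0, 0])   # one-hot vector for class 0 (sky)
```

Each pixel ends up with exactly one active channel, which is what the Softmax/Cross-Entropy setup described later expects.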
2) Select Segment Images for the model type:
3) Scroll to the Segmenting Bob Ross Paintings dataset, hover your mouse over it, and click Load:
4) Wait for the dataset to complete loading, and then click Create:
5) Ensure the images column is set to Input with type image, and the masks column to Target with type mask:
6) (Optional) Modify the Data partition settings below the column configurations.
7) Click Create in the Data Wizard. PerceptiLabs generates the model with a UNet Component and navigates to the Modeling Tool:
You can now customize and train the model in the Modeling Tool.
- For binary segmentation (i.e., object or no object), consider using the Dice loss, since it handles unbalanced data more effectively. In that case, make sure the Activation Function in the UNet is set to Sigmoid.
- If you have multi-class segmentation, use Cross-Entropy loss instead, and set the Activation Function to Softmax.
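For intuition about why Dice works well on unbalanced data, here is a minimal NumPy sketch of the Dice loss (an illustration of the formula, not PerceptiLabs' implementation). It scores the overlap between prediction and target, so a rare foreground class is not swamped by the many background pixels:

```python
import numpy as np

def dice_loss(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-7) -> float:
    """Binary Dice loss: 1 - 2*|A intersect B| / (|A| + |B|).
    y_pred is expected to hold sigmoid probabilities in [0, 1]."""
    intersection = np.sum(y_true * y_pred)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

y_true = np.array([[0, 1],
                   [1, 0]], dtype=np.float32)

print(dice_loss(y_true, y_true))                 # perfect prediction: loss ~0
print(dice_loss(y_true, np.zeros_like(y_true)))  # empty prediction: loss ~1
```

The small `eps` term keeps the ratio defined when both the target and the prediction are empty.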
Tip: if you need to modify data settings (e.g., column pre-processing settings) after the model has been created, you can access them at any time by clicking on Data Settings in the Modeling Tool's toolbar.