Basic Image Segmentation
Image segmentation (also known as semantic segmentation) is a method of classifying/labeling each pixel in an image as belonging to a type of object. Instance segmentation takes this a step further by distinguishing individual instances of those segmented objects.
The following example shows a source image (left) of a scene and a segmented image (right) identifying the pixels belonging to the sky in the scene:
Source image (left) and a segmentation image (right)
The dataset consists of images of paintings by the famous painter Bob Ross, and their corresponding target/expected segmentation image masks. The following is an example of one of the painting images:
When PerceptiLabs creates the model, it generates a data.csv file that maps the painting images to their respective image masks. Below is a partial example of what this CSV data looks like:
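A minimal sketch of what that generated CSV might look like, with one row per painting/mask pair (the file names here are hypothetical; the actual paths depend on where the dataset is stored):

```csv
images,masks
painting_001.png,mask_001.png
painting_002.png,mask_002.png
painting_003.png,mask_003.png
```

The `images` and `masks` column names correspond to the Input and Target columns configured later in the Data Wizard.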
The original dataset has nine classes of segmentations (e.g., sky, tree, etc.). The model created by PerceptiLabs segments those classes and maps them to the numbers listed below:
- sky: 0
- tree: 1
- grass: 2
- earth;rock: 3
- mountain;mount: 4
- plant;flora;plant life: 5
- water: 6
- sea: 7
- river: 8
The model sets up nine channels, one per class, and for each pixel assigns a 0 or 1 in each channel to indicate whether that pixel belongs to the corresponding class.
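This per-class channel encoding (often called one-hot encoding of a segmentation mask) can be sketched as follows. This is an illustrative example, not PerceptiLabs' internal code; the class indices follow the mapping listed above:

```python
import numpy as np

NUM_CLASSES = 9  # sky=0, tree=1, grass=2, ..., river=8 (mapping listed above)

def one_hot_mask(mask, num_classes=NUM_CLASSES):
    """Convert an HxW mask of class indices into a (num_classes, H, W)
    stack of binary channels, one channel per class."""
    return np.stack([(mask == c).astype(np.uint8) for c in range(num_classes)])

# Hypothetical 2x2 mask: sky (class 0) on the top row, grass (class 2) below
mask = np.array([[0, 0],
                 [2, 2]])
channels = one_hot_mask(mask)
# channels[0] marks the sky pixels, channels[2] marks the grass pixels
```

Each channel is a binary image the same size as the mask, which is the form a UNet-style model typically trains against.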
2) Select Segment Images for the model type:
3) Scroll to the Segmenting Bob Ross Paintings dataset, hover your mouse over it, and click Load:
4) Wait for the dataset to finish loading, and then click Create:
5) Ensure the images column is set to Input with type image, and the masks column is set to Target with type mask:
6) (Optional) Modify the Data partition settings below the column configurations.
7) Click Create in the Data Wizard. PerceptiLabs generates the model with a UNet Component and navigates to the Modeling Tool:
You can now customize and train the model in the Modeling Tool.