Image segmentation (also known as semantic segmentation) is a method of classifying/labeling each pixel in an image as belonging to a particular type of object, and instance segmentation takes this a step further by distinguishing individual instances of those objects.
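To make the per-pixel labeling concrete, here is a minimal sketch (assuming NumPy, with hypothetical class ids) of what a semantic segmentation mask looks like: a 2D array the same size as the image, where each element holds the class of the corresponding pixel:

```python
import numpy as np

# Segmentation mask for a hypothetical 4x4 image:
# each entry is a class id (0 = background, 1 = sky, 2 = building)
mask = np.array([
    [1, 1, 1, 1],
    [1, 1, 2, 2],
    [0, 0, 2, 2],
    [0, 0, 2, 2],
])
```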
The following example shows a source image (left) of a scene and a segmented image (right) identifying the pixels belonging to the sky in the scene:
Source image (left) and a segmentation image (right)
7) Enable Normalize (1) on the popup (shown below) and set it to Min Max (2). Also enable Resize (3) and set it to Custom (4) with a width and height (5) of 224, and click Save (6):
The Min Max setting (2) for Normalize guarantees that the mask will only contain values between 0 and 1, which keeps the input from producing initial activations in the model that are too large.
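For illustration, Min Max normalization linearly rescales each value x into [0, 1] via (x − min) / (max − min). Below is a minimal sketch of that calculation (a hypothetical helper, assuming NumPy arrays; PerceptiLabs applies this for you when the setting is enabled):

```python
import numpy as np

def min_max_normalize(image: np.ndarray) -> np.ndarray:
    """Linearly rescale pixel values into the [0, 1] range."""
    lo, hi = image.min(), image.max()
    if hi == lo:  # avoid division by zero on a constant image
        return np.zeros_like(image, dtype=np.float32)
    return ((image - lo) / (hi - lo)).astype(np.float32)

# Example: an 8-bit mask with values 0 and 255 maps to 0.0 and 1.0
mask = np.array([[0, 255], [255, 0]], dtype=np.uint8)
print(min_max_normalize(mask))
```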
You also need to ensure that the width and height of the input and target are divisible by 2^n, where n is at least the number of levels in the U-Net (otherwise the repeated downsampling causes rounding errors). 224x224 is a good size: 224 is divisible by 2^5, so it works with up to and including five levels, and 224x224 is also the input size that pre-trained models such as VGG expect.
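To see why, note that each U-Net level halves the spatial resolution, so the size must divide cleanly by 2 at every level. A quick sketch of the check (hypothetical helper):

```python
def fits_unet(size: int, levels: int) -> bool:
    """True if `size` halves cleanly `levels` times (no rounding in the U-Net)."""
    return size % (2 ** levels) == 0

# 224 = 2**5 * 7, so it survives five 2x downsamplings: 224 -> 112 -> 56 -> 28 -> 14 -> 7
print(fits_unet(224, 5))  # True
print(fits_unet(224, 6))  # False: 224 / 64 is not an integer
```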
Tip: if you need to modify any of these data settings (e.g., column pre-processing settings) after the model has been created, you can access this popup at any time by clicking on Data Settings in the Modeling Tool's toolbar.
8) Click Create in the Data Wizard. PerceptiLabs generates the model with a UNet Component and navigates to the Modeling Tool:
You can now customize and train the model in the Modeling Tool.
For binary segmentation (i.e., object vs. no object), you may want to use Dice loss, since it is more robust when the classes are unbalanced. Just make sure to set the Activation Function in the UNet Component to Sigmoid in this case.
If you have multi-class segmentation, use Cross-Entropy loss instead, and set the Activation Function to Softmax. A sketch of both loss configurations follows below.
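If you want to see what these two configurations amount to in code, here is a minimal sketch assuming a TensorFlow/Keras workflow (the `dice_loss` helper is hypothetical; inside PerceptiLabs you set these via the component settings rather than writing code):

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1e-6):
    """Dice loss for binary masks; pair with a Sigmoid output activation."""
    y_true = tf.cast(tf.reshape(y_true, [-1]), tf.float32)
    y_pred = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    dice = (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)
    return 1.0 - dice

# Binary segmentation: Sigmoid activation + Dice loss
# model.compile(optimizer="adam", loss=dice_loss)

# Multi-class segmentation: Softmax activation + Cross-Entropy loss
# model.compile(optimizer="adam",
#               loss=tf.keras.losses.CategoricalCrossentropy())
```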
You have now created a simple Image Segmentation model. The next steps are to learn more about what happens during training, and how to later evaluate and deploy the model.