Basic Image Segmentation

Image segmentation (also known as semantic segmentation) is a method of classifying/labeling each pixel in an image as belonging to a type of object. Instance segmentation takes this a step further by distinguishing individual instances of those objects.

The following example shows a source image (left) with a round feature and a segmented image (right) identifying the pixels belonging to that feature:

This tutorial describes how to perform image segmentation in PerceptiLabs using a UNet Component. It shows how to take the RGB_Magnetic_tiles example dataset included with PerceptiLabs and modify the standard convolution model that PerceptiLabs generates for this dataset into a U-Net-based model.

Note: this tutorial assumes a basic working knowledge of PerceptiLabs. If you have not already done so, we recommend you first complete the Basic Image Recognition tutorial.

The RGB_Magnetic_tiles dataset consists of:

  • magnetic tile images and their corresponding target/expected segmentation image masks.

  • a data.csv file that maps the magnetic tile images to their respective image masks. Below is a partial example of what this CSV data looks like:

images	masks
RGBMT_Fray/exp0_num_797.jpg	RGBMT_Fray/exp0_num_797.png
RGBMT_Fray/exp1_num_135544.jpg	RGBMT_Fray/exp1_num_135544.png
RGBMT_Fray/exp1_num_136254.jpg	RGBMT_Fray/exp1_num_136254.png
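
If you ever need to rebuild this mapping yourself (for example, for your own dataset), a short pandas sketch could generate it. The RGBMT_Fray folder and the .jpg/.png pairing come from the rows above; everything else here is an assumption:

```python
# Hypothetical sketch: build a data.csv that maps each magnetic tile
# image to its mask, assuming each mask shares the image's file stem
# and lives in the same folder.
import os
import pandas as pd

rows = []
for folder in ["RGBMT_Fray"]:  # extend with the dataset's other folders
    for name in sorted(os.listdir(folder)):
        if name.endswith(".jpg"):
            stem = os.path.splitext(name)[0]
            rows.append({
                "images": f"{folder}/{name}",
                "masks": f"{folder}/{stem}.png",
            })

pd.DataFrame(rows).to_csv("data.csv", index=False)
```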

Create a new Model

Follow the steps below to create a new model using PerceptiLabs' Data Wizard:

1) Click Create on the Model Hub screen to display the New Model popup.

2) Click Load .CSV on the New Model popup.

3) Select data.csv and click Confirm to load the CSV file into PerceptiLabs.

4) Set the images column to be an Input and the masks column to be a Target:

5) (Optional) Modify the Data partition settings.

6) Click the pre-processing settings button for the images column:

7) Enable Normalize on the popup (shown below) and set it to Min Max. Also enable Resize, set it to Custom with a width and height of 128, and click Save. Repeat these settings for the masks column.

The Min Max setting for Normalize guarantees that the mask will contain only 0s and 1s, so that the input doesn't give the model initial activations that are too high.
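
Min Max normalization rescales every value to the [0, 1] range via x' = (x − min) / (max − min). For a binary mask stored as 0/255 pixel values, this yields exactly 0s and 1s. A minimal NumPy sketch of the same operation:

```python
# Sketch of Min-Max normalization: x' = (x - min) / (max - min).
# A 0/255 binary mask maps to exactly 0s and 1s.
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    x = x.astype(np.float32)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo) if hi > lo else np.zeros_like(x)

mask = np.array([[0, 255], [255, 0]])
print(min_max_normalize(mask))  # [[0. 1.] [1. 0.]]
```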

You also need to ensure that the width and height of the input and target are divisible by 2^n, where n is the number of levels in the U-Net (otherwise you get rounding errors). Alternatively, 224x224 is a good size: 224 is divisible by 2^n for up to and including five levels (224 = 2^5 × 7), and 224x224 is the input size that pre-trained models such as VGG expect.
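
To see why, note that each U-Net level halves the spatial dimensions on the way down and doubles them on the way up; if a side length is not divisible by 2^n, the halving rounds off pixels that the upsampling path cannot restore. A quick sanity check (the function name is just for illustration):

```python
# Check that an image side length survives n rounds of halving/doubling
# without rounding errors, i.e. it is divisible by 2**n.
def fits_unet(size: int, levels: int) -> bool:
    return size % (2 ** levels) == 0

print(fits_unet(128, 5))  # True: 128 = 2**7
print(fits_unet(224, 5))  # True: 224 = 2**5 * 7
print(fits_unet(224, 6))  # False: 224 is not divisible by 64
```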

Tip: if you need to modify any of these data settings (e.g., column pre-processing settings) after the model has been created, you can access this popup at any time by clicking Data Settings in the Modeling Tool's toolbar.

8) Click Next.

9) (Optional) Modify the Training settings when the Training Settings popup appears.

10) Click Customize to close the Training Settings popup. This navigates to the Modeling Tool showing the convolution model generated by PerceptiLabs:

In the next section you will modify this convolution model to become a U-Net model for image segmentation.

Modify the Model to use a UNet Component

Follow the steps below to modify the generated model to become a U-Net model for image segmentation:

1) Highlight all of the Components in between (but not including) the Input and Target Components, and delete them:

The model should now look as follows:

2) Drag and drop the UNet Component from the Deep Learning dropdown:

3) Connect the Components as follows:
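
The UNet Component encapsulates a full encoder-decoder network with skip connections between matching levels. This is not PerceptiLabs' exact implementation, but for intuition, a minimal two-level U-Net in Keras (all layer widths here are assumptions) looks roughly like this:

```python
# Minimal two-level U-Net sketch in Keras (illustrative only; the
# PerceptiLabs UNet Component's internals may differ).
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(input_shape=(128, 128, 3), num_classes=1):
    inputs = layers.Input(shape=input_shape)

    # Encoder: convolve, remember the feature map, then downsample.
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D(2)(c2)

    # Bottleneck.
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)

    # Decoder: upsample and concatenate the matching encoder features
    # (the skip connections that give U-Net its shape).
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
    u2 = layers.concatenate([u2, c2])
    c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c3)
    u1 = layers.concatenate([u1, c1])
    c4 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)

    # Per-pixel prediction: one sigmoid channel for a binary mask.
    outputs = layers.Conv2D(num_classes, 1, activation="sigmoid")(c4)
    return tf.keras.Model(inputs, outputs)

model = tiny_unet()
model.summary()
```

The concatenate calls are the skip connections that let the decoder recover spatial detail lost during downsampling.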

Export the Trained Model

Once the model is fully trained, you can export it as a TensorFlow model to use for inference. See Exporting for more information.
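
For example, assuming the model was exported in TensorFlow's SavedModel format to a folder named exported_model (a hypothetical path), and using the 128x128 size and Min Max scaling configured earlier, inference could look like this:

```python
# Sketch: run inference with an exported TensorFlow model.
# "exported_model" is a hypothetical export path.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("exported_model")

# Load an image, resize to the 128x128 training size, and scale to [0, 1].
img = tf.io.decode_image(tf.io.read_file("RGBMT_Fray/exp0_num_797.jpg"), channels=3)
img = tf.image.resize(img, (128, 128)) / 255.0

pred = model.predict(np.expand_dims(img, axis=0))[0]
mask = (pred > 0.5).astype(np.uint8)  # threshold to a binary mask
```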
