Using a ResNet to Detect Anomalies in Textiles

This topic describes how a residual neural network (ResNet) can help support an image classification model that classifies anomalies in textiles, and how to implement this in PerceptiLabs. This is a great example of how PerceptiLabs can support models for industrial IoT and manufacturing use cases. You can find an example PerceptiLabs model and links to the datasets on GitHub. In this use case, we train a model using the Textile Defect Detection dataset from Kaggle, which contains the following data:

  • a collection of 72,000 64x64 monochrome images, each showing a magnified view of the fibers in a textile that contain some type of anomaly

  • a collection of 72,000 defect labels, one for each of the monochrome images. Each defect type is represented by an integer label, and there are six defect types in total. The following image lists the six defect types and shows a few sample images of each:

In this topic we will cover the following:

  • Overview of ResNet Blocks: an overview of how residual blocks work.

  • Residual Blocks in PerceptiLabs: how to build residual blocks in PerceptiLabs.

  • Sample Model: a summary of our example PerceptiLabs ResNet project on GitHub.

Overview of ResNet Blocks

Machine learning models for image classification often use convolution layers to extract features from images while employing max-pooling layers to reduce dimensionality. The goal is to extract increasingly higher-level features from regions of the image, to ultimately make some kind of prediction such as an image classification.

This is typically accomplished with a "chain" of layers that feed forward into each other, where additional layers can lead to better accuracy in the final classification. However, adding too many layers can lead to the vanishing gradient problem during backpropagation, in which smaller and smaller gradients of the loss function cause the weight updates to shrink to the point where they tend towards zero. This can mean that training slows down or stalls altogether, and in some cases, the model's ability to classify images accurately can even decrease.
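As a rough illustration (this is a hand-written Keras sketch, not code generated by PerceptiLabs, and the layer counts and filter sizes are arbitrary), such a chain of convolution and max-pooling layers might look like this:

import tensorflow as tf
from tensorflow.keras import layers

# A plain feed-forward chain: each convolution extracts features and each
# max-pooling layer halves the spatial dimensions.
model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 1)),                           # 64x64 monochrome input
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),                                     # 64x64 -> 32x32
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),                                     # 32x32 -> 16x16
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),                                     # 16x16 -> 8x8
    layers.Flatten(),
    layers.Dense(6, activation='softmax'),                     # six defect classes
])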

One popular method to overcome this problem is to incorporate residual blocks, which collectively form a ResNet:

A residual block addresses this problem by introducing an architectural pattern in which earlier layers skip over subsequent layers. These “skip connections” are generally implemented with an addition operation, whereby the features from earlier in the network are added to the newly computed features after some convolutional layers. Skip connections provide multiple, shorter paths for gradients to flow, and it has been shown empirically that this enables deeper computer vision networks to train faster.
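As a minimal sketch of the idea (again hand-written Keras, not PerceptiLabs-generated code), a residual block can be expressed as two convolutions whose output is added back to the block's input:

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    # Keep a reference to the incoming features; this is the skip connection.
    shortcut = x
    y = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    y = layers.Conv2D(filters, 3, padding='same')(y)
    # If the channel counts differ, project the shortcut with a 1x1 convolution
    # so that the two tensors can be added element-wise.
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding='same')(shortcut)
    out = layers.Add()([shortcut, y])       # earlier features + newly computed features
    return layers.Activation('relu')(out)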

Residual Blocks in PerceptiLabs

In PerceptiLabs, residual blocks can be easily created by adding two Convolution components and a Merge component. Connections are then created from the first convolution layer to the second as well as directly to the Merge component:

The figure above shows three residual blocks built from these components. The upper Convolution component takes the input from the previous layer and feeds it to the bottom Convolution component. It also feeds its values directly to the next layer via a connection to the Merge component; this connection acts as the skip connection. The bottom Convolution component likewise feeds its updated activations into the next layer via the Merge component.

ResNets in PerceptiLabs

The main elements of the Textile-Classification example model for PerceptiLabs are shown here:

  1. Image Data: provides the 64x64 monochrome images for training. A Reshape component then ensures a size of 64x64x1.

  2. Label Data: provides the corresponding integer label for each image, enumerating which anomaly the image shows. Since this model solves a classification problem, the integer labels are converted to one-hot (binary) vectors using a OneHot Encoding component.

  3. Residual Block: takes input and reduces its dimensionality. There are three residual blocks in the model in total.

  4. Dense Layer: compresses the output to six classes that represent the possible anomalies.

  5. Classification Training Component: ties the model together and performs the training. A conceptual code sketch of this pipeline is shown below.
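For reference, a conceptual end-to-end sketch of this pipeline is shown below. It reuses the residual_block helper from the sketch earlier in this topic; the filter counts, the max-pooling used for downsampling, and the optimizer are illustrative assumptions rather than values taken from the PerceptiLabs project:

import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(64, 64, 1))                # Image Data + Reshape (64x64x1)
x = residual_block(inputs, 16)                          # Residual Block 1
x = layers.MaxPooling2D()(x)                            # reduce dimensionality between blocks
x = residual_block(x, 32)                               # Residual Block 2
x = layers.MaxPooling2D()(x)
x = residual_block(x, 64)                               # Residual Block 3
x = layers.Flatten()(x)
outputs = layers.Dense(6, activation='softmax')(x)      # Dense Layer: six anomaly classes
model = tf.keras.Model(inputs, outputs)

# One-hot encode the integer labels (e.g. tf.keras.utils.to_categorical(labels, 6)),
# mirroring the OneHot Encoding component, and train with categorical cross-entropy,
# mirroring the Classification Training component.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])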

Sample Model

We've provided the ResNet-based model described above, along with sample data, in this GitHub Repo.

Loading the Sample Model

Follow the steps below to load the sample model in PerceptiLabs:

Note

You must be running PerceptiLabs 0.10.0 or higher to load this model.

1. Download the data files (X.npy and Y.npy) from here.

2. Clone or download the sample model from GitHub.

3. On the Model Hub screen, import the sample model into PerceptiLabs. When prompted for the model's directory, navigate to and select the location of the model.json file.

4. Open the topmost Data component in the model, navigate to its code tab, and update the call to np.load(), passing in the absolute path of the X.npy data file that you downloaded in Step 1. For example:

...
matrix_DataData_Data_1_0 = np.load("c:/Textile-Classification-master/X.npy", mmap_mode='r+').astype(np.float32)
...

5. Save the code changes for that Data component.

6. Open the bottommost Data component and navigate through its menus until you get to the Choose Files button.

7. Click Choose Files, navigate to the Y.npy data file that you downloaded in Step 1, and click Confirm.
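After completing the steps above, you can optionally sanity-check the downloaded data files with a short NumPy snippet. The paths below are placeholders for wherever you saved the files, and the expected shapes are assumptions based on the dataset description (72,000 monochrome images of 64x64 pixels):

import numpy as np

X = np.load("c:/Textile-Classification-master/X.npy", mmap_mode='r')
Y = np.load("c:/Textile-Classification-master/Y.npy")

print(X.shape)         # expected: (72000, 64, 64)
print(Y.shape)         # expected: one label entry per image
print(np.unique(Y))    # the integer codes for the six defect types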
