Basic Image Recognition
This tutorial shows you how to get started with PerceptiLabs. It walks you through the classic machine learning example of training a neural network to recognize 28x28 pixel images of handwritten digits representing the numbers 0 through 9.
PerceptiLabs provides access to sample data for trying out this basic image recognition use case. The data consists of:
- the classic MNIST dataset containing 28x28-pixel normalized grayscale images (in .png format) depicting handwritten digits.
- a data.csv file that maps each image file to its classification (i.e., digits 0 through 9). The first column of this .csv file contains the image paths relative to the .csv file itself, and the second column contains the labels. Below is a partial example of what this CSV data looks like:
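(The rows below are illustrative only; the actual file names and directory layout in the sample dataset may differ.)

```csv
mnist_images/image_1.png,5
mnist_images/image_2.png,0
mnist_images/image_3.png,4
```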
2. Select Image Classification for the model type:
3. Scroll to the MNIST Digits dataset, hover your mouse over it, and click Load:
4. Wait for the dataset to complete loading, and then click Create:
5. Give the model a descriptive name (1) and optionally set any pre-processing options (2). Ensure the Image_paths column is set to input (3) and its datatype to image (4), and that the Labels column is set to target (5) and its datatype to categorical (6) (i.e., classifications). Then click Create (7):
PerceptiLabs will generate a subdirectory in the Model Path location using the specified model name. The model will be saved to a model.json file within that directory every time you save the model.
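The resulting on-disk layout can be sketched as follows. This is a minimal illustration, not PerceptiLabs' actual save logic; the model name and the JSON contents here are hypothetical stand-ins:

```python
import json
import os
import tempfile

# Stand-in for the Model Path chosen in the UI.
model_path = tempfile.mkdtemp()

# Stand-in for the model name entered in step 5.
model_name = "MNIST_Digits"

# PerceptiLabs creates a subdirectory named after the model...
model_dir = os.path.join(model_path, model_name)
os.makedirs(model_dir, exist_ok=True)

# ...and (re)writes model.json inside it on every save.
with open(os.path.join(model_dir, "model.json"), "w") as f:
    json.dump({"name": model_name}, f)

print(os.listdir(model_dir))  # ['model.json']
```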
- 4. Dense (second): a fully connected (dense) layer that processes the output of the previous Component; its output dimension is set to match the dimension of the target data.
- 5. Target: represents the target data provided to the model, in this case the normalized probability for which digit the image represents. This Component is auto-generated by PerceptiLabs and cannot be modified.
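The final Dense-to-Target stage described above can be sketched numerically. This is an illustrative NumPy example, not PerceptiLabs' internal implementation: a fully connected layer maps a flattened 28x28 image to 10 outputs (one per digit), and a softmax normalizes those outputs into a probability distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# One flattened 28x28 grayscale image (values in [0, 1]).
x = rng.random(28 * 28)

# Dense layer with output dimension 10 to match the 10 digit classes
# (weights here are random placeholders, not trained values).
W = rng.standard_normal((28 * 28, 10)) * 0.01
b = np.zeros(10)
logits = x @ W + b

# Softmax: shift by the max for numerical stability, then normalize
# so the 10 outputs form a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(probs.shape)  # (10,)
```

The predicted digit is simply the index of the largest probability, e.g. `int(np.argmax(probs))`.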
6. Click Run near the upper-right region of PerceptiLabs:
7. Adjust the training settings in the Model training settings popup if required, and click Run Model to start training: