This topic provides a tutorial showing you how to get started with PerceptiLabs. This tutorial walks you through the classic machine learning example of training a neural network to recognize 28x28 pixel images of handwritten digits representing the numbers 0 through 9.
PerceptiLabs includes sample data for trying out this basic image recognition use case. The data consists of:
the classic MNIST dataset containing 28x28-pixel normalized grayscale images (in .png format) depicting handwritten digits.
a data.csv file that maps the image names to their classifications (i.e., digits 0 through 9). The first column of this .csv file contains the image paths relative to the .csv file, and the second column contains the labels. Below is a partial example of what this CSV data looks like:
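No code is needed for this step, but the layout can be sketched with Python's standard csv module. The rows below are hypothetical entries illustrating the structure described above, not actual file names from the sample data:

```python
import csv
import io

# Hypothetical rows illustrating the layout of data.csv: the first
# column holds image paths relative to the .csv file, the second
# holds the digit label. The specific file names are made up.
rows = [
    ["Image_paths", "Labels"],
    ["images/00000.png", "5"],
    ["images/00001.png", "0"],
    ["images/00002.png", "4"],
]

buffer = io.StringIO()
csv.writer(buffer).writerows(rows)
csv_text = buffer.getvalue()
print(csv_text)
```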
Follow the steps below to create a new model using PerceptiLabs' Data Wizard:
1. Click Create on the Model Hub screen to display the New Model popup:
2. Click Load data on the New Model popup:
3. Select data.csv and click Confirm. This file maps the image files stored in the images subdirectory to their corresponding digit classifications (0 through 9):
4. Set the Image_paths column to input (1) and its data type to image (2). Then set the Labels column to output (3) and its data type to categorical (4) (i.e., classifications), and click Next (5):
5. (Optional) Give the model a descriptive name in the Name field and/or modify the other training settings:
6. Click Run model to start training the model or Customize to see the model in the Modeling Tool.
7. Save the model by selecting File > Save or File > Save As. PerceptiLabs will generate a subdirectory in the model's location using the specified model name. The model will be saved to a model.json file within that directory every time you save the model.
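The saved model.json is an ordinary JSON file, so you can inspect it with standard tooling. A minimal sketch, assuming only that the file is a JSON object (its schema is internal to PerceptiLabs, and the path shown is a placeholder):

```python
import json

def top_level_keys(path):
    """Return the sorted top-level keys of a model.json file.

    The schema is internal to PerceptiLabs, so nothing is assumed
    about the contents beyond it being a JSON object.
    """
    with open(path) as f:
        return sorted(json.load(f))

# Hypothetical usage -- substitute the directory PerceptiLabs created:
# print(top_level_keys("my_model/model.json"))
```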
The final model in the Modeling Tool will look as follows:
The components of the model are as follows:
Input: represents the input data provided to the model. This component is auto-generated by PerceptiLabs and cannot be modified.
Convolution: performs a convolution to downsample the image.
Dense (first): a fully connected (dense) layer that processes the data from the previous component and reduces its dimensionality.
Dense (second): a fully connected (dense) layer that processes the data and sets the output dimension to match that of the target data.
Output: represents the target data provided to the model, in this case a normalized probability distribution over the digits 0 through 9. This component is auto-generated by PerceptiLabs and cannot be modified.
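The data flow through these components can be sketched in plain NumPy. This is an illustrative forward pass only, not PerceptiLabs' actual implementation; the 3x3 kernel, 128-unit hidden layer, and random weights are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation), stride 1 --
    enough to show how the Convolution component downsamples the
    28x28 input."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def dense(x, w, b):
    """Fully connected layer: matrix multiply plus bias."""
    return x @ w + b

def softmax(z):
    """Normalize logits into a probability distribution."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Fake 28x28 grayscale image standing in for one MNIST sample.
image = rng.random((28, 28))

# Convolution -> Dense -> Dense, mirroring the generated model.
feat = conv2d(image, rng.standard_normal((3, 3)))        # 26x26 feature map
flat = feat.ravel()                                      # 676 values
hidden = np.maximum(
    dense(flat, rng.standard_normal((676, 128)), np.zeros(128)), 0
)                                                        # first Dense + ReLU
logits = dense(hidden, rng.standard_normal((128, 10)), np.zeros(10))
probs = softmax(logits)                                  # one probability per digit
print(probs.shape)
```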
Now that the model is fully trained, you can export it as a TensorFlow model to use for inference. See Exporting for more information.
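As a sketch of the inference step, assuming the exported model can be loaded with `tf.keras.models.load_model` and that TensorFlow is installed; the path "exported_model.keras" is a placeholder for your actual export location:

```python
from pathlib import Path

import numpy as np
import tensorflow as tf

def predict_digits(model_path, images):
    """Load an exported model and return the predicted digit (0-9)
    for each 28x28 grayscale image in `images`."""
    model = tf.keras.models.load_model(model_path)
    probs = model.predict(images, verbose=0)  # shape: (n, 10)
    return probs.argmax(axis=1)

# Hypothetical usage -- substitute your actual export path:
if Path("exported_model.keras").exists():
    digits = predict_digits(
        "exported_model.keras",
        np.zeros((1, 28, 28, 1), dtype=np.float32),
    )
    print(digits)
```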