Coral Sign Language Tutorial
This tutorial builds off of the Convolution Tutorial by showing how to export an image recognition model from PerceptiLabs and transfer it to a Coral Dev Board for inference at the edge. Coral is Google's platform for on-device AI.
It also demonstrates how post-training quantization can be used to optimize models for edge devices. The model from the Convolution Tutorial classifies pictures of sign language hand gestures representing the digits 0 through 9.
TensorFlow Lite provides post-training conversion of models to reduce their size and increase inference speed, at the expense of a small loss in accuracy. With full integer quantization, all 32-bit floating-point values in the model are converted to the nearest 8-bit fixed-point numbers. These values generally include weights and activation outputs. Certain hardware accelerators used to speed up machine learning computations, such as Coral's Edge TPU, only support fully-quantized models.
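PerceptiLabs performs this conversion for you when you export with Quantized enabled (see the steps below). Purely for reference, a full integer quantization pass with the TensorFlow Lite converter looks roughly like the following sketch; the model path, input shape, and calibration data are placeholders, not values from this tutorial.

```python
import numpy as np
import tensorflow as tf

# Calibration samples let the converter estimate the value ranges needed to map
# 32-bit floats to 8-bit fixed-point numbers (random placeholder data here).
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 28, 28, 1).astype(np.float32)]  # placeholder input shape

converter = tf.lite.TFLiteConverter.from_saved_model('path/to/saved_model')  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the converter to int8 ops and quantize the input/output tensors too,
# which is what fully-quantized hardware such as the Edge TPU requires.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open('tflite_model.tflite', 'wb') as f:
    f.write(tflite_model)
```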
Follow the steps below to export the trained model from PerceptiLabs as a TensorFlow Lite model:
1. Navigate to File > Export.
2. Enter a path.
3. Set Export as to TensorFlow Model.
4. Enable Quantized to set full integer quantization.
5. Click Export.
Locate the exported .tflite file in the path you specified in Step 2. This will be used in subsequent steps below; the snippet after this list shows one way to confirm that the export is fully quantized.
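Optionally, you can check the exported file before moving it to the board. A quick check with the TensorFlow Lite interpreter (the file name here is assumed to match the run command used later) is:

```python
import tensorflow as tf

# Point this at the .tflite file exported above (name assumed here).
interpreter = tf.lite.Interpreter(model_path='tflite_model.tflite')
interpreter.allocate_tensors()

# For a fully-quantized model, the input and output tensors use 8-bit types.
print(interpreter.get_input_details()[0]['dtype'])   # expect uint8 (or int8)
print(interpreter.get_output_details()[0]['dtype'])
```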
To run the inference on the Dev Board, we need the following:
Quantized model (.tflite file) exported from PerceptiLabs
Input data to run the inference (a quick way to inspect it is shown after this list)
Python code to run the inference
(Optional) Label data
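The run command later in this tutorial points the script at a NumPy file (data/X.npy). If you want to sanity-check that file locally before copying it to the board, a quick, hypothetical inspection looks like this (the shape and dtype in the comments are assumptions):

```python
import numpy as np

# Load the input samples (path taken from the run command below).
X = np.load('data/X.npy')
print(X.shape, X.dtype)  # e.g. (num_samples, height, width) for grayscale images

# A fully-quantized model expects uint8 inputs, so float data in [0, 1]
# would need rescaling to [0, 255] before (or during) inference.
if X.dtype != np.uint8:
    print('Note: data will need to be cast/rescaled to uint8 for the quantized model.')
```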
Below is a sample Python wrapper class (ClassificationEngine) that loads the image data and quantized (trained) model, and runs inference using the EdgeTPU API. Also included is a simple main() function that instantiates that class and invokes its methods. Copy and paste all of the code into a Python script (.py file) named hand_recognizer.py.
ClassificationEngine derives from Coral's BasicEngine, which provides many useful methods from the EdgeTPU API, such as calculating inference time and finding output shapes.
ClassificationEngine adds the following key methods:
load_data(): loads the image data.
required_input_array_size(): returns the required input shape for the model. Using this, the input can be adjusted to match the required shape and then fed to the model.
inference_step(): returns a generator that can be used to run inference on one input sample per step.
In main(), ClassificationEngine is used to run inference on one sample at a time, and the output of the network is printed to the console.
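For reference, here is a minimal sketch of what such a wrapper might look like when written against the EdgeTPU Python API's BasicEngine. The class and method names (ClassificationEngine, load_data(), inference_step(), main()) and the --model/--input flags follow the description and run command in this tutorial, but everything inside the methods is an assumption rather than the exact code shipped with this tutorial.

```python
# Sketch only: a wrapper in the spirit of the one described above, using the
# EdgeTPU Python API. Internals are assumptions, not the tutorial's code.
import argparse

import numpy as np
from edgetpu.basic.basic_engine import BasicEngine


class ClassificationEngine(BasicEngine):
    """Loads image data and a quantized model, and runs inference on the Edge TPU."""

    def __init__(self, model_path):
        super().__init__(model_path)
        self.data = None

    def load_data(self, data_path):
        # Input samples are assumed to be stored as a single NumPy array (e.g., data/X.npy).
        self.data = np.load(data_path)

    def inference_step(self):
        # Generator: yields the raw model output for one input sample per step.
        for sample in self.data:
            # A fully-quantized model expects a flat uint8 tensor; rescale if the
            # data is stored as floats in [0, 1].
            if sample.dtype != np.uint8:
                sample = (sample * 255).astype(np.uint8)
            latency_ms, output = self.run_inference(sample.flatten())
            yield output


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--model', required=True, help='Path to the quantized .tflite model')
    parser.add_argument('--input', required=True, help='Path to the input data (.npy file)')
    args = parser.parse_args()

    engine = ClassificationEngine(args.model)
    engine.load_data(args.input)
    print('Required input array size:', engine.required_input_array_size())

    for output in engine.inference_step():
        print(output)  # class probabilities for the current sample
        input('Enter any value to continue to the next step: ')


if __name__ == '__main__':
    main()
```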
Once you have all the files (Python script, quantized model, and data files), they can be sent from your local computer to the Dev Board.
The Coral Dev Board is a single-board computer with an Edge TPU coprocessor. Before proceeding, set up the board by following the steps in Coral's Get Started guide, which should take around 30 minutes.
Coral currently offers two APIs for performing inference with quantized models on Edge TPU devices: the TensorFlow Lite API and the EdgeTPU API. This tutorial uses the latter (the EdgeTPU API).
Follow the steps below to run the Python script at the edge (i.e., on the Dev Board):
1. SSH into the Dev Board as described in the Coral documentation.
2. Copy the files to the Dev Board using Coral's mdt push command.
3. Run the script on the Dev Board using the following command:
python3 hand_recognizer.py --model tflite_model.tflite --input data/X.npy
This should output a 10-dimensional array with class probabilities for the current step, predicting which digit the current hand sign image represents.
4. Enter any non-empty value (e.g., a string) to have the script continue to the next step.
You have now run image classification at the edge on a Coral Dev Board.
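If you want to translate the printed class probabilities into a single digit prediction, taking the argmax of the 10 values is enough. For example (the probability values below are made up for illustration):

```python
import numpy as np

# Hypothetical 10-dimensional output from one inference step.
probabilities = np.array([0.01, 0.02, 0.00, 0.90, 0.01, 0.01, 0.01, 0.02, 0.01, 0.01])
predicted_digit = int(np.argmax(probabilities))
print(predicted_digit)  # 3
```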