Deploy View
PerceptiLabs Deploy View allows you to export and deploy your model to different targets.
The view displays the following options to select the model(s) to export/deploy:

1. Model Selection Checkbox: select this to choose the model(s) to export or deploy.
2. Search bar: allows you to filter the list of models by name.
To the right of the model selection screen are the export/deployment targets that you can click. The current export options are:
- FastAPI Server: exports a TensorFlow model along with a Python server app that exposes a simple API for running inference on your model (see the sketch after this list).
- PL Package: exports a zipped package containing your PerceptiLabs model that you can easily share and load.
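
Once the exported FastAPI server is running, you can query it over HTTP. The snippet below is a minimal sketch only: the port, route, and payload format are assumptions, so check the generated server code (and the interactive docs FastAPI serves at /docs) for the actual API your export exposes.

```python
# Sketch of calling the exported FastAPI inference server.
# The endpoint path and payload shape below are hypothetical.
import requests

# Hypothetical route and port; see the generated server's route definitions.
url = "http://localhost:8000/predict"

# Hypothetical payload: one sample encoded as a JSON list.
payload = {"data": [[0.1, 0.2, 0.3, 0.4]]}

response = requests.post(url, json=payload)
response.raise_for_status()
print(response.json())  # prediction returned by the server
```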
Selecting either of these displays a popup with some or all of the following options:
- Save to: allows you to specify the location where the exported model files will be placed.
- Optimize (available for TensorFlow model exports): provides options to compress and/or quantize your model(s) during export. Selecting either of these options exports the model in TensorFlow Lite format (a sketch of loading such a model follows this list).
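
If you chose a compress or quantize option, the exported file is a TensorFlow Lite model. The following is a minimal sketch of running inference on such a file with TensorFlow's standard TFLite interpreter; the file name and dummy input are placeholders for your own export.

```python
# Sketch of running inference on a TensorFlow Lite export.
import numpy as np
import tensorflow as tf

# Placeholder path to the exported .tflite file.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's expected shape and dtype.
input_data = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```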
After you complete the export/deployment, the model can be used for inference.
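
For a plain TensorFlow export, one way to run local inference is to load the saved model with Keras. This is a sketch under the assumption that the export is a Keras-loadable SavedModel directory; the directory name and input shape are placeholders for your own model.

```python
# Sketch of loading an exported TensorFlow model for local inference.
import numpy as np
import tensorflow as tf

# Placeholder path to the exported model directory.
model = tf.keras.models.load_model("exported_model")

# Dummy input; replace with data shaped like your model's input layer.
sample = np.zeros((1, 4), dtype=np.float32)
prediction = model.predict(sample)
print(prediction)
```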