Deploy View

Overview

PerceptiLabs Deploy View allows you to export and deploy your model to different targets.
The view displays the following options for selecting the model(s) to export/deploy:
1. Model Selection Checkbox: check this to select the model(s) for export/deployment.
2. Search bar: filters the list of models by name.
To the right of the model selection screen are the export/deployment targets that you can click.
The following subsections describe these targets.

Export Options

The current export options include:
  • TensorFlow: exports your model to TensorFlow's exported model format or to TensorFlow Lite.
  • FastAPI Server: generates a TensorFlow model along with a Python server app that exposes a simple API you can use for inference on your model.
Selecting either of these displays a popup with some or all of the following options:
  • Save to: specifies the location where the exported model files will be placed.
  • Optimize (available for TensorFlow model exports): provides options to compress and/or quantize your model during export. Selecting either of these options exports the model in TensorFlow Lite format.
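As a rough sketch of how such an export might be consumed afterwards, the snippet below loads either kind of TensorFlow export for inference. It assumes the conventional TensorFlow file layout (a `saved_model.pb` for a plain export, a `.tflite` flatbuffer when an Optimize option was chosen); check your export directory for the actual files produced.

```python
# Sketch: running inference on a model exported from the Deploy View.
# Assumes the standard TensorFlow layouts; file names are illustrative.
from pathlib import Path

def pick_loader(export_dir: str) -> str:
    """Decide whether an export directory holds a SavedModel or a
    TensorFlow Lite flatbuffer, based on the files present."""
    p = Path(export_dir)
    if (p / "saved_model.pb").exists():
        return "saved_model"
    if any(p.glob("*.tflite")):
        return "tflite"
    raise ValueError(f"no recognizable model files in {export_dir}")

def run_inference(export_dir: str, batch):
    """Load the exported model and run one batch through it."""
    import tensorflow as tf  # heavy import kept local to this function
    if pick_loader(export_dir) == "saved_model":
        model = tf.keras.models.load_model(export_dir)
        return model.predict(batch)
    # TensorFlow Lite path: drive the interpreter manually.
    tflite_file = next(Path(export_dir).glob("*.tflite"))
    interpreter = tf.lite.Interpreter(model_path=str(tflite_file))
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], batch)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])
```

The dispatch on file contents means the same helper works whether or not an Optimize option was selected at export time.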

Deployment Options

Select Gradio to export and deploy your model as a Gradio app.
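For orientation, a Gradio app for an image classifier generally has the shape sketched below. This is not the code PerceptiLabs generates, only a minimal illustration; `model_predict` is a hypothetical stand-in for the exported model's predict call.

```python
# Minimal sketch of a Gradio inference app (illustrative only; the app
# PerceptiLabs generates may differ).
def format_predictions(class_names, probabilities):
    """Map class names to probabilities, the mapping a Gradio Label
    output component expects."""
    return {name: float(p) for name, p in zip(class_names, probabilities)}

def launch_app(model_predict, class_names):
    """Wrap a predict callable in a simple image-classification UI.
    model_predict is a hypothetical stand-in for the exported model."""
    import gradio as gr  # imported lazily; the helper above has no deps
    def classify(image):
        return format_predictions(class_names, model_predict(image))
    gr.Interface(fn=classify, inputs=gr.Image(), outputs=gr.Label()).launch()
```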

Using the Exported/Deployed Model

After you complete the export/deployment, the model can be used for inference.
See Exporting and Deploying Models for information on how to use your exported/deployed model.
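For example, a FastAPI Server deployment can be called from any HTTP client. The sketch below uses only Python's standard library; the endpoint path (`/predict`) and the JSON payload shape are assumptions for illustration, so check the generated server code for the actual API.

```python
# Sketch: calling a deployed FastAPI inference server over HTTP.
# The "/predict" path and {"data": ...} payload are assumed, not
# taken from the generated server -- adjust to match your deployment.
import json
from urllib import request

def build_request(host: str, sample) -> request.Request:
    """Construct a POST request carrying one inference sample as JSON."""
    body = json.dumps({"data": sample}).encode("utf-8")
    return request.Request(
        f"{host}/predict",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def predict(host: str, sample):
    """Send the sample to the server and decode the JSON response."""
    with request.urlopen(build_request(host, sample)) as resp:
        return json.loads(resp.read())
```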
