FastAPI Server: generates a TensorFlow model along with a Python server app that exposes a simple API for running inference on your model.
PL Package: exports a zipped package containing your PerceptiLabs model that you can easily share and load.
Selecting either of these displays a popup with some or all of the following options:
Save to: allows you to specify the location where the exported model files will be placed.
Optimize (available for TensorFlow model exports): provides options to compress and/or quantize your model during export. Selecting either option exports the model in TensorFlow Lite format.
Select Gradio to export and deploy your model as a Gradio app.
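As a rough sketch of how a client might talk to the generated FastAPI server, the helpers below build a JSON request body and parse a JSON response. The endpoint path, port, and payload layout (`"instances"`/`"predictions"`) are assumptions for illustration, not the actual generated API; consult the generated app (for example its interactive /docs page) for the real schema.

```python
import json

def build_request(samples):
    """Serialize input samples into a JSON body for a POST request.

    The {"instances": ...} layout is an assumed example payload shape.
    """
    return json.dumps({"instances": samples})

def parse_response(body):
    """Extract predictions from a JSON response body.

    The "predictions" key is likewise an assumption for illustration.
    """
    return json.loads(body)["predictions"]

# Against a live server, you would POST the body with e.g. the
# requests library (endpoint URL here is hypothetical):
#   resp = requests.post("http://localhost:8000/predict",
#                        data=build_request([[0.1, 0.2, 0.3]]))
#   preds = parse_response(resp.text)

body = build_request([[0.1, 0.2, 0.3]])
print(parse_response('{"predictions": [[0.9, 0.1]]}'))
```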
Using the Exported/Deployed Model
After you complete the export or deployment, the model can be used for inference.
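For example, a model exported in TensorFlow Lite format (via the Optimize option above) could be run with the TFLite interpreter roughly as sketched below; the model file name, input shape, and the `top_class` helper are assumptions for illustration. The interpreter calls are shown as comments so the runnable part of the sketch stays self-contained.

```python
# Hypothetical sketch: inference with a TensorFlow Lite export.
# "model.tflite" and the zero-filled input are placeholder assumptions.
#
# import numpy as np
# import tensorflow as tf
#
# interpreter = tf.lite.Interpreter(model_path="model.tflite")
# interpreter.allocate_tensors()
# inp = interpreter.get_input_details()[0]
# out = interpreter.get_output_details()[0]
# interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
# interpreter.invoke()
# scores = interpreter.get_tensor(out["index"])[0]

def top_class(scores):
    """Return the index of the highest-scoring class (illustrative helper)."""
    return max(range(len(scores)), key=scores.__getitem__)

print(top_class([0.1, 0.7, 0.2]))  # → 1
```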