Welcome to the documentation for PerceptiLabs v0.13.x. Note: Users running older versions of PerceptiLabs should select the respective version of the documentation from the navigation bar to the left.
PerceptiLabs provides an end-to-end visual modelling workflow for machine learning, built on top of TensorFlow. It's centered around our rich user interface, which can load and preprocess datasets and allows you to edit, train, and deploy machine learning models:
The following sections provide additional information about the key aspects of PerceptiLabs.
PerceptiLabs' rich, end-to-end workflow consists of the following:
- Data Pipeline: Map your data to classifications using a CSV file, or have PerceptiLabs do the work for you with a public dataset to get to a working model quickly. You can also let PerceptiLabs pre-process your data (e.g., normalize values, resize images). Pre-processing in PerceptiLabs saves you from performing these steps manually for each image or building an external data pipeline. The pre-processing settings are also saved as part of your project, so you can change them at any time to experiment with different settings, with no need to hunt them down later as you would with an external process. For more information see Load and Preprocess Data, and be sure to check out our Dataset Garden.
- Model: Visually build (edit) your model's architecture, adjust settings, and optionally make low-code changes. Optionally create new models from the same dataset for comparison.
- Training: After you've built your model, train it and watch how it performs with real-time statistics.
- Evaluate: Evaluate your trained model with a series of tests to gain further insight into how it might perform in the real world.
- Optimize: When you're ready to export/deploy your model, PerceptiLabs can output to different targets like TensorFlow, with support for Intel's OpenVINO toolkit coming soon. Some targets include the option to further optimize the model. For example, when exporting to TensorFlow, PerceptiLabs provides the option to compress and/or quantize the model. Selecting either of these options exports to TensorFlow Lite, which makes the model smaller for faster inference and suitable for running on edge devices.
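To make the Data Pipeline step concrete, here is a minimal sketch of the kind of CSV file that maps samples to classifications. The column names (`image_path`, `label`) and file names are illustrative assumptions, not a fixed schema; use the headers appropriate for your own dataset.

```python
import csv
import io

# Illustrative rows mapping image files to class labels.
# Paths and labels here are hypothetical examples.
rows = [
    {"image_path": "images/cat_001.png", "label": "cat"},
    {"image_path": "images/dog_001.png", "label": "dog"},
]

# Write the mapping as CSV (to an in-memory buffer for demonstration;
# in practice you would write to a .csv file and load it in PerceptiLabs).
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["image_path", "label"])
writer.writeheader()
writer.writerows(rows)

print(buffer.getvalue())
```

Loading a CSV like this tells PerceptiLabs which sample belongs to which class, after which the pre-processing options described above can be applied.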
You can iterate through these steps as often as you like, until you get your model adjusted and optimized just how you want it.
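As an illustration of the quantize/compress option in the Optimize step, the following sketch shows a post-training quantized export to TensorFlow Lite using TensorFlow's own converter API. The tiny Keras model here is a stand-in assumption; in PerceptiLabs the exported model is the one you trained, and the exact conversion settings PerceptiLabs uses may differ.

```python
import tensorflow as tf

# Toy model as a stand-in for a trained PerceptiLabs model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert to TensorFlow Lite with default optimizations, which enable
# post-training quantization to shrink the model for edge inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

print(f"TFLite flatbuffer size: {len(tflite_model)} bytes")
```

The resulting flatbuffer can be written to a `.tflite` file and run with the TensorFlow Lite interpreter on mobile or edge devices.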
In addition to creating models from your own datasets, PerceptiLabs can also load models from our Model Garden. Alternatively, you can create a Custom Component to load a model from external sources such as TensorFlow Hub and Hugging Face.
Once you've trained your model, PerceptiLabs can help you export and deploy it for real-world inference. Currently PerceptiLabs can export your trained model to TensorFlow and TensorFlow Lite, or deploy as part of working FastAPI and Gradio apps.
Follow the quickstart guide to get up and running with PerceptiLabs in a few minutes. Remember to check the requirements.