Welcome

Welcome to the documentation for PerceptiLabs v0.12.x. Note: Users running PerceptiLabs v0.11 should refer to the v0.11 documentation by selecting the version from the navigation bar to the left.

Overview

PerceptiLabs is a visual modeling tool for machine learning built on top of TensorFlow. It provides a rich user interface to edit, manage, and monitor your machine learning models as you design and train them.

Workflow

PerceptiLabs makes the modeling workflow easy.

This workflow involves three main phases:

  1. Create a CSV file that maps your dataset (e.g., image files) to labels, and then import both into PerceptiLabs using the Data Wizard (see the first sketch after this list).

  2. Edit your model(s) with the Data Modeling tool, run and train your models, and optimize them based on information in the Statistics View.

  3. Export your trained model to TensorFlow's exported model format and use the exported file for inference (see the second sketch after this list).
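
For step 1, here is a minimal sketch of building such a CSV with plain Python. The folder layout (images/<label>/*.png) and the column names ("image_path", "target") are assumptions made for illustration, not requirements of the Data Wizard:

```python
# Build a CSV that maps each image file to a label taken from its parent folder.
# Assumed layout: images/<label>/<file>.png; the column names are illustrative.
import csv
from pathlib import Path

rows = [
    {"image_path": str(image_file), "target": image_file.parent.name}
    for image_file in sorted(Path("images").glob("*/*.png"))
]

with open("dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["image_path", "target"])
    writer.writeheader()
    writer.writerows(rows)
```

For step 3, here is a sketch of loading the exported model for inference, assuming it was exported as a TensorFlow SavedModel directory; the directory name and input shape are placeholders:

```python
# Load an exported TensorFlow SavedModel and run a dummy inference.
# "exported_model/" and the (1, 224, 224, 3) input shape are placeholders.
import numpy as np
import tensorflow as tf

model = tf.saved_model.load("exported_model")
infer = model.signatures["serving_default"]            # default serving signature
batch = np.zeros((1, 224, 224, 3), dtype=np.float32)   # one dummy RGB image
print(infer(tf.constant(batch)))
```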

Getting Started

Follow the quickstart guide to get up and running with PerceptiLabs in a few minutes. Remember to check the requirements.

We recommend you read the UI overview documentation to understand each area of the PerceptiLabs UI in detail.

We also provide a detailed description of each Component in the Modeling tool, PerceptiLabs' main view for developing new models.

PerceptiLabs Technology Stack

PerceptiLabs is a dataflow-driven, visual API for TensorFlow, distributed as a free Python package (hosted on PyPI) for everyone to use. PerceptiLabs wraps low-level TensorFlow code to create visual components, which allows users to visualize the model architecture as the model is being built. As a visual API, PerceptiLabs sits on top of TensorFlow and other APIs.

Drag and Drop

In PerceptiLabs, you drag and drop a component onto the workspace for each layer you want to include in your model and connect the components together. To complete and run the model, you connect a Training component at the end of the model’s graph. This design is similar to Keras, where the user writes a one-liner of code for each layer the model should include and then invokes the .compile() and .fit() methods to wrap up and train the model.
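
This is the Keras pattern the comparison refers to: one line of code per layer, then compile() and fit() to wrap up and train the model. The layer sizes and the MNIST dataset below are placeholders chosen for illustration:

```python
# Keras: one line per layer, then compile() and fit() to wrap up and train.
# Layer sizes and the MNIST dataset are illustrative placeholders.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # input layer
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # output layer
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
model.fit(x_train / 255.0, y_train, epochs=1)
```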

The Training components in PerceptiLabs make it easier to build complex models and to apply different machine learning techniques, and they support many model types. For example, if you want to use reinforcement learning or object detection, you connect the respective Training component at the end of the model.

This visual, drag-and-drop approach provides a number of benefits:

  • view of the overall model architecture

  • granular visualizations during the modeling phase, run-time, and testing

  • debugging and diagnostic features

  • automatic suggestions for configs/settings and hyperparameters

  • dimensionality and I/O shape fitting

View and Edit Your Components' Code

PerceptiLabs automatically generates the code for each component you add to your model and assigns "good" hyperparameter values as you connect the components together. You can then tweak these settings as required. You also have the option to view and edit this autogenerated code, including both the hyperparameter values and the logic: select any component to view and edit its code in the Code Editor.
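
As a rough, purely illustrative sketch (not PerceptiLabs' actual generated code), the code behind a single dense-layer component might wrap ordinary TensorFlow calls and expose its hyperparameters as editable values:

```python
# Purely illustrative: the kind of TensorFlow code a dense-layer component
# could wrap. The class name and hyperparameter defaults are hypothetical;
# this is not PerceptiLabs' actual autogenerated code.
import tensorflow as tf

class DenseComponent(tf.keras.layers.Layer):
    def __init__(self, units=128, activation="relu"):  # editable hyperparameters
        super().__init__()
        self.dense = tf.keras.layers.Dense(units, activation=activation)

    def call(self, inputs):
        return self.dense(inputs)
```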
