Welcome

Welcome to the documentation for PerceptiLabs v0.12.x.

We've launched v0.12.x, a major release, on May 4, 2021. We appreciate your patience as we update our docs and fix a few bugs. Have comments or found a bug? Let us know in the Issues/Feedback forum channel.

Note: Users running PerceptiLabs v0.11 should refer to the v0.11 documentation.

PerceptiLabs is a visual modeling tool for machine learning built on top of TensorFlow. It provides a rich user interface for editing, managing, and monitoring your machine learning models while you design and train them.

Getting Started

Follow the Quickstart Guide to get up and running with PerceptiLabs in a few minutes. Remember to check the requirements first.

We recommend you read the UI overview documentation to understand each area of the PerceptiLabs UI in detail.

We also describe each component of the Modeling Tool, PerceptiLabs' main view for developing new models, in detail.

PerceptiLabs Technology Stack

PerceptiLabs is a dataflow-driven, visual API for TensorFlow, distributed as a free Python package (hosted on PyPI) for everyone to use. PerceptiLabs wraps low-level TensorFlow code in visual components, which lets users visualize the model architecture as the model is being built. As a visual API, PerceptiLabs sits on top of TensorFlow and other APIs.
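Because the tool ships as a Python package on PyPI, getting it is a single pip command. The sketch below assumes the package and the launcher command are both named `perceptilabs`; see the Quickstart Guide for the exact, version-specific instructions:

```shell
# Install the PerceptiLabs package from PyPI
# (a virtual environment is recommended).
pip install perceptilabs

# Launch the tool; the UI opens in your browser.
# (Command name assumed here -- check the Quickstart Guide.)
perceptilabs
```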

Drag and Drop

In PerceptiLabs, you drag and drop a component onto the workspace for each layer you want to include in your model, then connect the components together. To complete and run the model, a Training component is connected at the end of the model’s graph. This is similar in design to Keras, where the user writes a one-liner of code for each layer the model should include, then invokes the .compile() and .fit() methods to wrap up and train the model.
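For comparison, the Keras workflow described above might look like the following sketch. The layer sizes and the synthetic dataset are purely illustrative and not tied to any particular PerceptiLabs model:

```python
import numpy as np
import tensorflow as tf

# One line per layer, analogous to dropping components onto the workspace.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# compile() and fit() play the role of the Training component:
# they wrap up the graph and train the model.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Tiny synthetic dataset, just to show the training call.
x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 3, size=(32,))
history = model.fit(x, y, epochs=1, verbose=0)
```

In PerceptiLabs, each of these one-liners corresponds to a visual component, and connecting a Training component replaces the explicit .compile()/.fit() calls.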

The Training components in PerceptiLabs make it easier to build complex models and to use different machine learning techniques, and they support many model types. For example, if you want to use reinforcement learning or object detection, you connect the respective Training component at the end of the model.

This visual, drag-and-drop approach provides a number of benefits:

  • A view of the overall model architecture

  • Granular visualizations during the modeling phase, run-time, and testing

  • Debugging and diagnostic features

  • Automatic suggestions for configurations, settings, and hyperparameters

  • Dimensionality and I/O shape fitting

View and Edit Your Components' Code

PerceptiLabs automatically generates the code for each component you add to your model and assigns sensible hyperparameter values as you connect components together. You can then tweak these settings as required. You also have the option to view and edit the autogenerated code itself, including both the hyperparameter values and the logic: select any component to view and edit its code in the Code Editor. You can also view all of your model's code and export it as a Jupyter Notebook.

Get Started

Check out our system requirements and then follow our Quickstart Guide. You'll be up and running in no time!