PerceptiLabs supports NVIDIA GPUs. Note that because Macs use AMD (formerly ATI) GPUs, GPU-accelerated machine learning is not supported on that platform.
Currently, you can export your model as a TensorFlow model. Once exported, you can deploy it to a TensorFlow server if you have one set up.
PerceptiLabs allows you to build models using deep learning (neural networks), such as Convolutional Neural Networks, as well as simpler methods such as linear regression.
In the desktop version, your data never leaves your machine. The data we do collect (e.g., error logs and similar diagnostics) is used only to fix bugs and improve the user experience.
This error means that no running kernel was found on your local machine. Install the kernel with "pip install perceptilabs" and then start it by running "perceptilabs" in the environment where it was installed. For more information, visit
This can happen when another application or service already running on your local machine is using port 5000, 8000, 8011, or 8080. Close that application or service before running PerceptiLabs.
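A quick way to see whether any of those ports is already taken before launching is to probe them from Python. This is a hedged sketch, not part of PerceptiLabs itself; the helper names (`port_in_use`, `busy_ports`) are our own, and the port list is taken from the answer above.

```python
import socket

# Ports the docs say PerceptiLabs needs to be free (assumption: all on localhost)
PERCEPTILABS_PORTS = [5000, 8000, 8011, 8080]

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the connection succeeds,
        # i.e. when some process is already listening there.
        return s.connect_ex((host, port)) == 0

def busy_ports(ports=PERCEPTILABS_PORTS):
    """List the ports from `ports` that are already taken."""
    return [p for p in ports if port_in_use(p)]

if __name__ == "__main__":
    taken = busy_ports()
    if taken:
        print(f"Close the applications using ports {taken} before starting PerceptiLabs.")
    else:
        print("All required ports are free.")
```

Running the script prints which of the four ports (if any) you need to free up; tools like `lsof -i :5000` (macOS/Linux) or `netstat -ano` (Windows) can then identify the offending process.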
If you've just installed PerceptiLabs on Windows Subsystem for Linux (WSL), you must stop and restart WSL before PerceptiLabs will run correctly.
The GPU version of PerceptiLabs (perceptilabs-gpu package) does not work with WSL.