FAQs
Yes, check out . Also, be sure to join our , where you can ask questions and interact with our community of users.
PerceptiLabs supports Nvidia GPUs. Note that since Macs use AMD (formerly ATI) GPUs, GPU-accelerated machine learning is not supported on that platform.
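If you want to verify that TensorFlow can see your Nvidia GPU from Python, a minimal check like the following can help (it assumes TensorFlow is installed in the same environment you run PerceptiLabs from):

```python
# Quick check that TensorFlow (used by PerceptiLabs) can see an Nvidia GPU.
# Assumes TensorFlow is installed in the active Python environment.
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    print(f"Found {len(gpus)} GPU(s):")
    for gpu in gpus:
        print(f"  {gpu.name}")
else:
    print("No GPU detected; TensorFlow will fall back to the CPU.")
```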
Currently, you can export your model as a TensorFlow model. Once exported, you can deploy it to a TensorFlow Serving instance if you have one set up.
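As a rough sketch of what working with the export looks like (assuming the export is a Keras-compatible SavedModel, and using placeholder paths and shapes), you can load the exported model with TensorFlow and test it locally before deploying:

```python
# Minimal sketch: load an exported model with TensorFlow and run a test prediction.
# The export path and input shape are placeholders; adjust them to your model.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("./exported_model")  # hypothetical export directory
model.summary()

sample = np.random.rand(1, 224, 224, 3).astype("float32")  # dummy input; match your model's input shape
print(model.predict(sample))
```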
PerceptiLabs v0.11.13 uses TensorFlow 2.x. Prior versions use TensorFlow 1.x. See our for more information.
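If you are unsure which TensorFlow version is installed in your environment, you can check it directly from Python:

```python
# Print the TensorFlow version available in the current Python environment.
import tensorflow as tf

print(tf.__version__)  # expect a 2.x version alongside PerceptiLabs v0.11.13
```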
PerceptiLabs allows you to build models using deep learning (neural networks), such as Convolutional Neural Networks, as well as simpler methods like linear regression.
In the desktop version, your data is never sent to any server. The only data we collect (e.g., error logs and similar diagnostics) is used to fix bugs and improve the user experience.
This can happen if another application or service already running on your local machine is using ports 5000, 8000, 8011, and/or 8080. Be sure to close that application or service before launching PerceptiLabs.
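If you are not sure which of those ports is taken, a quick way to check from Python is to try binding to each one (a simple sketch; a port that fails to bind is already in use):

```python
# Check whether the ports PerceptiLabs needs are already in use on this machine.
import socket

for port in (5000, 8000, 8011, 8080):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind(("127.0.0.1", port))
            print(f"Port {port} is free")
        except OSError:
            print(f"Port {port} is already in use")
```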
If you've just installed PerceptiLabs on Windows Subsystem for Linux (WSL), you must first stop and restart WSL before PerceptiLabs will run correctly.
The GPU version of PerceptiLabs (perceptilabs-gpu package) does not work with WSL.
You must first install and run our PyPI kernel package on your machine before you launch PerceptiLabs. For more information, see our .
PerceptiLabs supports a number of file formats. See for more information.