FAQs

Are there any tutorials to help me get started?

Yes, check out our Quickstart guide. Also be sure to join our forums, where you can ask questions and interact with our community of users.

What GPUs does PerceptiLabs support for GPU-accelerated machine learning?

PerceptiLabs supports Nvidia GPUs. Note that because Macs use ATI GPUs, GPU-accelerated machine learning is not supported on that platform.
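
If you're unsure whether your GPU is being picked up, a quick sanity check is to ask TensorFlow directly. This is a minimal sketch and assumes TensorFlow and the appropriate Nvidia drivers are installed in the same Python environment as the PerceptiLabs kernel:

```python
# Minimal sketch: check whether TensorFlow can see an Nvidia GPU.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"Found {len(gpus)} GPU(s): {[gpu.name for gpu in gpus]}")
else:
    print("No GPU detected; training will fall back to the CPU.")
```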

How can I deploy a model to a production environment that was built with PerceptiLabs?

Currently, you can export your model as a TensorFlow model. Once exported, you can deploy it to a TensorFlow server if you have one set up.
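
As an illustrative sketch (not an official deployment recipe), assuming the exported model is in a format that tf.keras can load and that "exported_model" is the directory you chose at export time, you can load it and run a test prediction locally before deploying:

```python
# Minimal sketch: load an exported model and run a single prediction.
# "exported_model" and the input shape are placeholders -- adjust them to your model.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("exported_model")

sample = np.random.rand(1, 28, 28, 1).astype("float32")  # example input shape
print(model.predict(sample))
```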

What Version of TensorFlow does PerceptiLabs use?

PerceptiLabs v0.11.13 uses TensorFlow 2.x, while prior versions use TensorFlow 1.x. See our changelog for more information.
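
To confirm which TensorFlow version is active in the environment where the PerceptiLabs kernel runs, you can check it directly:

```python
# Print the TensorFlow version used by the current Python environment.
import tensorflow as tf

print(tf.__version__)  # 2.x.y with PerceptiLabs v0.11.13, 1.x with prior versions
```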

What type(s) of AI does PerceptiLabs support?

PerceptiLabs allows you to build models using deep learning (neural networks), such as Convolutional Neural Networks (CNNs), as well as simpler methods like linear regression.
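
For reference, here is an illustrative sketch (not PerceptiLabs code) of what those two model types look like in plain TensorFlow/Keras; the layer sizes and input shapes are arbitrary examples:

```python
# Illustrative only: the kinds of models PerceptiLabs can build, expressed in Keras.
import tensorflow as tf

# A small Convolutional Neural Network for 28x28 grayscale images.
cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Linear regression: a single Dense unit with no activation.
linear = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
```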

Does the data used in the PerceptiLabs app stay in the app, or is it uploaded to a server?

In the desktop version, your data does not go to any server. The data we do collect (e.g., error logs and similar) is used only to fix bugs and improve the user experience.

Why do I get the following error message when I try different operations in PerceptiLabs?

It seems we can not find any running kernel on your local machine. Download the kernel by “pip install perceptilabs” and then start it by entering “perceptilabs” in the installed environment. For more information, visit https://perceptilabs.com.

PerceptiLabs requires that you first install and run our PyPI kernel package on your machine before you launch PerceptiLabs. For more information, see our Installation page.
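
A quick way to confirm the kernel package is available is to check for it from the Python environment you plan to launch it from. This sketch assumes the package's import name matches its PyPI name, perceptilabs:

```python
# Minimal sketch: check whether the PerceptiLabs kernel package is installed
# in the currently active Python environment.
import importlib.util

if importlib.util.find_spec("perceptilabs") is None:
    print("Not installed here -- run: pip install perceptilabs")
else:
    print("Installed -- start the kernel by running: perceptilabs")
```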

What data file formats are supported in PerceptiLabs’ Data component?

PerceptiLabs supports a number of file formats. See here for more information.

Why does PerceptiLabs not open in a browser after I execute PerceptiLabs on the command line?

This can happen if another application or service already running on your local machine is using ports 5000, 8000, 8011, and/or 8080. Be sure to close that application or service before running PerceptiLabs.
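
If you're not sure which of those ports is taken, a small check like the following (a minimal sketch, not part of PerceptiLabs) can tell you:

```python
# Minimal sketch: report which of the ports PerceptiLabs needs are already in use.
import socket

for port in (5000, 8000, 8011, 8080):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        in_use = s.connect_ex(("127.0.0.1", port)) == 0
    print(f"port {port}: {'in use' if in_use else 'free'}")
```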

Why does PerceptiLabs fail to run on WSL?

If you've just installed PerceptiLabs on Windows Subsystem for Linux (WSL) you must first stop and restart WSL before PerceptiLabs will run correctly.

The GPU version of PerceptiLabs (perceptilabs-gpu package) does not work with WSL.
