PerceptiLabs supports Nvidia GPUs. Note that since Macs use AMD (formerly ATI) GPUs, GPU-accelerated machine learning is not supported on that platform.
Currently you can export your model as a TensorFlow model. Once exported, you can deploy it to a TensorFlow serving environment if you have one set up.
"PerceptiLabs Free" is our browser+local kernel app that you can run in your browser. "PerceptiLabs Enterprise" is our cloud-agnostic/on-prem version that trains models in the cloud or on-prem.
Yes, our enterprise version runs PerceptiLabs on OpenShift. This is a containerized version that can be run on any cloud or on-premises deployment, and it utilizes the available hardware efficiently so that it scales well.
PerceptiLabs allows you to build models using deep learning (neural networks) such as Convolutional Neural Networks, as well as simpler methods like linear regression.
Your data does not go to any server in the desktop version. The data we collect (e.g. error logs and similar) is used to fix bugs and improve the user experience.
It seems we cannot find a running kernel on your local machine. Download the kernel with "pip install perceptilabs" and then start it by entering "perceptilabs" in the installed environment. For more information, visit https://perceptilabs.com.
PerceptiLabs requires that you first install and run our PyPI kernel package on your machine before you launch PerceptiLabs. For more information see our Installation page.
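A minimal sketch of the install-and-run steps described above. The package and command names come from the answers in this FAQ; the virtual-environment name "plenv" is just an illustrative example:

```shell
# Optional but recommended: create and activate a virtual environment
# ("plenv" is an example name, not required by PerceptiLabs)
python -m venv plenv
source plenv/bin/activate

# Install the PerceptiLabs kernel package from PyPI
pip install perceptilabs

# Start the kernel; PerceptiLabs then runs in your browser
perceptilabs
```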
PerceptiLabs supports a number of file formats. See here for more information.
This can happen if another application or service already running on your local machine is using port 5000, 8000, 8011, and/or 8080. Be sure to first close that application or service before running PerceptiLabs.
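A quick way to check for such conflicts before launching PerceptiLabs is to probe those ports locally. This is a sketch using Python's standard socket module; the port list is taken from the answer above:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on the given port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        # connect_ex returns 0 when the connection succeeds,
        # i.e. when some process is already listening on the port
        return s.connect_ex((host, port)) == 0

# Ports PerceptiLabs needs (from the answer above)
for port in (5000, 8000, 8011, 8080):
    if port_in_use(port):
        print(f"Port {port} is taken -- close the conflicting service first.")
```

If the script prints nothing, all four ports are free and PerceptiLabs should be able to start.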
If you've just installed PerceptiLabs on Windows Subsystem for Linux (WSL) you must first stop and restart WSL before PerceptiLabs will run correctly.
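One way to stop and restart WSL, assuming the standard wsl.exe command-line tool on Windows 10/11, is to shut down all distributions from a Windows prompt and then relaunch your distribution:

```shell
# Run from Windows (PowerShell or cmd), not from inside WSL:
# stops all running WSL distributions
wsl --shutdown

# Then start your default distribution again
wsl
```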
The GPU version of PerceptiLabs (perceptilabs-gpu package) does not work with WSL.