PerceptiLabs specializes in computer vision applications, but it can also build models from data in CSV format for tasks such as text analysis and regression.
What GPUs does PerceptiLabs support for GPU-accelerated machine learning?
PerceptiLabs supports Nvidia GPUs. Note that since Macs use AMD GPUs, GPU-accelerated machine learning is not supported on that platform.
How can I deploy a model to a production environment that was built with PerceptiLabs?
You can export/deploy your model as a TensorFlow model, a FastAPI server, or a Gradio app. See Deploy View for more information.
What Version of TensorFlow does PerceptiLabs use?
PerceptiLabs v0.12.x uses TensorFlow 2.x. Prior versions use TensorFlow 1.x. See our change log for more information.
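To confirm which TensorFlow version is installed in your environment, you can query package metadata from the standard library; this sketch simply reports the version string, or None if TensorFlow is not installed:

```python
from importlib import metadata

def tf_version():
    """Return the installed TensorFlow version string, or None if absent."""
    try:
        return metadata.version("tensorflow")
    except metadata.PackageNotFoundError:
        return None

print(tf_version())
```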
What versions of PerceptiLabs are available?
"PerceptiLabs Free" is our free version: a local kernel with a UI that runs in your browser.
"PerceptiLabs Enterprise" is our cloud-agnostic version that trains models in the cloud or on-premises.
Does PerceptiLabs support a cloud infrastructure like AWS?
Yes. Our Enterprise version runs PerceptiLabs on OpenShift as a containerized deployment that can run on any cloud or on-premises infrastructure, and scales efficiently across the available hardware.
What type(s) of AI does PerceptiLabs support?
PerceptiLabs allows you to build models using deep learning (neural networks) such as Convolutional Neural Networks, as well as simpler methods like linear regression.
Does the data used in the PerceptiLabs app stay in the app, or is it uploaded to a server?
Your data does not go to any server in the desktop version. The data we collect (e.g., error logs and similar) is used to fix bugs and improve the user experience.
What data file formats are supported in PerceptiLabs’ Data Wizard?
The Data Wizard accepts data in CSV format, as noted in the first question above.
Why does PerceptiLabs not open in a browser after I execute PerceptiLabs on the command line?
This can happen if another application or service already running on your local machine is using ports 5000, 8000, 8011, and/or 8080. Be sure to close that application or service before running PerceptiLabs.
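To find out whether any of those ports are already taken, you can probe them with a short standard-library snippet like this (a quick local check, not an official PerceptiLabs diagnostic tool):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on the given port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0

# Probe the ports PerceptiLabs needs.
for port in (5000, 8000, 8011, 8080):
    print(port, "in use" if port_in_use(port) else "free")
```

If a port reports "in use", tools such as `lsof -i :<port>` (macOS/Linux) or `netstat -ano` (Windows) can identify the process holding it.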
Why does PerceptiLabs fail to run on WSL?
If you've just installed PerceptiLabs on Windows Subsystem for Linux (WSL) you must first stop and restart WSL before PerceptiLabs will run correctly.
The GPU version of PerceptiLabs (perceptilabs-gpu package) does not work with WSL.
Why are previews not visible in some Components and/or statistics panes?
On some machines this can be caused by an incorrect MIME type for .js files on Windows 10. See this forum response for a workaround.
How do I convert a PyTorch model to TensorFlow?
See the first part of this forum post which discusses how to convert a PyTorch model to TensorFlow.
How do I use my own model in PerceptiLabs?
See the second part of this forum post which discusses how to use your own model in PerceptiLabs.
Does PerceptiLabs work with Apple's M1 chip?
No, PerceptiLabs is not yet compatible with M1 hardware.