Self-Driving Cars Using Nvidia PilotNet
Computer vision is a key technology for building self-driving cars. We've used PerceptiLabs to recreate Nvidia's end-to-end deep learning approach, which maps raw pixels from front-facing cameras directly to steering commands. Each image is captured from cameras mounted on the front of a car and is paired with a steering angle that records the position of the car's steering for that frame. For this model, we used Udacity's car simulator to collect the dataset:
Figure 1: Udacity's Car Simulator
For every frame, the simulator captures three images – left, center, and right – from the cameras mounted on the front of the car:
Figure 2: Example Images From the Three Cameras Mounted to the Front of the Car.
Each frame also has a steering angle value, which serves as the training label.
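To illustrate how such a dataset is typically prepared, here is a minimal sketch that turns each logged frame into three training samples, one per camera. It assumes the simulator's standard `driving_log.csv` column order (center, left, right, steering, throttle, brake, speed); the steering correction of 0.2 for the side cameras is a common heuristic and not a value specified in this article.

```python
import pandas as pd

# Column order used by the Udacity simulator's driving_log.csv.
COLUMNS = ["center", "left", "right", "steering", "throttle", "brake", "speed"]

# Heuristic offset: side cameras see the road from an angle, so their
# steering labels are nudged back toward the lane center (assumption).
CORRECTION = 0.2

def expand_cameras(log: pd.DataFrame) -> pd.DataFrame:
    """Turn each logged frame into three (image, steering) samples."""
    samples = []
    for _, row in log.iterrows():
        samples.append((row["center"], row["steering"]))
        samples.append((row["left"], row["steering"] + CORRECTION))
        samples.append((row["right"], row["steering"] - CORRECTION))
    return pd.DataFrame(samples, columns=["image", "steering"])

# Example with a single logged frame:
log = pd.DataFrame(
    [["c.jpg", "l.jpg", "r.jpg", 0.1, 0.5, 0.0, 25.0]],
    columns=COLUMNS,
)
samples = expand_cameras(log)
print(samples)
```

Using all three cameras this way triples the amount of training data and teaches the model how to recover when the car drifts off-center.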
The model is based on Nvidia's PilotNet architecture, which is composed of nine layers: a normalization layer, five convolutional layers, and three fully connected layers:
Figure 3: Network Layout - Credits: Nvidia (https://developer.nvidia.com/blog/deep-learning-self-driving-cars/).
Figure 4: Screenshot of the PilotNet Model in PerceptiLabs.
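For readers who prefer code to diagrams, the nine-layer PilotNet architecture from Nvidia's paper can be sketched in Keras as follows. The 66x200x3 input shape and layer sizes follow the published network layout; PerceptiLabs builds the equivalent graph visually, so this standalone sketch is an approximation rather than the exact exported model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_pilotnet() -> tf.keras.Model:
    """Sketch of PilotNet: normalization, 5 conv layers, 3 dense layers."""
    model = models.Sequential([
        layers.Input(shape=(66, 200, 3)),
        # Normalization layer: scale pixel values to [-1, 1].
        layers.Lambda(lambda x: x / 127.5 - 1.0),
        # Five convolutional layers (first three use 5x5 kernels, stride 2).
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        # Three fully connected layers, then a single steering-angle output.
        layers.Dense(100, activation="relu"),
        layers.Dense(50, activation="relu"),
        layers.Dense(10, activation="relu"),
        layers.Dense(1),
    ])
    # Steering prediction is a regression task, so mean squared error fits.
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_pilotnet()
model.summary()
```

The single linear output unit predicts the steering angle for a frame, and training minimizes the mean squared error against the recorded labels.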
You can also watch how to build and train this model in the following video: