End-to-end steering angle prediction and object detection using convolutional neural networks
Artificial neural networks have enabled a new generation of autonomous vehicles. Public datasets may make it possible to build networks capable of predicting steering angles and detecting objects for self-driving vehicles. The objective of this paper is to investigate whether steering angles and object bounding boxes can be predicted from a single image input. A novel architecture combining both models into a steering angle predictor with incorporated object detection is also explored. An object detection model was created based on modern bounding box regression networks. It was trained to detect cars, trucks, traffic lights, pedestrians, bikers, and traffic signs with data from the German Traffic Sign Detection Challenge and Udacity. A steering angle prediction network was also created and trained on driving data provided by Udacity. Both models shared pre-trained layers from VGG16 for feature extraction. The models were then combined into a single model intended to make the steering angle predictor more robust by providing explicit object detections as additional inputs. The detector achieved a mean average precision of 44.02 on the validation dataset, a combination of images from the German Traffic Sign Detection Challenge and Udacity datasets. Training the detector on multiple datasets with similar input data but different annotations reduced the model's performance. The steering angle predictor achieved a root mean squared error of 0.0645; incorporating object detection into the predictor model raised the error to 0.0653. It is thus possible to use public datasets to build models capable of predicting steering angles and detecting objects from images, but incorporating an object detector into the steering angle predictor did not improve its performance.
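The error metric reported for the steering angle predictor is root mean squared error. A minimal sketch of how such a score is computed, using hypothetical steering angle values (not data from the paper):

```python
import numpy as np

def rmse(predicted, actual):
    """Root mean squared error between predicted and ground-truth steering angles."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))

# Hypothetical steering angles; values are illustrative only.
pred = [0.01, -0.03, 0.12, 0.00]
true = [0.02, -0.01, 0.10, 0.01]
print(rmse(pred, true))  # ~0.0158
```

A lower score indicates predictions closer to the recorded human steering angles; a perfect predictor would score 0.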