Using the Kinect Sensor for Social Robotics
This thesis presents an approach to social robotics through gesture recognition. The focus is on gesture recognition because it is an important aspect of interpreting a person's intent when he or she gives commands to a robot. The equipment used is a Kinect sensor, developed by Microsoft, attached to a moving platform. The Kinect communicates with software running on a PC through the OpenNI interface and uses the NITE middleware by PrimeSense.

The results of this thesis are:
- a broad literature study presenting the state of the art of gesture recognition
- a system that handles the problems that arise when the Kinect is non-stationary
- a gesture recognizer that observes and analyzes human actions

Two main problems are solved by the implemented system. First, user labels may be incorrectly swapped when the Kinect's standard algorithm loses track of a user for a few frames. Second, false-positive users are detected because the Kinect is assumed to be stationary, so everything that moves relative to the Kinect is marked as a user. The first problem is counteracted by mapping each observed label to the position where that user was last seen. The second problem is solved using a combination of optical flow and feature analysis.

The gesture recognizer has been developed to allow robust and efficient segmentation, joint detection, and gesture recognition. To achieve both high efficiency and good results, these algorithms are tailored to the high-quality user silhouettes provided by the Kinect. In addition, the default Kinect algorithm needs some time to initialize when a new human user is detected; the implemented gesture recognizer has no such delay.
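The label-remapping idea described above (matching an observed label to where a user was last seen) can be sketched as follows. This is a minimal illustration under assumed data structures, not the thesis implementation: it keeps a dictionary of last-seen user centroids and assigns a newly reported label to the stored identity with the nearest centroid, subject to a hypothetical distance threshold.

```python
import math

def remap_label(observed_centroid, last_seen, max_dist=0.5):
    """Map a (possibly swapped) user label to the stored identity whose
    last-seen centroid is closest to the observed position.

    observed_centroid: (x, y, z) position of the user in the current frame.
    last_seen: dict mapping a stable identity to its last-seen (x, y, z) centroid.
    max_dist: assumed threshold (in metres) beyond which no match is accepted.
    Returns the matched identity, or None if no stored user is close enough.
    """
    best_id, best_dist = None, max_dist
    for identity, centroid in last_seen.items():
        dist = math.dist(observed_centroid, centroid)
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id
```

In a full tracker, the matched identity's stored centroid would be updated every frame, so a user who is briefly lost and re-detected under a new label is re-associated with the identity nearest to where they disappeared.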