Research Article
Use of Kinect Depth Data and Growing Neural Gas for Gesture Based Robot Control
@INPROCEEDINGS{10.4108/icst.pervasivehealth.2012.248610,
  author={Paul Yanik and Joe Manganelli and Jessica Merino and Anthony Threatt and Johnell Brooks and Keith Green and Ian Walker},
  title={Use of Kinect Depth Data and Growing Neural Gas for Gesture Based Robot Control},
  proceedings={Situation Recognition and Medical Data Analysis in Pervasive Health Environments},
  publisher={IEEE},
  proceedings_a={PERVASENSE},
  year={2012},
  month={7},
  keywords={kinect, growing neural gas, gesture recognition},
  doi={10.4108/icst.pervasivehealth.2012.248610}
}
Paul Yanik
Joe Manganelli
Jessica Merino
Anthony Threatt
Johnell Brooks
Keith Green
Ian Walker
Year: 2012
PERVASENSE
IEEE
DOI: 10.4108/icst.pervasivehealth.2012.248610
Abstract
Recognition of human gestures is an active area of research integral to the development of intuitive human-machine interfaces for ubiquitous computing and assistive robotics. In particular, such systems are key to effective environmental designs which facilitate aging in place. Typically, gesture recognition takes the form of template matching in which the human participant is expected to emulate a choreographed motion as prescribed by the researchers. The robotic response is then a one-to-one mapping of the template classification to a library of distinct responses. In this paper, we explore a recognition scheme based on the Growing Neural Gas (GNG) algorithm which places no initial constraints on the user to perform gestures in a specific way. Skeletal depth data collected using the Microsoft Kinect sensor is clustered by GNG and used to refine a robotic response associated with the selected GNG reference node. We envision a supervised learning paradigm similar to the training of a service animal in which the response of the robot is seen to converge upon the user’s desired response by taking user feedback into account. This paper presents initial results which show that GNG effectively differentiates between gestured commands and that, using automated (policy based) feedback, the system provides improved responses over time.
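The abstract refers to the standard Growing Neural Gas algorithm (Fritzke, 1995), in which reference nodes incrementally adapt to the input distribution and new nodes are inserted where accumulated error is largest. The sketch below is a minimal, illustrative Python implementation of that online adaptation step, not the authors' code; the class name, hyperparameter values, and the stand-in input stream are assumptions for illustration (real input would be flattened Kinect skeleton frames), and orphan-node removal is omitted for brevity.

```python
import numpy as np

class GrowingNeuralGas:
    """Minimal Growing Neural Gas (Fritzke, 1995) for clustering
    fixed-length feature vectors, e.g. flattened skeleton frames.
    Illustrative sketch only; not the paper's implementation."""

    def __init__(self, dim, eps_b=0.05, eps_n=0.005, age_max=50,
                 insert_every=100, alpha=0.5, decay=0.995):
        rng = np.random.default_rng(0)
        self.nodes = [rng.standard_normal(dim), rng.standard_normal(dim)]
        self.errors = [0.0, 0.0]
        self.edges = {}                  # (i, j) with i < j -> edge age
        self.eps_b, self.eps_n = eps_b, eps_n
        self.age_max, self.insert_every = age_max, insert_every
        self.alpha, self.decay = alpha, decay
        self.step = 0

    def _key(self, i, j):
        i, j = int(i), int(j)
        return (min(i, j), max(i, j))

    def fit_one(self, x):
        self.step += 1
        # 1. Find the two reference nodes nearest to the input sample.
        d = [np.linalg.norm(x - w) for w in self.nodes]
        s1, s2 = np.argsort(d)[:2]
        # 2. Accumulate error at the winner and move it (and its
        #    topological neighbours) toward the sample.
        self.errors[s1] += d[s1] ** 2
        self.nodes[s1] += self.eps_b * (x - self.nodes[s1])
        for (i, j), age in list(self.edges.items()):
            if s1 in (i, j):
                n = j if i == s1 else i
                self.nodes[n] += self.eps_n * (x - self.nodes[n])
                self.edges[(i, j)] = age + 1   # age edges at the winner
        # 3. Refresh (or create) the edge between the two winners.
        self.edges[self._key(s1, s2)] = 0
        # 4. Prune edges that have grown too old.
        self.edges = {e: a for e, a in self.edges.items() if a <= self.age_max}
        # 5. Periodically insert a node where accumulated error is largest.
        if self.step % self.insert_every == 0:
            self._insert_node()
        # 6. Decay all accumulated errors.
        self.errors = [e * self.decay for e in self.errors]
        return int(s1)                   # index of the winning reference node

    def _insert_node(self):
        q = int(np.argmax(self.errors))
        nbrs = [j if i == q else i for (i, j) in self.edges if q in (i, j)]
        if not nbrs:
            return
        f = max(nbrs, key=lambda n: self.errors[n])
        r = len(self.nodes)
        # New node halfway between the highest-error node and its
        # highest-error neighbour; rewire edges through it.
        self.nodes.append(0.5 * (self.nodes[q] + self.nodes[f]))
        self.edges.pop(self._key(q, f), None)
        self.edges[self._key(q, r)] = 0
        self.edges[self._key(f, r)] = 0
        self.errors[q] *= self.alpha
        self.errors[f] *= self.alpha
        self.errors.append(self.errors[q])

if __name__ == "__main__":
    # Hypothetical usage: random vectors stand in for flattened Kinect
    # skeleton frames (e.g. 20 joints x 3 coordinates -> dim = 60).
    rng = np.random.default_rng(1)
    gng = GrowingNeuralGas(dim=60)
    for frame in rng.standard_normal((1000, 60)):
        winner = gng.fit_one(frame)      # winner could index a robot response
    print(len(gng.nodes), "reference nodes learned")
```

In the scheme the abstract describes, the index of the winning reference node returned by each update is what gets associated with a robotic response, and that mapping, rather than the gesture templates themselves, is what user feedback refines over time.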