Research Article
Synchronized Video and Motion Capture Dataset and Quantitative Evaluation of Vision Based Skeleton Tracking Methods for Robotic Action Imitation
@INPROCEEDINGS{10.1007/978-3-319-95153-9_14,
  author={Atnafu, Selamawet and Conci, Nicola},
  title={Synchronized Video and Motion Capture Dataset and Quantitative Evaluation of Vision Based Skeleton Tracking Methods for Robotic Action Imitation},
  booktitle={Information and Communication Technology for Development for Africa. First International Conference, ICT4DA 2017, Bahir Dar, Ethiopia, September 25--27, 2017, Proceedings},
  series={ICT4DA},
  year={2018},
  month={7},
  keywords={Joint angle; Accuracy; Tracking ability; Human motion dataset; 3D camera; Ground truth},
  doi={10.1007/978-3-319-95153-9_14}
}
- Selamawet Atnafu
- Nicola Conci
Year: 2018
ICT4DA
Springer
DOI: 10.1007/978-3-319-95153-9_14
Abstract
Marker-less skeleton tracking methods are widely used in applications such as computer animation, human action recognition, human-robot collaboration, and humanoid robot motion control. For robot motion control in particular, vision-based tracking is an attractive solution, provided the humanoid's 3D camera is paired with a robust and accurate tracking algorithm. In this paper we quantitatively evaluate two vision-based marker-less skeleton tracking algorithms (the first, Igalia's Skeltrack skeleton tracking; the second, an adaptable and customizable method that combines color and depth information from the Kinect) and perform a comparative analysis of their upper-body tracking results. We have generated a common dataset of human motions by synchronizing an XSENS 3D motion capture system, used as ground truth, with video recordings from a 3D sensor device. The dataset can also be used to evaluate other full-body skeleton tracking algorithms. In addition, a set of evaluation metrics is presented.
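The abstract's evaluation rests on comparing joint angles from a tracked skeleton against motion-capture ground truth. As a minimal sketch of how such a joint-angle error metric could be computed (the joint names, data layout, and degree-based error below are illustrative assumptions, not the paper's actual protocol):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by the 3D points a-b-c."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards against rounding

def mean_abs_angle_error(tracked, ground_truth, triplets):
    """Mean absolute joint-angle error in degrees over all frames.

    tracked / ground_truth: dicts mapping joint name -> (T, 3) position arrays
    (hypothetical layout). triplets: (parent, joint, child) name tuples
    defining each angle, e.g. ("shoulder", "elbow", "wrist").
    """
    errs = []
    for parent, joint, child in triplets:
        for t in range(len(tracked[joint])):
            est = joint_angle(tracked[parent][t], tracked[joint][t], tracked[child][t])
            ref = joint_angle(ground_truth[parent][t], ground_truth[joint][t],
                              ground_truth[child][t])
            errs.append(abs(np.degrees(est - ref)))
    return float(np.mean(errs))
```

Per-frame angle differences like these can then be aggregated per joint or per motion sequence when comparing the two trackers against the XSENS ground truth.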