Research Article
Multimodality Sensing for Eating Recognition
@INPROCEEDINGS{10.4108/eai.16-5-2016.2263281,
  author={Christopher Merck and Christina Maher and Mark Mirtchouk and Min Zheng and Yuxiao Huang and Samantha Kleinberg},
  title={Multimodality Sensing for Eating Recognition},
  proceedings={10th EAI International Conference on Pervasive Computing Technologies for Healthcare},
  publisher={ACM},
  proceedings_a={PERVASIVEHEALTH},
  year={2016},
  month={6},
  keywords={eating recognition acoustic and motion sensing},
  doi={10.4108/eai.16-5-2016.2263281}
}
- Christopher Merck
- Christina Maher
- Mark Mirtchouk
- Min Zheng
- Yuxiao Huang
- Samantha Kleinberg
Year: 2016
PERVASIVEHEALTH
EAI
DOI: 10.4108/eai.16-5-2016.2263281
Abstract
While many sensors can monitor physical activity, there is no device that can unobtrusively measure eating at the same level of detail. Yet tracking and reacting to food consumption is key to managing many chronic diseases such as obesity and diabetes. Eating recognition has primarily used a single sensor at a time in a constrained environment, but sensors may fail and each may pick up different types of eating. We present a multi-modality study of eating recognition, which combines head and wrist motion (Google Glass, smartwatches on each wrist) with audio (a custom earbud microphone). We collect 72 hours of data from 6 participants wearing all sensors and eating an unrestricted set of foods, and annotate video recordings to obtain ground truth. Using our noise cancellation method, audio sensing alone achieved 92% precision and 89% recall in finding meals, while motion sensing was needed to find individual intakes.
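For reference, the precision and recall figures reported above presumably follow the standard event-level definitions, where a detected meal counts as a true positive when it matches a meal in the video-annotated ground truth; the exact matching criterion is specified in the paper itself. A minimal statement of these metrics, with TP, FP, and FN counting correctly detected, spuriously detected, and missed meals respectively:

\[
  \mathrm{precision} = \frac{TP}{TP + FP}, \qquad
  \mathrm{recall} = \frac{TP}{TP + FN}
\]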