Semantic Annotation for the CMU-MMAC Dataset
Providing ground truth is essential for activity recognition for three reasons: to apply supervised learning methods, to provide context information for knowledge-based methods, and to quantify recognition performance. Semantic annotation extends simple symbolic labelling by assigning semantic meaning to each label, enabling further reasoning. We create semantic annotation for three of the five sub-datasets in the CMU grand challenge dataset (CMU-MMAC), which is often cited but, due to missing and incomplete annotation, almost never used. The CMU-MMAC consists of five sub-datasets (Brownie, Sandwich, Eggs, Salad, Pizza), each containing recorded sensor data from one food preparation task. The dataset covers 55 subjects, each of whom participated in several sub-experiments. While executing the assigned task, the subjects were recorded with five cameras and multiple sensors.
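To illustrate the difference between plain symbolic labelling and semantic annotation, the following sketch represents each label as a structured tuple of verb, object, and tool, which permits simple reasoning such as querying all actions involving a given object. The schema and label names here are purely illustrative assumptions, not the actual CMU-MMAC annotation format:

```python
from dataclasses import dataclass

# Hypothetical semantic annotation schema (illustrative only; the real
# CMU-MMAC annotation format may differ).
@dataclass(frozen=True)
class SemanticLabel:
    verb: str    # the action performed
    obj: str     # the object acted upon
    tool: str    # the tool used, or "hand"

# A plain symbolic labelling carries no internal structure:
plain_labels = ["crack_egg", "stir_batter", "pour_batter"]

# The semantic version decomposes each label, enabling further reasoning:
annotation = [
    SemanticLabel("crack", "egg", "hand"),
    SemanticLabel("stir", "batter", "fork"),
    SemanticLabel("pour", "batter", "hand"),
]

def actions_on(obj, labels):
    """Return the verbs of all actions that involve a given object."""
    return [a.verb for a in labels if a.obj == obj]

print(actions_on("batter", annotation))  # → ['stir', 'pour']
```

With plain string labels, a query like "which actions involve the batter?" requires brittle string matching; with semantic labels, it becomes a direct lookup over the annotation's structure.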
The produced annotation is publicly available to enable further use of the CMU grand challenge dataset. The annotation for three of the five sub-datasets (Brownie, Sandwich, and Eggs) can be downloaded here. A workshop paper describing the annotation process can be found here, and an extended journal paper can be found here.