Common Sense Based Joint Training of Human Activity Recognizers

Shiaokai Wang, William Pentney, Ana-Maria Popescu, Tanzeem Choudhury, Matthai Philipose

Given sensors to detect object use, commonsense priors of object usage in activities can reduce the need for labeled data when learning activity models. It is often useful, however, to understand how an object is being used, i.e., the action performed on it. We show how to add personal sensor data (e.g., from accelerometers) to capture this detail, with little labeling and feature-selection overhead. By synchronizing the personal sensor data with object-use data, it is possible to use easily specified commonsense models to minimize labeling overhead. Further, combining a generative commonsense model of activity with a discriminative model of actions can automate feature selection. On observed activity data, automatically trained action classifiers give 40/85% precision/recall on 10 actions. Adding actions to object use alone improves precision/recall from 76/85% to 81/90% over 12 activities.
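As a rough illustration of the idea of fusing commonsense object-use priors with action observations, the following is a minimal sketch, not the paper's actual model. All activity names, objects, actions, and probability values here are hypothetical; the real system uses mined commonsense priors and a trained discriminative action classifier, whereas this toy example simply combines hand-specified likelihood tables under a naive conditional-independence assumption.

```python
import math

# Hypothetical commonsense priors: P(object used | activity).
# These stand in for easily specified commonsense models; the
# numbers are invented for illustration.
OBJECT_PRIORS = {
    "making tea": {"kettle": 0.9, "cup": 0.8, "toothbrush": 0.01},
    "brushing teeth": {"kettle": 0.02, "cup": 0.4, "toothbrush": 0.95},
}

# Hypothetical action likelihoods: P(action | activity), standing in
# for the output of a discriminative classifier over personal sensor
# (e.g., accelerometer) features.
ACTION_PRIORS = {
    "making tea": {"pour": 0.7, "scrub": 0.05},
    "brushing teeth": {"pour": 0.1, "scrub": 0.8},
}

def score(activity, objects, actions):
    """Log-score an activity given observed object uses and actions,
    treating observations as conditionally independent given the
    activity (a naive Bayes combination)."""
    s = 0.0
    for obj in objects:
        s += math.log(OBJECT_PRIORS[activity].get(obj, 1e-3))
    for act in actions:
        s += math.log(ACTION_PRIORS[activity].get(act, 1e-3))
    return s

def classify(objects, actions):
    """Return the activity with the highest combined log-score."""
    return max(OBJECT_PRIORS, key=lambda a: score(a, objects, actions))

print(classify(["kettle", "cup"], ["pour"]))  # -> making tea
print(classify(["cup"], ["scrub"]))           # -> brushing teeth
```

The point of the sketch is that action evidence can disambiguate activities that share objects: "cup" alone is weakly informative, but combining it with the observed action "scrub" shifts the decision, mirroring the paper's finding that adding actions to object use improves activity recognition.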