Emotion and mood recognition plays a key role in human-robot interaction, especially in the context of socially assistive robotics. Mood-aware robots could serve as companions and social assistants for older adults and for people affected by depression and other mood disorders. An interesting option for continuously tracking a user's mood is the use of wearable and mobile devices. However, classifying mood from physiological and kinematic data remains a challenge due to inter-subject differences: on the one hand, 'one-fits-all' classification approaches usually achieve lower accuracy than person-specific methods; on the other hand, personalized models generally require a large amount of data from a single subject to be trained and therefore become effective only after long acquisition periods. In this paper, we propose a deep learning approach for mood recognition from a publicly available dataset that includes gyroscope, accelerometer, and heart-rate data. We propose the use of long short-term memory (LSTM) networks, testing them both as classifiers and as feature extractors in hybrid models. We compared their performance both against and in conjunction with traditional machine learning approaches, namely support vector machines (SVM) and Gaussian mixture models (GMM). We also considered transfer learning strategies to reduce the amount of personal data needed to train the model. Our results show that LSTMs significantly improve classification accuracy with respect to traditional machine learning approaches, especially when employed as feature extractors and combined with an SVM. However, we observed that transfer learning did not significantly boost the training of a personalized model.
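The hybrid pipeline described above (an LSTM that summarizes a multichannel wearable window into a fixed-length feature vector, followed by an SVM classifier) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the LSTM here is a single NumPy forward pass with random, untrained weights, the data are synthetic windows with seven channels (3-axis accelerometer, 3-axis gyroscope, heart rate), and the mood-dependent heart-rate shift is an invented toy signal.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMFeatureExtractor:
    """Minimal NumPy LSTM forward pass (untrained, random weights).

    The final hidden state serves as a fixed-length feature vector,
    mirroring the 'LSTM as feature extractor' role in the hybrid model.
    """
    def __init__(self, n_in, n_hidden, rng):
        self.n_hidden = n_hidden
        # Stacked weights for the four gates: input, forget, cell, output.
        self.W = rng.normal(0.0, 0.1, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)

    def extract(self, seq):
        """seq: array of shape (T, n_in); returns the last hidden state."""
        H = self.n_hidden
        h = np.zeros(H)
        c = np.zeros(H)
        for x_t in seq:
            z = self.W @ np.concatenate([x_t, h]) + self.b
            i, f = sigmoid(z[:H]), sigmoid(z[H:2*H])
            g, o = np.tanh(z[2*H:3*H]), sigmoid(z[3*H:])
            c = f * c + i * g
            h = o * np.tanh(c)
        return h

# Synthetic wearable windows: 50 timesteps x 7 channels
# (3 accelerometer + 3 gyroscope + 1 heart rate). The heart-rate
# channel is shifted for the positive mood class (toy assumption).
T, n_in, n_hidden = 50, 7, 16
def make_window(mood):
    w = rng.normal(0.0, 1.0, (T, n_in))
    w[:, 6] += 2.0 * mood
    return w

y = np.array([0] * 40 + [1] * 40)
X_seq = [make_window(m) for m in y]

# Stage 1: LSTM turns each variable-length window into a 16-dim vector.
extractor = LSTMFeatureExtractor(n_in, n_hidden, rng)
X = np.vstack([extractor.extract(s) for s in X_seq])

# Stage 2: an SVM classifies the extracted features.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy on synthetic data: {acc:.2f}")
```

In a realistic setup the LSTM would of course be trained end-to-end (or pre-trained on the pooled dataset, as in the transfer learning scenario) before its hidden states are reused as SVM inputs; the sketch only shows how the two stages connect.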
|Title:||A Deep Learning Approach for Mood Recognition from Wearable Data|
|Publication date:||2020|
|Appears in type:||4.1 Contribution in conference proceedings|