BlocksBot: Towards an Empathic Robot Offering Multi-modal Emotion Detection Based on a Distributed Hybrid System
Salutari, A.; Tarantino, L.; De Gasperis, G.
2022-01-01
Abstract
Studies show that people's expectations of robots and their behavior are similar to those regarding living beings, and that users ascribe human attributes, qualities, and capabilities to robots even when the robot is not conceived for social interaction. Indeed, the increasing availability of sensors able to capture situational data makes it possible to build adaptive systems that dynamically take users' and context information into account with unprecedented precision, thus exhibiting some degree of empathy and emotional intelligence. Modern robots can use their sensors, such as cameras and microphones, not only for their more traditional goals but also for classifying human emotional states, in order to emulate empathic behavior, put users at ease, and encourage them to continue the interaction. They can offer human-like communication over different verbal and non-verbal channels. However, since multi-modal emotion detection is a complex technique requiring a proper combination of all the derived data, handling it can be very demanding, and may be impossible for many machines because of hardware limitations or simply because of unaffordable battery power consumption, with an ultimate effect on usability, which can degrade to an unacceptable degree. In this paper we discuss how these problems have been addressed within the framework of the BlocksBot project and how its Hybrid Distributed approach allows such limitations to be overcome.