We invited Angelica Lim as our speaker; after her talk, we held an exchange and discussion session.
Speaker: Angelica Lim, Visiting Researcher, Honda Research Institute Japan; Post-doctoral Researcher, Kyoto University
Title: Multimodal Emotional Intelligence for Robots inspired by Infant Development
Abstract: Could a robot be moved by music? How does music move *us*? In this talk, I will first introduce the SIRE model, which describes a common code underlying voice, music, and movement: an emotion is characterized in terms of its dynamics of Speed, Intensity, irRegularity, and Extent. Using automatic analysis of human databases and perception studies with a humanoid robot, we found combinations of these parameters that underlie basic emotions across multiple modalities. Secondly, I will present a model for the development of this multimodal emotional intelligence (MEI). I implemented a proof-of-concept robot that trains a statistical SIRE model based on real-time interactions with human caregivers. The robot synchronized with caregivers through voice and movement dynamics, associating vocalizations with its own internal physical state (e.g., battery energy levels). Our experiments show that a robot interacting in motherese conditions of comfort and praise associates novel happy voices with physical flourishing 90% of the time, and sad voices with distress 84% of the time. Furthermore, interaction in the attention and prohibition conditions provides the ability to later recognize fear dynamics and, to some extent, anger.
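To make the SIRE idea concrete, here is a minimal sketch of emotion recognition over a four-parameter dynamics space. It is not the talk's actual implementation: the `SIRE` class, the diagonal-Gaussian-per-emotion statistical model, the function names, and the toy training values are all illustrative assumptions.

```python
from dataclasses import dataclass, astuple
from statistics import fmean
import math

@dataclass(frozen=True)
class SIRE:
    """One observation in the 4-D SIRE space (values assumed normalized to [0, 1])."""
    speed: float
    intensity: float
    irregularity: float
    extent: float

def fit_gaussian(samples):
    """Fit an independent (diagonal) Gaussian to a list of SIRE samples."""
    dims = list(zip(*(astuple(s) for s in samples)))           # transpose to per-parameter columns
    means = [fmean(d) for d in dims]
    variances = [max(fmean((x - m) ** 2 for x in d), 1e-6)     # floor variance to avoid div-by-zero
                 for d, m in zip(dims, means)]
    return means, variances

def log_likelihood(obs, model):
    """Log-probability of one SIRE observation under a diagonal Gaussian."""
    means, variances = model
    return sum(
        -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
        for x, m, v in zip(astuple(obs), means, variances)
    )

def classify(obs, models):
    """Return the emotion label whose Gaussian best explains the observation."""
    return max(models, key=lambda label: log_likelihood(obs, models[label]))

# Toy training data, loosely following the intuition that happy dynamics are
# fast/intense/wide and sad dynamics slow/small; the numbers are invented.
training = {
    "happy": [SIRE(0.8, 0.7, 0.3, 0.8), SIRE(0.9, 0.8, 0.4, 0.7)],
    "sad":   [SIRE(0.2, 0.2, 0.1, 0.2), SIRE(0.3, 0.1, 0.2, 0.3)],
}
models = {label: fit_gaussian(obs) for label, obs in training.items()}
print(classify(SIRE(0.85, 0.75, 0.35, 0.75), models))  # -> "happy"
```

In the developmental setting described in the abstract, the labels would come not from a hand-tagged corpus but from the robot's own internal physical state during caregiver interaction.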