1st Colloquium

We invited Dr. Angelica Lim as guest speaker for a talk, followed by exchange and discussion.

Date: Friday, April 25, 16:30–18:30
Venue: Embodiment Informatics Doctoral Program "Kobo" (Workshop)
3F Shinjuku Lambdax Bldg., 2-4-12 Okubo, Shinjuku-ku, Tokyo
Speaker: Dr. Angelica Lim
Eligibility: Master's and doctoral students, Faculty of Science and Engineering, Waseda University
Capacity: 50
Application: To register, please e-mail the address below.
leading-sn-info_at_list.waseda.jp (replace "_at_" with "@")
Attn: Embodiment Informatics Doctoral Program Office

Talk Details

Speaker: Angelica Lim,
Visiting Researcher, Honda Research Institute Japan
Post-doctoral Researcher, Kyoto University
Title: Multimodal Emotional Intelligence for Robots inspired by Infant Development
Abstract: Could a robot be moved by music? How does music move *us*? In this talk, I will first introduce the SIRE model, which describes a common code underlying voice, music and movement. The SIRE model describes an emotion in terms of dynamics — Speed, Intensity, irRegularity and Extent. Using automatic analysis of human databases and perception studies with a humanoid robot, we found combinations of these parameters that underlie basic emotions across multiple modalities. Secondly, I will present a model for the development of this multimodal emotional intelligence (MEI). I implemented a proof-of-concept robot that trains a statistical SIRE model based on real-time interactions with human caregivers. The robot synchronized with caregivers through voice and movement dynamics, associating vocalizations with its own internal physical state (e.g. battery energy levels). Our experiments show that a robot interacting in motherese conditions of comfort and praise associates novel happy voices with physical flourishing 90% of the time, and sad voices with distress 84% of the time. Furthermore, interaction in the attention and prohibition conditions provides the ability to later recognize fear dynamics and, to some extent, anger.
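The abstract's core idea — that emotions across voice, music and movement share a common dynamics code — can be sketched as points in a four-dimensional SIRE space. The prototype coordinates and the nearest-neighbor rule below are illustrative assumptions for the sketch, not the measured parameter combinations from the talk:

```python
import math
from dataclasses import dataclass


@dataclass
class SIRE:
    """A point in SIRE space: Speed, Intensity, irRegularity, Extent (each 0..1)."""
    speed: float
    intensity: float
    irregularity: float
    extent: float

    def distance(self, other: "SIRE") -> float:
        # Euclidean distance between two SIRE vectors.
        return math.dist(
            (self.speed, self.intensity, self.irregularity, self.extent),
            (other.speed, other.intensity, other.irregularity, other.extent),
        )


# Hypothetical emotion prototypes in SIRE space (placements are assumptions,
# chosen only to make the geometry of the model concrete).
PROTOTYPES = {
    "happiness": SIRE(speed=0.8, intensity=0.6, irregularity=0.4, extent=0.7),
    "sadness":   SIRE(speed=0.2, intensity=0.2, irregularity=0.2, extent=0.3),
    "anger":     SIRE(speed=0.7, intensity=0.9, irregularity=0.7, extent=0.6),
    "fear":      SIRE(speed=0.9, intensity=0.5, irregularity=0.9, extent=0.4),
}


def classify(sample: SIRE) -> str:
    """Label a sample (from any modality) with its nearest emotion prototype."""
    return min(PROTOTYPES, key=lambda emotion: PROTOTYPES[emotion].distance(sample))
```

Because the features are modality-independent dynamics rather than, say, pitch or joint angles, the same `classify` call could in principle be fed vectors extracted from a voice, a melody, or a gesture — which is the cross-modal point the SIRE model makes.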

