A Novel Multimodal Emotion Recognition Approach for Affective Human Robot Interaction

Felipe Cid Burgos, Luis J. Manso, Pedro Nunez

Research output: Chapter in Book/Published conference output › Conference publication

Abstract

Facial expressions and speech provide emotional information about the user through multiple communication channels. In this paper, a novel multimodal emotion recognition system based on visual and auditory information processing is proposed. The proposed approach is used in real affective human-robot communication to estimate five emotional states (i.e., happiness, anger, fear, sadness, and neutral), and it consists of two subsystems with a similar structure. The first subsystem achieves robust facial feature extraction based on filters applied consecutively to the edge image and the use of a Dynamic Bayesian Classifier. A similar classifier is used in the second subsystem, where the input is a set of speech descriptors such as speech rate, energy, and pitch. Both subsystems are finally combined in real time. The results of this multimodal approach show the robustness and accuracy of the methodology with respect to single-modality emotion recognition systems.
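The abstract does not specify how the two subsystems are combined; as a rough illustration of the late-fusion idea it describes, the following Python sketch merges hypothetical per-modality posteriors over the five emotions with a weighted geometric mean. The function name, weights, and probability values are illustrative assumptions, not the authors' actual fusion rule.

```python
# Minimal late-fusion sketch (illustrative only, not the authors' exact method):
# each modality yields a posterior over the five emotions, and the two posteriors
# are combined by a normalized weighted geometric mean.
import numpy as np

EMOTIONS = ["happiness", "anger", "fear", "sadness", "neutral"]

def fuse_posteriors(p_face, p_speech, w_face=0.6, w_speech=0.4):
    """Combine face and speech emotion posteriors; weights are assumed values."""
    p_face = np.asarray(p_face, dtype=float)
    p_speech = np.asarray(p_speech, dtype=float)
    fused = (p_face ** w_face) * (p_speech ** w_speech)
    return fused / fused.sum()

# Example: the face channel strongly suggests happiness, the speech channel is less certain.
p_face = [0.70, 0.05, 0.05, 0.05, 0.15]
p_speech = [0.40, 0.10, 0.10, 0.10, 0.30]
fused = fuse_posteriors(p_face, p_speech)
print(EMOTIONS[int(np.argmax(fused))])  # -> "happiness"
```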
Original language: English
Title of host publication: Proceedings of the Workshop on Multimodal and Semantics for Robotics Systems
Publisher: CEUR-WS.org
Number of pages: 9
ISSN (Electronic): 1613-0073, 2015
Publication status: Published - Jun 2015
Event: MuSRobS 2015 - Hamburg, Germany
Duration: 1 Oct 2015 - 1 Oct 2015

Conference

Conference: MuSRobS 2015
Country/Territory: Germany
City: Hamburg
Period: 1/10/15 - 1/10/15
