Towards multimodal affective expression: merging facial expressions and body motion into emotion

Diego R. Faria, Fernanda C. C. Faria, Cristiano Premebida

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Affect recognition plays an important role in everyday human life, and expressions are a substantial means of communication. Humans rely on different channels of information to understand the affective messages communicated by others. Similarly, an automatic affect recognition system is expected to analyse different types of emotion expression. In this respect, an important issue to be addressed is the fusion of different channels of expression, taking into account the relationship and correlation across modalities. In this work, affective facial and bodily motion expressions are addressed as channels for the communication of affect and combined into an emotion recognition system. A probabilistic approach is used to combine features from the two modalities, incorporating geometric facial-expression features and skeleton-based body motion features. Preliminary results show that the presented approach has potential for automatic emotion recognition and can be used for human-robot interaction.
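
The abstract does not specify the fusion rule, so the sketch below is a rough, hypothetical illustration only: a common probabilistic late-fusion scheme in which each modality (facial features, body-motion features) produces a posterior over the same emotion classes, and the two posteriors are merged by a weighted product rule. The class list, posteriors, and weights are all invented for the example.

    import numpy as np

    # Hypothetical emotion classes shared by both modality classifiers.
    EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]

    def fuse_posteriors(p_face, p_body, w_face=0.5, w_body=0.5):
        # Weighted product (log-linear) fusion of two class posteriors.
        log_p = w_face * np.log(p_face + 1e-12) + w_body * np.log(p_body + 1e-12)
        p = np.exp(log_p - log_p.max())  # subtract max for numerical stability
        return p / p.sum()

    # Invented posteriors: the face classifier is confident about "happy",
    # the body-motion classifier is less certain.
    p_face = np.array([0.70, 0.05, 0.05, 0.15, 0.05])
    p_body = np.array([0.40, 0.10, 0.10, 0.20, 0.20])
    fused = fuse_posteriors(p_face, p_body)
    print(EMOTIONS[int(fused.argmax())], fused.round(3))

With equal weights this reduces to a geometric mean of the two posteriors; in practice the per-modality weights would be tuned on validation data.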
Original language: English
Title of host publication: IEEE RO-MAN'17: Workshop Proceedings on Artificial Perception, Machine Learning and Datasets for Human-Robot Interaction (ARMADA'17)
Publisher: IEEE
Pages: 16-20
Number of pages: 5
Publication status: Published - 18 Sep 2017

Bibliographical note

Copyright: IEEE & ARMADA 2017

Keywords

  • Emotion recognition
  • Probabilistic approach
  • Human-robot interaction

Cite this

Faria, D. R., Faria, F. C. C., & Premebida, C. (2017). Towards multimodal affective expression: merging facial expressions and body motion into emotion. In IEEE RO-MAN'17: Workshop Proceedings on Artificial Perception, Machine Learning and Datasets for Human-Robot Interaction (ARMADA'17) (pp. 16-20). IEEE.
@inproceedings{0fb9a6a7b29f461ca14a265c75d43553,
title = "Towards multimodal affective expression: merging facial expressions and body motion into emotion",
abstract = "Affect recognition plays an important role in everyday human life, and expressions are a substantial means of communication. Humans rely on different channels of information to understand the affective messages communicated by others. Similarly, an automatic affect recognition system is expected to analyse different types of emotion expression. In this respect, an important issue to be addressed is the fusion of different channels of expression, taking into account the relationship and correlation across modalities. In this work, affective facial and bodily motion expressions are addressed as channels for the communication of affect and combined into an emotion recognition system. A probabilistic approach is used to combine features from the two modalities, incorporating geometric facial-expression features and skeleton-based body motion features. Preliminary results show that the presented approach has potential for automatic emotion recognition and can be used for human-robot interaction.",
keywords = "Emotion recognition, Probabilistic approach, Human-robot interaction",
author = "Faria, {Diego R.} and Faria, {Fernanda C. C.} and Cristiano Premebida",
note = "Copyright: IEEE & ARMADA 2017",
year = "2017",
month = "9",
day = "18",
language = "English",
pages = "16--20",
booktitle = "IEEE RO-MAN'17: Workshop Proceedings on Artificial Perception, Machine Learning and Datasets for Human-Robot Interaction (ARMADA'17)",
publisher = "IEEE",
address = "United States",
}
