Automatic Multimodal Emotion Recognition Using Facial Expression, Voice, and Text
Hélène Tran
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Doctoral Consortium. Pages 5881-5882.
https://doi.org/10.24963/ijcai.2022/843
It has long been a dream for humans to interact with a machine as we would with a person: one that understands us, advises us, and looks after us without human supervision. Despite being efficient at logical reasoning, current advanced systems lack empathy and user understanding. Estimating the user's emotion could greatly help the machine identify the user's needs and adapt its behaviour accordingly. This research project aims to develop an automatic emotion recognition system based on facial expression, voice, and words. We expect to address the challenges related to multimodality, data complexity, and emotion representation.
Keywords:
Machine Learning (ML): General