
An Effective Data Fusion Methodology for Multi-modal Emotion Recognition: A Survey

Journal: International Journal of Emerging Trends in Engineering Research (IJETER) (Vol.12, No. 7)

Publication Date:

Authors :

Page : 90-107

Keywords : Multimodal Emotion Recognition (MER); Deep Learning; Data Fusion; Speech Analysis; Text Analysis; Facial Expression Recognition; IEMOCAP; MELD; Hybrid Fusion;

Source : Download | Find it from : Google Scholar

Abstract

Emotion recognition is a pivotal area of research with applications spanning education, healthcare, and intelligent customer service. Multimodal emotion recognition (MER) has emerged as a superior approach by integrating multiple modalities such as speech, text, and facial expressions, offering enhanced accuracy and robustness over unimodal methods. This paper reviews the evolution and current state of MER, highlighting its significance, challenges, and methodologies. We examine various datasets, including IEMOCAP and MELD, and provide a comparative analysis of their strengths and limitations. The literature review covers recent advancements in deep learning techniques, focusing on fusion strategies such as early, late, and hybrid fusion. Identified gaps include data redundancy, feature extraction complexity, and real-time detection. Our proposed methodology leverages deep learning for feature extraction and a hybrid fusion approach to improve emotion detection accuracy. This research aims to guide future studies in addressing current limitations and advancing the field of MER. The main aims of this paper are to review recent methodologies in multimodal emotion recognition, analyze different data fusion techniques, and identify challenges and research gaps.
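
To illustrate the hybrid fusion strategy mentioned in the abstract, the following is a minimal sketch in PyTorch, assuming pre-extracted speech, text, and facial feature vectors. The class name HybridFusionClassifier, the feature dimensions, and the averaging of feature-level and decision-level predictions are illustrative assumptions, not the authors' implementation.

# Minimal sketch of hybrid fusion for multimodal emotion recognition.
# Assumes pre-extracted feature vectors per modality; all dimensions
# and design choices here are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class HybridFusionClassifier(nn.Module):
    def __init__(self, speech_dim=128, text_dim=300, face_dim=256,
                 hidden_dim=128, num_emotions=6):
        super().__init__()
        # Per-modality encoders producing a shared hidden size.
        self.speech_enc = nn.Sequential(nn.Linear(speech_dim, hidden_dim), nn.ReLU())
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        self.face_enc = nn.Sequential(nn.Linear(face_dim, hidden_dim), nn.ReLU())
        # Early (feature-level) fusion: classify the concatenated representation.
        self.joint_head = nn.Linear(3 * hidden_dim, num_emotions)
        # Late (decision-level) fusion: one classifier head per modality.
        self.speech_head = nn.Linear(hidden_dim, num_emotions)
        self.text_head = nn.Linear(hidden_dim, num_emotions)
        self.face_head = nn.Linear(hidden_dim, num_emotions)

    def forward(self, speech, text, face):
        hs = self.speech_enc(speech)
        ht = self.text_enc(text)
        hf = self.face_enc(face)
        joint_logits = self.joint_head(torch.cat([hs, ht, hf], dim=-1))
        late_logits = (self.speech_head(hs) + self.text_head(ht) + self.face_head(hf)) / 3
        # Hybrid fusion: combine feature-level and decision-level predictions.
        return (joint_logits + late_logits) / 2

# Example usage with a random batch of 4 samples.
model = HybridFusionClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 300), torch.randn(4, 256))
print(logits.shape)  # torch.Size([4, 6])

Averaging the two branches is only one way to combine them; a learned weighting or an attention mechanism over modalities would be a natural variation of the same hybrid scheme.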

Last modified: 2024-07-19 16:32:52