MMEmAsis: multimodal emotion and sentiment analysis
Journal: Discrete and Continuous Models and Applied Computational Science (Vol. 32, No. 4)
Publication Date: 2025-04-10
Authors: Gleb Kiselev; Yaroslava Lubysheva; Daniil Weizenfeld
Pages: 370-379
Keywords: dataset; emotion analysis; multimodal data mining; artificial intelligence; machine learning; deep learning; neuroscience data mining
Abstract
The paper presents a new multimodal approach to analyzing a person's psycho-emotional state using nonlinear classifiers. The main modalities are the subject's speech and video recordings of facial expressions. Speech is digitized and transcribed with the Scribe library, and mood cues are then extracted with the Titanis sentiment analyzer from the FRC CSC RAS. For visual analysis, two different approaches were implemented: a pre-trained ResNet model for direct sentiment classification from facial expressions, and a deep learning model that integrates ResNet with a graph-based deep neural network for facial recognition. Both approaches faced challenges from environmental factors that affected the stability of the results. The second approach proved more flexible, offering adjustable classification vocabularies that facilitated post-deployment calibration. Integrating the text and visual data significantly improved the accuracy and reliability of the analysis of a person's psycho-emotional state.
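The abstract describes combining sentiment predictions from a text branch and a visual branch. One common way to do this is late fusion, where each branch produces per-class probabilities that are then merged. The sketch below illustrates that idea with a weighted average; the function names, labels, and the 0.6 text weight are illustrative assumptions, not the authors' implementation.

```python
# Minimal late-fusion sketch for multimodal sentiment analysis.
# All names, labels, and weights are illustrative assumptions,
# not the method described in the paper.

def fuse_predictions(text_probs, visual_probs, text_weight=0.6):
    """Weighted average of per-class sentiment probabilities from
    the text branch and the visual branch."""
    if text_probs.keys() != visual_probs.keys():
        raise ValueError("branches must share one classification vocabulary")
    w = text_weight
    fused = {label: w * text_probs[label] + (1 - w) * visual_probs[label]
             for label in text_probs}
    # Renormalize so the fused scores sum to 1.
    total = sum(fused.values())
    return {label: p / total for label, p in fused.items()}

def classify(fused):
    """Pick the label with the highest fused probability."""
    return max(fused, key=fused.get)

# Example: the text branch is confident the utterance is positive,
# while the visual branch leans neutral.
text_probs = {"negative": 0.1, "neutral": 0.2, "positive": 0.7}
visual_probs = {"negative": 0.2, "neutral": 0.5, "positive": 0.3}
fused = fuse_predictions(text_probs, visual_probs)
print(classify(fused))  # "positive" wins under these example scores
```

Sharing one classification vocabulary between the branches, as the check above enforces, is also what makes the adjustable-vocabulary calibration mentioned in the abstract possible: relabeling or merging classes can be done once, at the fusion stage.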