
Affective classification model based on emotional user experience and visual markers in YouTube video

Journal: International Journal of Advanced Technology and Engineering Exploration (IJATEE) (Vol.8, No. 81)

Page : 970-988

Keywords : Emotional UX; Human-computer interaction; Kansei; Visual markers; YouTube video


Abstract

A video is composed of rendered elements such as text, audio, and visual elements. It may convey messages that emotionally engage viewers through embedded elements that demand visual attention, referred to as Visual Markers (VM). However, little attention has been paid to VM, particularly in determining which VM influence viewers' emotional experience. A lack of understanding of VM and their impact on viewers' emotional experience may have negative consequences and hamper efficient video classification and filtering. This is crucial when, for instance, a YouTube video is used for a malicious agenda. To fill this gap, this research was conducted to identify VM in Extremist YouTube Videos (EYV), to determine significant viewers' emotional responses upon watching EYV, and to develop an affective classification model based on emotional User Experience (UX) and VM in YouTube videos. The research conducted a Kansei evaluation using 20 YouTube video specimens with 80 respondents. Multivariate analysis was performed to determine the structure of emotions, to examine the relationship between VM and emotional responses, and to classify the emotional responses and influential VM. The results enabled this research to develop an affective classification model comprising three emotional dimensions: offensive, intrigue, and awkward. The model contributes a new understanding to the body of knowledge on emotionally evocative video elements and provides insights for authorities, policy makers, and other stakeholders to manage the classification of emotionally evocative video. It could also serve as a basis for formulating an algorithm to filter video content. Although the model was developed under certain limitations, it lends some novelty by linking affect to VM in video classification. Future work could enhance its applicability by widening the scope and population of subjects and instruments. Additionally, video producers could extend the model to produce videos capable of evoking a targeted emotion in viewers.
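To make the analysis pipeline concrete, the sketch below shows how a multivariate reduction of Kansei-style ratings into a small number of emotional dimensions might look. This is a minimal illustration only: the abstract does not disclose the actual data, emotion-word checklist, or statistical software, so the matrix sizes, the synthetic ratings, and the use of SVD-based principal component analysis are all assumptions standing in for the paper's multivariate analysis.

```python
import numpy as np

# Hypothetical illustration: semantic-differential ratings (1-5 scale) of
# 20 video specimens on 8 placeholder emotion words. The real study used
# 80 respondents and its own instrument; these numbers are assumptions.
rng = np.random.default_rng(0)
n_videos, n_words = 20, 8
ratings = rng.integers(1, 6, size=(n_videos, n_words)).astype(float)

# Centre each emotion word, then use SVD (equivalent to PCA) to extract
# the dominant latent dimensions underlying the ratings.
centred = ratings - ratings.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)

# Keep three components, mirroring the paper's three-dimensional model
# (offensive, intrigue, awkward); here the axes are unlabelled, since
# labelling requires inspecting the component loadings in Vt.
scores = centred @ Vt[:3].T                 # each video's coordinates on 3 axes
explained = (s[:3] ** 2) / (s ** 2).sum()   # variance share of the 3 axes

print(scores.shape)  # (20, 3)
```

In a real Kansei evaluation, the retained components would then be interpreted by examining which emotion words load heavily on each axis, and videos could be classified by their scores along those axes.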

Last modified: 2021-10-02 16:00:54