To Improve Is to Change: Towards Improving Mood Prediction
by Learning Changes in Emotion
Soujanya Narayana
University of Canberra
ACT, Australia
soujanya.narayana@canberra.edu.au
Ramanathan Subramanian
University of Canberra
ACT, Australia
ram.subramanian@canberra.edu.au
Ibrahim Radwan
University of Canberra
ACT, Australia
ibrahim.radwan@canberra.edu.au
Roland Goecke
University of Canberra
ACT, Australia
roland.goecke@ieee.org
Figure 1: Problem Illustration: Figure depicts emotion changes in an input video sample having a negative mood label. The top colour bar denotes per-frame valence values for the video, while the bottom colour bar depicts emotional valence change (Δ) labels over a window of five frames (best viewed in colour).
ABSTRACT
Although the terms mood and emotion are closely related and often used interchangeably, they are distinguished based on their duration, intensity and attribution. To date, hardly any computational models have (a) examined mood recognition, and (b) modelled the interplay between mood and emotional state in their analysis. In this paper, as a first step towards mood prediction, we propose a framework that utilises both dominant emotion (or mood) labels and emotional change labels on the AFEW-VA database. Experiments evaluating unimodal (trained only using mood labels) and multimodal (trained with both mood and emotion change labels) convolutional neural networks confirm that incorporating emotional change information in the network training process can significantly improve the mood prediction performance, thus highlighting the importance of modelling emotion and mood simultaneously for improved performance in affective state recognition.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
ICMI '22 Companion, November 7–11, 2022, Bengaluru, India
© 2022 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-9389-8/22/11…$15.00
https://doi.org/10.1145/3536220.3563685
CCS CONCEPTS
• Human-centered computing → Ambient intelligence; • Computing methodologies → Supervised learning by classification; • Applied computing → Psychology.
KEYWORDS
Mood; Emotion; Convolutional neural network; Unimodal; Multimodal
ACM Reference Format:
Soujanya Narayana, Ramanathan Subramanian, Ibrahim Radwan,
and Roland Goecke. 2022. To Improve Is to Change: Towards Improving
Mood Prediction by Learning Changes in Emotion. In INTERNATIONAL
CONFERENCE ON MULTIMODAL INTERACTION (ICMI ’22 Companion),
November 7–11, 2022, Bengaluru, India. ACM, New York, NY, USA, 6 pages.
https://doi.org/10.1145/3536220.3563685
1 INTRODUCTION
There is mounting evidence that emotions play an essential role in rational and intelligent behaviour. Besides contributing to a richer quality of interaction, they directly impact a person's ability to interact in an intelligent way [21]. Quite often, the terms mood and emotion are used interchangeably, although differences in duration, intensity and attribution exist. While emotion is a short-term affective state induced by a source which can last for a few minutes, mood refers to a longer-term affective state that can last for hours or even days, and be without a causal source [13]. Most research in affective computing has focused on inferring emotional states, while very little research has so far been devoted to automated mood recognition [15] or the joint modelling of the interplay between emotion and mood for improved affective state recognition.

Figure 2: Overview of the proposed mood recognition framework: an input video is fed to a single-branch CNN (mood labels only), a model-fusion approach (mood and delta labels), a two-branch CNN (mood and delta labels), and a teacher-student model, each yielding a mood classification.
Psychological studies on mood have made substantial progress. An eye-tracking study has revealed that positive mood results in better global information processing than a negative mood [23]. The authors in [22] have observed a mood-congruity effect, where positive mood hampers the recognition of mood-incongruent negative emotions and vice-versa. The mood-emotion loop is a theory that posits mood and emotion as distinct mechanisms, which affect each other repeatedly and continuously. This theory argues that mood is a high-level construct activating latent low-level states such as emotions [29]. Recognising the interactions between mood and emotion has the potential to lead to a better understanding of affective phenomena, such as mood disorders and emotional regulation.
In contrast, mood recognition has rarely been addressed from a computational perspective, and only a few studies have explored mood [15]. Body posture and movement correlates of mood have been explored in [27], where user mood was induced via musical stimuli and the authors observed that head posture and movements characterise happy and sad moods. Katsimerou et al. [15] have examined automatic mood prediction from recognised emotions, showing that clustered emotions in the valence-arousal space predict single moods much better than multiple moods within a video. Research on mood prediction has also neglected to investigate the interplay between mood and emotion, though the psychological literature recognises a relationship between the two [20].
From an affective computing viewpoint, developing a mood recognition framework requires ground-truth mood labels for model training, but only very few databases record the user mood (directly or indirectly via an observer). Widely used affective corpora, such as AFEW-VA [16], HUMAINE [9], SEMAINE [19] and DECAF [1], only contain dimensional and/or categorical emotion labels. One of the few datasets with mood ratings is EMMA [14], where the annotations represent the overall emotional impression of the human annotator (or observer) for the examined stimulus [14]. Machine learning approaches have been extensively used for inferring emotions from visual, acoustic, textual and neurophysiological data [4, 5, 17, 26, 28]. Contemporary studies emphasise the improved performance of multimodal approaches to the detection of emotional states vis-à-vis unimodal ones [8]. Recent studies characterise mood disorders, such as depression, by examining speech style, eye activity, and head pose [2, 3, 25]. Deng et al. [6] propose a multitask emotion recognition framework that can deal with missing labels by employing a teacher-student paradigm. Knowledge Distillation (KD) is a technique that enables the transfer of knowledge between two neural networks, unifying model compression and learning with privileged information [12, 18]. KD techniques have been employed for facial expression recognition, where the teacher has access to a fully visible face, whereas the student model only has access to occluded faces [10].
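To give a concrete sense of the distillation idea cited above [12, 18], the PyTorch sketch below blends softened teacher targets with hard-label cross-entropy in the standard KD fashion. The temperature, blend weight, class count and function name are illustrative assumptions, not the configuration used by the authors in this paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard KD loss: blend soft teacher targets with hard-label cross-entropy.

    T (temperature) and alpha (blend weight) are illustrative choices,
    not values reported in the paper.
    """
    # Soften teacher and student distributions with temperature T
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    # KL divergence between the softened distributions (scaled by T^2, per Hinton et al.)
    kd_term = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Ordinary hard-label supervision on the student's logits
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

# Example usage with random logits for a hypothetical 3-class mood problem (batch of 4)
if __name__ == "__main__":
    student = torch.randn(4, 3)
    teacher = torch.randn(4, 3)
    labels = torch.tensor([0, 2, 1, 0])
    print(distillation_loss(student, teacher, labels).item())
```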
While our research is ultimately aimed towards mood prediction and understanding the interplay between mood and emotions from video data, the present study is an initial step on this path. We use the AFEW-VA dataset to derive (a) dominant emotion labels, which refer to the emotion persisting over the largest number of consecutive frames (termed mood labels), and (b) Δ or emotion change labels, which represent the change in emotion over a fixed window size (a sketch of this label derivation is given after the contribution list below). Given the sparsity of in-the-wild data with mood annotations and the preliminary nature of this study, the dominant emotion labels are used here in lieu of actual mood labels. In the future, we will be using actual mood labels derived from expert annotators. Fig. 1 illustrates how emotion change is captured for an exemplar video clip, while Fig. 2 overviews our dominant emotion or mood prediction framework. A unimodal 3D Convolutional Neural Network (3D CNN) is trained using only mood labels, while a two-branch (multimodal) CNN model, a multi-layer perceptron, and a teacher-student model are evaluated for fusing emotion-mood information for mood prediction. Empirical evaluation reveals that incorporating emotion change information improves mood prediction performance by as much as 54%, confirming the salience of fine-grained emotional information for coarse-grained mood prediction. This study makes the following contributions:
• To the best of our knowledge, from a computational modelling perspective, this is the first study to examine mood prediction incorporating both mood and emotional information. Mood labels are derived from valence annotations, instead of subjective impressions provided by a human annotator.
• The experimental evaluation of multiple models shows that incorporating emotional change information is beneficial and can produce a significant improvement in mood prediction performance.
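As flagged above, the minimal sketch below illustrates one way the two label types could be computed from per-frame valence: the mood label as the valence sign persisting over the longest run of consecutive frames, and Δ labels as the quantised valence change over non-overlapping five-frame windows. The neutral-band threshold, the rise/fall/no-change quantisation, and the function names are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def mood_label(valence, neutral_band=0.5):
    """Dominant-emotion ('mood') label: the valence sign (-1/0/+1) that persists
    over the longest run of consecutive frames. neutral_band is an assumed threshold."""
    signs = np.where(valence > neutral_band, 1, np.where(valence < -neutral_band, -1, 0))
    best_sign, best_len, run_sign, run_len = signs[0], 1, signs[0], 1
    for s in signs[1:]:
        run_len = run_len + 1 if s == run_sign else 1
        run_sign = s
        if run_len > best_len:
            best_sign, best_len = run_sign, run_len
    return int(best_sign)

def delta_labels(valence, window=5):
    """Emotion-change (Δ) labels: valence change over non-overlapping windows of
    `window` frames, quantised to rise (+1), fall (-1) or no change (0)."""
    deltas = []
    for start in range(0, len(valence) - window + 1, window):
        change = valence[start + window - 1] - valence[start]
        deltas.append(int(np.sign(change)))
    return deltas

# Example: per-frame valence for a short clip (AFEW-VA annotates valence in [-10, 10])
valence = np.array([-3.0, -3.5, -4.0, -4.0, -5.0, -4.5, -4.0, -3.5, -3.0, -2.5])
print(mood_label(valence))    # -1 -> "Negative" mood, as in Fig. 1
print(delta_labels(valence))  # [-1, 1]: valence falls in the first window, rises in the second
```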
2 MATERIALS
2.1 Dataset
Here, the AFEW-VA [16] dataset, a subset of the AFEW dataset [7], comprising 600 video clips extracted from feature films at a rate of 25 frames per second, was used. Video clips in this dataset range from