MULTI-HEAD CROSS-ATTENTIONAL PPG AND MOTION SIGNAL FUSION FOR HEART
RATE ESTIMATION
Panagiotis Kasnesis§†, Lazaros Toumanidis§†, Alessio Burrello*,
Christos Chatzigeorgiou§†, Charalampos Z. Patrikakis§†
§ThinGenious PC, Maroussi, Greece
†Department of Electrical and Electronics Engineering, University of West Attica, Greece
*Department of Electrical, Electronic and Information Engineering, University of Bologna, Italy
ABSTRACT
Nowadays, Heart Rate (HR) monitoring is a key feature of almost all wrist-worn devices exploiting photoplethysmography (PPG) sensors. However, arm movements degrade the performance of PPG-based HR tracking. This issue is usually addressed by fusing the PPG signal with data produced by inertial measurement units. To this end, deep learning algorithms have been proposed, but they are considered too complex to deploy on wearable devices and lack explainability. In
this work, we present a new deep learning model, PULSE,
which exploits temporal convolutions and multi-head cross-
attention to improve sensor fusion’s effectiveness and achieve
a step towards explainability. We evaluate the performance
of PULSE on three publicly available datasets, reducing the
mean absolute error by 7.56% on the most extensive available
dataset, PPG-DaLiA. Finally, we demonstrate the explainabil-
ity of PULSE and the benefits of applying attention modules
to PPG and motion data.
Index Terms—Deep Learning, Sensor Fusion, Heart
Rate Monitoring, Attention, Photoplethysmography
1. INTRODUCTION
In recent years, wrist-worn devices (e.g., smartwatches) have enabled 24-hour monitoring of the subject's vital signs thanks to miniaturized sensors, becoming increasingly popular in personalized health care and medical IoT applications [1]. One of the most important indices to monitor is Heart Rate (HR). Compared to first-generation monitoring devices, which exploit a simple 1-3 lead ECG connected through a chest strap, modern ones use photoplethysmographic (PPG) sensors, allowing HR monitoring to be integrated into smartwatches [2]. However, a limitation of PPG-based HR monitoring is the presence of motion artifacts (MAs). These are caused by variations of the sensor position on the wrist or by ambient light leaking into the gap between the sensor and the wrist. In the literature, this problem was first tackled with filtering approaches, which use the correlation between acceleration data and the PPG signal to cancel the noise and remove the MAs; the HR is then estimated from the cleaned signal [3, 4]. The critical limitation of these approaches is the high number of free hyper-parameters, which often limits their generalization.
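For illustration, the following minimal sketch shows one generic way such accelerometer-referenced artifact cancellation can be realized, here with a plain LMS adaptive filter; the filter length, step size, and the use of a single accelerometer axis are arbitrary choices for the example and do not reproduce the specific methods of [3, 4].

import numpy as np

def lms_cancel_motion(ppg, acc, n_taps=16, mu=0.01):
    # Remove motion-correlated components from a PPG window using an LMS
    # adaptive filter driven by one accelerometer axis as the noise reference.
    w = np.zeros(n_taps)                 # adaptive filter weights
    cleaned = np.zeros_like(ppg, dtype=float)
    for n in range(n_taps, len(ppg)):
        x = acc[n - n_taps:n][::-1]      # most recent reference samples
        y = w @ x                        # current estimate of the motion artifact
        e = ppg[n] - y                   # error = PPG with the artifact removed
        w = w + 2 * mu * e * x           # LMS weight update
        cleaned[n] = e
    return cleaned

# The HR can then be estimated from the dominant spectral peak of the
# cleaned signal, e.g., via an FFT over a sliding window.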
Deep learning approaches have been proposed to improve generalization, bringing promising results on different public datasets [5, 6, 7, 8]. On the other hand, these models lack explainability, since acceleration and PPG data are fused in a black box. Until now, little attention has been paid to recent Transformers, given the usually high number of parameters required to train them. These models are based on the so-called Attention Modules, which correlate different tensors.
In this paper, we demonstrate that combining the feature maps of convolutions with attention modules improves the accuracy of PPG-based HR monitoring and makes it possible to interpret the connection between acceleration and PPG data. The main contributions of this work are summarized as follows:
• We introduce a new state-of-the-art, yet lightweight (around 130k parameters), deep neural network to fuse PPG and motion signals for precise heart rate estimation. The model includes both temporal convolutional and multi-head cross-attention modules (a minimal sketch of such a fusion block is given after this list).
• We evaluate the effectiveness of the proposed model on
three publicly available datasets. On the largest one, PPG-
DaLiA [5], we improve the mean absolute error (MAE) to
4.03 beats per minute (BPM), outperforming the best state-
of-the-art model (a pure CNN) by 0.33 BPM.
• We demonstrate the explainability of the developed model
and the benefits of applying attention modules to PPG and
motion data by showcasing examples of attentional maps.
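For illustration, below is a minimal PyTorch sketch of a multi-head cross-attention fusion block in which PPG feature maps attend to accelerometer feature maps; the module names, dimensions, and the assignment of queries and keys/values are assumptions made for the example and do not reproduce the exact PULSE architecture.

import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    # Fuses PPG and accelerometer feature maps (e.g., produced by temporal
    # convolutional branches) with multi-head cross-attention.
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, ppg_feats, acc_feats):
        # ppg_feats, acc_feats: (batch, time, d_model) feature maps.
        # PPG features act as queries; accelerometer features as keys/values,
        # so the attention weights indicate which motion segments influence
        # the fused representation and can be visualized as attentional maps.
        fused, attn_weights = self.attn(ppg_feats, acc_feats, acc_feats)
        return self.norm(ppg_feats + fused), attn_weights  # residual + norm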
2. BACKGROUND
Temporal Convolutional Networks: TCNs are 1D Convolutional Neural Networks (CNNs) with the insertion of dilation in the convolutional layers [9, 10]. The dilation is a fixed gap d inserted between the input samples before they are convolved with the filter weights.
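As a minimal illustration of dilation (the channel counts, kernel size, and dilation factor below are arbitrary example values, not the PULSE configuration):

import torch
import torch.nn as nn

# A 1D convolution with dilation d = 4: each output sample is computed from
# input samples spaced 4 steps apart, enlarging the receptive field to
# (kernel_size - 1) * dilation + 1 = 9 samples at no extra parameter cost.
dilated_conv = nn.Conv1d(in_channels=1, out_channels=8, kernel_size=3,
                         dilation=4, padding=4)   # padding preserves the length

ppg_window = torch.randn(1, 1, 256)   # (batch, channels, time)
out = dilated_conv(ppg_window)        # -> shape (1, 8, 256)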