MULTI-HEAD CROSS-ATTENTIONAL PPG AND MOTION SIGNAL FUSION FOR HEART
RATE ESTIMATION
Panagiotis Kasnesis§†,Lazaros Toumanidis§†,Alessio Burrello*,
Christos Chatzigeorgiou§†,Charalampos Z. Patrikakis§†,
§ThinGenious PC, Maroussi, Greece
†Department of Electrical and Electronics Engineering, University of West Attica, Greece
*Department of Electrical, Electronic and Information Engineering, University of Bologna, Italy
ABSTRACT
Nowadays, Heart Rate (HR) monitoring is a key feature of
almost all wrist-worn devices exploiting photoplethysmography
(PPG) sensors. However, arm movements degrade the performance
of PPG-based HR tracking. This issue is usually addressed
by fusing the PPG signal with data produced by inertial
measurement units. To this end, deep learning algorithms have
been proposed, but they are considered too complex to deploy
on wearable devices and lack explainability of results. In
this work, we present a new deep learning model, PULSE,
which exploits temporal convolutions and multi-head cross-attention
to improve the effectiveness of sensor fusion and take
a step towards explainability. We evaluate the performance
of PULSE on three publicly available datasets, reducing the
mean absolute error by 7.56% on the most extensive available
dataset, PPG-DaLiA. Finally, we demonstrate the explainability
of PULSE and the benefits of applying attention modules
to PPG and motion data.
Index Terms— Deep Learning, Sensor Fusion, Heart
Rate Monitoring, Attention, Photoplethysmography
1. INTRODUCTION
In recent years, wrist-worn devices (e.g., smartwatches) have
enabled 24-hour monitoring of a subject's vital signs thanks
to miniaturized sensors, becoming increasingly popular in
personalized health care and medical IoT applications [1].
One of the most important indices to monitor is Heart Rate
(HR). Compared to first-generation monitoring devices,
which exploit a simple 1-3 lead ECG connected through
a chest strap, modern ones use photoplethysmographic (PPG)
sensors, allowing HR monitoring to be integrated into
smartwatches [2]. However, a limitation of PPG-based HR
monitoring is the presence of motion artifacts (MAs).
These are caused by variations of the sensor position on the
wrist or by ambient light leaking into the gap between the
sensor and the wrist. In the literature, this problem was first
tackled with filtering approaches, which use the correlation
between acceleration data and the PPG signal to cancel the
noise and remove the MAs; the HR is then extrapolated from
the cleaned signal [3, 4]. The critical limitation of these
approaches is the high number of free hyper-parameters, which
often limits their generalization.
Deep learning approaches have been proposed to improve
generalization, bringing promising results on several public
datasets [5, 6, 7, 8]. On the other hand, these models lack
explainability, since acceleration and PPG data are fused in a
black box. Until now, little attention has been paid to recent
Transformers, given the usually high number of parameters
required to train them. These models are based on the so-called
attention modules, which correlate different tensors.
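Concretely, a cross-attention module takes its queries from one tensor and its keys and values from another, so the resulting attention map directly shows which time steps of the second modality each element of the first attends to. The following plain-Python sketch illustrates this mechanism with PPG features as queries and accelerometer features as keys/values; it uses identity projections and toy dimensions (the actual model learns projection matrices over convolutional feature maps, so this is an illustration, not the paper's implementation):

```python
import math

def matmul(A, B):
    """Naive matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(Q, K, V):
    """Scaled dot-product attention: Q attends over K; rows of V are
    averaged with the attention weights."""
    d_k = len(K[0])
    K_T = [list(col) for col in zip(*K)]                 # transpose K
    scores = [[v / math.sqrt(d_k) for v in row]          # Q @ K^T / sqrt(d_k)
              for row in matmul(Q, K_T)]
    weights = [softmax(row) for row in scores]           # the attention map
    return matmul(weights, V), weights

def multi_head_cross_attention(ppg_feats, acc_feats, n_heads):
    """Split the feature dimension into heads, attend per head, concatenate.
    PPG features act as queries; accelerometer features as keys/values."""
    d = len(ppg_feats[0])
    assert d % n_heads == 0
    h = d // n_heads
    outputs, attn_maps = [], []
    for i in range(n_heads):
        part = slice(i * h, (i + 1) * h)
        Q = [row[part] for row in ppg_feats]
        K = [row[part] for row in acc_feats]
        V = [row[part] for row in acc_feats]
        out, w = cross_attention(Q, K, V)
        outputs.append(out)
        attn_maps.append(w)
    # concatenate head outputs back along the feature dimension
    fused = [sum((out[t] for out in outputs), [])
             for t in range(len(ppg_feats))]
    return fused, attn_maps
```

Each head returns its own attention map; inspecting these maps per head is what makes the fusion interpretable.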
In this paper, we demonstrate that combining the feature maps
of convolutions with attention modules improves accuracy
in PPG-based HR monitoring and makes it possible to interpret
the connection between acceleration and PPG data. The
main contributions of this work are summarized as follows:
• We introduce a new state-of-the-art, yet lightweight (around
130k parameters), deep neural network that fuses PPG and
motion signals for precise heart rate estimation. The model
includes both temporal convolutional and multi-head cross-attention
modules.
• We evaluate the effectiveness of the proposed model on
three publicly available datasets. On the largest one, PPG-DaLiA
[5], we improve the mean absolute error (MAE) to
4.03 beats per minute (BPM), outperforming the best state-of-the-art
model (a pure CNN) by 0.33 BPM.
• We demonstrate the explainability of the developed model
and the benefits of applying attention modules to PPG and
motion data by showcasing examples of attention maps.
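For reference, the MAE metric used above is simply the average absolute difference between predicted and ground-truth HR over the evaluation windows; a trivial helper (hypothetical, not from the paper's codebase):

```python
def mae_bpm(predicted, reference):
    """Mean absolute error between predicted and reference HR, in BPM."""
    assert len(predicted) == len(reference) and predicted
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)
```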
2. BACKGROUND
Temporal Convolutional Networks: TCNs are 1D-Convolutional
Neural Networks (CNNs), with the insertion of dilation in
convolutional layers [9, 10]. The dilation is a fixed gap d
inserted between input samples before they are convolved with
the kernel.

arXiv:2210.11415v1 [eess.SP] 14 Oct 2022
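A plain-Python sketch of this dilation mechanism (an illustrative toy, not the paper's implementation; it follows the deep-learning convention of not flipping the kernel):

```python
def dilated_conv1d(x, kernel, d):
    """'Valid' 1D convolution with dilation d: kernel tap j reads input
    sample t + j*d, i.e. a gap of d - 1 samples is skipped between
    consecutive taps (d = 1 recovers a standard convolution)."""
    k = len(kernel)
    span = (k - 1) * d + 1   # receptive field of a single output sample
    return [sum(kernel[j] * x[t + j * d] for j in range(k))
            for t in range(len(x) - span + 1)]
```

Stacking such layers with growing d enlarges the receptive field exponentially while keeping the parameter count fixed, which is why TCNs suit long physiological time series.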