
Learning from the Best: Contrastive Representations Learning
Across Sensor Locations for Wearable Activity Recognition
Vitor Fortes Rey, Sungho Suh, Paul Lukowicz
German Research Center for Artificial Intelligence (DFKI) and University of Kaiserslautern, Germany
{vitor.fortes_rey,sungho.suh,paul.lukowicz}@dfki.de
Figure 1: Overall network architecture and steps of the proposed method. (1) Training representations with paired data by contrastive learning; (2) training representations and classifier by minimizing the classification loss; (3) testing with target data.
ABSTRACT
We address the well-known wearable activity recognition problem of having to work with sensors that are non-optimal in terms of the information they provide but have to be used due to wearability/usability concerns (e.g. the need to work with wrist-worn IMUs because they are embedded in most smart watches). To mitigate this problem we propose a method that facilitates the use of information from sensors that are only present during the training process and are unavailable during the later use of the system. The method transfers information from the source sensors to the latent representation of the target sensor data through a contrastive loss that is combined with the classification loss during joint training (Figure 1). We evaluate the method on the well-known PAMAP2 and Opportunity benchmarks for different combinations of source and target sensors, showing average (over all activities) F1 score improvements of between 5% and 13%, with the improvement on individual activities that are particularly well suited to benefit from the additional information going up to between 20% and 40%.
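The joint objective can be sketched as follows. This is a minimal, PyTorch-style illustration under stated assumptions: the module names (encoder, translator, classifier), the simple MLP architectures, the window shapes, and the NT-Xent-style contrastive term are illustrative placeholders, not the exact implementation used in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

def contrastive_loss(z_src, z_tgt, temperature=0.1):
    # NT-Xent-style loss: time-aligned source/target windows are positives,
    # all other pairings in the batch act as negatives.
    z_src = F.normalize(z_src, dim=1)
    z_tgt = F.normalize(z_tgt, dim=1)
    logits = z_src @ z_tgt.t() / temperature             # (B, B) similarity matrix
    labels = torch.arange(z_src.size(0), device=z_src.device)
    return F.cross_entropy(logits, labels)

# Placeholder networks; inputs are assumed to be windows of shape
# (batch, 100 time steps, channels) with 9 source and 3 target channels.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(100 * 9, 128), nn.ReLU(), nn.Linear(128, 64))     # source branch
translator = nn.Sequential(nn.Flatten(), nn.Linear(100 * 3, 128), nn.ReLU(), nn.Linear(128, 64))  # target branch
classifier = nn.Linear(64, 12)                            # e.g. 12 activity classes

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(translator.parameters()) + list(classifier.parameters()),
    lr=1e-3)

def training_step(x_src, x_tgt, y, lam=1.0):
    # x_src, x_tgt: time-synchronized windows from the source and target sensors.
    z_src = encoder(x_src)        # latent representation of the information-rich source sensor
    z_tgt = translator(x_tgt)     # latent representation of the wearable target sensor
    loss = F.cross_entropy(classifier(z_tgt), y) + lam * contrastive_loss(z_src, z_tgt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# At test time only the target sensor is available:
# predictions = classifier(translator(x_tgt_test)).argmax(dim=1)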
CCS CONCEPTS
• Computing methodologies → Learning latent representations.
KEYWORDS
contrastive learning, transformer, human activity recognition, self-supervised learning
ACM Reference Format:
Vitor Fortes Rey, Sungho Suh, Paul Lukowicz. 2022. Learning from the Best: Contrastive Representations Learning Across Sensor Locations for Wearable Activity Recognition. In ISWC '22: ACM International Symposium on Wearable Computers, September 11–15, 2022, Atlanta, USA and Cambridge, UK. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3544794.3558464
1 INTRODUCTION
A well-known problem of wearable human activity recognition (HAR) is the fact that sensor locations that are feasible for long-term, everyday deployment often provide poor and/or noisy information for the recognition of activities that may be relevant for a particular application. Thus, for example, with the widespread adoption of smart watches, wrist-worn IMUs are a broadly available, unobtrusive sensing modality suitable for everyday use. However, it is well known that for many wearable HAR tasks wrist-worn IMUs are a poor source of information. For example, a detailed analysis of modes of locomotion (walking up, down, running, etc.) shows that most of the relevant information is contained in the motion of the legs and the motion patterns of the trunk. Arms, on the other hand, are often moved independently of the modes of locomotion (e.g. gesticulating while talking while walking). Locomotion-related