
Transfer learning on electromyography (EMG) tasks:
approaches and beyond
Di Wu, Jie Yang, and Mohamad Sawan
Center of Excellence in Biomedical Research on Advanced Integrated-on-chips
Neurotechnologies, School of Engineering, Westlake University, Hangzhou, 310024, China
Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou,
310024, China
E-mail: yangjie@westlake.edu.cn, Sawan@westlake.edu.cn
Abstract. Objective. Machine learning on electromyography (EMG) has recently achieved
remarkable success on a variety of tasks. Such success, however, relies heavily on the assumption that the training data and future data come from the same distribution. This
assumption may not hold in many real-world applications. Model calibration is required
via data re-collection and label annotation, which is generally very expensive and time-
consuming. To address this problem, transfer learning (TL), which aims to improve target
learners’ performance by transferring the knowledge from related source domains, is emerging
as a new paradigm to reduce the amount of calibration effort. Approach. In this survey,
we assess the eligibility of more than fifty published peer-reviewed representative transfer
learning approaches for EMG applications. Main results. Unlike previous surveys focusing purely on transfer learning or on EMG-based machine learning, this survey aims to provide insight into the biological foundations of existing transfer learning methods for EMG-related analysis. Specifically, we first introduce the physiological structure of muscles, the EMG generation mechanism, and the recording of EMG to provide biological insights behind existing transfer learning approaches. Further, we categorize existing research endeavors into data-based, model-based, training-scheme-based, and adversarial-based approaches. Significance. This survey systematically summarizes and categorizes existing transfer learning approaches for EMG-related machine learning applications. In addition, we discuss possible drawbacks of existing works and point out future directions for EMG transfer learning algorithms with enhanced practicality for real-world applications.
Keywords: Transfer learning, electromyography (EMG), machine learning, meta learning,
domain-adversarial neural networks (DANN), random forest, model ensemble, fine-tuning,
gesture recognition.
1. Introduction
The human motor control system is a complex neural system that is crucial for daily human
activities. One way to study the human motor control system is to record the signal due
to muscle fiber contractions associated with human motor activities by means of either
inserting needle electrodes into the muscles or attaching electrodes onto the surface of the
skin. The signal obtained is referred to as electromyography (EMG). Depending on the location of the electrodes, EMG is further divided into surface EMG (sEMG) and intramuscular EMG (iEMG). Advances in EMG analysis and machine learning have recently achieved remarkable success, enabling a wide variety of applications, including but not limited to rehabilitation with prostheses [1], hand gesture recognition [2], and human-machine interfaces (HMIs) [3].
The current success of applying deep learning to EMG-related tasks is largely contingent on the following two assumptions, which are often violated in real-world EMG scenarios:
1) Sufficient amount of annotated training data. The growing capability and capacity of deep neural network (DNN) architectures are associated with million-scale labeled data [4, 5]. Such high-quality, abundant labeled data are often limited, expensive to obtain, or inaccessible in the domain of EMG analysis. On the one hand, EMG data annotation requires expert knowledge. On the other hand, the EMG data acquisition process is physically demanding and time-consuming, requiring several days of collaboration among multiple parties [6].
2) Training data and testing data are independent and identically distributed (i.i.d).
The performance of the model is largely affected by the distribution gap between the
training and testing datasets. The testing data might also refer to the data generated
during actual application usage after model deployment. Take hand gesture recognition, for example: the model is only capable of giving accurate predictions when the forearm positioning of the test subject and the placement of the electrodes exactly match those seen during training.
As the distribution of data changes, models based on statistics need to be reconstructed
with newly collected training data. In many real-world applications, it is expensive and
impractical to recollect a large amount of training data and rebuild the models each time a
distribution change is observed. Transfer learning (TL), which emphasizes the transfer of
knowledge across domains, emerges as a promising machine learning solution for solving the
above problems. The notion of transfer learning is not new; Thorndike et al. [7] suggested that improvement on one task benefits the learning of another task, provided that similarity exists between the two tasks. In practice, a person who knows how to ride a bicycle can learn to ride a motorcycle faster than others, since both tasks require keeping balance.
However, transfer learning for EMG related tasks has only been gaining attention with the
recent development of both DNN and HMIs. Existing surveys provide an overview of DNN
for EMG-based human machine interfaces [8], and transfer learning in general for various
machine learning tasks [9]. This survey focuses on the intersection of machine learning for
EMG and transfer learning via EMG biological foundations, providing insights into a novel
and growing area of research. Besides analyzing recent deep learning works, we attempt to explain the relationships and differences between non-deep-learning methods and deep models, as these works usually share similar intuitions and observations. Some of the
previous non-deep-learning works carry more biological significance, which can inspire further DNN-based research in this field. To consolidate these recent advances, we propose a new
taxonomy for transfer learning on EMG tasks, and also provide a collection of predominant
benchmark datasets following our taxonomy.
The main contributions of this paper are:
• We summarize over fifty representative, up-to-date transfer learning approaches on EMG analysis with an organized categorization, presenting a comprehensive overview to the readers.
• We delve into the generating mechanisms of EMG and bridge transfer learning practices with their underlying biological foundation.
• We point out the technical limitations of current research and discuss promising directions for transfer learning on EMG analysis to motivate further studies.
The remainder of this paper is organized as follows. Section 2 introduces the basics of transfer learning, the generation and acquisition of EMG, and EMG transfer learning scenarios. Section 3 first provides the categorization of EMG transfer learning based on existing works and then introduces each category in detail. Section 4 summarizes commonly used datasets. Lastly, we discuss existing methods and future research directions of EMG transfer learning.
2. Preliminaries
This section introduces the definitions of transfer learning and related concepts, as well as the basics of EMG, from how the EMG signal is generated to how it is recorded. We also summarize possible transfer scenarios in this section.
2.1. Transfer Learning
We first give the definitions of a “domain” and a “task”, respectively. Define $\mathcal{D}$ to be a domain, which consists of a feature space $\mathcal{X}$ and a marginal probability distribution $P(X)$, where $X = \{x_i\}_{i=1}^{n}$ is a set of data samples. In particular, if two domains have different feature spaces or marginal probability distributions, they differ from each other. Given a domain $\mathcal{D} = \{\mathcal{X}, P(X)\}$, a task is then represented by $\mathcal{T} = \{\mathcal{Y}, f(\cdot)\}$, where $f(\cdot)$ denotes the objective prediction function and $\mathcal{Y}$ is the label space associated with $\mathcal{X}$. From a probabilistic point of view, $f(x)$ can also be regarded as the conditional probability distribution $P(y|x)$. Two tasks are considered different if they have different label spaces or different conditional probability distributions. Then, transfer learning can be formally defined as follows:
Definition 1 (Transfer Learning): Given a source learning task $\mathcal{T}_S$ based on a source domain $\mathcal{D}_S$, transfer learning aims to help improve the learning of the target objective prediction function $f_T(x)$ of the target task $\mathcal{T}_T$ based on the target domain $\mathcal{D}_T$, given that $\mathcal{D}_T \neq \mathcal{D}_S$ or $\mathcal{T}_S \neq \mathcal{T}_T$.
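To make the notation concrete, consider cross-subject gesture recognition as a running example: EMG features recorded from a source subject and a target subject share the same feature and label spaces, but their marginal distributions differ. The sketch below is only a toy numerical illustration of this view; the feature dimension, sample counts, and distribution shift are made-up placeholders, not properties of any real EMG dataset.

```python
# Toy illustration of the domain/task notation for a cross-subject EMG setting.
# All numbers are placeholders; real EMG features would come from windowed recordings.
import numpy as np

rng = np.random.default_rng(0)

# Source domain D_S: feature samples X_S drawn from subject A's marginal P_S(X)
X_S = rng.normal(loc=0.0, scale=1.0, size=(500, 64))
# Target domain D_T: same feature space, but subject B shifts the marginal to P_T(X)
X_T = rng.normal(loc=0.8, scale=1.5, size=(500, 64))

# Both tasks share the same label space Y (e.g., 8 gestures), so T_S and T_T agree on Y
label_space = list(range(8))

# D_S != D_T here because the marginal distributions differ, which is the gap
# that transfer learning is asked to bridge
print("mean shift:", float(np.abs(X_S.mean(0) - X_T.mean(0)).mean()))
print("std ratio:", float((X_T.std(0) / X_S.std(0)).mean()))
```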
The above definition could be extended to multiple domains and tasks for both source and
target. In this survey, we only consider the case where there is one source domain $\mathcal{D}_S$ and one target domain $\mathcal{D}_T$, as this is by far the most intensively studied transfer setup in the literature. Based on different setups of the source and target domains and tasks, transfer learning can be roughly categorized into inductive transfer learning, transductive transfer learning, and unsupervised transfer learning [10].
Definition 2 (Inductive Transfer Learning): Given a transfer learning task $(\mathcal{D}_S, \mathcal{T}_S, \mathcal{D}_T, \mathcal{T}_T, f_T(x))$, it is an inductive transfer learning task if the knowledge of $\mathcal{D}_S$ and $\mathcal{T}_S$ is used to improve the learning of the target objective prediction function $f_T(x)$ when $\mathcal{T}_S \neq \mathcal{T}_T$.
The target objective prediction function can be induced by using a few labeled data in the target domain as the training data.
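As an illustration of the inductive setting, the sketch below fine-tunes a classifier that is assumed to have been pretrained on source-domain EMG data, using only a handful of labeled target-domain examples. This is a minimal sketch of the generic fine-tuning recipe, not the procedure of any particular surveyed work; the network architecture, feature dimension, class count, and checkpoint path are illustrative assumptions, and random tensors stand in for real EMG features.

```python
# Minimal inductive-transfer sketch: freeze a source-pretrained feature extractor
# and re-train only the classifier head on a few labeled target samples.
import torch
import torch.nn as nn

class EMGNet(nn.Module):
    def __init__(self, in_dim=64, n_classes=8):   # placeholder sizes
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                      nn.Linear(128, 64), nn.ReLU())
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x))

model = EMGNet()
# model.load_state_dict(torch.load("source_pretrained.pt"))  # hypothetical source checkpoint

for p in model.features.parameters():              # keep the source-learned representation fixed
    p.requires_grad = False

x_target = torch.randn(32, 64)                      # few labeled target-domain examples (placeholders)
y_target = torch.randint(0, 8, (32,))

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(x_target), y_target)
    loss.backward()
    optimizer.step()
```

In practice, the amount of labeled target data, which layers are frozen, and the learning rate are the main design choices; freezing more layers reduces the risk of overfitting the few target labels at the cost of less adaptation.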
Definition 3 (Transductive Transfer Learning): Given a transfer learning task $(\mathcal{D}_S, \mathcal{T}_S, \mathcal{D}_T, \mathcal{T}_T, f_T(x))$, it is a transductive transfer learning task if the knowledge of $\mathcal{D}_S$ and $\mathcal{T}_S$ is used to improve the learning of the target objective prediction function $f_T(x)$ when $\mathcal{D}_S \neq \mathcal{D}_T$ and $\mathcal{T}_S = \mathcal{T}_T$.
For transductive transfer learning, the source and target tasks are the same, while the source and target domains vary. Similar to the setting of transductive learning in traditional machine learning [11], transductive transfer learning aims to make the best use of the given unlabeled data in the target domain to adapt the objective prediction function learned in the source domain, minimizing the expected error on the target domain. It is worth noticing that domain adaptation is a special case where $\mathcal{X}_S = \mathcal{X}_T$, $\mathcal{Y}_S = \mathcal{Y}_T$, and $P_S(y|X) \neq P_T(y|X)$ and/or $P_S(X) \neq P_T(X)$.
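Domain-adversarial neural networks (DANN), listed among the keywords above, are a common way to exploit unlabeled target-domain data in exactly this transductive/domain-adaptation setting. The sketch below shows the core mechanism, a gradient reversal layer that pushes the feature extractor to produce features a domain classifier cannot separate; it is a generic illustration of the technique rather than the implementation of any specific surveyed work, and the architecture and dimensions are assumptions.

```python
# DANN-style adversarial adaptation sketch: labels are needed only in the source domain.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)                      # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None    # reversed, scaled gradient on the way back

feature_extractor = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
label_classifier = nn.Linear(64, 8)              # gesture classes, trained on labeled source data
domain_classifier = nn.Linear(64, 2)             # source vs. target, needs no gesture labels

x_src, y_src = torch.randn(32, 64), torch.randint(0, 8, (32,))   # labeled source batch (placeholders)
x_tgt = torch.randn(32, 64)                                      # unlabeled target batch (placeholders)

f_src, f_tgt = feature_extractor(x_src), feature_extractor(x_tgt)
cls_loss = nn.functional.cross_entropy(label_classifier(f_src), y_src)

feats = torch.cat([f_src, f_tgt])
dom_labels = torch.cat([torch.zeros(32, dtype=torch.long), torch.ones(32, dtype=torch.long)])
dom_loss = nn.functional.cross_entropy(domain_classifier(GradReverse.apply(feats, 1.0)), dom_labels)

(cls_loss + dom_loss).backward()                 # the feature extractor learns to fool the domain classifier
```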
Definition 4 (Unsupervised Transfer Learning): Given a transfer learning task $(\mathcal{D}_S, \mathcal{T}_S, \mathcal{D}_T, \mathcal{T}_T, f_T(x))$, it is an unsupervised transfer learning task if the knowledge of $\mathcal{D}_S$ and $\mathcal{T}_S$ is used to improve the learning of the target objective prediction function $f_T(x)$ with $\mathcal{Y}_S$ and $\mathcal{Y}_T$ not observed.
Based on the above definition, no data annotation is accessible in either the source or the target domain during training. Little research has been conducted on this setting to date, given its fully unsupervised nature in both domains.
2.2. EMG Basics
Motor Unit Action Potential. A motor unit (MU) is defined as one motor neuron and the muscle fibers that it innervates. During the contraction of a normal muscle, the muscle fibers of a motor unit are activated by its associated motor neuron. The membrane depolarization of the muscle fiber is accompanied by ion movement and thus generates an electromagnetic field in the vicinity of the muscle fiber. The detected potential or voltage within this electromagnetic field is referred to as the fiber action potential. The amplitude of the fiber action potential is related to the diameter of the corresponding muscle fiber and its distance to the recording electrode. It is worth noticing that MU, by definition, refers to the anatomical motor unit, whereas the functional motor unit is of more research interest when it comes to real-world
applications.

Figure 1. Demonstration of EMG acquisition. The sEMG acquisition configurations are shown above the dotted line, with the iEMG acquisition configurations below it. The triangle represents an amplifier. Panels: (a) bi-polar sEMG, (b) mono-polar sEMG, (c) bi-polar iEMG, (d) mono-polar iEMG. For the bi-polar setup as in (a) and (c), two electrodes are placed on the skin surface or inserted into the muscle fibers, penetrating the skin surface. (b) and (d) show the case of a mono-polar setup, with one electrode attached to the skin or muscle fiber and the other electrode connected to the ground or a reference point with no EMG activity (e.g., bone).

The functional motor unit can be defined as a group of muscle fibers whose
action potentials occur within a very short time (two milliseconds). Intuitively, one could
consider a functional motor unit as a group of muscle fibers that contract for one unified
functionality. From this point on, MU refers to a functional motor unit unless otherwise
specified. A Motor Unit Action Potential (MUAP) is defined as the waveform consisting
of the superimposed (both temporally and spatially) action potentials from each individual
muscle fiber of the motor unit. The amplitude and shape of the MUAP are unique indicators of the properties of the MU (functionality, fiber arrangement, fiber diameter, etc.). MUs are
repeatedly activated so that muscle contraction is sustained for stable motor movement. The
repeated activation of MUs generates a sequence of MUAPs forming a Motor Unit Action
Potential Train (MUAPT).
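To make the chain from fiber action potentials to MUAPs to a MUAPT concrete, the toy sketch below superimposes a few synthetic fiber action potentials into a single MUAP and then repeats that MUAP at a fixed firing rate to form a MUAPT. It is purely an illustrative numerical construction, not a physiological model; the waveform shape, fiber delays and amplitudes, firing rate, and noise level are arbitrary assumptions.

```python
# Toy construction: fiber action potentials -> MUAP -> MUAPT (not a physiological model).
import numpy as np

fs = 2000                           # assumed sampling rate in Hz
t = np.arange(0, 0.02, 1 / fs)      # 20 ms support for a single MUAP

def fiber_ap(delay_ms, amplitude):
    """Crude biphasic pulse standing in for one fiber's action potential."""
    tc = (t - delay_ms / 1000.0) * 1000.0        # time relative to the fiber's delay, in ms
    return amplitude * tc * np.exp(-tc ** 2)

# The MUAP is the superposition of the individual fibers' action potentials
muap = sum(fiber_ap(d, a) for d, a in [(1.0, 1.0), (1.5, 0.7), (2.2, 0.5)])

# Repeated activation of the MU (~15 Hz for 1 s) yields a MUAP train (MUAPT)
duration, firing_rate = 1.0, 15
muapt = np.zeros(int(duration * fs))
for k in range(int(duration * firing_rate)):
    start = int(k * fs / firing_rate)
    muapt[start:start + muap.size] += muap[: muapt.size - start]

muapt += 0.01 * np.random.randn(muapt.size)      # small additive measurement noise
```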
Signal Recording. Based on the number of electrodes used during the recording of a MUAPT, recording techniques can be divided into mono-polar and bi-polar configurations. As shown in Figure 1, based on whether the electrodes are inserted directly into the muscles or attached onto the surface of the skin, the acquired signal corresponds to iEMG or sEMG, respectively.
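As a rough signal-level picture of the two configurations in Figure 1, the sketch below forms a mono-polar channel as a single electrode potential referenced to ground, and a bi-polar channel as the amplified difference between two nearby electrode potentials, which cancels interference that is common to both electrodes (e.g., power-line noise). The signals are made-up placeholders and the sketch is a simplification, not a description of any specific acquisition system.

```python
# Simplified mono-polar vs. bi-polar channel formation (made-up signals).
import numpy as np

fs = 2000
t = np.arange(0, 1.0, 1 / fs)

emg_local = 0.1 * np.random.randn(t.size)        # stand-in for local EMG activity
powerline = 0.5 * np.sin(2 * np.pi * 50 * t)     # interference seen equally by both electrodes

e1 = emg_local + powerline                        # electrode 1 potential (vs. ground/reference)
e2 = 0.6 * np.roll(emg_local, 5) + powerline      # electrode 2: delayed, attenuated EMG + same interference

gain = 1000
monopolar = gain * e1                             # mono-polar: one electrode against a reference
bipolar = gain * (e1 - e2)                        # bi-polar: differencing removes the common-mode term
```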