MOTION MATTERS: A NOVEL MOTION MODELING FOR CROSS-VIEW GAIT FEATURE
LEARNING
Jingqi Li⋆, Jiaqi Gao⋆, Yuzhen Zhang⋆, Hongming Shan♮, Junping Zhang⋆†
⋆Shanghai Key Lab of Intelligent Information Processing, School of Computer Science
♮Institute of Science and Technology for Brain-inspired Intelligence
Fudan University, Shanghai 200433, China
ABSTRACT
As a unique biometric that can be perceived at a distance, gait
has broad applications in person authentication, social security,
and beyond. Existing gait recognition methods suffer from
changes in viewpoint and clothing, and rarely consider extracting
diverse motion features, a fundamental characteristic of gait,
from gait sequences. This paper proposes a novel motion
modeling method to extract discriminative and robust
representations. Specifically, we first extract motion features
from the encoded motion sequences in the shallow layer, and then
continuously enhance the motion features in deep layers. This
motion modeling approach is independent of mainstream work on
building network architectures; as a result, one can apply it to
any backbone to improve gait recognition performance. In this
paper, we combine motion modeling with a commonly used backbone
(GaitGL), denoted GaitGL-M, to illustrate the approach. Extensive
experimental results on two commonly used cross-view gait
datasets demonstrate the superior performance of GaitGL-M over
existing state-of-the-art methods.
Index Terms— motion modeling, plug-and-play
1. INTRODUCTION
Silhouette, a standard modality for appearance-based gait
recognition, is a binary map generated by segmenting the
individual and background. However, silhouettes of different
individuals exhibit only subtle variations when their body shapes
are similar, which renders appearance-dependent gait features
nondiscriminative. In contrast, walking speed and gait cycle
remain distinguishable even when body shapes look alike.
Moreover, the silhouettes of a single individual vary visually as
clothing or viewpoint changes, exposing the vulnerability of
appearance-dependent gait features, whereas that person's motion
information, such as speed and gait cycle, stays consistent.
Fortunately, this motion information is
reflected in the frame-to-frame changes across the silhouette
sequence, and it can be exploited to obtain discriminative and
robust gait features.

†Corresponding author.
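As a minimal illustration of this idea (not the paper's method), frame-to-frame motion can be exposed by differencing consecutive binary silhouettes; the pixels that change between frames trace the moving body parts:

```python
import numpy as np

def frame_motion(silhouettes):
    """Absolute difference between consecutive binary silhouette frames.

    silhouettes: array of shape (T, H, W) with values in {0, 1}.
    Returns an array of shape (T-1, H, W); nonzero pixels mark regions
    that changed between frame t and frame t+1.
    """
    s = np.asarray(silhouettes, dtype=np.float32)
    return np.abs(s[1:] - s[:-1])

# Toy example: a 1-pixel "body" shifting right by one column per frame.
frames = np.zeros((3, 4, 4), dtype=np.float32)
for t in range(3):
    frames[t, 1, t] = 1.0
motion = frame_motion(frames)
print(motion.shape)  # (2, 4, 4)
```

Each difference frame here contains exactly two nonzero pixels: the vacated position and the newly occupied one, i.e., a direct trace of the displacement.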
Recent works mainly differ in the stage at which they aggregate
the silhouette sequence. Template-based methods [2–5] compress
all silhouettes into a single gait template before feature
extraction, sacrificing essential temporal information. Set-based
methods [6, 7] instead aggregate after the feature extraction
stage by pooling. More recently, several works [1, 8–11]
aggregate the feature sequence during feature extraction using
temporal convolution. However, temporal aggregation alone
struggles to extract motion information. As one recent work
argued [12], temporal convolution by itself cannot guarantee the
uniqueness of the extracted gait feature, let alone temporal
pooling. Notably, its theoretical analysis proves that the
relationship between adjacent frames can make features
distinguishable.
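To see why temporal pooling discards motion, note that set-style max pooling over the time axis is invariant to frame order: a sequence and its reversal, which depicts the opposite motion, produce identical pooled features. A small sketch of this limitation, using plain NumPy feature maps rather than any specific backbone:

```python
import numpy as np

rng = np.random.default_rng(0)
# Per-frame features of shape (T, C): T frames, C channels.
feats = rng.random((8, 16))

pooled_forward = feats.max(axis=0)          # set-style temporal max pooling
pooled_reversed = feats[::-1].max(axis=0)   # same frames, reversed order

# The pooled descriptors are identical, so frame ordering (i.e., motion
# direction and speed) cannot be recovered from the pooled feature.
print(np.allclose(pooled_forward, pooled_reversed))  # True
```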
Motivated by these observations, we propose a novel motion
modeling method for gait recognition that utilizes the motion
information inherent in the silhouette sequence and enhances the
motion information in the gait representation. Unlike prior work
that employs local self-similarities as the motion
information [12], we define motion information as the holistic
temporal changes of all body parts. Our motion modeling method
mainly comprises a Silhouette-level Motion extractor (SiMo),
which facilitates silhouette motion encoding, and a Feature-level
Motion enhancement (FeMo), which preserves feature-level motion
details. The method is applicable to any existing backbone. To
illustrate its usage, we plug SiMo and FeMo into GaitGL [1],
yielding GaitGL-M. We also report the performance of plugging
these two modules into GaitSet [6] in the experiments (see
Table 3).
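The exact designs of SiMo and FeMo are given in the method section; purely as a schematic of how such plug-and-play motion modules compose with an arbitrary backbone, one might sketch the pipeline as follows (all module bodies here are hypothetical placeholders, not the paper's architecture):

```python
import numpy as np

def simo(silhouettes):
    """Silhouette-level motion sketch: encode frame-to-frame changes.
    Placeholder: absolute temporal difference of the raw silhouettes."""
    return np.abs(silhouettes[1:] - silhouettes[:-1])

def femo(features):
    """Feature-level motion enhancement sketch: add the temporal
    difference of deep features back as a residual, so motion
    details are preserved in the final representation."""
    diff = np.zeros_like(features)
    diff[1:] = features[1:] - features[:-1]
    return features + diff

def backbone_with_motion(silhouettes, backbone):
    """Prepend motion encoding, run any backbone, then enhance motion."""
    motion = simo(silhouettes)   # shallow-layer motion extraction
    feats = backbone(motion)     # any existing backbone, unchanged
    return femo(feats)           # deep-layer motion enhancement

# Usage with a trivial stand-in backbone (flattens each frame).
sil = np.random.default_rng(1).random((5, 4, 4))
out = backbone_with_motion(sil, lambda x: x.reshape(x.shape[0], -1))
print(out.shape)  # (4, 16)
```

The point of the sketch is the composition, not the placeholder bodies: the backbone itself is untouched, which is what makes the two modules plug-and-play.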
The contributions of this paper are summarized as follows. 1) We
propose a novel motion modeling method to extract discriminative
and robust gait representations. Moreover, this method is
independent of the network architecture, so one can plug it into
any existing backbone. 2) We propose two plug-and-play modules
for motion modeling: a silhouette-level motion extractor and a
feature-level motion enhancement.
arXiv:2210.11817v2 [cs.CV] 19 Jan 2023