2nd Place Solution to ECCV 2022 Challenge:
Transformer-based Action recognition in
hand-object interacting scenarios
Hoseong Cho and Seungryul Baek
Ulsan National Institute of Science and Technology (UNIST), South Korea
{hoseong, srbaek}@unist.ac.kr
Abstract. This report describes the 2nd place solution to the ECCV
2022 Human Body, Hands, and Activities (HBHA) from Egocentric and
Multi-view Cameras Challenge: Action Recognition. This challenge aims
to recognize hand-object interaction in an egocentric view. We propose
a framework that estimates keypoints of two hands and an object with
a Transformer-based keypoint estimator and recognizes actions based on
the estimated keypoints. We achieved a top-1 accuracy of 87.19% on the
test set.
1 Introduction
In augmented reality (AR), virtual reality (VR), and human-computer interaction,
egocentric perception of humans is a crucial component. Since previous works
have primarily focused on single-hand [2,3,13] and object-interaction scenarios
[4,15], most datasets [7,11,12] include only one hand interacting with an object.
In addition, while recent progress has been made in video understanding and
action recognition, most datasets focus on actions captured from a third-person
viewpoint. The H2O dataset [14] provides interactions of two hands and an object
from multiple views, including the egocentric viewpoint. This challenge aims
to recognize such hand-object interactions. In action recognition, the dynamics
of hand and object keypoints carry considerable information [9,18]. Therefore,
we propose a framework that predicts the keypoints of two hands and an object
in each frame and feeds the result as a cue to a temporal module. Recently,
Transformers have shown strong performance in various computer vision tasks.
We adopt a Transformer-based architecture for both the keypoint estimator and
the action classifier, and the proposed architecture achieves a top-1 accuracy
of 87.19% on the H2O dataset.
2 Methods
In this section, we introduce our proposed action recognition pipeline in Figure 1
and explain the details of our solution.
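The second stage of the pipeline described above can be sketched as follows: per-frame keypoints of the two hands and the object are flattened into tokens and fed to a Transformer encoder that classifies the action over the sequence. This is a minimal illustrative sketch, not the authors' exact implementation; the keypoint count (21 per hand plus 21 object points), model width, and number of action classes are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class KeypointActionClassifier(nn.Module):
    """Sketch of a temporal action classifier over estimated keypoints.

    Assumes each frame provides 63 keypoints (21 per hand x 2 hands
    + 21 object points) with 3D coordinates; all sizes are illustrative.
    """

    def __init__(self, num_keypoints=63, coord_dim=3, d_model=128,
                 nhead=8, num_layers=4, num_actions=36):
        super().__init__()
        # Project the flattened per-frame keypoints into a token embedding.
        self.embed = nn.Linear(num_keypoints * coord_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, num_actions)

    def forward(self, kpts):
        # kpts: (batch, frames, num_keypoints, coord_dim)
        tokens = self.embed(kpts.flatten(2))   # (batch, frames, d_model)
        feats = self.encoder(tokens)           # (batch, frames, d_model)
        # Average-pool over time, then classify the clip-level action.
        return self.head(feats.mean(dim=1))    # (batch, num_actions)

model = KeypointActionClassifier()
logits = model(torch.randn(2, 16, 63, 3))  # 2 clips of 16 frames each
```

In practice the keypoints would come from the first-stage Transformer-based estimator rather than random tensors, and temporal pooling could be replaced by a learned class token.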
arXiv:2210.11387v1 [cs.CV] 20 Oct 2022