
PARALLEL AUGMENTATION AND DUAL ENHANCEMENT
FOR OCCLUDED PERSON RE-IDENTIFICATION
Zi Wang1, Huaibo Huang2, Aihua Zheng3, Chenglong Li3, Ran He2
1School of Computer Science and Technology, Anhui University 2MAIS & CRIPAC, CASIA
3IMIS Laboratory of Anhui Province, Anhui Provincial Key Laboratory of MCC, Anhui University
ABSTRACT
Occluded person re-identification (Re-ID), the task of searching for images of the same person in occluded environments, has attracted considerable attention in the past decades. Recent approaches concentrate on improving performance on occluded data through data/feature augmentation or by using extra models to predict occlusions. However, they ignore the imbalance problem in this task and cannot fully utilize the information from the training data. To alleviate these two issues, we propose a simple yet effective method with Parallel Augmentation and Dual Enhancement (PADE) that is robust on both occluded and non-occluded data and does not require any auxiliary clues. First, we design a parallel augmentation mechanism (PAM) to generate more suitable occluded data and thereby mitigate the negative effects of unbalanced data. Second, we propose a global and local dual enhancement strategy (DES) to promote both context information and fine-grained details. Experimental results on three widely used occluded datasets and two non-occluded datasets validate the effectiveness of our method. The code is available at PADE (GitHub).
Index Terms—Person Re-identification, Data Augmentation, Feature Enhancement
1. INTRODUCTION
Occluded person Re-ID, which incorporates data obscured by various obstacles, has recently gained popularity. Occlusions are uncommon in the training set [1,2] but abundant in the test set (especially in the query), as illustrated in Fig. 1(a). Training with such unbalanced data makes it harder for the network to handle unknown data at test time. Efforts in data and feature augmentation are emerging to eliminate the imbalance between training and testing. Most methods [3,4,5,6] employ standard data augmentation such as random flipping, random deleting, random cropping, and so on. Furthermore, FED [7] provides feature augmentation strategies to improve the network's adaptability to occluded data.
Corresponding author: Aihua Zheng (ahzheng214@foxmail.com)

Fig. 1. (a) & (b): The imbalance problem. (c) & (d): Global and local information have their respective advantages.

The widely used data/feature augmentation mechanisms take one image/feature as input and output only one altered image/feature to the subsequent network for training. However, as illustrated in Fig. 1(b), practically all occlusions occur in the query, while the gallery images have almost no obstructions in occluded Re-ID datasets [8,9]. The aforementioned methods that focus on data/feature augmentation ignore this unbalanced occlusion between query and gallery. To increase the robustness of the network on both the non-occluded data (in the gallery) and the occluded data (in the query), we propose a data augmentation method called the Parallel Augmentation Mechanism (PAM). Our PAM consists of three independent components: Base Augmentation (BA), Erasing Augmentation (EA), and Cropping Augmentation (CA). In our parallel augmentation mechanism, EA only implements the erase operation, and CA only crops the original image. PAM yields an image triplet, as shown in Fig. 2 (left), and the ViT-based feature extractor then takes this triplet as input.
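To make the triplet construction concrete, the following is a minimal torchvision-style sketch of one parallel augmentation step; the transform choices, image size, and function names are illustrative assumptions, not the exact configuration of PAM.

from torchvision import transforms as T

IMG_SIZE = (256, 128)  # common Re-ID input size; an assumption, not the paper's setting

# BA: standard augmentation only (resize + horizontal flip)
base_aug = T.Compose([
    T.Resize(IMG_SIZE),
    T.RandomHorizontalFlip(p=0.5),
    T.ToTensor(),
])

# EA: only the erasing operation (applied to the tensor image)
erase_aug = T.Compose([
    T.Resize(IMG_SIZE),
    T.ToTensor(),
    T.RandomErasing(p=1.0, scale=(0.1, 0.3)),
])

# CA: only the cropping operation, resized back so all three views share one shape
crop_aug = T.Compose([
    T.Resize(IMG_SIZE),
    T.RandomCrop((192, 96)),
    T.Resize(IMG_SIZE),
    T.ToTensor(),
])

def parallel_augment(pil_image):
    # Returns the (base, erased, cropped) image triplet fed to the ViT-based extractor.
    return base_aug(pil_image), erase_aug(pil_image), crop_aug(pil_image)

In this sketch each training image passes through all three branches in parallel, so every batch contains a non-occluded view, an erased (occluded) view, and a cropped (partial) view of the same identity.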
Additionally, both details and context information are crucial for the Re-ID task. As illustrated in Fig. 1(c), we can easily identify ID1 and ID2 by local details while finding it hard to distinguish them based on their outward appearance. [10,11,12] propose using additional clues by leveraging foreground segmentation and pose estimation models. [4,13,14] propose to split the global feature into several parts and use finer features with detailed information for training. In some cases, the global information becomes more crucial when the body is hindered by unknown impediments or the details are