Pointly-Supervised Panoptic Segmentation
Junsong Fan1,3, Zhaoxiang Zhang*1,2,3, and Tieniu Tan1,2
1Center for Research on Intelligent Perception and Computing,
Institute of Automation, Chinese Academy of Sciences, Beijing, China
2University of Chinese Academy of Sciences, Beijing, China
3Centre for Artificial Intelligence and Robotics, HKISI CAS, Hong Kong, China
{fanjunsong2016@,zhaoxiang.zhang@,tnt@nlpr.}ia.ac.cn
Abstract. In this paper, we propose a new approach to applying point-
level annotations for weakly-supervised panoptic segmentation. Instead
of the dense pixel-level labels used by fully supervised methods, point-
level labels only provide a single point for each target as supervision, sig-
nificantly reducing the annotation burden. We formulate the problem in
an end-to-end framework by simultaneously generating panoptic pseudo-
masks from point-level labels and learning from them. To tackle the core
challenge, i.e., panoptic pseudo-mask generation, we propose a principled
approach to parsing pixels by minimizing pixel-to-point traversing costs,
which model semantic similarity, low-level texture cues, and high-level
manifold knowledge to discriminate panoptic targets. We conduct exper-
iments on the Pascal VOC and the MS COCO datasets to demonstrate
the approach’s effectiveness and show state-of-the-art performance in the
weakly-supervised panoptic segmentation problem. Code is available
at https://github.com/BraveGroup/PSPS.git.
Keywords: weakly-supervised learning, panoptic segmentation
1 Introduction
Panoptic segmentation [23] aims to parse all pixels into non-overlapping
masks for both thing instances and stuff classes, unifying the semantic
segmentation and instance segmentation tasks. Classical deep
learning approaches require precise dense pixel-level labels to solve this problem.
However, acquiring exact pixel- and instance-level annotations on large-scale
datasets is very time-consuming, hindering the popularization and generaliza-
tion of the approaches in new practical applications.
To alleviate the annotation burden for segmentation models, researchers re-
cently proposed weakly-supervised learning (WSL) [4,52,51], which focuses on
leveraging coarse labels to train models for dense pixel-level segmentation tasks. Typically,
the weak supervision includes image-level [14,16,15], point-level [2,38], scribble-
level [47,31], and bounding box-level labels [9], etc. These approaches tackle
either semantic segmentation [36], instance segmentation [1,21], or panoptic seg-
mentation [41,27] tasks. Among them, the weakly-supervised panoptic segmen-
tation (WSPS) problem is the most challenging since it requires both semantic
arXiv:2210.13950v1 [cs.CV] 25 Oct 2022
and instance discrimination with only weak supervision. As a result, WSPS has
received less attention in previous work, and its performance remains far from satisfactory.
The seminal work by Li et al. [27] manages to address the WSPS problem using
bounding-box level labels. Later, JTSM [41] proposes to apply only image-level
labels for the WSPS problem. Recently, PanopticFCN [29] tackles this problem
by connecting multiple point labels into polygons. The performances of these
approaches differ significantly with the different weak annotations.
In this paper, we propose a new WSPS paradigm to use only a single point
for each target as the supervision, as illustrated in Fig. 1. Recall that the core of
weakly-supervised segmentation is to reduce the annotation burden while still
obtaining decent performance, i.e., to balance the annotation cost against
the model performance. We are motivated to use point-level labels because,
on the one hand, the annotation time of point-level labels is only marginally
higher than that of image-level labels [2], saving much cost compared with box-level
or polygon labels. On the other hand, point labels can provide minimum spa-
tial information to localize and discriminate different panoptic targets for the
segmentation models.
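To make the supervision format concrete, the following is a hypothetical record of point-level labels for one image; the field names and classes are illustrative assumptions, not the paper's actual annotation format. Each target, i.e., every thing instance and every stuff class present, receives exactly one annotated pixel:

```python
# Hypothetical point-label record for one image (field names are illustrative).
# Thing classes get one point per instance; stuff classes get one point per class.
point_labels = [
    {"xy": (120, 85),  "class": "person", "kind": "thing"},  # first person instance
    {"xy": (300, 90),  "class": "person", "kind": "thing"},  # second person instance
    {"xy": (200, 400), "class": "grass",  "kind": "stuff"},  # one point for the class
]
```

Note how two points share the same class yet denote distinct targets, which is exactly the instance-level information that image-level labels cannot provide.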
A natural idea to estimate panoptic masks from point-level labels is to as-
sign each pixel in the image to one of the points according to some principles.
To this end, we propose tackling this problem by minimizing the pixel-to-point
traversing cost, measured by the neighboring pixel affinities. There are two basic
requirements to correctly assign pixels to point labels: semantic class matching
and instance discrimination. The former ensures that the pixels are parsed with
the correct class labels, and the latter is responsible for distinguishing different
instances in the thing classes. Therefore, we consider three criteria to model the
affinities: semantic similarity, low-level image cues, and high-level instance dis-
crimination knowledge. Using these criteria, we model the pixel-to-point travers-
ing costs and solve the assignment problem by finding the shortest path.
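The shortest-path assignment can be sketched as a multi-source Dijkstra over the pixel grid. This is a toy illustration under a simplifying assumption: the traversing cost here is a plain negative-log affinity per pixel, whereas the paper combines semantic similarity, low-level texture cues, and high-level manifold knowledge; the function name and cost definition are ours, not the paper's.

```python
import heapq
import numpy as np

def assign_pixels_to_points(affinity, points):
    """Assign each pixel to its lowest-cost seed point via multi-source Dijkstra.

    affinity: (H, W) array in (0, 1]; stepping onto a pixel costs -log(affinity),
              so low-affinity pixels are expensive to traverse.
    points:   list of (row, col) seed coordinates (the point labels).
    Returns an (H, W) array of seed indices.
    """
    H, W = affinity.shape
    step_cost = -np.log(np.clip(affinity, 1e-6, 1.0))
    dist = np.full((H, W), np.inf)
    label = np.full((H, W), -1, dtype=int)
    heap = []
    for idx, (r, c) in enumerate(points):
        dist[r, c] = 0.0
        label[r, c] = idx
        heapq.heappush(heap, (0.0, r, c, idx))
    while heap:
        d, r, c, idx = heapq.heappop(heap)
        if d > dist[r, c]:  # stale heap entry
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                nd = d + step_cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    label[nr, nc] = idx
                    heapq.heappush(heap, (nd, nr, nc, idx))
    return label
```

With a low-affinity column acting as a boundary between two seeds, pixels on each side are assigned to the seed they can reach without crossing the expensive region, mimicking how object boundaries separate panoptic targets.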
We base our approach on the transformer models [46,11,39], which have re-
cently shown impressive results in computer vision tasks [8,30,44,3,56]. Specifi-
cally, our approach contains a group of semantic query tokens to parse semantic
segmentation results and a group of panoptic query tokens responsible for the
panoptic segmentation task [30]. In addition to the regular panoptic segmen-
tation model, our approach contains a label generation model, which produces
dense panoptic pseudo-masks depending on the point-level labels and the crite-
ria above. The whole approach is end-to-end. After training, only the panoptic
segmentation branch is kept for testing. Thus, it does not incur additional com-
putation or memory overhead for usage. We conduct thorough experiments to
analyze the proposed approach and the properties of the point-level labels. Mean-
while, we demonstrate new cutting-edge performance with the WSPS problem
on the Pascal VOC [13] and the MS COCO [32] datasets.
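The two query groups can be illustrated with a minimal numpy sketch: each query token cross-attends to the shared image features, and its similarity map plays the role of an unnormalized mask. This is a schematic simplification of MaskFormer/Panoptic SegFormer-style decoding; random features stand in for the shared backbone and encoder output, and all shapes are assumptions.

```python
import numpy as np

def decode_queries(queries, features):
    """Score each query token against every pixel feature: each row of the
    result is an unnormalized (soft) mask over pixel locations."""
    return queries @ features.T / np.sqrt(features.shape[1])  # (Q, N)

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))             # a 4x4 feature map, flattened to 16 tokens
semantic_queries = rng.normal(size=(3, 8))   # one token per semantic class
panoptic_queries = rng.normal(size=(5, 8))   # one token per candidate panoptic segment
sem_masks = decode_queries(semantic_queries, feats)  # (3, 16) semantic mask logits
pan_masks = decode_queries(panoptic_queries, feats)  # (5, 16) panoptic mask logits
```

Both groups read the same features, mirroring how the semantic and panoptic branches share the backbone while producing different kinds of masks.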
In summary, the main contributions of this work are:
– We propose a new paradigm for the WSPS problem, which utilizes a single
point for each target as supervision for training.
Fig. 1. Illustration of the proposed pointly-supervised panoptic segmentation. From
left to right: input images, point labels as supervision, and panoptic segmentation
predictions. The point labels provide a single point annotation for each target, including
both thing instances and stuff classes, which are used at training time only. Please see
Sec. 3 for details. Best viewed in color.
– A novel approach to estimating dense panoptic pseudo-masks by minimizing
the pixel-to-point traversing distance is proposed.
– We implement the approach in an end-to-end framework with transformers,
conduct analytical experiments to study the model and the point-level labels,
and demonstrate state-of-the-art performance on the Pascal VOC and the
MS COCO datasets.
2 Related Works
2.1 Panoptic Segmentation
The panoptic segmentation task [23] unifies semantic segmentation and
instance segmentation: each pixel is uniquely assigned
to one of the stuff classes or one of the thing instances. This problem can
be tackled by combining the semantic and instance segmentation results in a
post-processing manner [23]. Later works such as JSIS [10] adopt a unified net-
work combining a semantic segmentation branch and an instance segmentation
branch. After that, many approaches have been proposed for improvement by
using feature pyramids [22], automatic architecture searching [49], and unifying
the pipeline [28], etc.
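The post-processing style of merging [23] mentioned above can be sketched in a few lines of numpy; this is a minimal illustration under assumed conventions (the function name, the `(class_id, segment_id)` encoding, and resolving overlaps by confidence order are ours), not the exact procedure of any cited work.

```python
import numpy as np

def merge_panoptic(semantic, instances):
    """Combine a semantic map and instance predictions into one panoptic map.

    semantic:  (H, W) int array of stuff/background class ids.
    instances: list of (bool mask, class_id), sorted by descending confidence.
    Returns an (H, W, 2) array of per-pixel (class_id, segment_id);
    segment_id 0 marks stuff pixels.
    """
    panoptic = np.stack([semantic, np.zeros_like(semantic)], axis=-1)
    taken = np.zeros(semantic.shape, dtype=bool)
    for seg_id, (mask, cls) in enumerate(instances, start=1):
        free = mask & ~taken  # higher-confidence instances win overlaps
        panoptic[free, 0] = cls
        panoptic[free, 1] = seg_id
        taken |= free
    return panoptic
```

Because each pixel is claimed at most once, the result satisfies the panoptic constraint that masks are non-overlapping and every pixel carries exactly one label.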
Recently, transformer-based approaches have shown impressive results across
NLP [46,11,39] and computer vision [3,12,56,34] applications. The seminal work
DETR [3] provides a clear and elegant solution for object detection and seg-
mentation. The following work DeformableDETR [56] improves it by using the
deformable transformers to reduce the computation burden and accelerate the
convergence. K-Net [54] adopts an iterative refinement procedure to enhance
the attention masks gradually. MaskFormer [8] proposes to separate the mask
prediction and the classification process. Panoptic SegFormer [30] embraces a
similar idea and adopts an auxiliary localization target to ease the model train-
ing. Our panoptic segmentation approach is based on these works, and we focus
on alleviating the annotation burden by exploiting point-level annotations.
2.2 Weakly-Supervised Segmentation
Weakly supervised segmentation [4,52] aims to alleviate the annotation bur-
den for segmentation tasks by using weak labels for training. According to the
type of tasks, it concerns semantic segmentation [16,36,24,48,26,17], instance
segmentation [1,45], and panoptic segmentation [27,41] problems. According to
the kinds of supervision, these approaches use image-level [16,36,24,48,26,41],
point-level [2,38], scribble-level [47,31], or box-level [43,45,9] labels for training.
Among them, image-level label-based approaches are the most prevalent. These
approaches generally rely on class activation maps (CAM) [55,40] to extract spatial
information from classification models trained with image-level labels. Though great
progress has been achieved by these approaches on the semantic segmentation
task, it is generally hard to distinguish different instances of the same class with
only image-level labels, especially on large-scale datasets with many overlapping
instances. Li et al. [27] propose to address this problem by additionally using
bounding-box annotations, which, however, take much more time to annotate.
PanopticFCN [29] alternatively proposes to use coarse polygons to supervise the
panoptic segmentation model, which are obtained by connecting multiple point
annotations for each target. PSIS [7] proposes to address the instance segmen-
tation problem by using sparsely sampled foreground and background points in
each bounding box. Though these approaches achieve better results, their an-
notation burden is significantly heavier than image-level labels. In this paper,
we try to use a new form of weak annotation for panoptic segmentation, i.e., a
single point for each target. We demonstrate that this supervision can achieve
competitive performance compared with previous approaches while significantly
reducing the annotation burden.
2.3 Point-Level Labels in Visual Tasks
Recently, point-level annotations have drawn interest in a broad range of com-
puter vision tasks. Besides the works concerning detection and segmentation
tasks [7,29,38,2], some works adopt point-level labels to train crowd counting [50,33]
models. SPTS [37] proposes to use points for the text spotting problem.
Chen et al. [5] propose addressing weakly-supervised detection problems
using point labels. Besides, point labels also play an essential role in interactive
segmentation models, where users provide interactive hints through point-level
clicks [53,35,42]. To the best of our knowledge, there are still no approaches to
training panoptic segmentation models using only a single point per target.
3 Approach
In this section, we elaborate on the details of the proposed approach. Fig. 2
illustrates the overall framework, which can be decomposed into two major com-
ponents, a label generation model and a panoptic segmentation model. These
two components share the same backbone and the transformer encoder [56]. The