Zero-shot stance detection based on cross-domain feature enhancement by contrastive learning

Xuechen Zhao, Jiaying Zou, Zhong Zhang, Feng Xie, Bin Zhou∗†, Lei Tian

arXiv:2210.03380v1 [cs.CL] 7 Oct 2022
Abstract
Zero-shot stance detection is challenging because it requires detecting the stance of previously unseen targets in the inference phase. The ability to learn transferable target-invariant features is critical for zero-shot stance detection. In this work, we propose a stance detection approach that can efficiently adapt to unseen targets, the core of which is to capture target-invariant syntactic expression patterns as transferable knowledge. Specifically, we first augment the data by masking the topic words of sentences, and then feed the augmented data to an unsupervised contrastive learning module to capture transferable features. Then, to fit a specific target, we encode the raw texts as target-specific features. Finally, we adopt an attention mechanism, which combines syntactic expression patterns with target-specific features to obtain enhanced features for predicting previously unseen targets. Experiments demonstrate that our model outperforms competitive baselines on four benchmark datasets.
1 Introduction
The goal of stance detection is to automatically identify the attitude or stance (e.g., Favor, Against, or Neutral) expressed in a text towards a specific target or topic¹ [3, 23, 16]. Traditional target-specific stance detection assumes that the training and testing data belong to the same target [18]. However, because unseen targets emerge continuously, collecting data on all targets for training is infeasible in practice. Moreover, it is expensive to obtain high-quality labels for a new target [23]. Therefore, the study of zero-shot stance detection for unseen targets goes beyond the target-specific task and helps to predict stance more flexibly.

For the zero-shot stance detection task, some existing approaches try to improve the model's predictive ability for unseen targets by employing attention mechanisms [1, 33] or by fusing external knowledge [22]. However, transferring knowledge directly from a specific target to an unseen target is often of limited predictive effectiveness because target-specific features remain coupled in the learned representations. [2, 32] use adversarial learning to guide the model to learn target-invariant features via discriminators, which may degrade prediction performance when the distribution of targets is unbalanced. [19] capture target-invariant features by identifying stance feature categories and applying supervised contrastive learning, so their model achieves better generalization capability; however, the data needs to be tagged with soft labels by pretext tasks, which increases the complexity of the model and introduces some noise into the data. Target-specific features are directly related to a specific target, while target-invariant features are generic and transferable regardless of their targets. Consequently, it is crucial to distinguish these two kinds of features when predicting the stance of a text on unseen targets.

∗ School of Computer, National University of Defense Technology, Changsha, China. {zhaoxuechen, zoujiaying20, zhangzhong, xiefeng, binzhou, leitian129}@nudt.edu.cn
† Corresponding Author, Key Lab. of Software Engineering for Complex Systems, Changsha, China.
¹ In this paper, we use the terms target and topic interchangeably.
Both the linguistic and psychological fields divide language into two aspects of representation: 1) syntactic representation and 2) semantic representation. The former reflects the form of language, such as word morphology and sentence structure, and is the external representation of language; the latter comprises concepts and propositions, denoting the meaning referred to by the form of language, which is abstract and constitutes the internal representation of language [15]. Meanwhile, Event-Related Potential (ERP) interaction theory [6] suggests that text semantics is a fusion of syntactic and semantic representations, where syntax and semantics interact to jointly complete the process of sentence comprehension and expression. As shown in Table 1, the same or similar syntactic expression patterns can be used even in sentences with different targets; i.e., although the targets of Example 1 and Example 2 are distinct, both use rhetorical-question expression patterns, so these syntactic expression patterns are target-invariant. The target-invariant syntactic expression patterns and the target-specific features jointly determine a sentence's meaning. Inspired by this, we acquire syntactic expression patterns, which are naturally target-invariant and have an important impact on semantics.
Example 1. Target: Climate Change is a Real Concern. Stance: Favor.
  Sentence: Today Europe is breaking heat records, while Asia is breaking the lowest temperature records! Should we not be concerned?
  Masked: Today Europe is [MASK], while Asia is [MASK]! Should we not be [MASK]?

Example 2. Target: Feminist Movement. Stance: Favor.
  Sentence: When they say men look at women like a piece of meat, what do they even mean, they want to cook & eat her?
  Masked: When they say men look at [MASK] like [MASK] what do they even mean, they want to [MASK]?

Table 1: Examples of syntactic expression patterns.
Furthermore, these syntactic expression patterns can be combined with target-specific features to effectively predict text stances.
More concretely, we propose a Feature Enhanced Zero-shot Stance Detection Model via Contrastive Learning (FECL). First, we capture syntactic expression patterns via contrastive learning as transferable features. Second, based on supervised learning, we make full use of the labeled data to learn semantic information as target-specific features. Finally, a feature fusion module combines the target-invariant and target-specific features to achieve cross-target prediction capability.
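This excerpt does not give the concrete architecture of the feature fusion module, so the following is only a minimal sketch of one plausible reading: an attention-style gate computed from both representations re-weights the target-invariant (syntactic) features before they are concatenated with the target-specific features and classified. All layer names, dimensions, and the gating form are assumptions, not the paper's definition.

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Hypothetical attention-based fusion of target-invariant and
    target-specific features for 3-way stance classification
    (Favor / Against / Neutral)."""

    def __init__(self, dim: int = 768, num_classes: int = 3):
        super().__init__()
        self.attn = nn.Linear(2 * dim, dim)        # produces per-dimension weights
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, invariant: torch.Tensor, specific: torch.Tensor) -> torch.Tensor:
        # invariant, specific: (batch, dim) sentence-level representations
        weights = torch.sigmoid(self.attn(torch.cat([invariant, specific], dim=-1)))
        enhanced = weights * invariant             # re-weighted syntactic features
        fused = torch.cat([enhanced, specific], dim=-1)
        return self.classifier(fused)              # (batch, num_classes) logits

# Toy usage with random tensors standing in for encoder outputs.
fusion = FeatureFusion()
z_invariant = torch.randn(4, 768)   # from the contrastive (masked-text) encoder
z_specific = torch.randn(4, 768)    # from the target-specific encoder
logits = fusion(z_invariant, z_specific)
```

The per-dimension sigmoid gate is used purely for illustration; a token-level softmax attention would be an equally plausible instantiation of the description above.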
The main contributions of this paper can be summarized as follows:

• We explore a novel self-supervised feature learning scheme. The scheme augments the raw texts by masking their topic words and then adopts contrastive learning to capture syntactic expression patterns as a bridge for knowledge transfer (a loss sketch is given after this list).

• We model the stance expression of a text in two parts, syntactic expression and semantic expression, and adopt an attention-based feature fusion mechanism that considers both the target-invariant and target-specific features of texts. This mechanism improves the quality of the feature representation and allows the model to handle prediction settings such as zero-shot and cross-target detection.

• Extensive experiments on four benchmark datasets show that the proposed model performs well on the zero-shot stance detection task. We also extend the model to few-shot and cross-target stance detection, demonstrating the superiority and generalization ability of our approach.
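As referenced in the first contribution above, the unsupervised contrastive objective over masked augmentations could be instantiated with a standard SimCLR-style NT-Xent loss. The sketch below is such a stand-in, not the paper's exact formulation; it assumes the two views of each sentence (e.g., two differently masked versions, or masked and perturbed encodings) have already been embedded.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """SimCLR-style NT-Xent loss over a batch of paired views.

    z1[i] and z2[i] are embeddings of two views of the same sentence;
    all other embeddings in the 2B-sized batch act as negatives."""
    batch = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)        # (2B, d), unit norm
    sim = z @ z.t() / temperature                               # scaled cosine similarities
    mask = torch.eye(2 * batch, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                  # exclude self-pairs
    # The positive for index i is its partner view at (i + B) mod 2B.
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Toy usage: 8 sentences, each encoded twice (e.g., as two masked views).
z_view1 = torch.randn(8, 256)
z_view2 = torch.randn(8, 256)
print(float(nt_xent_loss(z_view1, z_view2)))
```

Minimizing this loss pulls the two masked views of the same sentence together and pushes apart views of different sentences, which matches the "pulling neighbors together, pushing non-neighbors away" behavior described in Section 2.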
2 Related Work
Stance Detection. Stance detection aims to examine the attitude of a text towards a given target [13]. It is widely applied in argument mining [27], fake news/rumor detection [17], fact-checking [28], and epidemic trend prediction [11]. Recently, with the rapid development of social media, various unseen targets have emerged, which brings new challenges to stance detection. Therefore, cross-target and zero-shot stance detection have received extensive attention. Cross-target stance detection trains on source targets and adapts to unseen but correlated targets [20], while zero-shot stance detection trains on multiple labeled targets and then automatically identifies the stance towards unseen targets [19]. The core idea of both is to learn target-invariant features and transfer them to unseen targets. [3] modeled the features of unseen targets via a Bi-Condition LSTM. [33] extracted shared features with a self-attention neural network. [2] applied a target-specific stance dataset [23] to zero-shot stance detection and used adversarial learning to capture target-invariant features. [22] proposed a BERT-based [9] commonsense-knowledge-enhanced graph model for zero-shot stance detection. These works try to extract transferable features from source targets and apply them to unseen targets, but they ignore the most basic syntactic expression patterns, which are naturally target-invariant, are an essential factor affecting semantics, and, combined with target-specific features, can effectively support stance detection.
Contrastive Learning. Contrastive learning is a self-supervised representation learning method initially proposed in computer vision. It aims to learn distinctive representations by pulling semantically close neighbors together and pushing non-neighbors apart [12]. Recent studies have attempted to aid or enhance natural language processing tasks by incorporating contrastive learning [24, 10, 5]. Contrastive learning has proven comparable to supervised learning in many domains and effectively improves the quality of feature learning. However, most traditional stance detection algorithms only use supervised stance label information and underutilize the richer information in the unlabeled text. Some studies [19, 21] have begun to apply contrastive learning to stance detection to improve the feature representation ability of the model. Therefore, our proposed model uses both the existing