Parameter-Efficient Tuning with Special Token Adaptation

Xiaocong Yang†∗, James Y. Huang‡, Wenxuan Zhou‡ and Muhao Chen‡
†Tsinghua University; ‡University of Southern California
yangxc.18@sem.tsinghua.edu.cn; {huangjam,zhouwenx,muhaoche}@usc.edu
∗Work done when visiting USC.
Abstract

Parameter-efficient tuning aims at updating only a small subset of parameters when adapting a pretrained model to downstream tasks. In this work, we introduce PASTA, in which we only modify the special token representations (e.g., [SEP] and [CLS] in BERT) before the self-attention module at each layer in Transformer-based models. PASTA achieves comparable performance to full finetuning on natural language understanding tasks, including text classification and NER, with at most 0.029% of the total parameters trained. Our work not only provides a simple yet effective method of parameter-efficient tuning, which has a wide range of practical applications when deploying finetuned models for multiple tasks, but also demonstrates the pivotal role of special tokens in pretrained language models.[1]

[1] Our code is publicly available at: https://github.com/luka-group/PASTA/
1 Introduction

Built upon pretrained language models (PLMs; Devlin et al. 2019; Liu et al. 2019; Yang et al. 2019; Chowdhery et al. 2022), many recent NLP systems are developed through task-specific finetuning. In this way, the PLM effectively leverages the task-agnostic knowledge captured during self-supervised pretraining and adapts itself to downstream tasks. However, full finetuning poses a challenge to model deployment in multi-task, memory-limited scenarios, where we need to train and store a separate full-sized model for each substantially distinct task. As an alternative, parameter-efficient tuning (Ding et al., 2022) aims at updating only a small number of parameters when adapting PLMs to downstream tasks while keeping most of the model parameters fixed and shared among tasks, thus reducing memory usage.
Figure 1: Examples of vertical attention heads in the 5th and 20th layers of BERT-large, with a random sample from CoLA (Warstadt et al., 2019) as input (heads L5-H3, L5-H5, L5-H6, L5-H7 in the first row; L20-H3, L20-H5, L20-H8, L20-H15 in the second row). Heads in the first row and second row assign most of their maximal attention weights to [CLS] and [SEP], respectively. See Appx. §C for the full attention map.

In this paper, we propose PArameter-efficient tuning with Special Token Adaptation (PASTA),
where we only add trainable vectors to the hidden representations of special tokens[2] at each layer before the multi-head attention module in Transformer-based PLMs. Our work is motivated by the role of special tokens in PLMs. First, special tokens such as [CLS] collect information from the whole input sequence and are typically regarded as the global text representation (Devlin et al., 2019). For sentence-level tasks such as GLUE (Wang et al., 2018), a common practice is to add a new classifier head on top of the [CLS] representation in the last model layer. Thus, by properly updating the [CLS] representations, we can approximate the result of the information collection process in PLMs. Second, many attention heads in PLMs follow a vertical pattern[3], where the attention scores are mostly allocated to either the [CLS] or [SEP] token (Clark et al., 2019; Kovaleva et al., 2019), as illustrated in Fig. 1. Therefore, updates to special tokens can also be disseminated to other tokens during the forward pass through the vertical attention heads (Elhage et al., 2021), enabling the PLMs to adapt to both sentential and lexical tasks.

[2] WLOG, we use the notation of the special tokens [CLS] and [SEP] in BERT for convenience of expression; the method applies in the same way to other paradigms, such as <S> and </S> in RoBERTa (Liu et al., 2019).

[3] Following Voita et al. (2019) and Yao et al. (2021), an attention head is regarded as vertical if at least 90% of tokens assign their maximal attention scores to either [CLS] or [SEP].
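To make the vertical-head criterion in footnote [3] concrete, the following sketch (our own illustration, not the authors' released code; the checkpoint name and example sentence are assumptions) flags heads whose maximal attention weight lands on [CLS] or [SEP] for at least 90% of query tokens:

```python
# Sketch: detect "vertical" attention heads in BERT-large for one input sentence.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-large-uncased"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_attentions=True).eval()

sentence = "The book was written by John."  # any CoLA-style sentence
inputs = tokenizer(sentence, return_tensors="pt")
special_ids = {tokenizer.cls_token_id, tokenizer.sep_token_id}
# Positions of [CLS]/[SEP] in this input sequence.
special_pos = [i for i, t in enumerate(inputs["input_ids"][0].tolist()) if t in special_ids]

with torch.no_grad():
    attentions = model(**inputs).attentions  # one (1, heads, seq, seq) tensor per layer

for layer_idx, layer_att in enumerate(attentions):
    # For each query token, find the key position that receives its maximal attention.
    argmax_keys = layer_att[0].argmax(dim=-1)                 # (heads, seq)
    is_special = torch.isin(argmax_keys, torch.tensor(special_pos))
    vertical_frac = is_special.float().mean(dim=-1)           # fraction per head
    for head_idx, frac in enumerate(vertical_frac.tolist()):
        if frac >= 0.9:                                       # threshold from footnote [3]
            print(f"L{layer_idx + 1}-H{head_idx + 1}: vertical ({frac:.0%})")
```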
By tuning no more than 0.029% of the total parameters, PASTA achieves performance on par with full finetuning and BitFit (Zaken et al., 2022) on GLUE (§4.2). It also outperforms P-tuning v2 (Liu et al., 2022) by 0.6% on CoNLL2003 with 20× fewer additional parameters (§4.3). An ablation study shows that we can further reduce the trainable parameters to 0.009% with only a slight performance drop (§4.4), showing the merit of adapting special token representations.
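For a rough sense of the parameter budget, a back-of-the-envelope estimate under assumptions of ours (BERT-large backbone with 24 layers and hidden size 1024, roughly 335M total parameters, one added vector per special token per layer, classifier head not counted), rather than a breakdown reported in this paper:

```python
# Illustrative estimate of PASTA's added parameters under the assumptions above.
layers, hidden = 24, 1024
total_params = 335_000_000
for num_special in (2, 3, 4):  # e.g., [CLS]+[SEP] vs. multi-segment inputs
    added = layers * hidden * num_special
    print(f"{num_special} special tokens per input: {added:,} added parameters "
          f"({added / total_params:.3%} of the backbone)")
```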
2 Related Work

A recent survey (Ding et al., 2022) categorizes parameter-efficient tuning methods into three types. Addition methods (Houlsby et al., 2019; Lester et al., 2021; Liu et al., 2022) introduce a small number of additional trainable parameters while keeping those in the PLM unchanged. Specification methods (Zaken et al., 2022; Guo et al., 2021; Zhao et al., 2020) update a portion of the parameters in the PLM while keeping the others frozen. Reparameterization methods (Aghajanyan et al., 2021; Hu et al., 2021; Qin et al., 2021) modify PLMs' structures into parameter-efficient forms. Our method belongs to the addition-based methods and follows the basic setting of P-tuning v2 (Liu et al., 2022), where newly initialized hidden representations of tokens are inserted into each Transformer layer. Different from most prompt tuning methods that introduce new tokens, we add the introduced vectors to the hidden states of special tokens and keep the sequence length unchanged.
Previous works use probing tasks (Wu et al., 2020) and pruning methods (Prasanna et al., 2020) to study the roles of different modules inside BERT. It has been shown that functional specialization exists among BERT self-attention heads (Clark et al., 2019) and that vertical attention heads[3] take up a large portion of them (Yao et al., 2021). Kovaleva et al. (2019) find that vertical attention heads are almost exclusively associated with attention to the [SEP] or [CLS] tokens, and Clark et al. (2019) conclude that heads in early layers often attend to [CLS] while heads in middle layers attend to [SEP]. In this work, we demonstrate that adapting the hidden representations of special tokens is sufficient to bring the performance of PLMs to the level of full finetuning.

Figure 2: Architecture of a PASTA layer in the Transformer (diagram: trainable vectors $e(v_1^l)$, $e(v_2^l)$, $e(v_3^l)$ added to $h([CLS])$, $h([SEP])$, $h([SEP])$ before the frozen Multi-Head Attention, Add & Norm, Feed-Forward Network, and Add & Norm blocks). Skip-connections in Transformers are not shown for brevity. At layer $l$ we add a trainable vector $e(v_p^l) \in \mathbb{R}^d$ to the hidden representation of the $p$-th special token in the input sequence, and freeze the weights of the PLM.
3 PASTA

Given a large PLM, our goal is to develop a parameter-efficient tuning method that only updates a small set of parameters when adapting to a downstream task. To this end, we propose a simple yet effective method called PASTA, in which we train a hidden vector for every special token at each Transformer layer, along with a task-specific classifier, while freezing the parameters of the PLM.
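As a rough sketch of this training setup (our own illustration under assumed details such as the backbone checkpoint, the number of special tokens per input, and zero initialization of the new vectors; the authors' implementation may differ), only the added vectors and the classifier are exposed to the optimizer:

```python
# Hypothetical PASTA-style training setup: freeze the PLM; train only the per-layer
# special-token vectors e(v_p^l) and a task-specific classifier head.
import torch
from torch import nn
from transformers import AutoModel

plm = AutoModel.from_pretrained("bert-large-uncased")  # assumed backbone
for param in plm.parameters():
    param.requires_grad = False  # the PLM stays fixed and can be shared across tasks

num_layers, hidden = plm.config.num_hidden_layers, plm.config.hidden_size
num_special = 3  # e.g., [CLS], [SEP], [SEP] for a sentence-pair input (assumption)

# One trainable vector per special token and per layer; zero init is an assumption,
# chosen so that training starts from the frozen PLM's original behavior.
pasta_vectors = nn.ParameterList(
    [nn.Parameter(torch.zeros(num_special, hidden)) for _ in range(num_layers)]
)
classifier = nn.Linear(hidden, 2)  # task-specific head on the final [CLS] state

trainable = list(pasta_vectors.parameters()) + list(classifier.parameters())
optimizer = torch.optim.AdamW(trainable, lr=1e-3)  # only these receive gradient updates
```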
3.1 Special Token Adaptation

The special token adaptation is illustrated in Fig. 2. Although these adaptations are not directly applied to non-special tokens, changes in the hidden states of special tokens can be effectively disseminated to other tokens via self-attention during forward passes, thanks to the prevalence of vertical attention heads[3] in PLMs.
Specifically, denote the inputs to the $l$-th Transformer layer as $H^l = \{h_i^l\}_{i=1}^N$ with $h_i^l \in \mathbb{R}^d$, where $N$ is the number of input tokens and $d$ is the hidden size. PASTA modifies the inputs as follows:
$$H^l_{\text{mod}} = \{h_i^l + m_i^l\}_{i=1}^N, \qquad H^{l+1} = \mathrm{Trm}^l(H^l_{\text{mod}}),$$
where $\mathrm{Trm}^l$ is the $l$-th Transformer layer and $m_i^l \in \mathbb{R}^d$ is our special token adaptation, defined as
$$m_i^l = \begin{cases} \mathbf{0} & \text{if token } i \text{ is not a special token,} \\ e(v_p^l) & \text{if token } i \text{ is the } p\text{-th special token.} \end{cases}$$
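The case analysis above maps directly onto a small per-layer operation. The sketch below (our own minimal illustration, not the authors' exact implementation; it assumes the special-token positions of each input are known) builds $H^l_{\text{mod}}$ by adding $e(v_p^l)$ at the $p$-th special-token position and leaving all other positions unchanged:

```python
# Minimal sketch of the PASTA modification at one Transformer layer:
#   H_mod^l = {h_i^l + m_i^l},  where m_i^l = e(v_p^l) if token i is the p-th
#   special token and m_i^l = 0 otherwise.
import torch

def apply_pasta(hidden_states, special_positions, layer_vectors):
    """hidden_states: (batch, seq_len, d) inputs H^l to the l-th layer.
    special_positions: (batch, P) indices of the special tokens in each sequence.
    layer_vectors: (P, d) trainable vectors e(v_1^l), ..., e(v_P^l) for this layer."""
    m = torch.zeros_like(hidden_states)                  # m_i^l = 0 for ordinary tokens
    batch_idx = torch.arange(hidden_states.size(0)).unsqueeze(-1)
    m[batch_idx, special_positions] = layer_vectors      # m_i^l = e(v_p^l) at special tokens
    return hidden_states + m                             # H_mod^l, fed to the frozen Trm^l

# Toy example: 2 sequences, 8 tokens, hidden size 4, [CLS] at position 0, [SEP] at 7.
H = torch.randn(2, 8, 4)
positions = torch.tensor([[0, 7], [0, 7]])
vectors = torch.nn.Parameter(torch.zeros(2, 4))          # e(v_1^l), e(v_2^l)
H_mod = apply_pasta(H, positions, vectors)
assert torch.equal(H_mod, H)  # zero-initialized vectors leave the inputs unchanged
```

In a full model, these vectors would be attached to each frozen layer (e.g., via forward hooks or a lightly modified Transformer block) so that every layer $l$ receives $H^l_{\text{mod}}$ instead of $H^l$.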