
illustrated in Fig. 1. Therefore, updates to special tokens can also be disseminated to other tokens during the forward pass through the vertical attention heads (Elhage et al., 2021), enabling the PLMs to adapt to both sentential and lexical tasks.
By tuning as few as 0.029% of the total parameters, PASTA achieves performance on par with full finetuning and BitFit (Zaken et al., 2022) on GLUE (§4.2). It also outperforms P-tuning v2 (Liu et al., 2022) by 0.6% on CoNLL2003 with 20× fewer additional parameters (§4.3). The ablation study shows that the trainable parameters can be further reduced to 0.009% with only a slight performance drop (§4.4), demonstrating the merit of adapting special token representations.
2 Related Work
A recent survey (Ding et al., 2022) categorizes three types of parameter-efficient tuning methods. Addition methods (Houlsby et al., 2019; Lester et al., 2021; Liu et al., 2022) introduce a small number of additional trainable parameters while keeping those in the PLM unchanged. Specification methods (Zaken et al., 2022; Guo et al., 2021; Zhao et al., 2020) update a portion of parameters in the PLM while keeping others frozen. Reparameterization methods (Aghajanyan et al., 2021; Hu et al., 2021; Qin et al., 2021) modify PLMs' structures into parameter-efficient forms. Our method belongs to the addition-based methods and follows the basic settings of P-tuning v2 (Liu et al., 2022), where newly initialized hidden representations of tokens are inserted into each Transformer layer. Unlike most prompt tuning methods, which introduce new tokens, we add the introduced vectors to the hidden states of special tokens and keep the sequence length unchanged.
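To make the contrast concrete, here is a minimal, illustrative PyTorch sketch; the tensor names and sizes are our own assumptions rather than the authors' code. Prompt tuning concatenates new trainable token representations and grows the sequence, whereas PASTA adds a trainable vector in place at a special token position and keeps the length fixed.

```python
import torch

batch, seq_len, hidden, k = 2, 128, 768, 20            # assumed sizes, for illustration only

hidden_states = torch.randn(batch, seq_len, hidden)    # hidden states at some layer
prefix = torch.nn.Parameter(torch.randn(k, hidden))    # prompt-tuning-style new vectors
pasta_vec = torch.nn.Parameter(torch.zeros(hidden))    # PASTA vector for one special token

# Prompt tuning (P-tuning v2 style): new token representations are inserted,
# so the sequence length grows from seq_len to seq_len + k.
with_prefix = torch.cat([prefix.expand(batch, -1, -1), hidden_states], dim=1)
print(with_prefix.shape)   # torch.Size([2, 148, 768])

# PASTA: the trainable vector is added to an existing position
# (here position 0, e.g. [CLS]), so the sequence length is unchanged.
adapted = hidden_states.clone()
adapted[:, 0, :] = adapted[:, 0, :] + pasta_vec
print(adapted.shape)       # torch.Size([2, 128, 768])
```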
Previous works use probing tasks (Wu et al., 2020) and pruning methods (Prasanna et al., 2020) to study the roles of different modules inside BERT. It has been shown that functional specialization exists in BERT self-attention heads (Clark et al., 2019) and that vertical attention heads take up a large portion of them (Yao et al., 2021). Kovaleva et al. (2019) find that vertical attention heads are almost exclusively associated with attention to [SEP] or [CLS] tokens, and Clark et al. (2019) conclude that heads in early layers often attend to [CLS] while those in middle layers attend to [SEP]. In this work, we demonstrate that adapting the hidden representations of special tokens is sufficient to bring the
performance of PLMs to the level of full finetuning.

Figure 2: Architecture of the PASTA layer in a Transformer. Skip connections are not shown for brevity. At layer $l$, we add a trainable vector $e(v^l_p) \in \mathbb{R}^d$ to the hidden representation of the $p$-th special token in the input sequence, and freeze the weights of the PLM.
3 PASTA
Given a large PLM, our goal is to develop a parameter-efficient tuning method that only updates a small set of parameters when adapting to a downstream task. To this end, we propose a simple yet effective method called PASTA, in which we train a hidden vector for every special token at each Transformer layer, along with a task-specific classifier, while freezing the parameters of the PLM.
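As a rough sketch of the resulting parameter budget (the layer count, hidden size, number of special tokens, backbone size, and label count below are assumptions for illustration, not the paper's exact configuration), the added trainable parameters amount to one $d$-dimensional vector per special token per layer, plus the classifier head:

```python
# Back-of-the-envelope count of PASTA's trainable parameters.
# All values below are illustrative assumptions, not the paper's exact setup.
num_layers = 24            # e.g. a 24-layer PLM
hidden_size = 1024
num_special_tokens = 3     # e.g. [CLS] and two [SEP] in a sentence-pair input
num_labels = 2

pasta_params = num_layers * num_special_tokens * hidden_size   # 73,728 in this example
classifier_params = hidden_size * num_labels + num_labels      # linear classification head
backbone_params = 335_000_000                                  # assumed frozen PLM size

trainable = pasta_params + classifier_params
print(f"{trainable} trainable parameters "
      f"({100 * trainable / backbone_params:.3f}% of the assumed backbone)")
```

The exact percentage reported in the paper depends on the backbone, the number of special tokens in the input format, and the size of the task classifier.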
3.1 Special Token Adaptation
The special token adaptation is illustrated in Fig. 2. Although these adaptations are not directly applied to non-special tokens, changes in special token hidden states can be effectively disseminated to other tokens via self-attention during the forward pass, thanks to the prevalence of vertical attention heads in PLMs.
Specifically, denote the inputs to the $l$-th Transformer layer as $H^l = \{h^l_i\}_{i=1}^{N}$, $h^l_i \in \mathbb{R}^d$, where $N$ is the number of input tokens and $d$ is the hidden size. PASTA modifies the inputs as follows:
$$H^l_{\mathrm{mod}} = \{h^l_i + m^l_i\}_{i=1}^{N}, \qquad H^{l+1} = \mathrm{Trm}^l(H^l_{\mathrm{mod}}),$$
where $\mathrm{Trm}^l$ is the $l$-th Transformer layer and $m^l_i \in \mathbb{R}^d$ is our special token adaptation, defined as follows:
$$m^l_i = \begin{cases} \mathbf{0} & \text{if token } i \text{ is not a special token} \\ e(v^l_p) & \text{if token } i \text{ is the } p\text{-th special token} \end{cases}$$