
Causal Intervention-based Prompt Debiasing for Event Argument Extraction
Jiaju Lin1, Jie Zhou2, Qin Chen1
1School of Computer Science and Technology, East China Normal University
2School of Computer Science, Fudan University
jiaju lin@stu.ecnu.edu.cn, jie zhou@fudan.edu.cn, qchen@cs.ecnu.edu.cn
Abstract
Prompt-based methods have become increasingly popular for information extraction tasks, especially in low-data scenarios. By reformulating a fine-tuning task as a pre-training objective, prompt-based methods effectively alleviate the data scarcity problem. However, previous research has seldom investigated the discrepancies among different prompt formulation strategies. In this work, we compare two kinds of prompts, name-based prompts and ontology-based prompts, and reveal how ontology-based prompt methods outperform their counterpart in zero-shot event argument extraction (EAE). Furthermore, we analyze the potential risks of ontology-based prompts from a causal view and propose a debiasing method based on causal intervention. Experiments on two benchmarks demonstrate that, when modified by our debiasing method, the baseline model becomes both more effective and more robust, with a significant improvement in resistance to adversarial attacks.
1 Introduction
Event argument extraction (EAE) plays an important role in natural language processing. It has become widely deployed in downstream tasks such as natural language understanding and decision making [Zhang et al., 2022a]. During the past few years, the pretrain-finetune paradigm of large language models has achieved great success. However, it requires training a new task-specific head with plenty of labeled instances for every newly appearing event type, which leads to ‘data hunger’ and sets barriers to real-world deployment. Fortunately, a novel paradigm, namely prompting, provides a promising approach to address the data scarcity problem [Liu et al., 2021b]. By reformulating a fine-tuning task as a pre-training objective, prompt-based methods have become the best-performing baselines, especially in low-data scenarios. Nonetheless, how to design a proper prompt is still an open problem. Although automatic prompt generation methods have experienced a great surge in recent years, manual prompts still dominate the information extraction area. Current state-of-the-art prompt-based methods [Li et al., 2021] are mainly based on manually designed prompts. Besides, previous work [Ma et al., 2022a] also verifies the ineffectiveness of automatic prompts in EAE. Based on these observations, we wonder which prompt design strategy is better and how such prompts facilitate extraction.
In this paper, to answer the above questions, we divide current manual prompts into two categories: 1) name-based prompts, which are formed by concatenating the names of the arguments belonging to an event type; 2) ontology-based prompts, which are derived from the event ontology, i.e., the description of an event type in natural language. We carry out a quantitative analysis of the predictions made with these two kinds of prompts and find that, compared with name-based prompts, ontology-based prompts provide additional syntactic structure information that facilitates extraction. By filtering out improper candidate arguments, ontology-based prompts improve the model’s overall performance. Nevertheless, every coin has two sides, and hidden risks are introduced along with the beneficial information. We theoretically identify the spurious correlation caused by ontology-based prompts from a causal view. Based on the structural causal model, we find that a model trained with ontology-based prompts may be biased toward entities that share a common syntactic role with an argument name in the prompt, e.g., both the entity in the sentence and the argument name in the prompt are subjects.
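To illustrate the distinction (the concrete templates vary across datasets, so the following is only an illustrative example), for an attack event a name-based prompt might simply concatenate the role names, e.g., “attacker, target, instrument, place”, whereas an ontology-based prompt might verbalize the event ontology as a natural-language template such as “<arg1> attacked <arg2> using <arg3> at <arg4> place”, in which each placeholder is to be filled with an extracted argument.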
We further propose to conduct causal intervention on the state-of-the-art method. Via backdoor adjustment, the intervened model rectifies the confounder bias stemming from similar syntax. Experiments are performed on two well-known benchmarks, namely RAMS and WikiEvents. The improvements in performance demonstrate the effectiveness of our proposed approach. Moreover, we evaluate the robustness of our method by exposing the model to adversarial attacks and to noise during training. The results show that, modified by our adjustments, the model becomes considerably more robust.
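For reference, a minimal sketch of the backdoor adjustment in its generic form (with $X$ the treatment, $Y$ the outcome, and $Z$ the confounder; the specific variables for prompt-based EAE are defined in the later sections) estimates the interventional distribution by stratifying over the confounder and averaging:
$$P(Y \mid do(X)) \;=\; \sum_{z} P(Y \mid X, Z=z)\, P(Z=z).$$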
Our contributions are threefold:
• We propose a causal intervention-based prompt debiasing method for event argument extraction, built on the bias revealed by investigating how ontology-based prompts work in the zero-shot EAE task.
• We rethink prompt-based event argument extraction from a causal view to analyze the causal effects among different elements and reduce the biases via backdoor adjustments.
• Extensive experiments on the cutting-edge method and two well-known benchmarks demonstrate the effectiveness and robustness of our approach.