On the Interpolation of Contextualized Term-based Ranking with BM25 for Query-by-Example Retrieval

Amin Abolghasemi
m.a.abolghasemi@liacs.leidenuniv.nl
Leiden University
Leiden, Netherlands
Arian Askari
a.askari@liacs.leidenuniv.nl
Leiden University
Leiden, Netherlands
Suzan Verberne
s.verberne@liacs.leidenuniv.nl
Leiden University
Leiden, Netherlands
ABSTRACT
Term-based ranking with pre-trained transformer-based language models has recently gained attention as it brings the contextualization power of transformer models into highly efficient term-based retrieval. In this work, we examine the generalizability of two of these deep contextualized term-based models in the context of query-by-example (QBE) retrieval, in which a seed document acts as the query to find relevant documents. In this setting, where queries are much longer than common keyword queries, BERT inference at query time is problematic as it involves quadratic complexity. We investigate TILDE and TILDEv2, both of which use the BERT tokenizer as their query encoder. With this approach, no BERT inference is needed at query time, and the query can be of any length. Our extensive evaluation on the four QBE tasks of the SciDocs benchmark shows that in a query-by-example retrieval setting TILDE and TILDEv2 are still less effective than a cross-encoder BERT ranker. However, we observe that BM25 shows competitive ranking quality compared to TILDE and TILDEv2, in contrast to the relative performance of these three models on retrieval with short queries reported in prior work. This result raises the question whether contextualized term-based ranking models are beneficial in the QBE setting. We follow up on our findings by studying the interpolation between the relevance scores of TILDE (TILDEv2) and BM25. We conclude that these two contextualized term-based ranking models capture different relevance signals than BM25, and that combining the different term-based rankers results in statistically significant improvements in QBE retrieval. Our work sheds light on the challenges of retrieval settings that differ from the common evaluation benchmarks. Studying other contextualized term-based ranking models in QBE settings would be valuable future work.
CCS CONCEPTS
• Information systems → Retrieval models and ranking; Evaluation of retrieval results.
KEYWORDS
Query-by-example retrieval, term-based retrieval, Transformer models, BERT-based ranking
This work is licensed under a Creative Commons Attribution International 4.0 License.
ICTIR ’22, July 11–12, 2022, Madrid, Spain.
©2022 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-9412-3/22/07.
https://doi.org/10.1145/3539813.3545133
ACM Reference Format:
Amin Abolghasemi, Arian Askari, and Suzan Verberne. 2022. On the Interpolation of Contextualized Term-based Ranking with BM25 for Query-by-Example Retrieval. In Proceedings of the 2022 ACM SIGIR International Conference on the Theory of Information Retrieval (ICTIR '22), July 11–12, 2022, Madrid, Spain. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3539813.3545133
1 INTRODUCTION
Query-by-Example (QBE) retrieval is an Information Retrieval (IR) setting in which a seed document¹ acts as the query to represent the user's information need and the retrieval engine searches over a collection of the same type of documents [1, 19, 20, 29]. This retrieval setup is typical in professional, domain-specific tasks such as legal case law retrieval [1, 2], patent prior art search [11, 24, 25], and scientific literature search [1, 19, 20]. While using a document as a query can be challenging due to its length and complex semantic structure, prior work has shown that traditional term-based retrieval models like BM25 [27] are highly effective when used in QBE retrieval [1, 2, 28].
Recently, deep contextualized term-based retrieval models have gained attention as they bring the contextualization power of pre-trained transformer-based language models into highly efficient term-based retrieval. Examples of such models are DeepImpact [18], SPLADE [10], SPLADEv2 [9], TILDE [34], TILDEv2 [33], COIL [12], and uniCOIL [16]. Here, we specifically investigate TILDE, which is a term-independent likelihood model, and its follow-up TILDEv2, which is a deep contextualized lexical exact matching model.
TILDE and TILDEv2, which are introduced as term-based re-ranking models, follow a recent paradigm in term-based retrieval where term importance is pre-computed as scalar term weights. Moreover, to predict the relevance score, both models use the BERT tokenizer as their query encoder, which means that they do not need to perform any BERT inference at query time to encode the query. However, tokenizer-based encoding of the query trades off query representation quality, and therefore effectiveness, for higher efficiency at inference time [33]. While the effectiveness of these models has been evaluated on tasks and benchmarks with short queries, e.g., MSMARCO Passage Ranking [21] and the TREC DL Track [7], in this paper we evaluate them in the aforementioned QBE retrieval setting, where queries are much longer than common keyword queries. In this regard, we address the following research questions:
RQ1 How effective are TILDE and TILDEv2 in query-by-example retrieval?
¹Throughout this paper, we use the term "document" to refer to a unit of retrieval [17].
A specic direction in answering RQ1 is to investigate the rank-
ing quality of TILDE and TILDEv2 in comparison with the eective
cross-encoder BERT ranker [
1
,
22
], which is described in section
2.4. We are interested in this direction for two reasons. First, the
cross-encoder BERT ranker exhibits quadratic complexity in both
space and time with respect to the input length [
17
] and this is ag-
gravated in QBE where we have long queries. TILDE and TILDEv2,
however, do not need any BERT inference at query time. Second,
due to the maximum input length of BERT, cross-encoder BERT
ranker, which uses the concatenation of the query and the docu-
ment, might not cover the whole query and document tokens in a
QBE setting, whereas in TILDE and TILDEv2, the query can be of
any length and documents are covered up to the maximum length
of BERT.
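As an illustration of the second point (a minimal sketch, not the exact implementation used in our experiments; the checkpoint name is a placeholder for a BERT-based cross-encoder fine-tuned for ranking), the concatenated query-document input is truncated to BERT's 512-token limit before scoring:

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Placeholder checkpoint: in practice a BERT cross-encoder fine-tuned for ranking.
MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)

def cross_encoder_score(query: str, document: str) -> float:
    # Input is [CLS] query [SEP] document [SEP]; tokens beyond 512 are dropped,
    # which in QBE can cut off much of the document, since the query is itself long.
    enc = tokenizer(query, document, truncation="longest_first",
                    max_length=512, return_tensors="pt")
    with torch.no_grad():
        return model(**enc).logits.squeeze().item()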
Additionally, since TILDEv2 pre-computes term weights only for the tokens that occur in a document, one risk is that it might aggravate the vocabulary mismatch problem. A typical approach to address this issue is to use document expansion methods. Zhuang and Zuccon [33] use TILDE as the document expansion model for TILDEv2. We adopt that approach for our task and further investigate the impact of token-based document expansion with TILDE on the ranking quality of TILDEv2 in a QBE retrieval setting.
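As a schematic sketch of this expansion step (our illustration; vocab_logprobs stands in for TILDE's pre-computed log probabilities over the BERT vocabulary for one document, and k is a tunable expansion depth), the top-scoring tokens that do not yet occur in the document are appended to it:

import torch

def expand_document(doc_tokens: list[str],
                    vocab_logprobs: torch.Tensor,
                    id_to_token: dict[int, str],
                    k: int = 128) -> list[str]:
    # Rank the whole BERT vocabulary by the document's predicted token likelihood.
    ranked_ids = torch.argsort(vocab_logprobs, descending=True).tolist()
    existing = set(doc_tokens)
    new_tokens = []
    for token_id in ranked_ids:
        token = id_to_token[token_id]
        if token not in existing:
            new_tokens.append(token)
            if len(new_tokens) == k:
                break
    # TILDEv2 then computes term weights over the expanded token sequence.
    return doc_tokens + new_tokens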
Apart from comparing TILDE and TILDEv2 to the cross-encoder BERT ranker, we also make a comparison to traditional lexical matching models (BM25 and probabilistic language models), which have been shown to be strong baselines on QBE tasks in prior work [2, 28]:
RQ2 What is the effectiveness of traditional lexical matching models with varying tokenization strategies in comparison to TILDE and TILDEv2?
To answer RQ2, we investigate the effect of using the BERT tokenizer [8] as a pre-processing step for traditional term-based retrieval models. By doing so, we align the index vocabulary of the traditional models with that of TILDE and TILDEv2, which could make the comparison fairer.
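As a minimal sketch of this pre-processing (assuming the standard bert-base-uncased checkpoint; the domain-specific variant of RQ4 would swap in the corresponding tokenizer):

from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

def bert_pretokenize(text: str) -> str:
    # WordPiece-tokenize and rejoin with spaces, so that a whitespace-based
    # BM25/LM index shares its vocabulary with TILDE and TILDEv2.
    return " ".join(tokenizer.tokenize(text))

# The output consists of WordPiece subwords, with '##' marking continuation pieces.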
We will see in Section 4 that BM25 shows competitive ranking quality in comparison to TILDE and TILDEv2 on our QBE benchmark. Given this similar average quality, we are interested in whether the relevance signals of TILDE and TILDEv2 differ from that of BM25, in order to find out whether the methods are complementary to each other. To this end, we investigate the following research question:
RQ3 To what extent do TILDE and TILDEv2 encode a different relevance signal from BM25?
To address this question, as described in detail in Section 3.3, we analyze the effect of interpolating the scores of TILDE and TILDEv2 with BM25.
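As a minimal sketch of such an interpolation (assuming per-query min-max normalization and an interpolation weight alpha tuned on a validation set; the exact procedure is given in Section 3.3):

def min_max_normalize(scores: dict[str, float]) -> dict[str, float]:
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # guard against all-equal scores
    return {doc: (s - lo) / span for doc, s in scores.items()}

def interpolate(bm25: dict[str, float], tilde: dict[str, float],
                alpha: float) -> dict[str, float]:
    # Linear combination of the two normalized score lists, assuming both
    # cover the same candidate documents for a given query.
    b, t = min_max_normalize(bm25), min_max_normalize(tilde)
    return {doc: alpha * b[doc] + (1.0 - alpha) * t[doc] for doc in b}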
Since TILDE and TILDEv2 are introduced as re-ranking models, we use four different tasks from the SciDocs evaluation benchmark [5] as a domain-specific QBE benchmark. This benchmark uses scientific paper abstracts as queries and documents. The retrieval setting in these tasks is suitable for a re-ranking setup because of the number of documents to be ranked per query. Since we work in a domain-specific evaluation setting, we also address the following research question:
RQ4 To what extent does a highly tailored, domain-specific pre-trained BERT model affect the effectiveness of TILDE and TILDEv2 in comparison to a BERT-base model?
In summary, our main contributions in this work are three-fold:
• We show that two recent transformer-based lexical models (TILDE and TILDEv2) are less effective in query-by-example retrieval than was expected based on results reported for ad hoc retrieval. This indicates that QBE retrieval is structurally different from other IR settings and requires special attention in method development;
• We show that the relevance signals of TILDE and TILDEv2 can be complementary to that of BM25, as interpolation of the methods leads to an improvement in ranking effectiveness;
• We investigate interpolations of BM25 with TILDE and TILDEv2 in an ideal setting where the optimal interpolation weight is known a priori, and by doing so, we show that more stratified approaches to interpolation could yield higher gains from combining BM25 with TILDE and TILDEv2.
In Section 2 we describe the retrieval models used in this work. In Section 3 we provide details about our methods and experiments, and in Section 4 we analyze the results and discuss the answers to our research questions. Section 5 is dedicated to further analysis of the results, and finally, Section 6 provides the conclusion. The code used in this paper is available at: https://github.com/aminvenv/lexica
2 BACKGROUND: RETRIEVAL MODELS
In this section, we briefly introduce the retrieval models that we implement and evaluate in our experiments.
2.1 Traditional lexical matching models
BM25. For BM25 [27], we use the Elasticsearch implementation² with the parameters k = 2.75 and b = 1, which were tuned on the validation set.
Probabilistic language models. For language modeling (LM) based retrieval [4, 13, 26], we use the built-in similarity functions of Elasticsearch to implement a language model with Jelinek-Mercer (JM) smoothing [32].
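As an illustration of this setup (a minimal sketch using the elasticsearch Python client; the index and field names are placeholders, Elasticsearch names the BM25 k parameter k1, and the lambda value shown is illustrative rather than our tuned setting):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="scidocs-abstracts",  # placeholder index name
    body={
        "settings": {
            "similarity": {
                "tuned_bm25": {"type": "BM25", "k1": 2.75, "b": 1.0},
                "lm_jm": {"type": "LMJelinekMercer", "lambda": 0.7},  # illustrative lambda
            }
        },
        "mappings": {
            "properties": {
                # Each field is scored with the similarity chosen at index time.
                "abstract_bm25": {"type": "text", "similarity": "tuned_bm25"},
                "abstract_lm": {"type": "text", "similarity": "lm_jm"},
            }
        },
    },
)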
2.2 Term Independent Likelihood Model: TILDE
TILDE is a tokenizer-based term-based retrieval model that follows a term independence assumption and formulates the likelihood of a query as follows:

$$\mathrm{TILDE\text{-}QL}(q \mid d) = \sum_{i}^{|q|} \log \big( P_\theta(q_i \mid d) \big) \qquad (1)$$
in which q is the query and d is the document. As Figure 1 shows, to compute the relevance score, the text of a document d is fed as input to BERT, and the log probability of each vocabulary token is estimated using a language modeling head on top of the BERT [CLS] token output. In other words, we pre-compute the token log probabilities $P_\theta(\cdot \mid d)$ for each document at indexing time.
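As a minimal sketch of query-time scoring with Equation (1) (assuming the per-document log probabilities over the BERT vocabulary have already been pre-computed and stored; variable names are ours):

from transformers import BertTokenizerFast
import torch

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

def tilde_ql(query: str, doc_vocab_logprobs: torch.Tensor) -> float:
    # Query "encoding" is plain tokenization: no BERT inference at query
    # time, and the query can be arbitrarily long.
    query_ids = tokenizer(query, add_special_tokens=False)["input_ids"]
    # Equation (1): sum log P(q_i | d) over the query tokens, looked up in
    # the pre-computed [vocab_size] tensor for this document.
    return float(sum(doc_vocab_logprobs[i] for i in query_ids))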
² https://github.com/elastic/elasticsearch