
A specific direction in answering RQ1 is to investigate the ranking quality of TILDE and TILDEv2 in comparison with the effective cross-encoder BERT ranker [1, 22], which is described in Section 2.4. We are interested in this direction for two reasons. First, the cross-encoder BERT ranker exhibits quadratic complexity in both space and time with respect to the input length [17], and this is aggravated in QBE, where we have long queries. TILDE and TILDEv2, however, do not need any BERT inference at query time. Second, due to the maximum input length of BERT, the cross-encoder BERT ranker, which uses the concatenation of the query and the document as input, might not cover all query and document tokens in a QBE setting, whereas in TILDE and TILDEv2, the query can be of any length and documents are covered up to the maximum input length of BERT.
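To make the length constraint concrete, the following is a minimal sketch using the HuggingFace tokenizer API; the model name and text lengths are illustrative, not taken from the paper.

```python
from transformers import AutoTokenizer

# Minimal sketch of the length problem: in QBE the query is itself a long
# abstract, so query + document can exceed BERT's 512-token window.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

query_abstract = "query token " * 200    # stand-in for a ~400-token abstract
candidate_doc = "document token " * 200  # stand-in for a candidate abstract

encoded = tokenizer(
    query_abstract,
    candidate_doc,
    truncation="longest_first",  # drops tokens once the pair exceeds max_length
    max_length=512,
    return_tensors="pt",
)
# Both texts are cut down to fit; part of the query and document is never seen
# by the cross-encoder, unlike TILDE/TILDEv2 where the query is unrestricted.
print(encoded["input_ids"].shape)  # torch.Size([1, 512])
```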
Additionally, since TILDEv2 pre-computes the term weights only
for those tokens existing in the documents, one risk is that it might
aggravate the vocabulary mismatch problem. A typical approach to
address this issue is to use document expansion methods. Zhuang
and Zuccon [33] use TILDE as their document expansion model
for TILDEv2. We adopt that approach for our task and further
investigate the impact of token-based document expansion with
TILDE on the ranking quality of TILDEv2 in a QBE retrieval setting.
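As a rough, non-authoritative sketch of what this expansion step looks like, the snippet below appends the top-scoring unseen tokens predicted by TILDE's language modeling head to each document before indexing; the checkpoint identifier and top-k value are assumptions, not details taken from the paper.

```python
import torch
from transformers import BertLMHeadModel, BertTokenizerFast

# Assumed checkpoint id; TILDE is a BERT model with a language modeling head.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertLMHeadModel.from_pretrained("ielab/TILDE")
model.eval()

def expand_document(doc_text: str, top_k: int = 128) -> str:
    """Append the top_k highest-scoring tokens not already in the document."""
    inputs = tokenizer(doc_text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        # Token distribution from the LM head over the [CLS] position.
        scores = model(**inputs).logits[0, 0, :]
    existing = set(inputs["input_ids"][0].tolist())
    ranked = torch.argsort(scores, descending=True).tolist()
    # A real implementation would also filter special tokens, subwords, and stopwords.
    new_tokens = [tokenizer.decode([t]) for t in ranked if t not in existing][:top_k]
    return doc_text + " " + " ".join(new_tokens)
```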
Apart from comparing TILDE and TILDEv2 to the cross-encoder
BERT ranker, we also make a comparison to traditional lexical
matching models (BM25 and Probabilistic Language models), which
have been shown to be strong baselines for QBE tasks in prior work [2, 28]:
RQ2 What is the effectiveness of traditional lexical matching models with varying tokenization strategies in comparison to TILDE and TILDEv2?
To answer RQ2, we investigate the effect of using the BERT tokenizer [8] as pre-processing for traditional term-based retrieval models. By doing so, we align the index vocabulary of the traditional models with that of TILDE and TILDEv2, which could make our comparison fairer.
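As an illustration, here is a minimal sketch of this pre-processing step, assuming the HuggingFace bert-base-uncased tokenizer (the exact tokenizer checkpoint used in the experiments is not specified here):

```python
from transformers import AutoTokenizer

# Pre-tokenize text with BERT's WordPiece tokenizer so that BM25 and the
# probabilistic language models score over the same vocabulary as TILDE/TILDEv2.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def bert_pretokenize(text: str) -> str:
    # e.g. "tokenization" -> "token ##ization"; the output string can then be
    # indexed with a plain whitespace analyzer.
    return " ".join(tokenizer.tokenize(text))

print(bert_pretokenize("Query-by-Example retrieval with tokenization"))
```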
We will see in Section 4 that BM25 shows competitive ranking quality in comparison to TILDE and TILDEv2 on our QBE benchmark. Because of the similar quality on average, we are interested in seeing whether the relevance signals of TILDE and TILDEv2 are different from those of BM25, to find out if the methods are complementary to each other. To this aim, we investigate the following research question:
RQ3 To what extent do TILDE and TILDEv2 encode a different relevance signal from BM25?
To address this question, as described in detail in Section 3.3, we analyze the effect of interpolating the scores of TILDE and TILDEv2 with those of BM25.
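A minimal sketch of such score interpolation follows; whether and how scores are normalized before mixing is an assumption here (per-query min-max), not a detail taken from Section 3.3:

```python
import numpy as np

def minmax(scores: np.ndarray) -> np.ndarray:
    """Min-max normalize one query's candidate scores to [0, 1]."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def interpolate(bm25: np.ndarray, tilde: np.ndarray, alpha: float) -> np.ndarray:
    # alpha = 1.0 recovers pure BM25; alpha = 0.0 recovers pure TILDE/TILDEv2.
    return alpha * minmax(bm25) + (1.0 - alpha) * minmax(tilde)
```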
Since TILDE and TILDEv2 are introduced as re-ranking models, we use four different tasks from the SciDocs evaluation benchmark [5] as a domain-specific QBE benchmark. This benchmark uses scientific paper abstracts as queries and documents. The retrieval setting in these tasks suits a re-ranking setup because of the limited number of documents to be ranked for each query. Since we are working in a domain-specific evaluation setting, we also address the following research question:
RQ4 To what extent does a highly tailored domain-specific pre-trained BERT model affect the effectiveness of TILDE and TILDEv2 in comparison to a BERT-base model?
In summary, our main contributions in this work are three-fold:
• We show that two recent transformer-based lexical models (TILDE and TILDEv2) are less effective in Query-by-Example retrieval than was expected based on results reported for ad hoc retrieval. This indicates that QBE retrieval is structurally different from other IR settings and requires special attention in method development;
• We show that the relevance signals of TILDE and TILDEv2 can be complementary to those of BM25, as interpolation of the methods leads to an improvement in ranking effectiveness;
• We also investigate interpolations of BM25 with TILDE and TILDEv2 in an ideal setting where the optimal interpolation weight is known a priori, and by doing so, we show that more stratified approaches to interpolation could result in higher gains from the interpolation of BM25 with TILDE and TILDEv2.
In Section 2 we describe the retrieval models used in this work. In Section 3 we provide details about our methods and experiments, and in Section 4 we analyze the results and discuss the answers to our research questions. Section 5 is dedicated to further analysis of the results, and finally, in Section 6 we provide the conclusion. The code used in this paper is available at: https://github.com/aminvenv/lexica
2 BACKGROUND: RETRIEVAL MODELS
In this section, we briefly introduce the retrieval models that we implement and evaluate in our experiments.
2.1 Traditional lexical matching models
BM25. For BM25 [27], we use the implementation by Elasticsearch² with the parameters $k = 2.75$ and $b = 1$, which were tuned on the validation set.
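A sketch of how such a configuration can be expressed with the Elasticsearch Python client (8.x style); the index and field names are illustrative, and mapping the paper's $k$ to Elasticsearch's k1 parameter is an assumption:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Register a custom BM25 similarity with the tuned parameters and attach it
# to the text field that holds the (pre-tokenized) documents.
es.indices.create(
    index="qbe-docs",
    settings={
        "similarity": {"tuned_bm25": {"type": "BM25", "k1": 2.75, "b": 1.0}}
    },
    mappings={
        "properties": {"text": {"type": "text", "similarity": "tuned_bm25"}}
    },
)
```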
Probabilistic Language Models. For language modeling (LM) based retrieval [4, 13, 26], we use the built-in similarity functions of Elasticsearch for the implementation of a language model with Jelinek-Mercer (JM) smoothing [32].
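For reference, JM smoothing scores a query term by interpolating the document language model with the collection language model; this is the standard formulation (the smoothing parameter used in the experiments is not stated here):

$$P_\lambda(q_i \mid d) = (1 - \lambda)\, P_{\mathrm{ml}}(q_i \mid d) + \lambda\, P(q_i \mid C),$$

where $P_{\mathrm{ml}}(q_i \mid d)$ is the maximum-likelihood estimate of term $q_i$ in document $d$ and $P(q_i \mid C)$ is the collection model. In Elasticsearch this corresponds to the built-in LMJelinekMercer similarity type.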
2.2 Term Independent Likelihood Model: TILDE
TILDE is a tokenizer-based, term-based retrieval model which follows a term independence assumption and formulates the likelihood of a query as follows:
$$\text{TILDE-QL}(q \mid d) = \sum_{i=1}^{|q|} \log\big(P_\theta(q_i \mid d)\big) \qquad (1)$$
in which $q$ is the query and $d$ is the document. As Figure 1 shows, to compute the relevance score, the text of a document $d$ is fed as the input to BERT, and the log probability of each token is estimated by a language modeling head on top of the BERT [CLS] token output. In other words, we are pre-computing the
²https://github.com/elastic/elasticsearch
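A minimal sketch of query-time TILDE-QL scoring under Eq. (1), assuming the per-document token log-probabilities have already been pre-computed and stored; the fallback value for tokens absent from the stored distribution is an assumption:

```python
import math

def tilde_ql(query_token_ids: list[int], doc_log_probs: dict[int, float]) -> float:
    # Sum log P_theta(q_i | d) over the query's token ids (Eq. 1); no BERT
    # inference is needed at query time because the distribution is pre-computed.
    fallback = math.log(1e-9)  # assumed floor for tokens not covered by the map
    return sum(doc_log_probs.get(t, fallback) for t in query_token_ids)
```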