
SemEval 2023 Task 9: Multilingual Tweet Intimacy Analysis
Jiaxin Pei♣  Vítor Silva†  Maarten Bos†  Yozen Liu†
Leonardo Neves†  David Jurgens♣  Francesco Barbieri†
♣School of Information, University of Michigan, Ann Arbor, MI, USA
†Snap Inc., Santa Monica, CA, USA
♣{pedropei, jurgens}@umich.edu
†{fbarbieri, vsilvasousa, maarten, yliu2, lneves}@snap.com
Abstract
We propose MINT, a new Multilingual intimacy analysis dataset covering 13,372 tweets in 10 languages: English, French, Spanish, Italian, Portuguese, Korean, Dutch, Chinese, Hindi, and Arabic. We benchmark a set of popular multilingual pre-trained language models on MINT. The dataset is released along with SemEval 2023 Task 9: Multilingual Tweet Intimacy Analysis.
1 Introduction
Intimacy has long been viewed as a primary dimension of human relationships and interpersonal interactions (Maslow, 1981; Sullivan, 2013; Prager, 1995). Existing studies suggest that intimacy is an essential component of language and can be modeled with computational methods (Pei and Jurgens, 2020). Textual intimacy is an important social aspect of language, and automatically analyzing it can help to reveal important social norms in various contexts (Pei and Jurgens, 2020). Recognizing intimacy can also serve as an important benchmark for testing the ability of computational models to understand social information (Hovy and Yang, 2021).
Despite the importance of intimacy in language, resources for textual intimacy analysis remain rare. Pei and Jurgens (2020) annotated the first textual intimacy dataset, containing 2,397 English questions collected mostly from social media posts and fictional dialogues. However, questions follow a specific interrogative structure, and models trained on them may not generalize well to text in other forms.
To further promote computational modeling of textual intimacy, we annotated a new multilingual textual intimacy dataset named MINT. MINT covers tweets in 6 languages as training data: English, Spanish, French, Portuguese, Italian, and Chinese, spanning major languages used in the Americas, Europe, and Asia. A total of 12,000 tweets are annotated across these 6 languages. To test model generalizability in zero-shot settings, we also annotated small test sets for Dutch, Korean, Hindi, and Arabic (500 tweets each).
We benchmarked a series of large multilingual pre-trained language models, including XLM-T (Barbieri et al., 2021), XLM-R (Conneau et al., 2019), BERT (Devlin et al., 2018), DistilBERT (Sanh et al., 2019), and MiniLM (Wang et al., 2020). We found that distilled models generally perform worse than their full-size counterparts, while the XLM-R model further trained on Twitter data (XLM-T) performs best on 7 of the 10 languages. While the pre-trained language models achieve promising performance overall, zero-shot prediction on unseen languages remains challenging, especially for Korean and Hindi.
2 Data
We choose Twitter as the source of our dataset because it is a public media platform that naturally contains multilingual text and, based on our analysis, a fair amount of intimate text. In this section, we introduce the data collection and annotation process for MINT.
2.1 Sampling
We use tweets sampled from 2018 to 2022. We use the lang_id key in the tweet object to select English and Chinese tweets. For other languages, we use fastText (Joulin et al., 2016b,a) for language identification¹ and assign language labels when the model confidence is larger than 0.8. During pre-processing, all mentions of unverified users are replaced with the special token "@user" to remove noise from random and very infrequent usernames.
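A minimal sketch of this filtering and pre-processing step is shown below, assuming the fasttext Python package and its pretrained lid.176.bin model; the 0.8 confidence threshold follows the paper, while the helper names and the verified-handle lookup are our own illustrative choices:

```python
import re
import fasttext

# Pretrained language-identification model from
# https://fasttext.cc/docs/en/language-identification.html
lid_model = fasttext.load_model("lid.176.bin")

def detect_language(text, threshold=0.8):
    """Return an ISO language code, or None if confidence < threshold."""
    # fastText's predict() expects single-line input
    labels, probs = lid_model.predict(text.replace("\n", " "))
    lang = labels[0].replace("__label__", "")
    return lang if probs[0] >= threshold else None

def anonymize_mentions(text, verified_handles=frozenset()):
    """Replace mentions of unverified users with the special token @user."""
    return re.sub(
        r"@(\w+)",
        lambda m: m.group(0) if m.group(1) in verified_handles else "@user",
        text,
    )
```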
We fine-tune XLM-T, a multilingual RoBERTa model adapted to the Twitter domain (Barbieri et al., 2021).

¹https://fasttext.cc/docs/en/language-identification.html
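As a rough sketch of how such fine-tuning could be set up with the Hugging Face transformers library: the checkpoint cardiffnlp/twitter-xlm-roberta-base is the published XLM-T model, while the single-output regression head, MSE objective, and all hyperparameters below are our assumptions, not the paper's exact configuration:

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TrainingArguments,
)

# XLM-T checkpoint released by Barbieri et al. (2021)
checkpoint = "cardiffnlp/twitter-xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=1,  # single scalar head; transformers then applies MSE loss
)

args = TrainingArguments(
    output_dir="xlmt-intimacy",
    learning_rate=2e-5,          # assumed values, not from the paper
    num_train_epochs=3,
    per_device_train_batch_size=32,
)

# `train_dataset` is assumed to be a tokenized dataset whose "labels"
# column holds float intimacy scores, e.g.:
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# trainer.train()
```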