partisans [25] or those who are older than 65 years [3] are prone to believing fake news. Roozenbeek et al. [31] found that higher numeracy skills and greater trust in science are indicators of lower susceptibility to misinformation.
On the other hand, some research has investigated why people share unverified information online. Motivated by [8, 28], Karami et al. [17] listed five motivational factors in spreading fake news: uncertainty, anxiety, lack of control, relationship enhancement, and social rank. Laato et al. [18] found that trust in online information and information overload are strong indicators of sharing unverified information. Chen et al. [5] suggested that people share misinformation because of social factors, i.e., self-expression and socialization. In addition, Avram et al. [1] found that social engagement metrics (i.e., the number of likes or shares) increased the tendency to share misinformative content.
Based on the above-mentioned psychological findings, some research efforts have put these findings into practice by building computing systems that objectively identify potential misinformation spreaders based on their online and offline behaviours [13, 17, 36]. Unlike previous approaches that rely on self-report measures, these approaches quantify how people interact with information and produce insights about their specific behaviours regarding misinformation. Through behavioural and psycho-physiological measures, these approaches can explicitly identify when and which users are prone to believing or spreading misinformation.
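To make this concrete, the sketch below shows how behavioural signals could feed such a system. It is a minimal illustration, not a reconstruction of any of the cited systems: the feature set (dwell time, share rate, fraction of flagged items shared, source diversity), the ground-truth labels, and the logistic-regression setup are assumptions made for the example.

# Minimal sketch (assumptions, not a cited system): scoring users'
# susceptibility to spreading misinformation from behavioural features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-user features: [avg. dwell time per article (s),
# share rate, fraction of flagged items shared, source diversity (0-1)]
X = np.array([
    [45.0, 0.10, 0.02, 0.8],
    [12.0, 0.55, 0.30, 0.2],
    [30.0, 0.20, 0.05, 0.6],
    [ 8.0, 0.70, 0.40, 0.1],
])
# Assumed ground truth: 1 = user later shared debunked content
y = np.array([0, 1, 0, 1])

clf = LogisticRegression().fit(X, y)

# Estimate a new user's susceptibility from their behavioural profile
new_user = np.array([[15.0, 0.45, 0.25, 0.3]])
print(f"estimated susceptibility: {clf.predict_proba(new_user)[0, 1]:.2f}")

In practice, such features would be derived from logged platform interactions (and, for psycho-physiological measures, from sensor data), but the overall pipeline of quantifying behaviour and mapping it to a susceptibility estimate is the same.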
Different individuals have different backgrounds and different levels of cognitive capacity. Therefore, some information consumers may be more prone to misinformation than others. Echoing the finding of Geeng et al. [12] that different design interventions may be effective for different users, objective approaches to identifying vulnerable users would also provide a guide as to which misinformation intervention should be deployed to which group of users. Ultimately, interventions would become more effective as they address the right group of users.
In this position paper, we discuss empirical approaches to identifying people who may be susceptible to misinformation: detecting cognitive biases and profiling vulnerable users. We also highlight their implications for identifying further cohorts of vulnerable users and for prompting interventions that address the right group of users. Lastly, we elaborate on challenges and future avenues for investigating vulnerabilities to misinformation. We intend to generate a fruitful discussion in this workshop about the need to explore potential victims of misinformation, identify their vulnerabilities, and employ the right interventions for the right user cohorts.
2 APPROACHES TO IDENTIFYING VULNERABILITIES TO MISINFORMATION
Various studies have investigated why people believe and share misinformation on the Internet. However, most of these studies addressed such questions through self-report measures, e.g., questionnaires, which are prone to limitations such as subjectivity. On the other hand, some research has proposed objective approaches to identifying users who may be susceptible to misinformation. Combining prior psychological findings with behavioural measures, these research efforts explicitly identify which users tend to believe and share misinformation, and when. In this paper, we discuss two promising approaches: cognitive bias detection and profiling social media users.
2.1 Cognitive Bias Detection
Humans possess limited cognitive capacity and employ cognitive biases as mental shortcuts, which can lead to irrational judgements. One prominent example is “confirmation bias” (also known as selective exposure): a tendency to seek out only information that supports one’s perspectives or expectations while ignoring dissenting information [24]. Because social media platforms serve users content that reinforces their viewpoints, confirmation bias is easily observed there, as many users tend to consume and spread content items that match their beliefs without checking their veracity [39].
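As a simple illustration of how confirmation bias could be quantified from behavioural logs, the sketch below computes a selective-exposure score: the fraction of consumed items whose stance matches the user’s own. The stance encoding and the example threshold are assumptions for this illustration, not a method taken from the cited work.

# Minimal sketch (assumed log encoding): quantifying selective exposure
# as the share of consumed items that agree with the user's own stance.
from typing import List

def selective_exposure_score(user_stance: int, item_stances: List[int]) -> float:
    """Stances coded as -1, 0, +1; returns the fraction of belief-consistent items."""
    if not item_stances:
        return 0.0
    consistent = sum(1 for s in item_stances if s == user_stance)
    return consistent / len(item_stances)

# Example: a user with stance +1 who consumed 8 agreeing and 2 dissenting items
score = selective_exposure_score(+1, [+1] * 8 + [-1] * 2)
print(f"selective exposure: {score:.2f}")  # 0.80, a strongly one-sided diet

A score near 1.0 would indicate a strongly one-sided information diet, which a bias-detection system could flag as a potential marker of confirmation bias.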