can be seen in Section 4.
Subsequently, these biases impact how social ties are formed and, ultimately, the shape of the social network. For example, in online social networks, homophily often manifests through triadic closures12, where friends tend to form new connections that close triangles, or triads. Understanding individuals' and groups' biases helps explain the network's structure and dynamics, as well as how information and misinformation spread through the network depending on its level of diversity. For example, depending on a node's biases and the diversity of the connections it forms, the resulting system may be more or less susceptible to widespread dissemination, much as in a Mixed Membership Stochastic Block Model (MMSBM)13. A Mixed Membership Stochastic Block Model is a Bayesian community detection method that segments communities into blocks while allowing community members to mix with other communities; its assumptions include a set of probabilities that determine how likely communities are to interact. We explore these topics in more detail in Section 2.1.
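To make the MMSBM's generative assumptions concrete, the following minimal Python sketch samples a directed network from the model: each node holds a Dirichlet-distributed mixed-membership vector, and each potential edge appears with a probability read off a block matrix. The node count, Dirichlet parameter, and block-matrix values here are illustrative assumptions, not parameters used in this paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy sizes (assumptions, not values from this paper).
n, K = 100, 3                      # nodes and latent communities

# Block matrix B[g, h]: probability that a node acting in community g
# links to a node acting in community h -- the probabilities that
# determine how likely communities are to interact.
B = np.full((K, K), 0.01)
np.fill_diagonal(B, 0.2)           # denser ties within a community

# Mixed memberships: each node gets a Dirichlet-distributed distribution
# over communities instead of a single hard community label.
theta = rng.dirichlet(np.full(K, 0.3), size=n)   # shape (n, K)

# Generative step: for every ordered pair (i, j), each endpoint draws the
# community it "acts in" for this interaction, then the edge appears with
# the corresponding block probability.
A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        g = rng.choice(K, p=theta[i])
        h = rng.choice(K, p=theta[j])
        A[i, j] = rng.random() < B[g, h]

Raising the off-diagonal entries of B (more mixing across communities) makes it easier for content to cross community boundaries, which is the sense in which the diversity of a node's connections modulates how susceptible the system is to widespread dissemination.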
Previous work has demonstrated that homophily bias towards content aligned with one's political affiliation can impact one's ability to detect misinformation14,15. Traberg et al. show that political affiliation can impact a person's ability to detect misinformation about political content14. They found that viewers misclassified misinformation as true more often when the source of the information aligned with their political affiliation; political homophily bias, in this case, made the source feel more credible than it was.
In this paper, we investigate how homophily biases in age, gender, and race affect the accuracy of deepfake detection. We also explore how other bias types, such as heterophily bias, priming, and prior knowledge bias, impact deepfake detection.
Misinformation is information that imitates real information but does not reflect the genuine truth16. Misinformation has become a widespread societal issue that has drawn considerable recent attention. It circulates physically and virtually on social media sites17 and interacts with socio-semantic assortativity, whereby assortative social clusters also tend to be semantically homogeneous18. For instance, misinformation promoting a political ideology might spread more easily in social clusters based on shared demographics, further exacerbating political polarization and potentially influencing electoral outcomes19. This has sparked concerns about the weaponization of manipulated videos for malicious ends, especially in the political realm19. Those with higher political interest and those with lower cognitive ability are both more likely to share deepfakes inadvertently, and the relationship between political interest and deepfake sharing is moderated by network size20.
Motivations for disseminating misinformation, which we refer to as disinformation when it is specifically intended to deceive, vary broadly. They include 1) purposefully trying to deceive people by seeding distrust in information, 2) believing the information to be accurate and spreading it mistakenly, and 3) spreading misinformation for monetary gain. In this paper, we focus primarily on deepfakes as misinformation, meaning the potential for a viewer to be duped by a deepfake video and share it. We do not assume that all deepfakes are disinformation, since we do not consider the intent of the creator: a deepfake could be made to entertain or to showcase technology. Our focus is therefore on deepfakes as misinformation, regardless of intent.
There are many contexts in which online misinformation is of concern. Examples include misinformation around political elections and announcements (political harms)21, since deepfake videos can, in theory, make political figures appear to say just about anything, raising a series of political and civic concerns21; misinformation on vaccinations during global pandemics (health-related harms)22,23; false speculation to disrupt economies or speculative markets24; distrust in news media and journalism (harms to news media)4,25, where people are more likely to feel uncertain than to be misled by deepfakes, but the resulting uncertainty, in turn, reduces trust in news on social media26; false information in critical informational periods such as humanitarian or environmental crises27; and propagation of hate speech online3, which spreads harmful false content and stereotypes about groups (harms related to hate speech).
Correction of misinformation: There are currently many ways to try to detect and mitigate the harms of misinformation online28. At one end of the spectrum are automated detection techniques that focus on classifying content or on detecting anomalies in the network structure surrounding the information or in its propagation patterns29,30. At the other end, crowd-sourced correction of misinformation leverages other users to reach a consensus on, or simply estimate, the veracity of the content31–33. We will look at the latter form of correction in an online social network to investigate the role group correction plays in slowing the dissemination of diverse misinformation at scale.
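As a minimal illustration of what crowd-sourced group correction can look like, the sketch below applies a simple majority-vote rule in which a viewer treats content as misinformation once enough of their neighbors flag it as false. The function name, the threshold, and the neighbor judgments are illustrative assumptions, not the specific correction mechanism studied in this paper.

from typing import List

def group_correction(neighbor_flags: List[bool], threshold: float = 0.5) -> bool:
    """Hypothetical majority-vote rule: return True (treat the content as
    misinformation) if more than `threshold` of a viewer's neighbors flag it."""
    if not neighbor_flags:
        return False
    return sum(neighbor_flags) / len(neighbor_flags) > threshold

# Example: three of five neighbors flag a deepfake video as false,
# so the viewer treats it as misinformation and does not reshare it.
print(group_correction([True, True, False, True, False]))  # True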
Connection with deepfakes: The potential harms of misinformation can be amplified by computer-generated videos used to lend false authority to the information. Imagine, for instance, harmful messages about an epidemic conveyed through the computer-generated persona of a public health official. Unfortunately, deepfake detection remains a challenging problem, and state-of-the-art techniques currently involve human judgment5.
Deepfakes are artificial images or videos in which the persona in the video is generated synthetically. Deepfakes can be seen as false depictions of a person(a) that mimic the person(a) but do not reflect the truth. Deepfakes should not be confused