
Twitter Users’ Behavioral Response to Toxic Replies
Ana Aleksandric
University of Texas at Arlington
Arlington, TX, United States
Sayak Saha Roy
University of Texas at Arlington
Arlington, TX, United States
Shirin Nilizadeh
University of Texas at Arlington
Arlington, TX, United States
ABSTRACT
Online toxic attacks, such as harassment, trolling, and hate speech,
have been linked to an increase in offline violence and negative
psychological effects on victims. In this paper, we studied the impact
of toxicity on users’ online behavior. We collected a sample of 79.8k
Twitter conversations. Then, through a longitudinal study, for nine
weeks, we tracked and compared the behavioral reactions of authors
who were toxicity victims with those who were not. We found that
toxicity victims show a combination of the following behavioral
reactions: avoidance, revenge, countermeasures, and negotiation. We
performed statistical tests to understand the significance of the
contribution of toxic replies toward user behaviors while considering
confounding factors, such as the structure of conversations and the
user accounts’ visibility, identifiability, and activity level.
Interestingly, we found that compared to other random authors,
victims are more likely to engage in conversations, reply in a toxic
way, and unfollow toxicity instigators. Even if the toxicity is
directed at other participants, the root authors are more likely to
engage in the conversations and reply in a toxic way. However,
victims who have verified accounts are less likely to participate in
conversations or respond by posting toxic comments. In addition,
replies are more likely to be removed in conversations with a larger
percentage of toxic nested replies and toxic replies directed at
other users. Our results can assist further studies in developing
more effective detection and intervention methods for reducing the
negative consequences of toxicity on social media.
CCS CONCEPTS
• Social and professional topics → User characteristics.
KEYWORDS
social media, toxicity attacks, user online behavior, longitudinal
study
ACM Reference Format:
Ana Aleksandric, Sayak Saha Roy, and Shirin Nilizadeh. 2023. Twitter Users’
Behavioral Response to Toxic Replies. In Proceedings of The Web Conference
(WWW) ’23. ACM, New York, NY, USA, 11 pages. https://doi.org/XXXXXXX.XXXXXXX
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than ACM
must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
to post on servers or to redistribute to lists, requires prior specific permission and/or a
fee. Request permissions from permissions@acm.org.
The Web Conference (WWW) ’23, April 30–May 4, 2023, Austin, Texas
© 2023 Association for Computing Machinery.
ACM ISBN 978-x-xxxx-xxxx-x/YY/MM. . . $15.00
https://doi.org/XXXXXXX.XXXXXXX
1 INTRODUCTION
These days, social media is rampant with toxic attacks such as
offensive language, trolling, and hate speech. The primary goal of
these attacks is to silence, insult, or demoralize people, especially
those belonging to already marginalized groups [1, 21, 40, 56]. These
attacks are usually targeted, e.g., as part of a smear campaign to
damage or call into question someone’s reputation [26, 32, 41, 48, 69].
They can also be coordinated using other communication mediums
and implemented by many users [8, 12, 56, 70].
Psychological research has studied the negative effects of online
harassment, cyberbullying, and trolling on individuals’ psychological
states and well-being [25, 29, 35], showing that they often cause
overwhelming and stressful situations for the victims [1, 3, 36, 43, 54].
These studies found that victims are more prone to show self-harming
behaviors as well as suffer from depression and anxiety [2, 18, 28].
To counter online toxic attacks, many social media platforms have
started implementing content moderation mechanisms [19, 60] that
block accounts [6, 22] and remove content [9]. However, it is
debatable whether they provide sufficient mitigation of the potential
psychological damage caused to the victim as a result of the online
toxicity [24, 34].
To the best of our knowledge, no work has conducted a longitudinal
data-driven study to examine the impact of toxic content on victims’
online behavior. In this paper, we study how victims respond to
toxic content in terms of their behavioral actions on Twitter. Our
goal is to identify key factors that accurately represent the scale
at which toxic replies impact victims. A recent study on online
cyberbullying [16] targeting undergraduate and high school students
found that victims primarily engaged in four types of behavioral
reactions towards such attacks: avoidance, revenge, countermeasures,
and negotiation. We use this as a framework for creating meaningful
groupings of behavioral responses to toxic content. For example, we
examine whether victims try to avoid further encounters with toxic
content, e.g., by removing their posts or even deleting their accounts
(which are also signs of being silenced); whether they tend to
negotiate by posting comments in conversations; whether they take
revenge by responding in a toxic way; or whether they employ
countermeasures by ignoring such content but unfollowing the
toxicity instigators. In our analysis, we consider factors that might
have an impact on users’ social media behavior, including the number
of toxic replies in conversations, the structure of conversations,
the location of the toxic content in the conversation tree, and the
social relationship between conversations’ participants (i.e., whether
they follow each other). We also explore the effects of
account-specific attributes on users’ decisions, including their
online visibility, identifiability, and activity level.
For our analysis, we collected a random sample of 79.8k Twitter
conversations from August 14th to September 28th, 2021. We used
this data to identify users involved in these conversations, identify
conversations with toxic replies, and finally collect longitudinal
arXiv:2210.13420v1 [cs.SI] 24 Oct 2022