Twier Users’ Behavioral Response to Toxic Replies
Ana Aleksandric
University of Texas at Arlington
Arlington, TX, United States
Sayak Saha Roy
University of Texas at Arlington
Arlington, TX, United States
Shirin Nilizadeh
University of Texas at Arlington
Arlington, TX, United States
ABSTRACT
Online toxic attacks, such as harassment, trolling, and hate speech, have been linked to an increase in offline violence and negative psychological effects on victims. In this paper, we studied the impact of toxicity on users' online behavior. We collected a sample of 79.8k Twitter conversations. Then, through a longitudinal study, for nine weeks, we tracked and compared the behavioral reactions of authors who were toxicity victims with those who were not. We found that toxicity victims show a combination of the following behavioral reactions: avoidance, revenge, countermeasures, and negotiation. We performed statistical tests to understand the significance of the contribution of toxic replies toward user behaviors while considering confounding factors, such as the structure of conversations and the user accounts' visibility, identifiability, and activity level. Interestingly, we found that compared to other random authors, victims are more likely to engage in conversations, reply in a toxic way, and unfollow toxicity instigators. Even if the toxicity is directed at other participants, the root authors are more likely to engage in the conversations and reply in a toxic way. However, victims who have verified accounts are less likely to participate in conversations or respond by posting toxic comments. In addition, replies are more likely to be removed in conversations with a larger percentage of toxic nested replies and toxic replies directed at other users. Our results can assist further studies in developing more effective detection and intervention methods for reducing the negative consequences of toxicity on social media.
CCS CONCEPTS
Social and professional topics → User characteristics.
KEYWORDS
social media, toxicity attacks, user online behavior, longitudinal
study
ACM Reference Format:
Ana Aleksandric, Sayak Saha Roy, and Shirin Nilizadeh. 2023. Twitter Users' Behavioral Response to Toxic Replies. In Proceedings of The Web Conference (WWW) '23. ACM, New York, NY, USA, 11 pages. https://doi.org/XXXXXXX.XXXXXXX
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
The Web Conference (WWW) '23, April 30–May 4, 2023, Austin, Texas
© 2023 Association for Computing Machinery.
ACM ISBN 978-x-xxxx-xxxx-x/YY/MM . . . $15.00
https://doi.org/XXXXXXX.XXXXXXX
1 INTRODUCTION
These days, social media is rampant with toxic attacks such as offensive language, trolling, and hate speech. The primary goal of these attacks is to silence, insult, or demoralize people, especially those belonging to already marginalized groups [1, 21, 40, 56]. These attacks are usually targeted, e.g., as part of a smear campaign to damage or call into question someone's reputation [26, 32, 41, 48, 69]. They can also be coordinated using other communication mediums and implemented by many users [8, 12, 56, 70].
Psychological research has studied the negative effects of online harassment, cyberbullying, and trolling on individuals' psychological states and well-being [25, 29, 35], showing that they often cause overwhelming and stressful situations for the victims [1, 3, 36, 43, 54]. These studies found that victims are more prone to show self-harming behaviors as well as suffer from depression and anxiety [2, 18, 28]. To counter online toxic attacks, many social media platforms have started implementing content moderation mechanisms [19, 60] that block accounts [6, 22] and remove content [9]. However, it is debatable whether they provide sufficient mitigation to the potential psychological damage caused to the victim as a result of the online toxicity [24, 34].
To the best of our knowledge, no work has conducted a longitudinal data-driven study to examine the impact of toxic content on victims' online behavior. In this paper, we study how victims respond to toxic content in terms of their behavioral actions on Twitter. Our goal is to identify key factors that accurately represent the scale at which toxic replies impact victims. A recent study on online cyberbullying [16] targeting undergraduate and high school students found that victims primarily engaged in four types of behavioral reactions towards such attacks: avoidance, revenge, countermeasures, and negotiation. We use this as a framework for creating meaningful groupings of behavioral responses to toxic content. For example, we examine if victims try to avoid further encounters with toxic content, e.g., by removing their posts or even deleting their accounts (which are also signs of being silenced), or if they tend to negotiate by posting comments in conversations, take revenge by responding in a toxic way, or employ countermeasures by ignoring such content but unfollowing the toxicity instigators. In our analysis, we consider factors that might have an impact on users' social media behavior, including the number of toxic replies in conversations, the structure of conversations, the location of the toxic content in the conversation tree, and the social relationship between conversations' participants (i.e., if they follow each other). We also explore the effects of account-specific attributes on users' decisions, including their online visibility, identifiability, and activity level.
For our analysis, we collected a random sample of 79.8k Twitter conversations from August 14 to September 28, 2021. We used this data to identify users involved in these conversations, identify conversations with toxic replies, and finally collect longitudinal data on users' online behavior.
arXiv:2210.13420v1 [cs.SI] 24 Oct 2022
We represented the structure of a conversation using a reply tree, which encodes the relationships between tweets, where two tweets are connected if one is a reply to the other. We used Google Perspective [47], a natural language-based AI tool used to identify toxicity in text, to detect conversations that received toxic replies. Finally, we analyzed and characterized the behavioral reactions of root authors receiving toxic replies, whom we call toxicity victims, and compared their behavior with those of root authors receiving no toxic replies, whom we call random authors and who form our control group. We formulated the following hypotheses and performed appropriate statistical tests to understand the significance of the contribution of toxic replies toward user behaviors:
H1: Toxicity victims are more likely to deactivate their accounts compared to random authors.
H2: Toxicity victims are more likely to switch their accounts to private mode.
H3: Toxicity victims are more likely to engage in conversations.
H4: Toxicity victims are more likely to engage in conversations if receiving toxic replies from a larger number of toxicity instigators.
H5: Toxicity victims are more likely to respond back in a toxic way.
H6: Toxicity victims are more likely to respond back in a toxic way if receiving toxic replies from a larger number of toxicity instigators.
H7: Toxicity victims are more likely to delete their original posts compared to random authors.
H8: Replies in conversations with toxic replies are more likely to be deleted compared to conversations without toxic replies.
H9: Toxicity victims are more likely to unfollow toxicity instigators compared to random authors.
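Hypotheses such as H3 compare a proportion (e.g., the share of authors who engage in their conversation) between victims and the control group. A two-proportion z-test is one standard way to check such a difference; the paper does not specify its exact tests in this excerpt, so the sketch below is illustrative, and the counts are hypothetical, chosen only to mirror the reported engagement rates (60.3% vs. 47.4%):

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions.

    x1/n1: successes/trials in group 1 (e.g., victims who engaged)
    x2/n2: successes/trials in group 2 (e.g., random authors who engaged)
    Returns (z statistic, two-sided p-value).
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under H0: p1 == p2
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts mirroring the reported rates (60.3% vs. 47.4%)
z, p = two_proportion_ztest(603, 1000, 474, 1000)
print(z > 2, p < 0.05)
```

With samples of this size, a 13-point gap in engagement rates is far outside what chance alone would produce, which is the sense in which the paper's group differences can be called significant.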
Our study yields multiple important findings. We observed that different users respond differently to toxic content and identified groups of users who show similar reaction patterns. For example, in terms of disregard and avoidance, we demonstrate that 30.96% of victims ignored toxic content and did not show any reactions. In terms of negotiation, we found that toxicity victims, compared to others, are more likely to engage in the conversations (60.3% vs. 47.4%), and 12.4% of victims employed countermeasures by unfollowing toxicity instigators. Analyzing the contributing factors to users' behavior, our results suggest that users who receive toxic direct replies are less likely to engage in the conversation, while users who receive nested toxic replies engage more in conversations with toxic replies and tend to post toxic responses. These findings show that the location of toxic content in the conversation can affect victims' reactions. Interestingly, verified accounts are less likely to engage in conversations with toxic replies, indicating that the social status of users can have an impact on how they perceive toxic content and react to it. Finally, identifiable accounts are more likely to engage in conversations by responding in a toxic way.
2 RELATED WORK
Generalized online abuse encompasses several unhealthy online
behaviors, including attacks of racism against minority communi-
ties [
21
,
40
,
56
], misogynistic hatred [
38
,
46
] and toxic masculin-
ity [
53
], which are aimed at groups of vulnerable individuals. In
recent years, online abuse have received a lot of attention. Here,
we focus on studies that characterize online abuse and investigate
its psychological impact of them on humans. To the best of our
knowledge, there is no work that has studied the consequences of
receiving toxic replies on users’ online behavior.
Prevalence of Social Media Victimization.
Since abuse and toxicity are often carried out with the intention to humiliate or manipulate targeted individuals [10, 17, 46, 61], social media is often considered to be the chief outlet for housing such attacks due to high visibility [13, 65, 66] and more opportunities to remain anonymous [11, 51, 72]. Online abuse and toxicity can be influenced by several socioeconomic factors. Prior work has examined the association between abuse and on-the-ground "trigger" events, e.g., terrorist attacks and political events [27, 31, 44, 64]. The prevalence and characteristics of hate speech have also been studied on specific web communities, such as Gab [67, 68], 4chan's Politically Incorrect board (/pol/) [30], Twitter [7, 14, 71], and Whisper [52]. Some works have shown that online abuse and toxic comments are normalized in several communities [4, 38, 46]. There have also been a few efforts to understand the characteristic differences between hate targets and hate instigators [14, 15, 37, 50].
The Psychological Impacts of Online Abuse.
Established literature on the psychological impacts of online abuse is mostly focused on cyberbullying. A study determined that middle and high school students who had been cyberbullied were more prone to exhibit self-harming behavior as well as suffer from depression, anxiety, and lower self-esteem [18]. Similar studies [2, 5, 33, 39] found the same patterns of psychological distress in young adolescents [28] and adults [63] due to cyberbullying attacks. Victims of online harassment might also respond with acceptance and self-blame [62], with those having lower psychological endurance being more vulnerable to emotional outbursts. Some studies explored the effects of trolling on victims [1] and identified the circumstances when victims are more likely to respond to trolls [55].
3 DATA COLLECTION
Figure 1 shows the pipeline used for collecting and processing our datasets. Broadly, we aimed to (a) obtain a random set of conversations, and (b) track the Twitter activities of root authors.
Daily Collection of a Random Sample of Twitter Conversations:
We used the Twitter API [59] to collect a 1% random sample of tweets for 46 consecutive days, from August 14 to September 28, 2021, and extracted English tweets that are not retweets or replies belonging to other conversations. For each initial tweet, we waited at least two days before collecting the replies of the conversation. For example, if we collected a random sample of tweets on September 1st, we would start collecting replies for each of these tweets on September 3rd. This gives the initial tweet enough time to turn into a conversation. However, we discarded many tweets that did not receive any comment or had been deleted by the time we attempted to collect their replies. We also removed the conversations where the replies to the tweets were all posted by the author of the initial tweet.
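The filtering step (keeping English tweets that are neither retweets nor replies) can be sketched as follows. The field names follow Twitter API v1.1 conventions and are our assumption; the paper does not list the exact fields it checked:

```python
def is_candidate_root(tweet):
    """Keep only English tweets that could start a conversation:
    not a retweet and not a reply to another tweet (v1.1-style fields)."""
    if tweet.get("lang") != "en":
        return False
    if "retweeted_status" in tweet:  # retweets embed the original tweet here
        return False
    if tweet.get("in_reply_to_status_id") is not None:  # it is a reply
        return False
    return True

sample = [
    {"id": 1, "lang": "en"},                                 # kept
    {"id": 2, "lang": "es"},                                 # non-English
    {"id": 3, "lang": "en", "retweeted_status": {"id": 1}},  # retweet
    {"id": 4, "lang": "en", "in_reply_to_status_id": 1},     # reply
]
roots = [t["id"] for t in sample if is_candidate_root(t)]
print(roots)  # → [1]
```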
Conversations, Reply Trees, and Some Definitions:
We used Twarc [58], a Python wrapper for the official Twitter API, to obtain the entire conversation for each initial tweet, including its direct replies and nested replies (replies to replies). As shown in Figure 2, we represent each conversation as a reply tree, where one tweet is the child of another tweet when it is a reply to it. The initial tweets represent the roots of reply trees. We call the author of the root tweet the root author. We also define direct replies as the first-level replies in a conversation, i.e., the set of replies to the root tweet. Nested replies refer to levels of replies in a conversation other than the first, i.e., replies to replies.

Figure 1: Data collection process. Day 1: obtain a random sample of tweets; remove retweets, replies, and non-English posts; collect authors' friend and follower lists. From day 2, check daily whether the tweets still exist and whether authors' accounts are deactivated or private. Day 3: collect replies and nested replies to the tweets obtained on the first day; discard conversations without replies or without other users. From day 4, check daily whether the replies still exist. Day 49 (end of data collection): collect authors' friend and follower lists a second time, and use the Perspective API to detect toxicity of conversations.

As shown in Figure 2, each conversation as a reply tree has the following properties: Size, which indicates the number of tweets in the conversation; Depth, which is the depth of the conversation's deepest node; and Width, which is the maximum number of nodes at any depth in the tree.
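These three properties can be computed directly from the reply links with a level-by-level traversal. A minimal sketch (representing the tree as a parent → children map is our implementation choice, not the paper's):

```python
from collections import deque

def tree_properties(children, root):
    """Compute Size, Depth, and Width of a reply tree.

    children: dict mapping a tweet id to the list of ids replying to it
    root: id of the root tweet
    Returns (size, depth, width).
    """
    size, max_depth, max_width = 0, 0, 0
    level = deque([root])
    depth = 0
    while level:
        max_width = max(max_width, len(level))  # widest level seen so far
        max_depth = depth                        # last non-empty level
        size += len(level)
        next_level = deque()
        for node in level:
            next_level.extend(children.get(node, []))
        level = next_level
        depth += 1
    return size, max_depth, max_width

# Root tweet 0 with two direct replies; tweet 1 receives two nested replies.
replies = {0: [1, 2], 1: [3, 4]}
print(tree_properties(replies, 0))  # → (5, 2, 2)
```

A conversation with only a root tweet thus has size 1, depth 0, and width 1, matching the definitions above.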
Pre-processing and Filtering:
Getting the replies for tweets of each day can take a long time, up to a couple of hours. Therefore, the first tweets collected on a certain day would have much less time to receive comments than the last tweets obtained that day. To solve this problem, we only kept replies sent in the first 48 hours after the root tweet was posted. Also, some tweets contained only links, images, or videos instead of text. Since our approach for detecting toxic tweets is text-based, such conversations were removed from the dataset. Moreover, certain reply trees were missing some replies due to errors received during their data collection. Since these errors can have multiple causes, including replies being deleted or hidden by their authors or by root authors, we removed trees containing such errors from our dataset. Furthermore, we noticed that not all the root authors in our sample are unique; therefore, to avoid duplication in our statistical analysis, we randomly selected a single conversation from each root author.
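The 48-hour cutoff can be applied by comparing each reply's timestamp against the root tweet's. A sketch using the standard library (the timestamp format is Twitter's v1.1 `created_at` style; that format choice is our assumption):

```python
from datetime import datetime, timedelta

FMT = "%a %b %d %H:%M:%S %z %Y"  # Twitter v1.1 created_at format

def within_48h(root_created_at, reply_created_at):
    """True if the reply was posted within 48 hours of the root tweet."""
    root_t = datetime.strptime(root_created_at, FMT)
    reply_t = datetime.strptime(reply_created_at, FMT)
    return timedelta(0) <= reply_t - root_t <= timedelta(hours=48)

root = "Sat Aug 14 10:00:00 +0000 2021"
print(within_48h(root, "Sun Aug 15 09:59:00 +0000 2021"))  # → True
print(within_48h(root, "Tue Aug 17 10:01:00 +0000 2021"))  # → False
```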
Our Conversation Dataset:
Finally, our dataset consists of 79,799 conversations with 528,041 tweets, posted by 328,390 unique users, out of which 79,799 are root authors.
Tracking activity of root authors:
For each root author and root tweet, we kept track of the following activities: account deactivation, account privatization, and tweet deletion, as well as root authors' lists of followers and friends. In detail, as shown in Figure 1, from the second day of our data collection, we checked on a daily basis whether the root tweets and authors still exist using the Twitter API. Similarly, from the fourth day of our data collection, after collecting the replies to a root tweet, we checked every day whether the replies still exist. When a certain tweet or account is not accessible, the API returns helpful error codes, e.g., Sorry, you are not authorized to see this status, User not found, and No status found with that ID [49]. These error codes indicate, respectively, that the user account has become private, that it has been deactivated, and that the tweet has been deleted. Note that we cannot identify with certainty who deleted a particular tweet. For instance, the root tweet can be deleted by the root author, or, in case the deleted tweet is a comment, it can be deleted by the user who posted that specific comment or by the root author. Even though this process was repeated daily during the entire period of data collection, the data collection failed on some days. More precisely, in the analysis, we focus on the presence of the tweets and accounts three days after the corresponding root tweets were collected, but the data about their presence is missing in 7.6% of the conversations, with very similar distributions in conversations with and without toxic replies. Therefore, in the statistical models related to tweet deletion, account deactivation, and account privatization, we omit these conversations.
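The mapping from error messages to account/tweet states described above can be captured in a small helper. The messages follow the ones quoted from the API; treating any other error as "unknown" is our assumption:

```python
def classify_absence(error_message):
    """Map a Twitter API error message to what happened to the tweet/account."""
    mapping = {
        # account switched to private mode
        "Sorry, you are not authorized to see this status": "account_private",
        # account deactivated (or deleted)
        "User not found": "account_deactivated",
        # tweet removed
        "No status found with that ID": "tweet_deleted",
    }
    return mapping.get(error_message, "unknown")

print(classify_absence("User not found"))  # → account_deactivated
```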
As illustrated in Figure 1, we collected the followers and friends lists of all root authors every day, once the daily random sample was obtained. Thus, the lists collected at this time represent a snapshot of the authors' friends/followers lists before their tweets turned into a conversation. Furthermore, around 49 days after the first day when root tweets were collected, we again collected the list of friends and followers for all root authors. This is to analyze the impact of toxicity on users' online relationships. Also, note that in the analysis we report the distribution of the percent of toxicity instigators that the victims unfollowed. As toxicity instigators do not exist in other conversations, comparing the unfollowing ratio between victims and other random users was not applicable. We could not collect the relationships, i.e., follower and friends lists, right after obtaining the conversations because of the number of API calls we could issue every day. This delay in collecting relationships imposes some limitations, as users might end followerships and friendships due to other events during this time. However, in our analysis, we compare these variables between toxicity victims and random authors, and seeing a difference can be an indicator of the impact of toxicity on Twitter relationships. For our unfollowing analysis, we had to discard 2,488 conversations, of which 403 and 2,085 are conversations with and without toxic replies, respectively, because we were not able to obtain the friends lists of their root authors due to the authors either making their accounts private or deactivating them.
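The percent of toxicity instigators a victim unfollowed can be computed from the two friends-list snapshots with simple set operations. A minimal sketch (function and variable names are ours):

```python
def percent_unfollowed(friends_before, friends_after, instigators):
    """Share of toxicity instigators the author followed before the
    conversation but no longer follows at the second snapshot."""
    followed_instigators = set(friends_before) & set(instigators)
    if not followed_instigators:
        return 0.0  # author followed none of the instigators to begin with
    unfollowed = followed_instigators - set(friends_after)
    return 100.0 * len(unfollowed) / len(followed_instigators)

# Author followed instigators 10 and 11 before; only 11 remains afterwards.
print(percent_unfollowed({10, 11, 12}, {11, 12}, {10, 11, 99}))  # → 50.0
```

Note that this measure is only defined for victims, since only their conversations contain toxicity instigators, which is why the paper reports a distribution for victims rather than a victim-vs.-control comparison.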
Identifying Conversations with Toxic Replies:
We used Google's Perspective API [47] to detect toxic replies in our dataset. This AI-based tool investigates whether a provided text contains language indicating abusive or inappropriate attributes, such as severe toxicity, profanity, sexually explicit content, threats, and insults, and assigns a score for each attribute indicating the likelihood that the text exhibits it.
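A Perspective API exchange looks roughly like the sketch below. The endpoint and payload shape follow the public API documentation; the API key is a placeholder, and the paper's exact request parameters and score threshold are not shown in this excerpt:

```python
import json
import urllib.request

API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")  # placeholder key

def build_request(text):
    """JSON payload asking Perspective to score TOXICITY for one comment."""
    return json.dumps({
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "languages": ["en"],
    }).encode("utf-8")

def toxicity_score(response_json):
    """Pull the summary TOXICITY probability (0..1) out of a response."""
    return response_json["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def analyze(text):
    """Send one comment to Perspective (requires a real API key and network)."""
    req = urllib.request.Request(
        API_URL, data=build_request(text),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return toxicity_score(json.load(resp))

# The parsing step can be checked offline against a canned response:
canned = {"attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.92}}}}
print(toxicity_score(canned))  # → 0.92
```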