Learning from Viral Content
Krishna Dasaratha and Kevin He
First version: August 20, 2022
This version: August 3, 2023
Abstract
We study learning on social media with an equilibrium model of users interacting with shared
news stories. Rational users arrive sequentially, observe an original story (i.e., a private signal)
and a sample of predecessors’ stories in a news feed, and then decide which stories to share.
The observed sample of stories depends on what predecessors share as well as the sampling
algorithm generating news feeds. We focus on how often this algorithm selects more viral
(i.e., widely shared) stories. Showing users viral stories can increase information aggregation,
but it can also generate steady states where most shared stories are wrong. These misleading
steady states self-perpetuate, as users who observe wrong stories develop wrong beliefs, and
thus rationally continue to share them. Finally, we describe several consequences for platform
design and robustness.
Keywords: social learning, selective equilibrium sharing, social media, platform design, endogenous virality
We thank Leonie Baumann, Michel Benaïm, Aislinn Bohren, Tommaso Denti, Glenn Ellison, Mira Frick, Drew
Fudenberg, Co-Pierre Georg, Ben Golub, Ryota Iijima, Bart Lipman, George Mailath, Suraj Malladi, Chiara Margaria,
Meg Meyer, Evan Sadler, Philipp Strack, Heidi Thysen, Fernando Vega-Redondo, Rakesh Vohra, Yu Fu Wong, and
numerous seminar participants for valuable comments and discussions. Byunghoon Kim, Stephan Xie, and Tyera
Zweygardt provided excellent research assistance. We gratefully acknowledge financial support from NSF Grants
SES-2214950 and SES-2215256.
Boston University. Email: krishnadasaratha@gmail.com
University of Pennsylvania. Email: hesichao@gmail.com
1 Introduction
In recent years, viral content on social media platforms has become a major source of news and
information for many people. What content users consume often depends on the news feeds created
by platforms like Twitter, Facebook, and Reddit. Which stories go viral and which disappear is
jointly determined by the algorithms generating these feeds and users’ actions on the platforms
(e.g., sharing, retweeting, or upvoting stories).
How does the design of the news feed affect how users learn on such platforms? Consider a
platform deciding how much to push widely shared (or highly upvoted) content into users’ news
feeds. On the one hand, a news feed that primarily shows users widely shared stories can create
a social version of the confirmation bias: incorrect but initially popular stories spread widely and
determine people’s beliefs, even though they are contradicted by most of the information that arrives
later. One might expect such feedback loops with naive users, but we show they can also arise in
an equilibrium model with rational users. The idea is that when stories supporting an incorrect
position are shared more, subsequent users tend to see these incorrect stories in their news feeds
due to the stories’ popularity, and hence form incorrect beliefs through Bayesian updating. If users
derive utility from sharing accurate content and thus share stories that agree with their beliefs,
they will rationally share these false stories and further increase their popularity. Users have less
exposure to the true stories: even if these stories are more numerous, they are shared less than the
false stories and therefore shown less by the news-feed algorithm.
But on the other hand, selecting news stories based on their popularity may help aggregate more
information. Seeing a particular story in a news feed that selects widely shared content gives a user
more information than the realization of a single signal. The popularity of this story also tells the
user about the past sharing decisions of their predecessors, and thus lets the user draw inferences
about the many stories that these predecessors saw in their news feeds. In some circumstances,
seeing just a few stories in a news feed that emphasizes viral content can lead to strong Bayesian
beliefs about the state of nature. This can happen even if individual stories are imprecise signals
about the state, since sophisticated users can use the selection of these stories to infer much more
about sharing on the platform.
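As a rough back-of-the-envelope illustration (ours, not a calculation from the paper): a single conditionally independent story with precision p > 1/2 multiplies a Bayesian's posterior odds in favor of the state it supports by p/(1-p). If the k shares behind a viral story could instead be read as k independent endorsements of comparable precision, the odds would move by (p/(1-p))^k, an exponentially larger factor. Equilibrium sharing decisions are correlated, so the correct inference is weaker than this benchmark, but the comparison conveys why a few stories selected for popularity can move beliefs far more than a few raw signals.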
This work examines the trade-offs in choosing how much to feature viral content in a news feed
and studies how this design choice affects social learning on the platform. There is an active public
discussion about how news feeds shape society’s beliefs. Some commentators have blamed the
wide spread of misinformation about issues ranging from public health to politics on social media
platforms pushing viral but inaccurate content into users’ feeds. We contribute to this discussion
by developing an equilibrium model of people learning from news feeds and sharing news stories
on a platform. We characterize learning outcomes under different news-feed designs, taking into
account rational users’ responses to different designs and to other people’s equilibrium sharing
patterns. The model also provides insights about specific applied questions, such as how the
platform will optimally design its news feed to maximize its objective and the robustness of the
platform to manipulation by a malicious attacker.
In our model, a large number of users arrive in turn and learn about a binary state. Each user
receives a conditionally independent binary signal about the state (which we call a news story)
and observes a sample of stories from predecessors (which we call a news feed). These stories
are sampled using a news-feed algorithm that interpolates between choosing a uniform sample of
the past stories and choosing each story with probability proportional to its popularity (i.e., the
number of times it has been shared). Users are Bayesians and know the news-feed algorithm, so
they appropriately account for selection in the stories they see.1 Users then choose which of these
news-feed stories to share. We assume users prefer to share stories that match the true state, given
their endogenous beliefs. This simple utility specification, which one might think is conducive to
learning, can nevertheless generate rich learning dynamics including persistent learning failures.
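To make these moving parts concrete, here is a minimal simulation sketch of the environment just described. It is ours, not the paper's: the precision p, feed size, number of shares per user, the particular way the sampling rule mixes uniform and popularity-proportional draws with weight lam, and the use of the simple "share what matches the majority of your observations" heuristic in place of the full equilibrium strategy are all illustrative assumptions.

import numpy as np

def simulate_platform(n_users=2000, p=0.6, feed_size=5, n_share=2, lam=0.5, seed=0):
    """Illustrative simulation (our assumptions, not the paper's exact model).

    The true state is 1. Each arriving user draws an original story that
    matches the state with probability p, sees a feed of feed_size stories
    sampled from past stories (each slot is popularity-weighted with
    probability lam and uniform otherwise), adopts the position supported
    by a majority of everything they observe, and re-shares up to n_share
    feed stories that agree with that position. Returns the path of viral
    accuracy: the share-weighted fraction of stories matching the state.
    """
    rng = np.random.default_rng(seed)
    stories = []   # story i matches the true state (1) or not (0)
    shares = []    # cumulative share count of story i (posting counts as 1)
    path = []
    for t in range(n_users):
        signal = int(rng.random() < p)
        stories.append(signal)
        shares.append(1)

        feed = []
        if t > 0:
            weights = np.array(shares[:-1], dtype=float)
            weights /= weights.sum()
            for _ in range(feed_size):
                if rng.random() < lam:
                    feed.append(int(rng.choice(t, p=weights)))   # popularity-weighted draw
                else:
                    feed.append(int(rng.integers(t)))            # uniform draw

        observed = [signal] + [stories[i] for i in feed]
        votes_for_true = sum(observed)
        if 2 * votes_for_true == len(observed):
            believes_true = bool(signal)                          # tie-break with own story
        else:
            believes_true = 2 * votes_for_true > len(observed)
        target = 1 if believes_true else 0

        # Re-share up to n_share distinct feed stories matching the believed state.
        for i in list(dict.fromkeys(j for j in feed if stories[j] == target))[:n_share]:
            shares[i] += 1

        mass_true = sum(s for st, s in zip(stories, shares) if st == 1)
        path.append(mass_true / sum(shares))
    return np.array(path)

Running simulate_platform for small and large values of lam and looking at where the path settles gives a quick sense of the informative versus misleading regimes discussed next.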
The platform’s design choice in our model is a virality weight λ that captures the weight placed on popularity when generating news feeds: higher λ corresponds to a news-feed algorithm that shows more viral stories. The evolution of content on the platform is described by a stochastic process in [0,1] we call viral accuracy, which measures the relative popularity of the stories that match the true state in each period. We show viral accuracy almost surely converges to a (random) steady-state value, which depends on the randomness in signal realizations and in news-feed sampling. In equilibrium, there is always an informative steady state where most stories in news feeds match the state. But when the virality weight is high enough, there can also be a misleading steady state in equilibrium, where most stories in news feeds do not match the state (so viral accuracy is less than 1/2). At a misleading steady state, users tend to see false stories, and therefore believe in the wrong state and share these false stories. The misleading steady states correspond to the socially-generated confirmation bias described above.

1 An alternative approach would be to assume users are naive and fail to account for this selection. Many of the main forces we highlight in our equilibrium framework would also appear in this behavioral model.
These misleading steady states emerge when λ crosses a threshold, which we call the critical virality weight λ*. Misleading steady states exist in equilibrium when the virality weight is at or above this threshold, but not below it. A key finding is that this emergence is discontinuous: at the threshold virality level λ* where the misleading steady state first appears, the probability of learning converging to this bad steady state is strictly positive. As a consequence, the accuracy of content on the platform jumps downward at this threshold. Below the critical virality weight, however, the unique informative steady state becomes monotonically more accurate as λ increases. This result formalizes the intuition mentioned above that a more viral news feed helps aggregate more information. A platform choosing λ therefore faces a trade-off between facilitating more information aggregation and preventing the possibility of a misleading steady state in equilibrium.
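For a quick numerical feel for this trade-off, one can sweep lam in the simulation sketch above and record how often the process ends up below 1/2. The frequencies this produces depend entirely on our illustrative parameterization and are not the paper's characterization of λ*.

import numpy as np

# Reuses simulate_platform from the sketch above (illustrative assumptions only).
def misleading_frequency(lam, n_runs=200, **kwargs):
    """Fraction of runs whose long-run viral accuracy ends below 1/2."""
    tails = [simulate_platform(lam=lam, seed=s, **kwargs)[-1] for s in range(n_runs)]
    return float(np.mean(np.array(tails) < 0.5))

for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"lam = {lam:.2f}: ended misleading in {misleading_frequency(lam):.0%} of runs")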
Since misleading steady states only appear when the virality weight exceeds the threshold λ*, comparative statics of this threshold with respect to other parameters tell us which platform features
make it more susceptible to misleading steady states. Platforms are more susceptible when news
stories are not very precise, when news feeds are large, and when users share many stories. That
is, misleading steady states arise on platforms that let users consume and interact with too much
social information relative to the quality of their private information from other sources.
We give two consequences of our results for platform design. First, we ask what virality weight
a platform would choose to maximize a broad class of objectives, including users’ equilibrium utility
from sharing stories on the platform. Because of the discontinuous change in the set of equilibrium
steady states around the critical virality weight λ*, the optimal choice of λ either converges to this
threshold or lies strictly above it as the number of users grows. We then discuss when a platform
is robust to malicious attackers who manipulate its content. If a platform chooses λ sufficiently below the threshold λ*, a large amount of manipulation is required to produce a misleading steady
state. We provide a simple explicit lower bound on this amount, which we interpret as a robustness
guarantee.
At a technical level, our paper applies techniques from the theory of stochastic approximation to
an equilibrium model where agents respond optimally to the evolution of a stochastic process. The
same techniques have been used in economics to study dynamics under behavioral heuristics (e.g.,
Benaïm and Weibull (2003) in evolutionary game theory or Arieli, Babichenko, and Mueller-Frank
(2022) in naive social learning). Even for a fixed strategy, the system we study can often converge
to multiple steady states and there is no closed-form expression for the probability of reaching a
given steady state. Understanding outcomes under equilibrium sharing rules is even more complex.
To make progress despite this complexity, we show that outcomes under a specific simple strategy
(sharing stories that match a majority of one’s observations) tell us about the equilibrium outcomes
(which cannot be characterized directly). In particular, a misleading steady state exists when users
choose equilibrium sharing strategies if and only if one exists when users follow this simple strategy.
1.1 Related Literature
We first discuss how our model relates to a recent literature on learning from shared signals. Several
papers have looked at different models of news sharing or signal sharing. As we discuss in detail
below, the existing work focuses on the dissemination of a single signal, or on settings where signals
are shared once with network neighbors but not subsequently re-shared. Our model differs on
these two dimensions. First, we consider a platform where many signals about the same state
circulate simultaneously. These signals interact: a user’s social information consists of the multiple
stories that they see in their news feed, so the probability that they share a given story depends
on whether the other stories corroborate it or contradict it. Second, we allow signals to be shared
widely through a central platform algorithm that generates news feeds for all users. A signal can
become popular due to early agents’ sharing decisions and get pushed into a later agent’s news feed,
and this later agent can re-share the same signal. The combination of these two model features
generates the social version of confirmation bias that we outlined earlier.
Bowen, Dmitriev, and Galperti (2023) study a model where signals are selectively shared at most
once with network neighbors, but agents are misspecified and partially neglect this selection. This
bias leads to mislearning, and it also generates polarization in social networks with echo chambers.
By contrast, we focus on rational agents who make endogenous sharing decisions in equilibrium.
Bowen et al. (2023) note that: