strong view count policy and has not been afraid to enforce it. In December
2012, the platform deleted 2 billion views from the channels of record companies
such as Universal and Sony [22] [26] [2] [19]. Over the years, countless YouTubers
have suffered sudden and drastic cuts to their views (and many have complained
about it, often through YouTube videos). According to YouTube’s policies [24]
[35] [25], these interventions aim to preserve a “meaningful human interaction
on the platform” and to oppose “anything that artificially increases the number
of views, likes, comments or other metric either through the use of automatic
systems or by serving up videos to unsuspecting viewers” [24].
Despite the media interest in the phenomenon [27] [38], not much research
has been carried out on the implementation of this policy. The general lack of
studies on the subject is partly explained by the fact that, since 2016, YouTube
has largely restricted access to its data through the API, making researchers’
work more difficult. To the best of our knowledge, the only previous work on
view count corrections is that of Marciel et al. [32] in 2016. This paper studies
the phenomenon of view corrections in relation to video monetization, in order to
identify possible fraud, drawing on research on ad fraud in other social
media [14] [34]. In their work, Marciel et al. created several sample YouTube
channels and inflated their views through bots. Strikingly, they found that
“YouTube monetizes (almost) all the fake views” generated by the authors, while
it “detects them more accurately when videos are not monetized”.
Although we consider this investigation into the correlation between monetization
and view correction a useful first step toward understanding YouTube’s
policy, we believe that some other pressing questions should be addressed by
the scientific community. For instance, can fake views have an impact on the
success of a video and be used to manipulate YouTube’s attention cycle? It is
well known that, on social media, future visibility is highly dependent on past
popularity, as trending content tends to be favored by human influencers [42] and
recommendation algorithms [23], both of which are highly sensitive to trendiness
metrics [49]. On YouTube in particular, the recommendation engine represents
the most important source of views [56] and, as admitted by its developers,
“in addition to the first-order effect of simply recommending new videos that
users want to watch, [has] a critical secondary phenomenon of bootstrapping
and propagating viral content” [18]. Quite deliberately, YouTube’s algorithm
creates a positive feedback that skews visibility according to a rich-get-richer
dynamic [5] [36] [46]. As acknowledged by YouTube engineers: “models trained
using data generated from the current system will be biased, causing a feedback
loop effect. How to effectively and efficiently learn to reduce such biases is an
open question” [55].
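To make this rich-get-richer dynamic concrete, the toy simulation below (a minimal sketch, not drawn from the paper; the video counts, base rate, and number of steps are arbitrary illustrative choices) assigns each new view to a video with probability proportional to its current view count, the way a popularity-biased recommender would, so that a small initial boost lets one video capture most of the subsequent attention.

import random

def simulate_views(initial_views, steps=1000, base_rate=1.0, seed=0):
    # Toy rich-get-richer model: each new view goes to a video with
    # probability proportional to its current views plus a small base rate,
    # mimicking a popularity-biased recommender.
    random.seed(seed)
    views = list(initial_views)
    for _ in range(steps):
        weights = [v + base_rate for v in views]
        winner = random.choices(range(len(views)), weights=weights)[0]
        views[winner] += 1
    return views

# Two otherwise identical videos; the second starts with a small (possibly fake) boost.
print(simulate_views([0, 50]))  # the boosted video typically captures most new views

In such a toy model the early advantage compounds over time, which is precisely why the timing of fake view correction matters.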
This is where fake views come into play. Indeed, if the correction of ille-
gitimate views happens too late, these views have the potential to weigh in
the cycle of trendiness [12] and unfairly propel their targets. If YouTube’s fake
view correction is significantly slower than its recommendation dynamics, then
artificially promoted videos risk being favored by human and algorithmic rec-
ommendations, and thus reach larger audiences and collect extra real views. If,
before being deleted, fake views are able to trigger a cascade effect that increases