
Auditing YouTube’s Recommendation Algorithm for
Misinformation Filter Bubbles
IVAN SRBA, Kempelen Institute of Intelligent Technologies, Slovakia
ROBERT MORO, Kempelen Institute of Intelligent Technologies, Slovakia
MATUS TOMLEIN, Kempelen Institute of Intelligent Technologies, Slovakia
BRANISLAV PECHER∗, Faculty of Information Technology, Brno University of Technology, Czechia
JAKUB SIMKO, Kempelen Institute of Intelligent Technologies, Slovakia
ELENA STEFANCOVA, Kempelen Institute of Intelligent Technologies, Slovakia
MICHAL KOMPAN†, Kempelen Institute of Intelligent Technologies, Slovakia
ANDREA HRCKOVA, Kempelen Institute of Intelligent Technologies, Slovakia
JURAJ PODROUZEK, Kempelen Institute of Intelligent Technologies, Slovakia
ADRIAN GAVORNIK, Kempelen Institute of Intelligent Technologies, Slovakia
MARIA BIELIKOVA‡, Kempelen Institute of Intelligent Technologies, Slovakia
In this paper, we present the results of an auditing study performed on YouTube, aimed at investigating how fast
a user can get into a misinformation filter bubble, but also what it takes to “burst the bubble”, i.e., revert the
bubble enclosure. We employ a sock puppet audit methodology, in which pre-programmed agents (acting
as YouTube users) delve into misinformation filter bubbles by watching misinformation-promoting content.
Then they try to burst the bubbles and reach more balanced recommendations by watching misinformation-debunking
content. We record search results, home page results, and recommendations for the watched videos.
Overall, we recorded 17,405 unique videos, out of which we manually annotated 2,914 for the presence of
misinformation. The labeled data was used to train a machine learning model classifying videos into three
classes (promoting, debunking, neutral) with an accuracy of 0.82. We use the trained model to classify the
remaining videos that would not be feasible to annotate manually.
Using both the manually and automatically annotated data, we observe the misinformation bubble dynamics
for a range of audited topics. Our key finding is that even though filter bubbles do not appear in some situations,
when they do, it is possible to burst them by watching misinformation-debunking content (although the effect
manifests differently from topic to topic). We also observe a sudden decrease of the misinformation filter bubble
effect when misinformation-debunking videos are watched after misinformation-promoting videos, suggesting a strong
contextuality of recommendations. Finally, when comparing our results with a previous similar study, we do
not observe significant improvements in the overall quantity of recommended misinformation content.
∗Also with Kempelen Institute of Intelligent Technologies.
†Also with slovak.AI.
‡Also with slovak.AI.
Authors’ addresses: Ivan Srba, Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia, ivan.srba@kinit.sk;
Robert Moro, Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia, robert.moro@kinit.sk; Matus Tomlein,
Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia, matus.tomlein@kinit.sk; Branislav Pecher, Faculty of
Information Technology, Brno University of Technology, Brno, Czechia, branislav.pecher@kinit.sk; Jakub Simko, Kempelen
Institute of Intelligent Technologies, Bratislava, Slovakia, jakub.simko@kinit.sk; Elena Stefancova, Kempelen Institute of
Intelligent Technologies, Bratislava, Slovakia, elena.stefancova@kinit.sk; Michal Kompan, Kempelen Institute of Intelligent
Technologies, Bratislava, Slovakia, michal.kompan@kinit.sk; Andrea Hrckova, Kempelen Institute of Intelligent Technologies,
Bratislava, Slovakia, andrea.hrckova@kinit.sk; Juraj Podrouzek, Kempelen Institute of Intelligent Technologies, Bratislava,
Slovakia, juraj.podrouzek@kinit.sk; Adrian Gavornik, Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia,
adrian.gavornik@intern.kinit.sk; Maria Bielikova, Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia,
maria.bielikova@kinit.sk.
©2022 Copyright held by the owner/author(s). Publication rights licensed to ACM.
This is the author’s version of the work. It is posted here for your personal use. Not for redistribution. This work has just
been accepted to ACM Transactions on Recommender Systems (ACM TORS), https://doi.org/10.1145/3568392.
ACM Transactions on Recommender Systems (ACM TORS), Vol. 0, No. 0, Article 0. Publication date: 2022.
arXiv:2210.10085v1 [cs.IR] 18 Oct 2022