Like many NZ teens in the 2010s, I was pretty captured by the world of YouTube. I would spend hours scrolling through this ‘free’ content, watching Button Poetry, Zoella, and Sims building videos. It was one of the first environments that allowed me to freely explore content that I chose, with no snatching of a TV remote or prying parental eyes. I was free to indulge my own interests in a private space. However, some of this content, in hindsight, had a pretty damaging impact on my mental health. YouTube, drawing on the data it had collected about me, started to serve me very specific diet-based content.
The platform had deemed me a perfect target for the pretty and persuasive ideologies of the high-carb, low-fat vegan community. And YouTube was right: I was interested. The thumbnails of these videos often pictured thin white women posed in bikinis and activewear, next to their massive smoothie bowls and kūmara chips. To my deeply unhealthy mindset at the time, this type of content was intensely alluring. These creators were not qualified to give any of this advice, but my recommendations were full of their videos, promoting handfuls of harmful ideas about diet, health, and body image.
Unsurprisingly, constant consumption of this rhetoric was deeply harmful for my mental health and damaged my already-shaky relationship with food. The extreme diet change also damaged my physical health, with my blood tests showing some cause for concern. I’m still working to undo the damage of this diet, and in my daily life I have to continuously challenge the harmful ideologies that this content praised. Part of this work has included an attempt to identify the factors that put me in this position. There’s a long list, but one particularly worrying element is the role YouTube itself may have played.
The term ‘filter bubble’ was coined by Eli Pariser with the release of his book The Filter Bubble: What the Internet Is Hiding from You in 2011. A filter bubble, in Pariser’s explanation, is the personal ecosystem of information that each user experiences as algorithms cater content to them. Other phrases are used in similar ways, including ‘echo chamber,’ and these terms are often employed with reference to explicitly political content and growing polarisation. Pariser uses the differing results of a Google search for ‘BP’ to explain the concept: one user finds investment news for British Petroleum, and another finds information about the tragic Deepwater Horizon oil spill. He explains that the results are determined by algorithms that present information based on the user’s past browsing and interactions with different links.
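Neither Pariser nor the platforms publish their actual ranking code, so any concrete example is necessarily a guess. Still, the feedback loop he describes can be sketched in a few lines. The toy Python below (with a made-up catalogue and made-up weights; nothing here reflects how YouTube or Google actually rank content) simply boosts whatever topics a user has already clicked, while the simulated user clicks whatever the feed pushes hardest:

```python
import random
from collections import Counter

# A made-up catalogue: every item is reduced to a single topic tag.
CATALOGUE = (
    ["news"] * 30 + ["music"] * 30 + ["fitness"] * 20 + ["extreme diet"] * 20
)

def recommend(history, n=10):
    """Weight each item by how often the user has clicked its topic,
    plus a small baseline so unfamiliar topics remain possible."""
    clicks = Counter(history)
    weights = [1 + 5 * clicks[topic] for topic in CATALOGUE]
    return random.choices(CATALOGUE, weights=weights, k=n)

def simulate(rounds=20):
    history = ["fitness"]  # a single innocuous click to start
    for _ in range(rounds):
        feed = recommend(history)
        # The user clicks whatever the feed serves up most often.
        history.append(Counter(feed).most_common(1)[0][0])
    return Counter(history)

if __name__ == "__main__":
    random.seed(1)
    print(simulate())  # the history, and so the feed, narrows to one topic
```

Run it for a handful of rounds and the feed collapses onto a single topic: personalisation feeds on interaction, and interaction feeds on personalisation. That self-reinforcing loop is the shape of the bubble.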
Colloquially, we seem to be getting more and more familiar and comfortable talking about The Algorithm. It’s not uncommon to see TikTok commenters, for example, lamenting the corner of the app they feel stuck in (you guessed it, straight TikTok is alive and well). In a survey of 21 participants taken by Craccum, 85% of respondents reported that they felt as if their algorithms and social media had led them towards a particular perspective, type of content, or community; 10% said maybe, and 5% said no. Respondents identified a huge variety of content that they felt had pulled them towards a particular perspective, viewpoint, or community.
One respondent described a shift they had noticed, explaining, “I was really into true crime and the paranormal, but this led into recommendations of Freemason and Illuminati conspiracies, to downright reptilian theories, and even very anti-Semitic content.”
Another described a potential algorithmic path, stating, “Somehow, fangirling communities led me towards feminism through suggested pins on Pinterest. Facebook groups have also been suggested to me based on my interests, that have gone to the more extreme ends (eg. anti-capitalist, abolitionist vegan).” In both cases, the respondents felt that engaging with content they were genuinely interested in had led to recommendations for more extreme material.
Other respondents found themselves noticing targeted advertising. One stated, “I’m Queer and I have seen on my Facebook Ad Interests that it has detected I’m interested in ‘same sex relationships’ and ‘LGBT Community’ — I often get ads now for, like, Gay Men’s party clothing? But have also had ads for organisations like the New Zealand AIDS Foundation, Ending HIV, etc.”
Another wondered about the information that was supplied to their ad targeting: “For at least the last half-year, my Audible ad recommendations on Facebook have been coming up with titles like ‘How Not to Die Alone,’ ‘Difficult Mothers, Adult Daughters,’ and ‘Stop Picking Your Skin.’ I do have a background of domestic abuse, particularly from my mum, and I have depression, so the really niche ones are pretty uncomfortable. For other platforms, it’s not too noticeable.”
One respondent felt that these types of recommendations were especially notable when they were freshly generating a new set of data for the algorithmic filtering systems: “Whenever I reset my watch/search history or have started using a different Google account, I can tell that things I have watched, sought out, or even just hovered over in the case of Facebook, have an effect.”
In this survey, respondents were also asked whether they came across content that made them uncomfortable, and on which platforms they had seen it. Facebook, TikTok, and YouTube were the top three platforms named. The content included disturbing and graphic visuals; sexist, racist, homophobic, or xenophobic comments and posts; examples of ‘body-checking’; anti-vaccination posts; conspiracy theories; and “hateful political shit.”
Of course, so much of this information, including my own account, is anecdotal. These are retellings of potential links and educated guesses about how recommendation systems work, with confirmation bias mixed in with our own perspectives. Unsurprisingly, researching how filter bubbles work, and the extent to which they affect users, is really difficult. It’s tricky to account for any potential impacts beyond the individual accounts we have, as our algorithms are so personalised, so complex, and so hidden from public view. The algorithmic systems are also constantly updated and changed, so long-term observational studies are hard to apply. There have been attempts: one 2020 study from researchers at Virginia Tech found evidence for a conspiracy-theory filter bubble on YouTube. They determined that, once a user has developed a particular watch history, the personalisation attributes of the platform affect the amount of misinformation in their recommendations.
However, as researchers wade through massive amounts of data to try to understand the potential impacts of algorithmic recommendation systems, users are still being shown misinformation and disturbing content. In the everyday use of social media, there remains a need to combat and navigate potentially unpleasant and distressing content. There are many ways to address the need for change here, from teaching basic media literacy and pushing for further regulation to local community conversations about potential misinformation. The survey respondents also reported behaviours they used to try to avoid certain content and redirect their algorithmic recommendations.
A few respondents explained that they tried to signal their disinterest to the algorithm by limiting their interaction with the content. One stated, “I scrolled past and tried to show the algorithm that I was not interested.” Similarly, another explained, “I try to swipe as soon as I know so [the algorithm] knows I don’t want them.”
Others took a more direct approach. One said, “I click ‘Not Interested,’ or ‘Hide All Content,’ and if I see a video of someone getting seriously harmed (this has happened on Facebook), I report the video. I try my best to report fake news or videos which feel like hate speech.” Another stated, “I have blocked certain kinds of ads (e.g. political ads and medical misinformation) and I have avoided certain social media platforms because I have seen content that has made me uncomfortable that originated on there… And in particular, on Facebook, I will unfollow/block anyone or any recommended pages who post disturbing content/misinformation.” Another explained, “In TikTok’s case, I usually click the ‘Not Interested’ button so that the algorithm filters those kinds of videos out of my For You page.”
Some respondents also explained that they would just try to ignore disturbing content, or avoid certain social media platforms and forums where they believed unsavoury content was circulating.
There’s a real need for these platforms to take responsibility for the distressing content and misinformation that is circulating, and for the potential damage of the filter bubbles that their secretive algorithms create. Social media companies are responsible for the content they profit from sharing. In the meantime, to make our own interactions with social media safer, we can make use of the tools described above: blocking, reporting, and flagging misinformation. If you’re finding yourself disturbed or concerned about the content you’re being shown on social media, or unsure about the validity of the information you’re seeing, reach out to friends and whānau, or to counselling services.
Your algorithms are not representative of who you are; they’re likely just a ploy to keep you scrolling on the platforms for as long as possible. I mean, Google knows a bit about me, but it also thinks I’m an avid *gamer* who speaks Italian and likes baseball. It’s wrong on all counts.
Eslam Hussein, Prerna Juneja, and Tanushree Mitra. 2020. Measuring Misinformation in Video Search Platforms: An Audit Study on YouTube. Proc. ACM Hum.-Comput. Interact. 4, CSCW1, Article 48 (May 2020), 27 pages. https://doi.org/10.1145/3392854