Social Media Algorithms and Their Impact on Teen Boys: A Growing Concern
- V. E. K. Madhushani
- Sep 6, 2024
- 4 min read
Vithanage Erandi Kawshalya Madhushani, Jade Times Staff
V.E.K. Madhushani is a Jadetimes news reporter covering Technology.

How Algorithms Expose Teenagers to Harmful Content and the Struggle for Safer Online Spaces
In 2022, Cai, a 16-year-old, was scrolling through his social media feeds when he encountered disturbing content. What started as a harmless video of a cute dog quickly escalated to clips of violence, misogynistic rants, and graphic images. He couldn't help but wonder: why was this content being shown to him?
During the same period, Andrew Kaung, an analyst focused on user safety, was investigating the algorithms at TikTok, where he worked from December 2020 to June 2022. Along with a colleague, Andrew examined the type of content being recommended to UK teenagers, including 16-year-olds like Cai. Andrew, who previously worked at Meta (which owns Instagram), found that TikTok's algorithms were serving violent and inappropriate content to young boys, while girls were shown vastly different posts based on their interests.
Social media companies like TikTok and Meta rely on AI tools to remove harmful content and flag it for human review. However, these systems are not foolproof, and many videos slip through the cracks. According to Andrew, during his tenure at TikTok, videos that weren't immediately flagged by AI or reported by users were only reviewed manually if they surpassed a certain view threshold, at one point set at 10,000 views. This policy raised concerns that younger users could be exposed to harmful material before it was ever flagged for review.
TikTok asserts that 99% of content removed for violating its rules is taken down by AI or human moderators before reaching 10,000 views. Meta claims to offer over 50 tools and features to ensure teens have positive and age-appropriate experiences on its platforms. Despite these assurances, Cai's experience reflects a different reality. Even after using tools on Instagram and TikTok to signal disinterest in violent or misogynistic content, he continued to be bombarded with such posts.
Cai's interests include the Ultimate Fighting Championship (UFC), and he admits to occasionally engaging with videos from controversial influencers. However, the violent and extreme content pushed his way was not something he actively sought out. "It stains your brain," Cai says, describing how these disturbing images linger in his mind throughout the day. He noticed that while teenage girls his age were often recommended content related to music and makeup, his feeds were filled with violence and harmful ideologies.
The algorithms' influence extends beyond individual experiences. Cai observed a friend becoming increasingly drawn to content from a controversial influencer, eventually adopting misogynistic views. "He took it too far," Cai recalls, noting the need to give his friend a reality check. Cai tried to manipulate the algorithms by disliking content and unliking posts, hoping it would alter the recommendations, but to little effect.
Understanding How Algorithms Work
Andrew explains that TikTok's algorithms are designed to maximize engagement, regardless of whether that engagement is positive or negative. When users sign up, they specify their interests, which help shape the initial content they see. However, the algorithms also take into account the preferences of similar users, which can inadvertently direct young boys towards violent content if others with similar profiles have engaged with it.
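To make that dynamic concrete, here is a minimal, hypothetical sketch of similar-user recommendation. The user names, content labels, and function names are invented for illustration and are not TikTok's actual system; the point is only that content can reach a user because people with overlapping histories engaged with it, not because the user asked for it.

```python
# Hypothetical similar-user ("collaborative filtering") recommendation sketch.
# A user can be shown violent clips they never sought out, simply because
# users with similar engagement histories watched them.

def jaccard(a: set, b: set) -> float:
    """Similarity between two users' engagement histories."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(target: str, engagements: dict[str, set], top_k: int = 3) -> list[str]:
    """Rank items engaged with by similar users, excluding items the target has seen."""
    seen = engagements[target]
    scores: dict[str, float] = {}
    for user, items in engagements.items():
        if user == target:
            continue
        sim = jaccard(seen, items)
        for item in items - seen:
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Invented example: a teenager who only watched UFC and dog videos...
engagements = {
    "teen":   {"ufc_highlights", "cute_dog"},
    "user_a": {"ufc_highlights", "misogynistic_rant", "fight_clip"},
    "user_b": {"cute_dog", "fight_clip", "graphic_video"},
}
print(recommend("teen", engagements))
# ...is still recommended violent clips, because similar users engaged with them.
```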
TikTok's algorithms use reinforcement learning, a method where AI learns by trial and error to predict what will keep users watching. Andrew highlighted a major issue: the teams training these algorithms often didn't know the exact nature of the content being recommended. Instead, they relied on abstract data like viewer numbers and engagement trends.
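The trial-and-error point can be illustrated with a toy example. The sketch below is a simplified epsilon-greedy bandit with invented content categories and reward numbers, not TikTok's production system; it shows how a recommender that optimises only an abstract engagement signal, such as watch time, will keep serving whatever holds attention longest, without ever "seeing" what that content actually is.

```python
import random

# Toy epsilon-greedy bandit: learns, purely from watch time, which content
# category keeps a user watching. The labels and numbers are hypothetical.

CATEGORIES = ["music", "sport", "violent_clips"]  # invented labels

def simulate_watch_seconds(category: str) -> float:
    """Stand-in for a real user's reaction (hypothetical averages)."""
    base = {"music": 5, "sport": 8, "violent_clips": 12}[category]
    return max(0.0, random.gauss(base, 2))

def run_bandit(steps: int = 5000, epsilon: float = 0.1) -> dict[str, float]:
    totals = {c: 0.0 for c in CATEGORIES}   # cumulative watch time per category
    counts = {c: 0 for c in CATEGORIES}     # how often each category was served
    for _ in range(steps):
        if random.random() < epsilon or 0 in counts.values():
            choice = random.choice(CATEGORIES)  # explore: try something at random
        else:
            # exploit: serve the category with the best average watch time so far
            choice = max(CATEGORIES, key=lambda c: totals[c] / counts[c])
        reward = simulate_watch_seconds(choice)  # engagement, positive or negative
        totals[choice] += reward
        counts[choice] += 1
    return {c: counts[c] / steps for c in CATEGORIES}

print(run_bandit())  # the category with the longest watch time comes to dominate the feed
```

If disturbing clips happen to hold attention longest, an engagement-only objective like this will keep recommending them, which mirrors Andrew's concern that the teams tuning these systems worked from viewer numbers and engagement trends rather than the content itself.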
In 2022, Andrew and a colleague suggested that TikTok update its moderation system to clearly label harmful videos and employ more specialized moderators. However, their proposals were initially rejected. TikTok has since stated that it has increased its number of specialist moderators and categorizes harmful content into specific queues for review.
The Struggle for Change
From his perspective within TikTok and Meta, Andrew found it challenging to advocate for necessary changes. "We are asking a private company whose interest is to promote their products to moderate themselves, which is like asking a tiger not to eat you," he remarks. He believes that children's and teenagers' lives would improve if they reduced their smartphone use, but Cai disagrees. For him, the solution lies in making social media platforms more responsive to user feedback about content preferences.
In the UK, new legislation under the Online Safety Act, set to take effect in 2025, will require social media companies to verify users' ages and prevent harmful content from being recommended to minors. Almudena Lara, Ofcom's online safety policy development director, noted that while harmful content affecting young women has been highlighted, the algorithms promoting violence and hate towards teenage boys have received less attention.
Social media platforms like TikTok and Meta continue to emphasize their commitment to user safety, claiming to use innovative technology and extensive safety measures. However, Cai and others like him still feel that these platforms prioritize profit over user well-being. As regulatory bodies like Ofcom prepare to enforce stricter measures, the hope is that companies will finally address the pressing need for safer online spaces for all users, especially vulnerable teenagers.