When Cai was 16 years old, he noticed a change in the videos being recommended to him on social media. What began with a cute dog video soon escalated into disturbing footage of violent fights, misogynistic content, and even someone being hit by a car. Unfortunately, he is not alone in being exposed to harmful content on social media.
According to Andrew Kaung, a former analyst who worked on user safety at TikTok until June 2022, teenage boys are being shown violent and pornographic content that promotes misogynistic views, while teenage girls are recommended content based on their interests. Social media companies use artificial intelligence tools to remove harmful content and to flag inappropriate videos for review by human moderators. However, these AI tools cannot catch everything, and harmful content is still slipping through the cracks.
Kaung explains that the algorithms use reinforcement learning to maximise engagement, showing users videos they are likely to watch, like, and comment on. These algorithms, however, do not always differentiate between harmful and beneficial content.
Cai tried using Instagram's and TikTok's tools to indicate that he wasn't interested in violent and misogynistic content, but he continued to be recommended those kinds of videos. Most companies allow anyone aged 13 or above to sign up, and although TikTok claims to remove around 99% of harmful content, concerns remain.
Despite ongoing efforts to improve moderation and algorithmic recommendations, many social media companies have unintentionally recommended harmful content to children and teenagers. The UK regulator Ofcom has said that social media companies must make changes to reduce the amount of harmful content being recommended to young people.
Read the full article from the BBC here: Read More