Meta, the company behind Instagram, is currently looking into issues surrounding artificial intelligence-generated accounts on its platform that sexualize disabled individuals. This investigation follows a BBC report uncovering numerous profiles featuring AI-created images of women with disabilities such as Down syndrome or vitiligo.
Many of these accounts display fake photos and videos of women with visible disabilities, including missing limbs, scarring, or wheelchair use, posed provocatively in scant clothing. Some profiles have quickly amassed large followings; one account presenting itself as conjoined twins had gained around 400,000 followers since its creation in December 2025.
Kamran Mallick, CEO of Disability Rights UK, described the situation as horrific, condemning the accounts for fetishizing, mocking, or exploiting disabled identities. He emphasized that this technology is being misused to dehumanize disabled people, turning their real-life experiences into digital caricatures for others’ profit and entertainment. Similarly, a spokesperson from Gemini Untwined, a charity supporting surgeries for rare newborn conditions, criticized the depiction of conjoined twins as entertainment, calling it morally unacceptable and insensitive to the challenges these children and their families face.
Researcher Dr. Amy Gaeta from the University of Cambridge highlighted the widespread availability of generative AI tools, many of which lack sufficient restrictions to prevent harmful content creation. She pointed out that some programs, even those with rules against explicit content, can be circumvented easily. Gaeta noted, “Sometimes, without my prompting or intent, hypersexualized images of disabled people will be generated,” revealing biases in the datasets used to train these AI systems. Meanwhile, regulatory bodies like Ofcom are monitoring AI developments and emphasizing that online safety laws require tech firms to combat illegal and harmful content, including abusive material based on protected characteristics such as disability. The Equality and Human Rights Commission has also expressed concern, calling for strong regulatory powers to safeguard people from such digital harms.
Meta confirmed it is investigating the problematic accounts and removing content that encourages sexual exploitation or targets individuals due to protected traits. Disability advocates warn that these AI creations stem from real disabled people’s images, often used without permission, and that the platforms’ insufficient moderation contributes to ongoing objectification and harassment. Alison Kerry of Scope stressed that these AI images represent “discrimination dressed up as content,” fueled by unregulated comments and interactions. Dr. Gaeta further questioned the effectiveness of platforms’ moderation measures, noting they can be bypassed by determined users, and stressed the need for greater accountability from big tech in addressing both ableism and misogyny.
Read the full article from the BBC.