UK to ban deepfake AI 'nudification' apps

The UK government has announced plans to ban “nudification” apps, which use artificial intelligence to create images or videos that falsely depict individuals without clothes. This move is part of a broader initiative aimed at reducing violence against women and girls by targeting online misogyny. Under the new legislation, it will become illegal to produce or distribute AI technologies designed to generate such manipulated images.

These new offences will extend existing laws covering sexually explicit deepfakes and intimate image abuse. Technology Secretary Liz Kendall emphasised the importance of protecting women and girls both online and offline, stating: “We will not stand by while technology is weaponised to abuse, humiliate and exploit them through the creation of non-consensual sexually explicit deepfakes.” The creation of non-consensual explicit deepfake imagery is already prohibited under the Online Safety Act, and the government intends to hold accountable those who profit from or facilitate the use of nudification apps.

Nudification apps employ generative AI to realistically simulate the removal of clothing in photos or videos, raising serious concerns among experts about the harms they can cause. These tools are particularly troubling given their potential misuse in creating child sexual abuse material (CSAM). Earlier this year, Dame Rachel de Souza, England’s Children’s Commissioner, called for a complete ban on these apps, stressing that while making such images is illegal, the technology enabling their creation should also be outlawed. To strengthen efforts against intimate image abuse, the government will collaborate with technology companies, including the UK-based firm SafeToNet. This company has developed AI software that can detect and block sexual content, as well as disable cameras when such content is being recorded.

Child protection groups have long urged the government to act on this issue, with the Internet Watch Foundation (IWF) reporting that nearly one-fifth of young people using its confidential helpline have experienced manipulated imagery. IWF chief executive Kerry Smith welcomed the ban, remarking that “these so-called nudification apps… have no reason to exist as a product” and that such technology increases risks for real children by fuelling the circulation of harmful images. While the NSPCC also supports the government’s move, it expressed disappointment that the plans did not include stronger device-level protections to prevent the sharing of explicit content. The government further pledged measures to block children from taking, sharing, or viewing nude images on their devices, and aims to outlaw AI tools designed to create or distribute CSAM.

Read the full article from the BBC.