Last week, AI-generated explicit images of Taylor Swift began circulating on the social media platform X. In response, the platform temporarily blocked users from searching for the singer's name. Instead, searches return a message informing users that "something went wrong" and "don't fret. It's not your fault."
Joe Benarroch, X's head of business operations, said in a statement to Variety that the block is only a "temporary action" taken "with an abundance of caution" for public safety. Because the AI-generated images spread on both X and the messaging platform Telegram, the company says it is applying a "zero-tolerance policy" to this kind of content, actively removing all identified images and taking action against the accounts responsible for posting them.
The incident has also drawn a response from the American government, with the White House calling the situation "alarming". White House press secretary Karine Jean-Pierre expressed concern about the impact such images have, particularly on women and girls, and suggested stronger legislation to address AI abuse on social media. The proposal was backed by US Representative Joe Morelle, a sponsor of the proposed Preventing Deepfakes of Intimate Images Act, which would make the non-consensual distribution of deepfake pornography illegal.
While Taylor Swift has yet to respond publicly to the incident, sources have reported that her team is considering legal action against the site that published the images. The episode highlights the ongoing problem of AI-generated deepfakes and the harm they can do to an individual's reputation and privacy. As AI technology continues to advance, measures to prevent the misuse of AI-generated material are becoming increasingly important.
Read the full article on NME.