The Taylor Swift scandal prompts action from US politicians and the tech community
Microsoft has updated its generative artificial intelligence (AI) tool after fake, sexually explicit images of Taylor Swift went viral on social media, and US politicians have proposed new regulations.
The "Swift effect" is genuine; it seems that policymakers and tech businesses alike can be persuaded to implement AI safeguards by it alone.
The industry has long expressed deep concern that generative AI image-creation tools could be misused, but until now it has taken little concrete action. Apparently, it takes a celebrity for that to happen.
A pornographic AI-generated image of Swift, shared by a user on X, was viewed a staggering 47 million times last week. The image spread to another platform, Telegram.
The images are out there and will probably remain so. But they have at least highlighted the problem of non-consensual deepfake pornography spreading uncontrollably on social media and elsewhere.
Even the White House, mindful that millions of ‘Swifties’ could form an important voting bloc in the US presidential election later this year, has weighed in. Its press secretary, Karine Jean-Pierre, called the fake images “alarming.”
Now, X says it is actively curbing the spread of the Swift images, although it has already lifted its temporary block on searches for the pop star.
Additionally, Microsoft has closed a loophole in its Designer AI image generator that allowed users to create explicit images of celebrities such as Swift, 404 Media reported.
Previously, users could get around simple name blocks by deliberately misspelling prompts. Now the tool refuses to generate images of celebrities altogether, though the cat-and-mouse game between malicious actors and companies will surely continue.
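To illustrate why a simple name block is so easy to defeat, here is a minimal, hypothetical Python sketch, not Microsoft's actual filter: a naive exact-match blocklist misses a misspelled name, while fuzzy string matching (using Python's standard difflib module) still catches it. The BLOCKED_NAMES list, the function names, and the 0.85 similarity threshold are all illustrative assumptions.

```python
# Illustrative sketch only; not any real product's moderation code.
from difflib import SequenceMatcher

BLOCKED_NAMES = ["taylor swift"]  # hypothetical blocklist entry

def naive_filter(prompt: str) -> bool:
    """Reject a prompt only if it contains an exact blocked name."""
    text = prompt.lower()
    return any(name in text for name in BLOCKED_NAMES)

def fuzzy_filter(prompt: str, threshold: float = 0.85) -> bool:
    """Reject a prompt if any word window closely resembles a blocked name."""
    words = prompt.lower().split()
    for name in BLOCKED_NAMES:
        n = len(name.split())
        for i in range(len(words) - n + 1):
            window = " ".join(words[i:i + n])
            if SequenceMatcher(None, window, name).ratio() >= threshold:
                return True
    return False

# A deliberate misspelling slips past the exact-match check...
print(naive_filter("taylor swfit singing on stage"))  # False
# ...but is still caught by fuzzy matching.
print(fuzzy_filter("taylor swfit singing on stage"))  # True
```

Even fuzzy matching is only a partial fix: bad actors can switch to descriptions, nicknames, or other indirect phrasings that no string-similarity check will catch, which is why the cat-and-mouse framing fits.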
Perhaps more importantly, a group of US senators has now introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, legislation that would “hold accountable those responsible for the proliferation of nonconsensual, sexually-explicit deepfake images and videos.”
Under the bill, creators of such images could be sued in civil court for digital forgery, with victims entitled to financial damages as relief.
Deepfake pornography has grown into something of an epidemic. Nearly 280,000 synthetic, non-consensual exploitative videos were found on the clearnet in 2023, according to a recent report on deepfakes and the rise of nonconsensual synthetic adult content.
The total duration of these videos was 1,249 days and the number of views topped 4.2 billion, the report found.