A new crop of deepfake videos and images is causing a stir — a periodic phenomenon that seems to be happening more frequently, as several bills focused on deepfakes remain in Congress.
The issue made headlines this week, as bogus pornographic images purporting to show pop superstar Taylor Swift proliferated on X (formerly known as Twitter), Telegram and elsewhere. Many postings were removed, but not before some of them racked up millions of views.
The assault on Swift’s famous image serves as a reminder of how deepfakes have become easier to make in recent years. A number of apps can swap a person’s face onto other media with high fidelity, and the latest iterations promise to use AI to generate even more convincing images and video.
Deepfakes often target young women
Many deepfake apps are marketed as a way for regular people to make funny videos and memes. But many end results don’t match that pitch. As Caroline Quirk wrote in the Princeton Legal Journal last year, “since this technology has become more widely available, 90-95% of deepfake videos are now nonconsensual pornographic videos and, of those videos, 90% target women—mostly underage.”
Deepfake porn was recently used against female high school students in New Jersey and in Washington state.
At their core, such deepfakes are an attack on privacy, according to law professor Danielle Citron.
“It is morphing women’s faces into porn, stealing their identities, coercing sexual expression, and giving them an identity that they did not choose,” Citron said last month on a podcast from the University of Virginia, where she teaches and writes about privacy, free expression and civil rights at the university’s law school.
Citron notes that deepfake images and video are merely new forms of lies — something humanity has been dealing with for millennia. The problem, she says, is that these lies are being presented in video form, which tends to strike people on a visceral level. And in the best deepfakes, the lies are shrouded by sophisticated technology that’s extremely hard to detect.
We’ve seen moments like these coming. In recent years, deepfake videos showing “Tom Cruise” in a variety of unlikely settings have racked up hundreds of millions of views on TikTok and elsewhere. That project, created by cameraman and visual effects artist Chris Umé and Cruise impersonator Miles Fisher, is fairly benign compared to many other deepfake campaigns, and the videos carry a watermark label reading “#deeptomcruise,” nodding at their non-official status.
Deepfakes pose a growing challenge, with little regulation
The risk of damage from deepfakes is far-ranging, from the appropriation of women’s faces to make explicit sex videos, to the use of celebrities in unapproved promotions, to the use of manipulated images in political disinformation campaigns.
The risks were highlighted years ago — notably in 2017, when researchers used what they called “a visual form of lip-syncing” to generate several very realistic videos of former President Barack Obama speaking.
In that experiment, the researchers paired authentic audio of Obama talking with computer-manipulated video. But it had an unnerving effect, as it showed the potential power of a video that could put words into the mouth of one of the most powerful people on the planet.
Here’s how a Reddit commenter on a deepfake video last year described the predicament: “I think everyone is about to be scammed: Older people who think everything they see is real and younger people who’ve seen so many deepfakes they won’t believe anything they see is real.”
As Citron, the UVA law professor, said last month, “I think law needs to be reintroduced into the calculus, because right now the ‘internet,’ and I’m using air quotes, right, is often viewed as, like, the Wild West.”
So far, the strongest U.S. restrictions on the use of deepfakes are seen not at the federal level but in states including California, Virginia and Hawaii, which ban nonconsensual deepfake pornography.
But as the Brennan Center for Justice reports, those and other state laws have varying standards and focus on different content modes. At the federal level, the center said last month, at least eight bills seek to regulate deepfakes and similar “synthetic media.”
In addition to targeting revenge porn and other crimes, many laws and proposals aim to put special limits and requirements on videos related to political campaigns and elections. But some companies are acting on their own — such as last year, when Google, and then Meta, announced they would require political ads to carry a label if they were made with AI.
And then there are the scams
In the past month, visitors to YouTube, Facebook and other platforms have seen video ads purporting to show Jennifer Aniston offering a so-good-it’s-delusional deal on Apple laptops.
“If you’re watching this video, you’re part of a fortunate group of 10,000 people who have the chance to obtain the MacBook Pro for just $2,” the ersatz Aniston says in the ad. “I’m Jennifer Aniston,” the video falsely states, urging people to click a link to claim their new computer.
A common goal for such scams is to trick people into signing up for expensive online subscriptions, as the website Malware Tips reported during a similar recent ploy.
Last October, actor Tom Hanks warned people that an AI was using his image, seemingly to sell dental insurance online.
“I have nothing to do with it,” Hanks said in an Instagram post.
Soon after, CBS Mornings co-anchor Gayle King sounded the alarm over a video purporting to show her touting weight-loss gummies.
“Please don’t be fooled by these AI videos,” she said.