TECHNOLOGY

If You Think Fake News Is Bad, Fake Video Is Coming

By Benny Evangelista
San Francisco Chronicle

WWR Article Summary (tl;dr) New fake-video programs can scan videos and still photos of one person and paint that person’s features onto another person in a separate video.


One video appears to show “Wonder Woman” star Gal Gadot performing in a pornographic scene.

Another depicts what the love child of President Trump and German Chancellor Angela Merkel might look like.

These “deepfake” videos — sometimes disturbing, sometimes entertaining creations of reality-distorting, face-swapping technology — are proliferating.

And in a social-media-crazed world where people have trouble discerning what is and isn’t fake news, some computer scientists worry that such videos herald the escalation of a larger existential threat to the fabric of democracy, especially if used for malevolent purposes. In coming years, it may be hard to tell whether a video is real or fake.

“I’m worried about the death by a thousand cuts to our sense of reality as it gets easier and easier to mimic it, and the impact that will have in neutering checks on actual crime and corruption, even at the highest levels,” said Aviv Ovadya, chief technologist for the University of Michigan’s Center for Social Media Responsibility. “This is a way that democracies fail.”

Granted, the sky remains firmly in place even though a few doctored celebrity porn videos began appearing late last year on the San Francisco social news site Reddit, as first reported by the tech news site Motherboard.

But the development demonstrated that media-altering technologies are no longer solely in the hands of professionals at movie visual-effects studios. Now that people can create fake videos on their home computers, anyone can, in effect, turn legitimate photos, audio recordings and videos into false, potentially damaging instruments of propaganda and social discord.

What if, for example, a video surfaces showing the president in bed with Russian prostitutes, or another politician shouting a racial epithet?

“You’re going to have trouble trusting people on the phone, you’re going to have trouble trusting video,” said Jack Clark, strategy and communications director for OpenAI, a nonprofit San Francisco artificial intelligence research company that helped produce a report last month on malevolent uses of AI. “The problems are obvious. The solutions are not obvious.”

Peter Eckersley, the Electronic Frontier Foundation’s chief computer scientist who helped author the report, called deepfakes the first “wave of the future where fabricated videos will inevitably be used for political purposes. So it’s time to start figuring out how to defend ourselves against that risk, how to defend democracy against those risks.”

The term deepfakes, a blend of “deep learning” and “fake,” came into use after an anonymous Reddit member, who went by the screen name deepfakesapp, created the original video-merging program. Another Reddit member then released an improved version, called FakeApp.

The programs scan videos and still photos of one person and paint that person’s features onto another person in a separate video. Using artificial intelligence technology, the programs can replace faces down to the movements of eyes, mouths and heads.
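Conceptually, these face-swap programs pair one shared encoder, which learns head pose and expression, with a separate decoder for each person, which learns to render that person’s face. The following is a minimal sketch of that idea in Python with PyTorch; it is an illustration under those assumptions, not the code of FakeApp or any actual deepfake tool, and all class names and layer sizes are invented for the example.

```python
# A minimal sketch (not any real deepfake program) of the shared-encoder,
# two-decoder autoencoder idea behind face-swap tools. All names and
# dimensions here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Compresses a 64x64 RGB face crop into a compact latent vector.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # Reconstructs a face from the latent vector; one decoder per identity.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder learns pose and expression; two decoders each learn
# to render one person's face. To swap, encode a frame of person A and
# decode it with person B's decoder.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

frame_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
swapped = decoder_b(encoder(frame_of_a))  # B's face with A's pose and expression
print(swapped.shape)                      # torch.Size([1, 3, 64, 64])
```

In training, each decoder learns to reconstruct its own person’s faces from the shared latent code; the swap itself is simply decoding one person’s latent code with the other person’s decoder.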

It’s an evolution of the way Adobe Photoshop, created 30 years ago, can alter still images. In fact, one popular online pastime predating deepfakes is a series of memes and GIFs depicting actor Nicolas Cage’s face Photoshopped into everything from Harry Potter to Michelangelo’s “The Creation of Adam.” Deepfake videos took the meme to a new level, with Cage becoming Lois Lane, Luke Skywalker and Forrest Gump.

The technology was used to place the faces of celebrities such as Gadot, Daisy Ridley, Emma Watson and Taylor Swift onto the bodies of porn stars. Deepfakes became more notorious when users began swapping in the faces of friends and exes.

The uproar prompted Reddit, Twitter and other sites, including Pornhub, Discord and Gfycat, to ban the offending content and the discussion groups that had formed around deepfakes.

The bans haven’t stopped the technology. The program can be downloaded from a site called FakeApp, and a website called the Deepfake Society, which curates the best of those videos, has drawn more than 1 million views since it launched in February. That site doesn’t allow pornography, but it features videos like one showing Trump and North Korean leader Kim Jong Un as each other.

A man who said he was from Los Angeles called back when reached through the contact box on the site; he declined to give his name because he fears the stigma surrounding deepfake pornography could jeopardize his web programming job. He said he is a conservative Republican and started the site because he finds deepfakes entertaining, especially one depicting Trump as the bully Biff Tannen in “Back to the Future Part II.”

But the malicious implications are “absolutely terrifying,” he said. “You can put any politician doing anything anywhere. Even if it is fake and it gets out, it’s going to ruin somebody. Most people don’t see a report and go out and do their own research. They just take it at face value.”

Sven Charleer, a computer science researcher at KU Leuven in Belgium, said critics are overreacting to the technology, which can also be put to good use. To demonstrate, Charleer lovingly swapped his wife Elke’s face in for actress Anne Hathaway’s. On his blog, he has posted clips showing his wife on “The Tonight Show Starring Jimmy Fallon” and in “Get Smart” with Steve Carell.

“We’re going to see some amazing things with this technology,” Charleer said. “People just have to be less gullible and more critical about things.”

Nevertheless, deepfakes raise issues that might require a change in laws, said Andrew Keen, a former Silicon Valley entrepreneur who has become a self-described technology skeptic.

“There’s going to have to be new ways of thinking about freedom of speech and what you can and cannot do,” said Keen, author of “How to Fix the Future,” published last month.

“This is a much more profound kind of identity theft,” he said. “At what point do we own our own image? Do I have a right to sue someone if they steal my image and present me in a way as someone I’m not, like a porn star or a dog?”

David Greene, Electronic Frontier Foundation senior staff attorney, said there was “nothing inherently illegal” about deepfake technology. Existing laws could cover problems such as “creating non-consensual pornography and false accounts of events,” but writing new laws could threaten “beneficial and benign uses” such as political commentary and parody, he said.

Melanie Howard, an advanced media and technology lawyer at Loeb & Loeb LLP, said legal reform wasn’t enough and suggested that technologists develop “countermeasures to expose forgeries and fakes in these forms of media.”

But the EFF’s Eckersley called such technological solutions “a total pipe dream” that would, for example, require modifying every video camera and smartphone to provide evidence of where and when raw videos were recorded.
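To make concrete what Eckersley is describing, a capture-time provenance scheme would have each camera cryptographically sign its raw footage and recording metadata so that later copies could be checked against the original. The sketch below, in Python using the third-party cryptography package, is a hypothetical illustration of that idea, not any real camera feature or industry standard; the function names and metadata format are invented for the example.

```python
# A minimal sketch of capture-time provenance: the camera signs each
# recording with a device key so later copies can be verified against it.
# This is an illustrative assumption, not a real camera standard; it uses
# the third-party `cryptography` package (pip install cryptography).
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real device, the private key would live in tamper-resistant hardware.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

def sign_recording(raw_video: bytes, metadata: bytes) -> bytes:
    # Hash the raw footage together with the when/where metadata, then sign.
    digest = hashlib.sha256(raw_video + metadata).digest()
    return device_key.sign(digest)

def verify_recording(raw_video: bytes, metadata: bytes, signature: bytes) -> bool:
    # Recompute the hash and check it against the camera's signature.
    digest = hashlib.sha256(raw_video + metadata).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

video = b"...raw frames..."
meta = b"2018-03-10T12:00:00Z|37.7749,-122.4194"  # hypothetical time/location stamp
sig = sign_recording(video, meta)
print(verify_recording(video, meta, sig))                # True
print(verify_recording(video + b"tampered", meta, sig))  # False
```

Even in this toy form, the scheme shows why Eckersley is skeptical: it only works if every camera and smartphone ships with trusted signing hardware, and it says nothing about the vast archive of unsigned video already in circulation.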

“There’s not going to be a magic shortcut for testing to see if video is real or audio is real,” he said. “There’s no question that it’s going to be hard to learn to tell the difference between things that are completely true, things that are mythical and things that are in the strange territory in between.”
