The founder of a major social network brags about being in control of the “stolen data” of billions – their lives, their secrets, and their families. A former president refers to the current head of state as “a total and complete dipshit.” A major politician appears drunk and clueless during an appearance before Congress.
The only thing these three scenarios have in common is that none of them actually happened. They were videos created via machine learning, known as deepfakes. If that doesn’t make you uncomfortable, it should, because we’re quickly entering an era where creating a convincing fake video is as simple as downloading an app.
“Deepfakes have the potential to differ in quality from previous efforts to superimpose faces onto other bodies,” writes Business Insider’s Benjamin Goggin. “A good deepfake, created by AI that has been trained on hours of footage, has been specifically generated for its context, with seamless mouth and head movements and appropriate coloration.”
To create a deepfake, the user first feeds training data – images and videos of the subject – into an algorithm, which uses that data to build a model of the subject’s face. Generally speaking, the more content you feed into the app, the more convincing the fake.
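To make that concrete, here’s a minimal sketch of the classic face-swap approach: one shared encoder and one decoder per person. This is my own hedged illustration of the general technique, not the actual code behind any particular app – it uses PyTorch, toy-sized networks, and random tensors standing in for the aligned face crops a real pipeline would extract from photos and footage.

```python
import torch
import torch.nn as nn

# Shared encoder: learns a compact representation of faces in general.
encoder = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 256),
)

def make_decoder():
    # One decoder per identity: learns to redraw one specific person's face.
    return nn.Sequential(
        nn.Linear(256, 64 * 16 * 16), nn.ReLU(),
        nn.Unflatten(1, (64, 16, 16)),
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
    )

decoder_a, decoder_b = make_decoder(), make_decoder()

optimizer = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

# Toy stand-ins: a real pipeline would use thousands of aligned 64x64 face
# crops of person A and person B, pulled from photos and video frames.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):
    optimizer.zero_grad()
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimizer.step()

# The swap: encode person A's expression, decode it as person B's face.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))  # B's face with A's pose/expression
```

The trick is in the last line: because both people share a single encoder, pushing person A’s face through person B’s decoder produces B’s face wearing A’s pose and expression. The more footage each decoder trains on, the better it learns its person’s face – which is exactly why the volume of training data matters.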
That means celebrities and public figures are probably the only ones who need to worry about deepfake technology, right? Wrong. Recently, researchers at Samsung developed software that can generate a video from a single photo. The videos aren’t exactly convincing, mind you – tech publication CNET dubbed them ‘dumbfakes.’
Yet even an unconvincing fake video can go viral. Even an unconvincing photo can cause irreparable damage to someone’s reputation. And as technology like this develops, I don’t doubt that it will get better at filling in the blanks – that it will need progressively less data to generate a convincing fake.
In a paper accompanying a video of a talking Mona Lisa, the researchers excitedly recounted the possible applications of their breakthrough, including videoconferencing, video games, and special effects.
Evidently, they haven’t thought through all the other applications of their software. Sure, fake videos have some exciting uses in communication and entertainment. And sure, plenty of people will use deepfake technology for harmless stuff like replacing major Hollywood actors with Nicolas Cage or generating cat videos.
But there are a ton of other potential uses for this technology, and very few of them are good. To name just a few…
- Generating political unrest by creating libelous videos of political figures.
- Creating fake pornographic images or videos to use as blackmail material.
- Falsifying video evidence of a crime.
- Creating fake Airbnb listings.
“Nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired,” actress Scarlett Johansson, a frequent victim of deepfake pornography, told The Washington Post. “The fact is that trying to protect yourself from the Internet and its depravity is basically a lost cause. The Internet is a vast wormhole of darkness that eats itself.”
Basically, deepfake technology is frightening, especially if it’s in the hands of some of the Internet’s less savory denizens.
As for what we can do to defend against it? At this point, not much. Regulators and lawmakers are already flirting with legislation that would ban deepfakes, but a ban wouldn’t stop the videos from circulating, and it could infringe on freedom of speech.
Others are looking for ways to build a system capable of distinguishing a deepfake from a real video. Just as there’s software that can tell when an image has been photoshopped, this theoretical tool would uncover the telltale signs of AI tampering. Such software, though, may still be years away.
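At its simplest, such a tool is just a binary classifier trained on labeled real and fake footage. Here’s a minimal sketch of that idea – again my own hedged illustration, not any shipping product – in PyTorch, with random tensors standing in for frames from a labeled corpus (FaceForensics++ is one real example of such a dataset):

```python
import torch
import torch.nn as nn

# A small convolutional classifier: one logit per frame, real (0) vs. fake (1).
detector = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 1),
)

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Toy stand-ins: a real detector would train on frames from a labeled corpus
# of genuine and manipulated videos, not random noise.
frames = torch.rand(16, 3, 64, 64)
labels = torch.randint(0, 2, (16, 1)).float()  # 0 = real, 1 = deepfake

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(detector(frames), labels)
    loss.backward()
    optimizer.step()

# At inference time, score every frame of a clip and average the results.
with torch.no_grad():
    prob_fake = torch.sigmoid(detector(frames)).mean().item()
```

The classifier itself isn’t the hard part. The hard part is generalization: a detector trained on today’s fakes tends to struggle with generation techniques it hasn’t seen, which is part of why reliable detection remains out of reach.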
And in the meantime, deepfakes will continue to surface, a sure sign of artificial intelligence gone too far.