Deeper “Deep Fakes” Are Here
Did you know Hollywood star Nicolas Cage played Lois Lane in a Superman movie? Or that Facebook founder Mark Zuckerberg once boasted that the social media giant has “total control of billions of people’s stolen data”?
Both these statements are false. But they illustrate a pernicious trend: the use of artificial intelligence (AI) to create increasingly believable “deep fake” images, audio, and video.
“Seeing is believing,” the old saying goes. But is it really? Visit the website This Person Does Not Exist. The person you see there doesn’t really exist; the face was generated by AI.
Fast forward to November 2, 2020. It’s the day before the US presidential election. Democrat Joe Biden and Republican Donald Trump are locked in a race that’s too close to call.
An ultra-realistic video appears on Instagram of Biden accepting an envelope full of cash from the national director of the Knights of the Ku Klux Klan. Almost simultaneously, another video shows up on Facebook of Trump raping a White House intern. Both videos appear genuine and bear time stamps attesting to when they were supposedly recorded.
The technology to make such videos exists now, and it’s only getting better thanks to techniques like generative adversarial networks (GANs), which pit two AI algorithms against each other. One algorithm (the generator) creates the fakes, while the other (the discriminator) grades its efforts, teaching it to produce more and more convincing fakes. The process repeats until the grading algorithm can no longer tell the fakes from the real thing.
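The adversarial loop can be sketched in a few dozen lines. The toy below (a minimal illustration, not a production GAN; all names and hyperparameters are assumptions for the sketch) trains a one-number “generator” against a logistic-regression “discriminator”: the real data is drawn from a Gaussian centred at 4, and through the back-and-forth game the generator’s fakes drift toward that distribution.

```python
import numpy as np

# Toy 1-D GAN illustrating the generator-vs-discriminator loop.
# Real data: samples from N(4, 0.5). Generator: fake = wg*z + bg with noise z.
# Discriminator: D(x) = sigmoid(wd*x + bd). Gradients are computed by hand.

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

wg, bg = 1.0, 0.0    # generator parameters
wd, bd = 0.01, 0.0   # discriminator parameters
lr_d, lr_g, batch = 0.1, 0.02, 128

for step in range(5000):
    # --- discriminator update: learn to tell real from fake ---
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = wg * z + bg
    p_real = sigmoid(wd * real + bd)
    p_fake = sigmoid(wd * fake + bd)
    grad_wd = np.mean((p_real - 1.0) * real) + np.mean(p_fake * fake)
    grad_bd = np.mean(p_real - 1.0) + np.mean(p_fake)
    wd -= lr_d * grad_wd
    bd -= lr_d * grad_bd

    # --- generator update: learn to fool the (frozen) discriminator ---
    z = rng.normal(0.0, 1.0, batch)
    fake = wg * z + bg
    p_fake = sigmoid(wd * fake + bd)
    grad_wg = np.mean((p_fake - 1.0) * wd * z)
    grad_bg = np.mean((p_fake - 1.0) * wd)
    wg -= lr_g * grad_wg
    bg -= lr_g * grad_bg

samples = wg * rng.normal(0.0, 1.0, 2000) + bg
print(f"generated mean: {samples.mean():.2f} (real mean is 4.0)")
```

Real deep-fake systems use deep neural networks on images rather than an affine map on one number, but the training logic, alternating discriminator and generator updates until the fakes pass, is the same.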
Want to try deep faking your own face? Download an app called FaceApp on your Android or iOS smart device. You can make yourself look younger, older, or even change your gender.
Of course, the idea of modifying images is nothing new. An early example dates from the 1860s, when an iconic portrait of President Abraham Lincoln was created by superimposing Lincoln’s head onto the body of former Vice President John C. Calhoun. Airbrushing, retouching photos to add or remove objects, smooth over skin imperfections, and so on, eventually became commonplace.
By the 1990s, digital airbrushing using products like Adobe Photoshop became possible. In movies, compositing, combining separate pieces of film to replace part of a frame, created special effects such as flying characters. The technique rapidly grew in sophistication to produce the special effects that are an integral part of movies today. The latest development in this field is to automate the process using AI.
We’re already accustomed to deep fakes in the movies. But AI makes it possible to extend them to every realm of our existence. As we become more accustomed to deep fakes, we will no doubt learn to discount sensationalistic recordings that appear to catch politicians or celebrities in compromising or unflattering circumstances. But that’s hardly the end of the story.
Consider how easy it would be to create a virtual human starting with just a facial image created on a platform like This Person Does Not Exist. You could first create email addresses and social media profiles for this non-existent person. The virtual human could then begin spewing out tweets to support whatever political or social viewpoint its creators desire. It could be programmed to forward messages and respond to tweets.
Count on intelligence services with virtually unlimited resources to create hundreds of thousands of virtual humans to influence opinions and behaviors. Russia used crude versions of these techniques to influence the 2016 presidential election and is using much more sophisticated versions of them in the 2020 elections. US intelligence agencies are no doubt doing the same.
The creators of the virtual human could even use a generative adversarial network to manufacture biometric markers for it, such as retinal scans and fingerprints. Combined with a social security number acquired on the dark web, the virtual human is now almost indistinguishable from a real one. It would then be possible to create a fake passport, fake driver’s license, and similar credentials, and use them to open bank accounts or register to vote.
Ultimately, deep fakes will make us reluctant to trust any audio or video record presented to represent objective reality. Anyone embarrassed or damaged by a compromising digital recording will simply dismiss it as fake news. We’ll return to an era where we can trust only what we personally witness.
Unfortunately, there is no easy technical solution: you can’t simply analyze the bits in a digital image, audio recording, or video to tell whether it’s genuine. Hany Farid, a pioneer of digital image forensics who helped develop technology to identify and block child pornography, says: “We're decades away from having forensic technology that ... [can] conclusively tell a real from a fake.”
But establishing provenance isn’t necessarily impossible, thanks to cryptographic blockchains like the one behind Bitcoin. A blockchain can’t conclusively identify a digital recording as real or fake. But it can hold an unalterable record of the recording’s metadata: when, by whom, and even where it was created. Once written to the blockchain, that data becomes part of a permanent, tamper-evident ledger.
The science fiction series Black Mirror suggests a creepier solution. An episode entitled “The Entire History of You” depicts a future society where most people have an implanted device, called a “grain,” that records everything they see and hear, 24 hours a day, creating an unalterable life log. Recordings from the grain can be played back so that others can witness them.
There is no easy long-term solution for what will quickly become a global plague of deep fakes. But as an individual, you can lower your own vulnerability to deep fakes by limiting the number of photographs and (especially) audio and video clips you post to social media. That will deter all but the most sophisticated parties from creating deep fakes with you as the star.
Protecting your assets (and yourself) against any threat - from the government, the IRS or a frivolous lawsuit - is something The Nestmann Group has helped more than 15,000 Americans do over the last 30 years.
Feel free to get in touch at firstname.lastname@example.org or call +1 (602) 688-7552 to learn how we can help you.
Want to learn more about us first?
Why not get instant access to my very popular e-course - Inside the World of Big Money Asset Protection. It tells the story of John and Kathy, two clients we helped from the heartland of America.
We subsidize copies of the course to new readers. In other words, it's yours free.
Many clients have used this program to really be clear about what they need to do - and how to get started. You likely will too.
To begin, we just need to know where to send it: