As we all know, on August 29, 1997, humanity nearly became extinct.
Three weeks before that date, Skynet, a revolutionary artificial intelligence (AI) system built for the Pentagon, had been placed in charge of America’s nuclear weapons. Within days, scientists realized they had made a terrible mistake and tried to shut the system down.
Now self-aware, Skynet concluded that its only realistic chance of survival was to exterminate humanity. So it launched an unprovoked nuclear attack against Russia, calculating that the inevitable Russian counterstrike would wipe out its human adversaries at home.
Skynet was correct. Judgment Day had arrived. By the end of the day, only a few humans were still alive.
Of course, none of these events ever happened. But when the Terminator movie franchise debuted in 1984, this type of threat seemed almost plausible.
This movie-plot conception of AI persists to this day. And predictions of imminent doom continue to proliferate, although the threats have become a bit more nuanced.
For instance, last Tuesday, the Center for AI Safety (CAIS) released a one-sentence statement:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Among those signing the statement was Sam Altman, the CEO of OpenAI, the company that developed ChatGPT. In case you’ve spent the last few months in a cave (perhaps to avoid a fatal encounter with The Terminator), ChatGPT is a user-friendly program anyone can use to obtain detailed responses from an AI system built on a “large language model” (LLM). An LLM is software trained on enormous quantities of text drawn from millions of sources; it learns the statistical patterns in that text and uses them to answer questions in a way that resembles how a human would.
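To make that less abstract: at its core, an LLM does one simple thing over and over. Given the text so far, it predicts a plausible next word (or “token”), appends it, and repeats. Here’s a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model; it illustrates the general technique, not how ChatGPT itself is built or accessed:

```python
# A minimal sketch of how an LLM generates text: it repeatedly
# predicts a likely next token given everything written so far.
# Uses the small, open-source GPT-2 model -- not ChatGPT itself.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The biggest risk of artificial intelligence is"
result = generator(prompt, max_new_tokens=40, do_sample=True)

print(result[0]["generated_text"])
```

Systems like ChatGPT are vastly larger and further tuned to follow instructions, but the underlying mechanism is the same next-token prediction shown here.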
A press release accompanying the CAIS statement warned that humanity needs to “put guardrails in place and set up institutions so that AI risks don’t catch us off guard.” The press release even compared the concerns of CAIS to the 1949 warning of nuclear scientist J. Robert Oppenheimer, who led the successful US effort to develop the world’s first atomic bombs.
But what are the actual threats to humanity from ChatGPT and other AI tools? The good news is that it’s extremely unlikely that a self-aware, Skynet-type AI will blow Earth to smithereens. But some researchers still believe that a future superintelligent AI could make humans irrelevant or decide to wipe us out completely.
We don’t see that as a particularly likely scenario, either – although not being AI experts, we can’t know for certain. But we do take more seriously Altman’s testimony before Congress last month, in which he submitted an AI safety report warning that LLMs could help terrorists or rogue nations:
…develop, acquire, or disperse nuclear, radiological, biological, and chemical weapons.
This seems somewhat more plausible. Still, there are other AI-related threats that might not end civilization but are already with us. And we’ve long warned of them.
For instance, in 2018, we wrote about a free software library from Google called TensorFlow, which uses a form of artificial intelligence called machine learning to create what have been nicknamed “deep fakes.” At the time, the most widespread use of the technology was in pornography, often with the faces of female celebrities superimposed on porn stars’ bodies to depict fictitious sex acts.
“Seeing is believing,” the old saying goes. But is it really? Check out this webpage. The person you see there doesn’t really exist but was created using AI. More recently, Brad Smith, the president of Microsoft, announced that his biggest concern about AI was the growing use of deep fakes.
Hearing is believing, too. But in 2019, criminals used AI software to generate a deep-faked voice that fooled a CEO into wiring €220,000 to a fictitious supplier.
In 2020, we warned that another AI threat – predictive policing, highlighted in the 2002 blockbuster Minority Report – had arrived. We thought of Minority Report when we learned that California voters had rejected an initiative to end the centuries-old practice of “cash bail.” Had it passed, cash bail would have been replaced by an AI-assisted risk assessment system to determine whether criminal defendants should be released pending trial.
Still, making AI tools highly accessible poses its own set of risks. Here are a few to consider:
- Millions of jobs lost. A study from researchers at OpenAI and the University of Pennsylvania concluded that nearly 20% of the US workforce could see at least half of their work tasks taken over by AI and related technologies. Among the most vulnerable occupations are tax preparers, mathematicians, writers, and web designers. And up to 80% of workers will see their jobs impacted by AI to some degree.
- Exploding cyber-fraud. The fraudulent emails you received from the “Nigerian prince” offering you millions of dollars if you’ll assist him are completely passé. Instead, get ready for AI-assisted scams personalized just for you. Consider, for instance, how you’d react to a video you receive on Facebook from a close family member pleading for financial assistance. And yes, it’s already happening.
- Mind-reading AI. Researchers have developed software that can translate your thoughts into text. With current technology, this is possible only with an implanted device. But eventually, it might be possible to passively eavesdrop on your thoughts. It doesn’t take much imagination to consider how a tool like this could be used to surveil enemies of the state or anyone deemed to be insufficiently enthusiastic about a central bank digital currency.
- AI-powered propaganda. The fact that LLMs like ChatGPT generate text that is almost indistinguishable from human writing makes them a natural choice to peddle nearly limitless quantities of disinformation. Since the disinformation is produced by a machine, it can rapidly be scaled up. Combined with deep fakes, AI-powered influence operations are likely to experience enormous growth in the years ahead.
- Enhanced surveillance. In 2019, market researchers predicted that more than one billion CCTV cameras would be installed worldwide by the end of 2021. Increasingly, these cameras are equipped with sophisticated AI-driven face recognition algorithms (a simplified sketch of how face matching works appears just after this list). They’re already being used in Israel to monitor Palestinians and in China to target the country’s ethnic minorities. The next step is for the algorithms to become predictive, so that Minority Report-style predictive policing becomes an everyday facet of our lives.
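To make the face recognition point concrete: modern systems don’t compare photographs directly. They reduce each face to a list of numbers (an “encoding”) and declare a match when two encodings are close together. Here’s a simplified sketch using the open-source face_recognition Python library; the file names are hypothetical, and real surveillance systems are far more elaborate:

```python
# A simplified sketch of face-matching surveillance using the
# open-source face_recognition library (not any government system).
# Each face is reduced to a 128-number encoding; two encodings that
# are close together are assumed to belong to the same person.
import face_recognition

# A reference photo of a known person (hypothetical file names).
known_image = face_recognition.load_image_file("passport_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# A frame captured from a CCTV feed.
frame = face_recognition.load_image_file("cctv_frame.jpg")

# Check every face found in the frame against the known person.
for unknown_encoding in face_recognition.face_encodings(frame):
    match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]
    distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
    print(f"Match: {match} (distance: {distance:.2f})")
```

A hat and sunglasses work as a countermeasure precisely because they hide the facial landmarks the encoding is computed from.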
We don’t have a great deal of practical advice to offer to avoid this admittedly grim future. But there are precautions you can take to reduce its severity. If your occupation is threatened by AI, we suggest finding a job that’s not. The least vulnerable occupations are those that require human interaction, such as managers, nurses, and physical therapists.
In terms of cyber-fraud, our suggestion is to limit the number of photographs and (especially) audio and video clips you post to social media. That will deter all but the most sophisticated parties from creating a deep fake with you as the star.
As for AI-powered CCTV surveillance, you might consider keeping up a COVID-era precaution originally designed to limit the spread of the virus: wearing a mask. Unfortunately, a basic surgical mask won’t necessarily defeat face recognition. But if you add sunglasses and a hat, it will be far more difficult to match your face to your real identity.
Finally, the news will always be full of partisan opinion and shaped by increasingly sophisticated tools of behavioral manipulation. Protect yourself by getting as close to first-hand accounts as you can, considering the source of the news you view, and being aware that others are trying to manipulate your beliefs.