At the time, ELIZA was billed as an “online therapist.” It was intriguing and surprisingly engaging. Essentially, the program would reflect back the statements made to it. If you told ELIZA, “I’m feeling depressed,” it would respond with a statement like, “Why do you think you’re feeling depressed?”
ELIZA was created in the mid-1960s by Dr. Joseph Weizenbaum. His objective was to demonstrate how superficial human-computer interactions were at that time. But we weren’t the only ones to find it eerily human-like. Even his own secretary once asked him to leave the room “so that she and ELIZA could have a real conversation.”
Decades later, the relative unsophistication of ELIZA almost seems quaint. We now live in an era of surveillance capitalism: an entire industry quietly capitalizing on the explosion of data that we (mostly) unknowingly generate as we go about our day. Whether it’s the location of our smartphones, the websites we visit, the messages we post on social media, or the images of our faces secretly compiled in giant databanks, there are few (if any) limits on how this data is used.
And now, psychiatrists want to use it to measure and evaluate our mental state.
There’s little doubt that this data can be extremely revealing when examined with artificial intelligence tools. Our social media postings can be used to predict conditions such as depression, even psychosis. Our search engine histories turn out to be a useful forecaster of suicide. The words we use in our speech can even indicate whether we’re likely to become psychotic.
Our question, as you might suspect, is “what could possibly go wrong?”
One obvious concern is how the data collected to feed today’s new breed of AI-aided psychiatry might be used. As we pointed out a few months ago, US medical privacy laws are junk. Your medical records can be disclosed to data aggregators, accounting firms, law firms, banks, and in some cases, credit bureaus, marketing companies, and your employer – all without your permission. (We provided some suggestions on how to protect your medical data in this article and how to avoid medical identity fraud here. Access to that last article requires a subscription to the Nestmann Inner Circle.)
Of course, there’s also the security of the data itself. Whether the threat is ransomware or something else, the state of online data security is abysmal – so abysmal that it’s not a matter of if your identity will be stolen, only how often. That’s a big reason we’re a big fan of a measure credit bureaus detest: putting a security freeze on your credit reports.
But privacy and data security are minor concerns compared to the danger posed by psychiatry itself. After all, this is a “science” whose practitioners only a few decades ago were routinely prescribing lobotomy to treat mental illness – in many cases, involuntarily. The method used most frequently was known as the “icepick lobotomy.” Practitioners used a hammer to drive an icepick into a patient’s brain through the eye socket, then rotated it to sever the nerves connecting the prefrontal cortex to other parts of the brain.
This procedure was thought to prevent the recurrence of “abnormal” behaviors and thoughts. Sometimes lobotomies “worked” in the sense of calming patients, but they could also leave patients in a persistent vegetative state, unable to speak or otherwise communicate effectively.
Then there’s the diagnosis of mental illness itself. A 2019 study published in the journal Psychiatry Research concluded that psychiatric diagnoses are “scientifically worthless as tools to identify discrete mental health disorders.” In particular, “diagnoses tell us little about the individual patient and what treatment they need.”
Meanwhile, a researcher claimed in a 2015 article in the British Medical Journal that hundreds of thousands of patients die annually from overdoses of psychiatric medications. The vast majority of patients, he concluded, could stop taking these medications without harming themselves.
With this background in mind, we’re not optimistic that AI-aided psychiatry will be a net benefit to mankind. Indeed, we’re inclined to suggest that you avoid the organized racket of psychiatry in any way possible.
Just ask Britney Spears.