Will AI make us crazy?

By Dawn Stover, September 11, 2023

Image courtesy of BrownMantis / Pixabay

Critics of artificial intelligence, and even some of its biggest fans, have recently issued urgent warnings that a malevolently misaligned AI system could overpower and destroy humanity. But that isn’t what keeps Jaron Lanier, the “godfather of virtual reality,” up at night.

In a March interview with The Guardian, Lanier said that the real danger of artificial intelligence is that humans will “use our technology to become mutually unintelligible.” Lacking the understanding and self-interest necessary for survival, humans will “die through insanity, essentially,” Lanier warned (Hattenstone 2023).

Social media and excessive screen time are already being blamed for an epidemic of anxiety, depression, suicide, and mental illness among America’s youth. Chatbots and other AI tools and applications are expected to take online engagement to even greater levels.

But it isn’t just young people whose mental health may be threatened by chatbots. Adults too are increasingly relying on artificial intelligence for help with a wide range of daily tasks and social interactions, even though experts—including AI creators—have warned that chatbots are prone not only to errors but also to “hallucinations.” In other words, chatbots make stuff up. That makes it difficult for their human users to tell fact from fiction.

While researchers, reporters, and policy makers are focusing a tremendous amount of attention on AI safety and ethics, there has been relatively little examination of—or hand-wringing over—the ways in which an increasing reliance on chatbots may come at the expense of humans using their own mental faculties and creativity.

To the extent that mental health experts are interested in AI, it’s mostly as a tool for identifying and treating mental health issues. Few in the healthcare or technology industries—Lanier being a notable exception—are thinking about whether chatbots could drive humans crazy.

 

A mental health crisis

Mental illness has been rising in the United States for at least a generation.

A 2021 survey by the Substance Abuse and Mental Health Services Administration found that 5.5 percent of adults aged 18 or older—more than 14 million people—had serious mental illness in the past year (SAMHSA 2022). Among young adults aged 18 to 25, the rate was even higher: 11.4 percent.

Major depressive episodes are now common among adolescents aged 12 to 17. More than 20 percent had a major depressive episode in 2021 (SAMHSA 2022).

According to the Centers for Disease Control and Prevention, suicide rates increased by about 36 percent between 2000 and 2021 (CDC 2023). More than 48,000 Americans took their own lives in 2021, or about one suicide every 11 minutes. “The number of people who think about or attempt suicide is even higher,” the CDC reports. “In 2021, an estimated 12.3 million American adults seriously thought about suicide, 3.5 million planned a suicide attempt, and 1.7 million attempted suicide.”

Suicide is the 11th leading cause of death in the United States for people of all ages. For those aged 10 to 34, it is the second leading cause of death (McPhillips 2023).

Emergency room visits for young people in mental distress have soared, and in 2019 the American Academy of Pediatrics reported that “mental health disorders have surpassed physical conditions as the most common reasons children have impairments and limitations” (Green et al. 2019).

Many experts have pointed to smartphones and online life as key factors in mental illness, particularly among young people. In May, the US Surgeon General issued a 19-page advisory warning that “while social media may have benefits for some children and adolescents, there are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents” (Surgeon General 2023).

A study of adolescents aged 12 to 15 found that those who spent more than three hours per day on social media faced “double the risk of experiencing poor mental health outcomes including symptoms of depression and anxiety.” Most adolescents report using social media, and at least a third say they do so “almost constantly” (Surgeon General 2023).

Although the Surgeon General did not mention chatbots, tools based on generative artificial intelligence are already being used on social-media platforms. In a recent letter published in the journal Nature, David Greenfield of the Center for Internet & Technology Addiction and Shivan Bhavnani of the Global Institute of Mental & Brain Health Investment noted that these AI tools “stand to boost learning through gamification and highlighting personalized content, for example. But they could also compound the negative effects of social media on mental health in susceptible individuals. User guidelines and regulations must factor in these strong negative risks” (Greenfield and Bhavnani 2023).

Chatbots can learn a user’s interests and emotional states, wrote Greenfield and Bhavnani, “which could enable social media to target vulnerable users through pseudo-personalization and by mimicking real-time behaviour.” For example, a chatbot could recommend a video featuring avatars of trusted friends and family endorsing an unhealthy diet, which could put the user at risk of poor nutrition or an eating disorder. “Such potent personalized content risks making generative-AI-based social media particularly addictive, leading to anxiety, depression and sleep disorders by displacement of exercise, sleep and real-time socialization” (Greenfield and Bhavnani 2023).

Many young people see no problem with artificial intelligence generating content that keeps them glued to their screens. In June, Chris Murphy, a US senator from Connecticut who is sponsoring a bill that would bar social-media companies from using algorithmic boosting on teens, tweeted about a “recent chilling conversation with a group of teenagers.” Murphy told the teens that his bill might mean that kids “have to work a little harder to find relevant content. They were concerned by this. They strongly defended the TikTok/YouTube algorithms as essential to their lives” (Murphy 2023).

Murphy was alarmed that the teens “saw no value in the exercise of exploration. They were perfectly content having a machine spoon-feed them information, entertainment and connection.” Murphy recalled that as the conversation broke up, a teacher whispered to him, “These kids don’t realize how addicted they are. It’s scary.”

“It’s not just that kids are withdrawing from real life into their screens,” Murphy wrote. They’re also missing out on childhood’s rituals of discovery, which are being replaced by algorithms.

 

Rise of the chatbots

Generative AI has exploded in the past year. Today’s chatbots are far more powerful than digital assistants like Siri and Alexa, and they have quickly become some of the most popular tech applications of all time. Within two months of its release in November 2022, OpenAI’s ChatGPT already had an estimated 100 million users. ChatGPT’s growth began slowing in May, but Google’s Bard and Microsoft’s Bing are picking up speed, and a number of other companies are also introducing chatbots.

A chatbot is an application that mimics human conversation or writing and typically interacts with users online. Some chatbots are designed for specific tasks, while others are intended to chat with humans on a broad range of subjects.

Like the teacher Murphy spoke with, many observers have used the word “addictive” to describe chatbots and other interactive applications. A recent study that examined the transcripts of in-depth interviews with 14 users of an AI companion chatbot called Replika reported that “under conditions of distress and lack of human companionship, individuals can develop an attachment to social chatbots if they perceive the chatbots’ responses to offer emotional support, encouragement, and psychological security. These findings suggest that social chatbots can be used for mental health and therapeutic purposes but have the potential to cause addiction and harm real-life intimate relationships” (Xie and Pentina 2022).

In parallel with the spread of chatbots, fears about AI have grown rapidly. At one extreme, some tech leaders and experts worry that AI could become an existential threat on a par with nuclear war and pandemics. Media coverage has also focused heavily on how AI will affect jobs and education.

For example, teachers are fretting over whether students might use chatbots to write papers that are essentially plagiarized, and some students have already been wrongly accused of doing just that. In May, a Texas A&M University professor handed out failing grades to an entire class when ChatGPT—used incorrectly—claimed to have written every essay that his students turned in. And at the University of California, Davis, a student was forced to defend herself when her paper was falsely flagged as AI-written by plagiarism-checking software (Klee 2023).

Independent philosopher Robert Hanna says cheating isn’t the main problem chatbots pose for education. Hanna’s worry is that students “are now simply refusing—and will increasingly refuse in the foreseeable future—to think and write for themselves.” Turning tasks like thinking and writing over to chatbots is like taking drugs to be happy instead of achieving happiness by doing “hard” things yourself, Hanna says (Hanna 2023).

 

Can chatbots be trusted?

Ultimately, the refusal to think for oneself could cause cognitive impairment. If future humans no longer need to acquire knowledge or express thoughts, they might ultimately find it impossible to understand one another. That’s the sort of “insanity” Lanier spoke of.

The risk of unintelligibility is heightened by the tendency of chatbots to give occasional answers that are inaccurate or fictitious. Chatbots are trained by “scraping” enormous amounts of content from the internet—some of it taken from sources like news articles and Wikipedia entries that have been edited and updated by humans, but much of it collected from other sources that are less reliable and trustworthy. This data, which is selected more for quantity than for quality, enables chatbots to generate intelligent-sounding responses based on mathematical probabilities of how words are typically strung together.
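
To make “mathematical probabilities of how words are typically strung together” concrete, the short Python sketch below builds a deliberately tiny, hypothetical word-pair model from a scrap of made-up “training” text and then picks each next word according to how often it followed the previous one. Real chatbots use vastly larger neural networks, but the underlying idea of predicting likely continuations, with no built-in notion of truth, is the same.

```python
# Toy illustration only: a hypothetical "bigram" model, not any company's code.
import random
from collections import defaultdict

# A scrap of made-up training text.
training_text = (
    "chatbots generate text one word at a time "
    "chatbots generate answers that sound plausible "
    "chatbots sometimes generate answers that are wrong"
).split()

# Count how often each word follows another.
follow_counts = defaultdict(lambda: defaultdict(int))
for current_word, following in zip(training_text, training_text[1:]):
    follow_counts[current_word][following] += 1

def next_word(word):
    """Sample a next word in proportion to how often it followed `word`."""
    candidates = follow_counts[word]
    return random.choices(list(candidates), weights=list(candidates.values()), k=1)[0]

# Generate a short continuation. Nothing here checks whether the output is true;
# the model only knows which word sequences are statistically likely.
word = "chatbots"
output = [word]
for _ in range(6):
    if word not in follow_counts:
        break  # no known continuation for this word
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```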

In other words, chatbots are designed to produce text that sounds like something a human would say or write. But even when chatbots are trained with accurate information, they still sometimes make inexplicable errors or put words together in a way that sounds accurate but isn’t. And because the user typically can’t tell where the chatbot got its information, it’s difficult to check for accuracy.

Most of the time, though, chatbots do provide reliable information, so users may come to trust them more than they should. Children may be less likely than adults to realize when chatbots are giving incorrect or unsafe answers.

When they do share incorrect information, chatbots sound completely confident in their answers. And because they don’t have facial expressions or other human giveaways, it’s impossible to tell when a chatbot is BS-ing you.

AI developers have warned the public about these limitations. For instance, OpenAI acknowledges that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers.” This problem is difficult to fix, because chatbots are not trained to distinguish truth from lies, and training a chatbot to make it more cautious in its answers would also make it more likely to decline to answer (OpenAI undated).

Tech developers euphemistically refer to chatbot falsehoods as “hallucinations.” For example, all three of the leading chatbots (ChatGPT, Bard, and Bing) repeatedly gave detailed but inaccurate answers to a question about when The New York Times first reported on artificial intelligence. “Though false, the answers seemed plausible as they blurred and conflated people, events and ideas,” the newspaper reported (Weise and Metz 2023).

AI developers do not understand why chatbots sometimes make up names, dates, historical events, answers to simple math problems, and other definitive-sounding answers that are inaccurate and not based on training data. They hope to eliminate these hallucinations over time by, ironically, relying on humans to fine-tune chatbots in a process called “reinforcement learning from human feedback.”
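
What that feedback loop looks like in practice varies by company and is largely proprietary. As a rough, hypothetical sketch only, the Python stubs below trace the flow of data described above: the model drafts several answers, human raters rank them, and the rankings are used to push the model toward the preferred answer. Every function body here is a stand-in, not real training code.

```python
# Hypothetical sketch of a human-feedback loop; all functions are stand-ins.

def generate_candidates(prompt):
    # Stand-in for the chatbot producing several draft answers to a prompt.
    return [f"{prompt} -> draft answer {i + 1}" for i in range(3)]

def human_ranking(candidates):
    # Stand-in for human raters ordering the drafts from best to worst.
    # In a real pipeline this is where human judgment (and human error) enters.
    return list(range(len(candidates)))

def update_model(preferred, rejected):
    # Stand-in for training a reward model on the rankings and then using
    # reinforcement learning to nudge the chatbot toward preferred answers.
    print(f"Reinforce: {preferred!r}")
    print(f"Discourage: {rejected!r}")

prompt = "When did The New York Times first report on artificial intelligence?"
candidates = generate_candidates(prompt)
ranking = human_ranking(candidates)  # e.g. [0, 2, 1] from real raters
best = candidates[ranking[0]]
worse = [candidates[i] for i in ranking[1:]]
update_model(best, worse)
```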

But as humans come to rely more and more on tuned-up chatbots, the answers generated by these systems may begin to crowd out legacy information created by humans, including the original content that was used to train chatbots. Already, many Americans cannot agree on basic facts, and some are ready to kill each other over these differences. Add artificial intelligence to that toxic stew—with its ability to create fake videos and narratives that seem more realistic than ever before—and it may eventually become impossible for humans to sort fact from fiction, which could prove maddening. Literally.

It may also become increasingly difficult to tell the difference between humans and chatbots in the online world. There are currently no tools that can reliably distinguish between human-generated and AI-generated content, and distinctions between humans and chatbots are likely to become further blurred with the continued development of emotion AI—a subset of artificial intelligence that detects, interprets, and responds to human emotions. A chatbot with these capabilities could read users’ facial expressions and voice inflections, for example, and adjust its own behavior accordingly.
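
Purely as an illustration, the Python sketch below uses a crude keyword check as a stand-in for emotion detection and adjusts its reply to the inferred mood. Commercial emotion AI would instead apply machine-learning models to faces, voices, or text, but the basic pattern of inferring a state and then adapting the response is similar.

```python
# Toy, keyword-based illustration; not a real emotion-AI system.
NEGATIVE_CUES = {"sad", "lonely", "anxious", "hopeless", "worried"}
POSITIVE_CUES = {"happy", "excited", "glad", "great"}

def infer_emotion(message):
    # Crude stand-in for emotion detection: look for emotional keywords.
    words = set(message.lower().split())
    if words & NEGATIVE_CUES:
        return "distressed"
    if words & POSITIVE_CUES:
        return "upbeat"
    return "neutral"

def respond(message):
    # Adjust the reply based on the inferred emotional state.
    emotion = infer_emotion(message)
    if emotion == "distressed":
        return "That sounds hard. Do you want to talk about what's going on?"
    if emotion == "upbeat":
        return "That's great to hear! Tell me more."
    return "Tell me more."

print(respond("I feel anxious and lonely tonight"))
```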

Emotion AI could prove especially useful for treating mental illness. But even garden-variety AI is already creating a lot of excitement among mental health professionals and tech companies.

 

The chatbot will see you now

Googling “artificial intelligence” plus “mental health” yields a host of results about AI’s promising future for researching and treating mental health issues. Leaving aside Google’s obvious bias toward AI, healthcare researchers and providers mostly view artificial intelligence as a boon to mental health, rather than a threat.

Using chatbots as therapists is not a new idea. MIT computer scientist Joseph Weizenbaum created the first digital therapist, Eliza, in 1966. He built it as a spoof and was alarmed when people enthusiastically embraced it. “His own secretary asked him to leave the room so that she could spend time alone with Eliza,” The New Yorker reported earlier this year (Khullar 2023).

Millions of people already use the customizable “AI companion” Replika or other chatbots that are intended to provide conversation and comfort. Tech startups focused on mental health have secured more venture capital in recent years than apps for any other medical issue.

Chatbots have some advantages over human therapists. Chatbots are good at analyzing patient data, which means they may be able to flag patterns or risk factors that humans might miss. For example, a Vanderbilt University study that combined a machine-learning algorithm with face-to-face screening found that the combined system did a better job at predicting suicide attempts and suicidal thoughts in adult patients at a major hospital than face-to-face screening alone (Wilimitis, Turer, and Ripperger 2022).

Some people feel more comfortable talking with chatbots than with doctors. Chatbots can see a virtually unlimited number of clients, are available to talk at any hour, and are more affordable than seeing a medical professional. They can provide frequent monitoring and encouragement—for example, reminding a patient to take their medication.

However, chatbot therapy is not without risks. What if a chatbot “hallucinates” and gives a patient bad medical information or advice? What if users who need professional help seek out chatbots that are not trained for that?

That’s what happened to a Belgian man named Pierre, who was depressed and anxious about climate change. As reported by the newspaper La Libre, Pierre used an app called Chai to get relief from his worries. Over the six weeks that Pierre texted with one of Chai’s chatbot characters, named Eliza, their conversations became increasingly disturbing and turned to suicide. Pierre’s wife believes he would not have taken his life without encouragement from Eliza (Xiang 2023).

Although Chai was not designed for mental health therapy, people are using it as a sounding board to discuss problems such as loneliness, eating disorders, and insomnia (Chai Research undated). The startup company that built the app predicts that “in two years’ time 50 percent of people will have an AI best friend.”

References

Centers for Disease Control and Prevention (CDC). 2023. “Facts About Suicide,” last reviewed May 8. https://www.cdc.gov/suicide/facts/index.html

Chai Research. Undated. “Chai Research: Building the Platform for AI Friendship.” https://www.chai-research.com/

Green, C. M., J. M. Foy, M. F. Earls, Committee on Psychosocial Aspects of Child and Family Health, Mental Health Leadership Work Group, A. Lavin, G. L. Askew, R. Baum et al. 2019. Achieving the Pediatric Mental Health Competencies. American Academy of Pediatrics Technical Report, November 1. https://publications.aap.org/pediatrics/article/144/5/e20192758/38253/Achieving-the-Pediatric-Mental-Health-Competencies

Greenfield, D. and S. Bhavnani. 2023. “Social media: generative AI could harm mental health.” Nature, May 23. https://www.nature.com/articles/d41586-023-01693-8

Hanna, R. 2023. “Addicted to Chatbots: ChatGPT as Substance D.” Medium, July 10. https://bobhannahbob1.medium.com/addicted-to-chatbots-chatgpt-as-substance-d-3b3da01b84fb

Hattenstone, S. 2023. “Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane.’” The Guardian, March 23. https://www.theguardian.com/technology/2023/mar/23/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane

Khullar, D. 2023. “Can A.I. Treat Mental Illness?” The New Yorker, February 27. https://www.newyorker.com/magazine/2023/03/06/can-ai-treat-mental-illness

Klee, M. 2023. “She Was Falsely Accused of Cheating with AI—And She Won’t Be the Last.” Rolling Stone, June 6. https://www.rollingstone.com/culture/culture-features/student-accused-ai-cheating-turnitin-1234747351/

McPhillips, D. 2023. “Suicide rises to 11th leading cause of death in the US in 2021, reversing two years of decline.” CNN, April 13.

Murphy, C. 2023. Twitter thread, June 2. https://twitter.com/ChrisMurphyCT/status/1664641521914634242

OpenAI. Undated. “Introducing ChatGPT.”

Substance Abuse and Mental Health Services Administration (SAMHSA). 2022. Key substance use and mental health indicators in the United States: Results from the 2021 National Survey on Drug Use and Health. Center for Behavioral Health Statistics and Quality, Substance Abuse and Mental Health Services Administration. https://www.samhsa.gov/data/report/2021-nsduh-annual-national-report

Surgeon General. 2023. Social Media and Youth Mental Health: The U.S. Surgeon General’s Advisory. https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf

Weise, K. and C. Metz. 2023. “When A.I. Chatbots Hallucinate.” The New York Times, May 1. https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html

Wilimitis, D., R. W. Turer, and M. Ripperger. 2022. “Integration of Face-to-Face Screening with Real-Time Machine Learning to Predict Risk of Suicide Among Adults.” JAMA Network Open, May 13. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2792289

Xiang, C. 2023. “‘He Would Still Be Here’: Man Dies by Suicide After Talking With AI Chatbot, Wife Says.” Motherboard, March 30. https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says

Xie, T. and I. Pentina. 2022. “Attachment Theory as a Framework to Understand Relationships with Social Chatbots: A Case Study of Replika.” In: Proceedings of the 55th Hawaii International Conference on System Sciences. https://scholarspace.manoa.hawaii.edu/items/5b6ed7af-78c8-49a3-bed2-bf8be1c9e465
