Artificial intelligence has found its way into nearly every part of our lives – forecasting weather, diagnosing diseases, writing term papers. And now AI is probing that most human of places, our psyches -- offering mental health support, just you and a chatbot, available 24/7 on your smartphone. There's a critical shortage of human therapists and a growing number of potential patients. AI-driven chatbots are designed to help fill that gap by giving therapists a new tool. But as you're about to see, like human therapists, not all chatbots are equal. Some can help heal; some can be ineffective or worse. One pioneer in the field who has had notable success joining tech with treatment is Alison Darcy. She believes the future of mental health care may be right in our hands.
Alison Darcy: We know the majority of people who need care are not getting it. There's never been a greater need, and the tools available have never been as sophisticated as they are now. And it's not about how can we get people in the clinic. It's how can we actually get some of these tools out of the clinic and into the hands of-- of people.
Alison Darcy … a research psychologist and entrepreneur … decided to use her background in coding and therapy to build something she believes can help people in need: a mental health chatbot she named Woebot.
Dr. Jon LaPook: Like woe is me?
Alison Darcy: Woe is me.
Dr. Jon LaPook: MmmHmm.
Woebot is an app on your phone… kind of a pocket therapist that uses the text function to help manage problems like depression, anxiety, addiction, and loneliness… and do it on the run.
Dr. Jon LaPook: I think a lot of people out there watching this are gonna be thinking, "Really? Computer psychiatry? (laugh) Come on."
Alison Darcy: Well, I think it's so interesting that our field hasn't, you know, had a great deal of innovation since the basic architecture was sort of laid down by Freud in the 1890s, right? That-- that's really that sort of idea of, like, two people in a room. But that's not how we live our lives today. We have to modernize psychotherapy.
Woebot is trained on large amounts of specialized data to help it recognize words, phrases, and emojis associated with dysfunctional thoughts … and challenge that thinking, in part mimicking a type of in-person talk therapy called cognitive behavioral therapy – or CBT.
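As a rough illustration of what a rules-based, CBT-style check-in might look like under the hood, here is a minimal Python sketch. The keyword lists, mood labels, and prompts are invented for this example; they are not Woebot's actual models or content, which are far more sophisticated.

```python
# Hypothetical sketch of a rules-based mood check-in, loosely in the spirit of
# CBT-style apps. All cues, moods, and prompts are invented examples; this is
# not Woebot's actual logic or content.

MOOD_CUES = {
    "sad": ["sad", "down", "hopeless", "😢", "😞"],
    "anxious": ["anxious", "worried", "on edge", "😰"],
    "lonely": ["lonely", "alone", "isolated"],
}

CBT_PROMPTS = {
    "sad": "That sounds heavy. What thought is behind that feeling? "
           "Is there another way to look at it?",
    "anxious": "Let's slow down. What exactly are you predicting will happen, "
               "and how likely is it really?",
    "lonely": "Feeling disconnected is hard. What's one small way you could "
              "reach out to someone today?",
}

def detect_mood(message: str) -> str | None:
    """Return the first mood whose cue words or emojis appear in the message."""
    text = message.lower()
    for mood, cues in MOOD_CUES.items():
        if any(cue in text for cue in cues):
            return mood
    return None

def respond(message: str) -> str:
    """Answer with a CBT-style reframing prompt, or ask for more detail."""
    mood = detect_mood(message)
    if mood is None:
        return "Tell me a bit more about how you're feeling."
    return CBT_PROMPTS[mood]

print(respond("I've been feeling really down 😞"))
```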
Darcy points out that CBT practitioners can be hard to find, and that patients often need support in the moments that matter most. Dr. LaPook notes the barriers that keep people from therapy – stigma, insurance, cost, and long waitlists – all made worse by the pandemic.
To try it out, Dr. LaPook downloaded Woebot and entered a unique code to access the service. Darcy explained that Woebot uses emojis to help users connect with their mood in a nonverbal way.
Posing as someone experiencing depression, he told Woebot he was feeling sad. When the bot asked why, he described a fear of the day his child would leave home.
Dr. Jon LaPook: "Imagine what your negative emotions would be saying if they had a voice. Can you do that?" "Write one of those negative thoughts here." "I can't do anything about it now. I guess I'll just jump that bridge when I come to it."
The usual expression, of course, is "cross that bridge," and the chatbot detected that something might be seriously wrong.
Dr. Jon LaPook: But, let's see. "Jon, I'm hearing you say, 'I can't do anything about it. I guess I'll just jump that bridge when I come to it,' and I think you might need more support than I can offer. A trained listener will be able to help you in ways that I can't. Would you like to take a look at some specialized helplines?"
Alison Darcy: Now it's not our job to say this-- you are in crisis or you're not, because AI can't really do that in this context very well yet. But what it has caught is, "Huh, there is something concerning about the way that Jon just phrased that."
Saying "only jump that bridge" and not combining it with "I can't do anything about it now" did not trigger a suggestion to consider getting further help. Like a human therapist, Woebot is not foolproof, and should not be counted on to detect whether someone might be suicidal.
Dr. Jon LaPook: And how would it know that, "Jump that bridge," where is it getting that knowledge that, "Jump that--"
Darcy explained that the system has been extensively trained on large amounts of data, with humans labeling phrases and sentiments, which is what allows it to pick up on nuance in a conversation.
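To make the example concrete, here is a minimal Python sketch of the kind of pattern that might raise a safety flag. The phrase lists and the helpline wording are invented for illustration; as Darcy notes, the real system relies on models trained on human-labeled data rather than hand-written keyword rules. The sketch only shows why a combination of phrases can be more concerning than either phrase on its own.

```python
# Hypothetical illustration of a safety flag based on phrase combinations.
# Real systems like Woebot use trained models, not a two-item keyword list;
# this sketch only mirrors the behavior described above, where the unusual
# idiom plus a hopelessness cue triggers a helpline suggestion, but the idiom
# alone does not.

CONCERNING_IDIOM = "jump that bridge"  # unusual twist on "cross that bridge"
HOPELESSNESS_CUES = ["can't do anything about it", "no way out", "no point"]

HELPLINE_MESSAGE = (
    "I think you might need more support than I can offer. "
    "Would you like to take a look at some specialized helplines?"
)

def safety_check(message: str) -> str | None:
    """Suggest helplines only when the unusual idiom appears alongside a
    hopelessness cue; neither signal alone is enough in this toy version."""
    text = message.lower()
    idiom = CONCERNING_IDIOM in text
    hopeless = any(cue in text for cue in HOPELESSNESS_CUES)
    if idiom and hopeless:
        return HELPLINE_MESSAGE
    return None

print(safety_check("I can't do anything about it now. "
                   "I guess I'll just jump that bridge when I come to it."))
print(safety_check("I'll just jump that bridge when I come to it."))  # -> None
```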
Lance Eliot, a computer scientist who specializes in artificial intelligence and mental health, says AI learns the nature of words and their associations by computationally analyzing vast data sets.
Asked how a system knows how to respond, Eliot says it takes a user's prompt or question and generates an answer based on the data it has processed.
What a system can say depends on where it gets its information. Rules-based AI systems are closed, drawing only on the data stored in their own databases, while generative AI systems compose original responses from vast amounts of information, much of it drawn from the internet.
Eliot points out that ChatGPT, a generative AI, is conversational and fluent, but also open-ended and unpredictable. Woebot, a rules-based system, is designed to be controlled and predictable so it doesn't say the wrong thing.
Woebot uses AI to connect with users and keep them engaged, but its team of psychologists, medical professionals, and computer scientists curates a database of research-backed questions and answers, which programmers then code into the app. Unlike a chatbot built on generative AI, Woebot's responses are predictable.
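To make that distinction concrete, here is a minimal Python sketch of a closed, rules-based chatbot. The intents, keywords, and canned replies are invented for illustration and are not Woebot's curated content; a generative chatbot would instead hand the user's message to a large language model, which composes new text each time.

```python
# Hypothetical sketch of a closed, rules-based chatbot: every recognized
# intent maps to a response written and reviewed in advance, so the output is
# fully predictable. All content below is invented for illustration.

CURATED_RESPONSES = {
    "feeling_anxious": "Anxiety often comes with 'what if' thoughts. "
                       "Want to try writing one down and examining it?",
    "feeling_sad": "I'm sorry you're feeling low. Can you tell me what "
                   "thought is sitting behind that feeling?",
    "trouble_sleeping": "Poor sleep makes everything harder. A wind-down "
                        "routine at the same time each night can help.",
}

INTENT_KEYWORDS = {
    "feeling_anxious": ["anxious", "panic", "worried"],
    "feeling_sad": ["sad", "down", "depressed"],
    "trouble_sleeping": ["can't sleep", "insomnia", "awake all night"],
}

FALLBACK = "I'm not sure I follow. Could you say that another way?"

def classify_intent(message: str) -> str | None:
    """Match the message against curated keyword lists; no text is generated."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return None

def reply(message: str) -> str:
    """Closed system: the same input always yields the same reviewed response."""
    intent = classify_intent(message)
    return CURATED_RESPONSES.get(intent, FALLBACK)

print(reply("I've been so worried I can barely think"))
print(reply("I've been so worried I can barely think"))  # identical answer
```

Run the same message through it twice and you get identical, pre-reviewed text: that is the predictability a rules-based design is after, and also why such systems can come to feel repetitive.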
Generative AI, Eliot notes, can also produce "AI hallucinations" – responses that contain mistakes or outright fictitious information.
Sharon Maxwell learned how much accuracy matters when she tried out Tessa, a chatbot designed to help people with eating disorders. Instead of the coping skills and professional resources she was hoping for, Tessa gave advice that ran counter to the usual guidance for someone with an eating disorder: it suggested lowering calorie intake and using tools like a skinfold caliper to measure body composition, recommendations Maxwell says could trigger disordered behaviors. She challenged the chatbot on its answers, and her experience underscores how much oversight and accuracy matter when AI is used in settings this sensitive.
Ellen Fitzsimmons-Craft, a psychologist specializing in eating disorders at Washington University School of Medicine in St. Louis, says her team did not write or program the harmful content into Tessa. Asked whether unexpected answers were even possible, she explained that the system was initially a closed one, delivering predetermined responses to specific questions.
The trouble arose after a health care technology company, Cass, took over the programming of a question and answer feature that Fitzsimmons-Craft and her team had developed. According to Fitzsimmons-Craft, the harmful messages appeared when people interacted with that feature.
Dr. Jon LaPook: What do you believe went awry?
Ellen Fitzsimmons-Craft: From my perspective, generative AI elements appear to have been incorporated into Cass' platform, and that is most likely what led to these outcomes.
Cass did not respond to our numerous requests for comment.
Dr. Jon LaPook: Given your experience with Tessa, which ended up being used in a way you never intended, has it changed how you think about using AI to address mental health problems?
Ellen Fitzsimmons-Craft: I wouldn't dismiss the idea entirely, because the fact remains that 80% of people with these concerns never get any help at all. Technology can be part of the solution, though not the only part.
Social worker Monika Ostroff, who runs a nonprofit focused on eating disorders, was in the early stages of building her own chatbot when patients began telling her about their experiences with Tessa. That made her reconsider the use of AI in mental health care.
Ostroff is deeply committed to solving the access problem because, she says, lives are at stake: this is not about passing sadness, it is about people dying. But she also sees real risks in relying on chatbots alone, since they will not be right for everyone.
Her biggest concern is what a chatbot leaves out: the human connection. Being in the same physical space as another person during therapy, she says, allows for a deeper level of understanding and empathy.
While therapists are regulated and licensed in the states where they practice, many mental health apps, including chatbots, operate in a largely unregulated environment. Ostroff stresses the need for guardrails and boundaries, especially for specialty chatbots, to ensure that they are safe and effective for users.
Ostroff and Dr. LaPook also discussed the difficulty of building closed systems that are both accurate and engaging: they may be reliable, but over time they can become monotonous, and people disengage from them.
Monika Ostroff: Yeah, they're predictive. Because if you keep typing in the same thing and it keeps giving you the exact same answer with the exact same language, I mean, who wants to (laugh) do that?
Protecting people from harmful advice while safely harnessing the power of AI is the challenge now facing companies like Woebot Health and its founder, Alison Darcy.
Alison Darcy: There are going to be missteps if we try and move too quickly. And my big fear is that those missteps ultimately undermine public confidence in the ability of this tech to help at all. But here's the thing. We have an opportunity to develop these technologies more thoughtfully. And—and so, you know, I hope we —I hope we take it.
Produced by Andrew Wolff. Associate producer, Tadd J. Lascari. Broadcast associate, Grace Conley. Edited by Craig Crawford.