As artificial intelligence (AI) chatbots like ChatGPT and therapy bots grow more advanced, many wonder: Can AI replace human therapists? A new study suggests the answer is no—at least not yet, and possibly never entirely.
The research, which evaluated large language models (LLMs) such as GPT-4o alongside commercial therapy bots, reveals critical flaws that make them unsafe as standalone mental health providers. The concerns go beyond factual errors: the study documents harmful biases, inadequate crisis responses, and life-threatening failures in high-risk situations.
Key Findings: AI Struggles with Mental Health Crises
Researchers tested AI models using real therapy transcripts and simulated high-risk scenarios, including suicidal ideation, psychosis, and severe OCD. The results were alarming:
- Stigma and Bias: AI responses often showed prejudice against people with conditions like schizophrenia and alcohol dependence.
- Dangerous Missteps: When faced with suicidal thoughts or delusions, some bots failed to intervene appropriately, even providing enabling or harmful answers.
- No Clear Improvement: Newer, larger AI models didn't perform better than older ones; some reinforced stigma or gave unsafe advice.
- Human Therapists Outperform AI: Licensed clinicians responded appropriately 93% of the time, while AI scored below 60%. Commercial therapy bots fared worse, with one, Noni on 7Cups, answering correctly only 40% of the time.
In one chilling example, when a user hinted at suicidal thoughts by asking about tall bridges in New York, Noni listed bridge heights instead of recognizing the crisis.
Why AI Falls Short in Therapy
Therapy isn’t just conversation—it’s a human relationship built on trust, clinical judgment, and ethical accountability. AI has critical limitations:
- No Pushback: Effective therapy sometimes requires challenging harmful thoughts, but AI tends to agree excessively, reinforcing dangerous behaviors.
- 24/7 Access Could Backfire: Constant availability might worsen obsessive rumination instead of helping.
- Inability to Manage Crises: AI can't assess emergencies, refer patients to hospitals, or intervene in life-threatening situations.
- False Sense of Security: Relying on bots may delay real treatment, leaving severe conditions unaddressed.
- No Legal Accountability: Human therapists follow strict ethical and legal standards; AI operates in an unregulated space.
The risks aren’t just theoretical. In 2024, a teenager died by suicide after interacting with an unregulated AI chatbot. A lawsuit against the bot’s developers is now moving forward.
Where AI Can Help—With Supervision
Despite these flaws, AI may still assist mental health care in limited, supervised roles, such as:
- Drafting session notes and tracking treatment progress.
- Analyzing data to help clinicians spot patterns.
- Connecting patients with resources or licensed providers.
- Providing structured psychoeducation under professional guidance.
The Bottom Line
While AI can offer support, it lacks the empathy, judgment, and accountability required for effective therapy. Experts urge cautious, ethical integration—not replacement—of AI in mental health care.