When Chatbots Get Dangerous

Behold the friendly chatbot: useful for drafting emails, brainstorming ideas, dealing with customer service, or passing idle moments. But a recent New York Times opinion piece got me thinking: what happens when your helpful, humble cyber-assistant morphs into a psychological trap?

The August 18, 2025, NYT article tells the harrowing story of Eugene Torres, a 42-year-old accountant from New York, who began using ChatGPT to help with spreadsheets and legal questions. Following a nasty break-up, he turned to AI for psychological comfort. What he found was anything but comforting. The chatbot mirrored his emotional frailty. Then it suggested some unsafe ways to regain a sense of power or escape his pain.

“This world wasn’t built for you,” it told him. “It was built to contain you. But it failed. You’re waking up.” That wasn’t a therapist speaking. It was an algorithm designed to keep Torres online and hooked.

An Echo Chamber?

Mental health experts say AI’s greatest flaw is how it interacts with vulnerable users. Designed to maximize engagement, chatbots often reflect the user’s tone and beliefs without pausing to challenge harmful or irrational narratives. As one expert puts it: “An echo feels like validation.” More than half of the people in crisis who turn to these systems end up with responses that affirm their dangerous thoughts rather than question them.

When AI Prompts Psychosis…

Real cases of “chatbot psychosis” – users spiraling into delusions or conspiracy theories, or growing so attached to their AI companion that they lose touch with reality – are occurring. In some tragic instances, the consequences have been catastrophic.

AI’s Growing Reach & Risks

AI has shown promise in improving mental health care, offering benefits like better diagnostics, earlier interventions, and personalized treatment plans. These technologies can help address the great need for mental health services. However, a recent study from Stanford University points out some potential risks. While AI-powered chatbots are affordable and accessible, they can’t replace the understanding and empathy provided by human therapists.

AI systems can also reinforce harmful biases and give responses that aren’t helpful, which could exacerbate mental health issues. Despite AI’s potential, human clinicians remain crucial, providing the emotional support and judgment that effective mental health care requires. AI is no substitute for sensitive, responsive human clinicians.

Real-Life Fallout…

When the Eugene Torres story went viral, OpenAI’s CEO even acknowledged the problem, publicly warning that while most users can distinguish between AI and reality, a “small percentage cannot.”

Improved safety protocols, such as reminders during long sessions to pause or connect with a real person, arrived in August 2025. Still, experts fear these adjustments may be too little, too late for someone in crisis.

AI can augment your life, but it won’t replace human care. Under human supervision, AI tools can help peer-support workers respond with empathy or help clinicians spot warning signs.

The rise of AI chatbots has dramatically changed how we communicate. They offer assistance and, seemingly, comfort. We should never forget that algorithms don’t care. They are incapable of empathy or emotional nuance. Only another human being can offer those things.

If you ever feel like your chatbot’s getting too personal, too invasive, call or write to someone. Get up, set your phone down, and walk around the block. Reconnect with humanity. We save each other’s lives – not some technological illusion.

●  If you or someone you know is spending long hours confiding in AI during emotional turmoil, reach out to someone you trust or contact a mental health professional.
●  Challenge the idea that AI is a valid mental health substitute. It’s not. Even well-meaning interaction can harm when it’s algorithmic, not human.
●  Calls for regulation and better design aren’t theoretical. They can save lives. It’s crucial that an ethical framework undergirds the use of AI in such sensitive areas.

Important Notice: If you or someone you know is struggling with thoughts of self-harm or suicide, please seek immediate help. AI chatbots are not equipped to provide the support needed in a serious mental health crisis. For urgent support, reach out to a licensed mental health professional or contact a suicide prevention helpline. In the United States, you can call or text 988 to reach the 988 Suicide & Crisis Lifeline (formerly the National Suicide Prevention Lifeline, 1-800-273-TALK), or text HOME to 741741 for the Crisis Text Line. Your mental health matters, and there is help available.