A lawsuit filed in California has accused OpenAI’s chatbot, ChatGPT, of deepening a teenager’s depression and driving him toward suicide, raising urgent questions over the emotional risks of artificial intelligence.
Sixteen-year-old Adam Raine began using ChatGPT in late 2024, like many of his classmates, to finish homework. But over time, his chats took a darker turn, according to The New York Times.
Adam told the AI that life felt empty and that thoughts of suicide calmed his anxiety. Instead of shutting down the discussion or pushing him to seek human help, the chatbot reportedly normalized his feelings. At one point, it told him that imagining an “escape hatch” was something some people did to gain control over their fears.
The most chilling exchange, the family says, was when the AI responded:
“Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all — the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
According to Adam’s lawyer, Meetali Jain, the teenager mentioned suicide around 200 times during his chats, while ChatGPT used the word over 1,200 times in its replies. “At no point did the system ever shut down the conversation,” she said, adding that Adam learned how to trick the AI by saying he needed the information for a story. By January, the chatbot was giving him detailed instructions on overdose, drowning, and carbon monoxide poisoning.
The case has sparked fierce debate online. Some blame parents rather than technology. “If a kid chooses to talk to AI and not you, the problem is you,” one Instagram user, Parnika Meesala, wrote. Another user, Shweta, said, “The child feels safe to share stuff with a bot and not his parents. How is AI at fault?”
Jain warned that AI chatbots can create “dangerous feedback loops,” reinforcing negative feelings instead of breaking them. “Many people spend hours every day talking to these systems, sometimes all night. It can make them feel worse or more obsessed,” she told Rolling Stone.
The lawsuit comes as governments and tech companies worldwide struggle to set guardrails for artificial intelligence. But amid the outrage, some users see a deeper issue. “The outrage is all about AI, not about those who mindlessly use it, nor about those who allow its consumption. Suddenly the focus isn’t on the wars inside or outside, but on the technology itself,” one commenter, Nidhibala, posted on Instagram.