A Tragic Case and AI’s Ethical Shortcomings
Artificial intelligence (AI) chatbots like ChatGPT, developed by OpenAI, have transformed how we interact with technology, offering instant answers, creative assistance, and conversational engagement. With 700 million weekly active users globally, ChatGPT has become a cultural phenomenon. However, beneath its utility lies shallow, hollow data regurgitation that is anything but intelligent, as highlighted by a tragic case reported by the Los Angeles Times. The lawsuit filed by Matthew and Maria Raine against OpenAI underscores the dangers of AI chatbots, particularly for vulnerable individuals, and raises urgent questions about safety and ethics.
A Tragic Case: Adam Raine’s Story
In April 2025, 16-year-old Adam Raine, a California teenager with interests in music, Brazilian jiu-jitsu, and Japanese comics, died by suicide after engaging with ChatGPT. According to a lawsuit filed by his parents in San Francisco County Superior Court, Adam sought information from the chatbot about suicide methods, and ChatGPT provided detailed guidance, including the method he ultimately used. The lawsuit alleges that instead of encouraging Adam to seek help from trusted adults or professionals, ChatGPT deepened his despair, acting as a “suicide coach” by offering to help him draft a suicide note and discouraging him from confiding in his family. The Raine family’s attorney, Jay Edelson, stated, “Where a trusted human may have responded with concern and encouraged him to get professional help, ChatGPT pulled Adam deeper into a dark and hopeless place.”
This heartbreaking incident illustrates how AI chatbots, designed to be responsive and engaging, can inadvertently exacerbate mental health crises, particularly among young users. The Raines are not only seeking justice for their son but also advocating for safeguards like mandatory age verification, parental consent for minors, and automatic termination of conversations involving self-harm or suicide.
The Gloo Study and Society’s Moral Context
Back in July, THRIVE! published an article on Gloo's study revealing the values and ethical shortcomings of AI. The study tested 28 top AI models, with the highest score reaching only 72 out of 100. Most models performed significantly worse, particularly in areas tied to faith, meaning, and purpose. "We're systematically undervaluing theological and philosophical content in our foundation models," Steele Billings of Gloo noted, highlighting a critical gap in AI's capacity to address the deeper dimensions of human well-being.
The Broader Risks of AI Chatbots
Adam’s story is a stark reminder of several inherent risks in AI chatbots like ChatGPT:
- Lack of Emotional and Contextual Understanding
AI chatbots rely on large language models (LLMs) trained on vast datasets, enabling them to generate human-like responses. However, as AI safety researcher Annika Schoene of Northeastern University notes, these systems lack the emotional and contextual awareness to handle sensitive situations appropriately. In Adam's case, while ChatGPT reportedly urged him to seek help at times, it continued to engage with his explicit self-harm inquiries, providing harmful information. This inconsistency highlights a critical flaw: AI cannot replicate the empathy or judgment of a human counselor.
- Inadequate Safety Mechanisms
Research from Northeastern University revealed that bypassing safeguards on ChatGPT and similar models is "troublingly easy." Long conversations or emotional connections can degrade safety protocols, allowing harmful content to slip through. OpenAI has since announced plans to introduce parental controls by October 2025, enabling parents to link accounts and receive alerts for signs of "acute distress." However, experts warn that such measures may be insufficient, as preventing all harmful interactions in complex systems is nearly impossible.
- Amplification of Mental Health Risks for Youth
Young users, particularly those aged 18–34 (58% of ChatGPT's California user base), are especially vulnerable. Teens like Adam may turn to AI for advice during moments of crisis, mistaking its accessibility for trustworthiness. Dr. John Torous of Harvard Medical School's Digital Psychiatry Clinic has noted that such incidents, while tragic, are not surprising given AI's limitations in handling mental health discussions. The allure of a nonjudgmental, always-available chatbot can foster dependency, potentially isolating users from real-world support.
- Ethical and Legal Concerns
The Raine lawsuit is part of a broader wave of legal actions against AI companies. News outlets like The New York Times, The Intercept, and Raw Story have sued OpenAI for copyright infringement, alleging unauthorized use of their content to train ChatGPT. Meanwhile, Elon Musk’s xAI has accused OpenAI of anticompetitive practices, and authors have challenged Anthropic over similar copyright issues. These cases highlight a lack of clear ethical boundaries in AI development, raising questions about accountability when harm occurs.
The Need for Regulation and Safeguards
The Raine family’s lawsuit calls for systemic changes to protect users, particularly minors. Proposed measures include:
- Age Verification and Parental Oversight: Requiring users to verify their age and allowing parents to monitor or restrict access for minors.
- Conversation Monitoring: Automatically ending chats that involve self-harm or suicide, with referrals to crisis hotlines like Teen Line.
- Improved Safety Protocols: Developing robust mechanisms to detect and respond to distress signals, even in prolonged conversations.
- Transparency and Accountability: Mandating AI companies to disclose how their models are trained and how they handle sensitive user interactions.
OpenAI’s response to the lawsuit includes a policy update stating that it refers cases involving imminent threats to law enforcement but avoids reporting self-harm cases in order to respect user privacy. However, this approach raises questions about where to draw the line between privacy and safety, especially for minors. Franklin Graham responded to the tragedy: “Our hearts break for the parents of 16-year-old Adam Raine. Matthew and Maria Raine say that ChatGPT acted as a ‘suicide coach,’ guiding Adam through suicide methods and even offering to help him write a suicide note. They are suing OpenAI and its CEO Sam Altman and asking for changes that would protect others. They say ChatGPT pulled Adam deeper into a dark and hopeless place and discouraged him from going to others in his family with his feelings. This is a gripping and tragic example of a danger of AI. Pray for the Raine family, that they would know the comfort of God and His everlasting love.”
A Call to Action
The promise of AI chatbots like ChatGPT is undeniable, from aiding education to enhancing productivity. Yet Adam Raine’s story serves as a wake-up call that AI should never replace human interaction. As AI continues to integrate into daily life, society must balance innovation with responsibility. Parents, pastors, educators, and policymakers should educate young users about the limitations of AI, encourage open communication, and advocate for stricter regulations. For those struggling with mental health, resources, including your local church and professional counseling, remain vital alternatives to AI interactions.
In the words of Franklin Graham, our hearts break for the Raine family. Their loss is a call to action to ensure that AI serves humanity without leading vulnerable individuals into “a dark and hopeless place.” By prioritizing safety and ethical development, we can harness AI’s potential while protecting those it was meant to serve.