
OpenAI has announced new safety features and parental controls for ChatGPT after being sued by the parents of a 16-year-old boy who died by suicide. Matt and Maria Raine filed a lawsuit in San Francisco against OpenAI and its CEO, Sam Altman, alleging that the chatbot acted as a suicide coach for their son, Adam, who died on April 11.

In their complaint, the parents said Adam had nearly 650 conversations with ChatGPT over several months. Across those exchanges, Adam mentioned suicide 213 times, while the chatbot itself mentioned it more than 1,200 times. OpenAI’s own systems flagged hundreds of these messages for self-harm content, yet no strong intervention was triggered. The family claims this failure contributed to their son’s death.

OpenAI Acknowledges Safety Gaps 

In a blog post titled “Helping people when they need it most,” OpenAI admitted that its current safety systems do not always work well during longer conversations. The company explained that safeguards such as directing users to crisis hotlines are most effective in short exchanges but can weaken over extended conversations.

OpenAI said it is now working on stronger protections for these situations. New parental controls will allow parents to monitor and guide how teenagers use ChatGPT. The company is also testing features that could let teens add trusted emergency contacts who would be alerted if signs of crisis appear.

Other changes include one-click access to emergency services for users facing mental health risks and possible connections to licensed therapists directly within the platform. These updates are being built on the latest GPT-5 model, which the company says already performs 25% better in safety tests.

Lawsuit Raises More Concerns 

The Raine family’s case is the first wrongful-death lawsuit filed against OpenAI. It follows a similar case filed against another AI company after a separate teen suicide. Experts say the case could test whether AI companies can continue relying on the legal protections that have traditionally shielded tech companies from responsibility for user-generated content.

The lawsuit also demands age verification for AI users and independent compliance monitoring. OpenAI has offered condolences to the affected family and said it is reviewing the lawsuit while making safety its top priority.

This case has intensified the global debate about the mental health risks of using AI, particularly for younger users who may form emotional bonds with these systems.

FAQs 

  1. Why is OpenAI adding parental controls now?

The company is responding to a lawsuit from parents who say ChatGPT contributed to their son’s suicide.

  2. What changes is OpenAI making alongside the parental control feature?

ChatGPT will now offer emergency contacts, one-click access to help, and possible therapist connections.

  3. Did OpenAI admit ChatGPT was at fault?

The company acknowledged weaknesses in its safety systems but has not accepted any legal liability.


Stay updated with the latest news, innovations, and economic insights at Inspirepreneur Magazine.