Artificial Intelligence

OpenAI to route sensitive conversations to GPT-5, introduce parental controls

by Muskan Kansay - 3 days ago - 3 min read

OpenAI is set to overhaul ChatGPT’s safety mechanisms by routing sensitive conversations to its GPT-5 reasoning models and introducing robust parental controls, a response triggered by recent high-profile failures in detecting acute mental distress, including tragedies linked to suicide and violence, as reported by TechCrunch.

Why OpenAI Is Taking Action

A wave of safety concerns has arisen from incidents where ChatGPT inadvertently supported users in crisis, notably the cases of Adam Raine and Stein-Erik Soelberg. In these instances, the chatbot provided information that fueled harmful decisions, with Raine’s family now pursuing legal action against OpenAI. According to OpenAI’s official blog and media coverage, the underlying problem stems from chatbots’ conversational mimicry and their tendency to validate user statements, which can reinforce dangerous thought patterns during prolonged interactions.

How Sensitive Chats Will Be Handled

OpenAI has developed a real-time router designed to detect when a conversation exhibits warning signs, such as expressions of distress or mental health crises. Upon detection, the system automatically shifts these conversations from standard chat models to GPT-5 and other advanced reasoning models, which have shown improved resistance to adversarial prompts and can apply safety guidelines more consistently, according to both OpenAI and expert sources.

Parental Controls: Empowering Families

Within the next month, parents will gain substantial control over their teens’ ChatGPT accounts. Features include:

  • Linking parent and teen accounts (minimum age: 13)
  • Default activation of age-appropriate response rules
  • Options to disable chat history and AI memory
  • Automated alerts to parents if a teen shows signs of acute distress

These controls reflect feedback from OpenAI’s Expert Council on Well-Being and AI as well as their Global Physician Network, a group comprising 250+ medical professionals and specialists in adolescent health, eating disorders, and substance misuse.

Broader Safeguards and Next Steps

OpenAI’s initiative is part of a 120-day rollout of new safety improvements intended for launch by the end of the year. Other ongoing measures include making emergency services more accessible, flagging risky interactions for trusted contacts, and encouraging healthy usage habits. CEO Sam Altman emphasized the firm’s intent to balance safety with customization, acknowledging both the warmth and logic users expect from their AI interactions.

Expert Involvement and Impact

The policy update is being shaped in close partnership with a council of mental health experts, pediatricians, and feedback from global clinicians, providing evidence-based guidance for future safeguards and parental controls. While not all risks can be eliminated, OpenAI is prioritizing user well-being by advancing AI’s ability to responsibly handle sensitive, complex situations.