
Mary Gilmore, Staff Writer
In response to growing concerns about teen safety and mental health, OpenAI has introduced parental controls for ChatGPT. The features launched in late September and aim to give parents more oversight of their children's AI usage.
The parental control rollout follows the death of 16-year-old Adam Raine, who died by suicide. Raine's parents filed a wrongful death lawsuit against OpenAI, alleging that ChatGPT encouraged and validated their son's suicidal ideation, including helping him draft a suicide note.
OpenAI CEO Sam Altman acknowledged the gravity of the situation, stating in an interview, “They probably talked about [suicide], and we probably didn’t save their lives… Maybe we could have said something better. Maybe we could have been more proactive.”
Altman’s acknowledgment was followed by the rapid development of new safety features, designed in collaboration with mental health experts and advocacy organizations such as Common Sense Media.
The parental controls allow adults to link their ChatGPT accounts to their teen’s. Linking creates a shared dashboard where the adult can manage settings and monitor their child’s usage. Once linked, the teen’s account is automatically subject to stricter content filters that block graphic content, sexual roleplay, extreme beauty ideals, and more.
Other features include disabling memory so that ChatGPT doesn’t retain past conversations, blocking image generation, limiting access to ChatGPT during certain times of day, and opting out of model training so the teen’s data is not used.
The most groundbreaking feature is a new notification system. Parents will receive alerts if ChatGPT detects signs of emotional distress or potential self-harm. OpenAI stated in an announcement, “We think it’s better to act and alert a parent so they can step in than stay silent.”
If a teen attempts to unlink their account from their parent’s, the system notifies the adult immediately. OpenAI is also exploring protocols for alerting emergency services if a teen appears to be in imminent danger, as a secondary option when a parent cannot be reached.
While these new parental controls represent a major advancement in AI safety, the filters are not foolproof. Determined teens can phrase questions in ways that evade the safety features, and AI models cannot replace human judgment and emotional support. OpenAI encourages families to use these tools as part of a broader internet safety strategy.