OpenAI to Start Using ID Verification to Protect Teens on Some ChatGPT Versions

OpenAI has rolled out new precautions for ChatGPT that involve asking some users to verify their identity, in an effort to protect teenagers using the AI platform. The decision reflects growing concern about how AI interacts with people in their formative years, and mounting pressure on technology companies to address it.
Why OpenAI is Taking Action
The move comes in the wake of growing concern about the potential health risks AI chatbots pose to young people. While AI can support children's education, entertainment, and creativity, some experts have cautioned that unsupervised chats may harm young users' mental health.
This concern was compounded after news stories of adolescents becoming distressed following extended or intense interactions with AI chatbots.
In response, OpenAI says that its first priority is ensuring safety for all users – particularly children – while creating a fun and informative AI experience.
How the ID Verification Works
The new measures include an age-prediction model that assesses user interactions with ChatGPT to determine if someone is likely under 18.
- When the system is uncertain, users can upload an ID to prove their age.
- This is intended to keep minors from encountering mature content.
For teens verified as under 18, OpenAI will provide access to a customized version of ChatGPT with additional protections. This includes stricter moderation and safety measures to avoid exposure to dangerous or sensitive content.
Other Teen User Safety Changes
The modified ChatGPT platform for under-18s includes several crucial safety features:
- Content Filters: The AI will limit access to graphic sexual content and tone down discussions about suicide or self-harm. Content that could be harmful is restricted even in creative or speculative writing.
- Parental Controls: Parents will have tools to oversee their child’s interactions with ChatGPT. This includes time controls, chat history review, and alerts from the AI when potentially concerning behavior is detected.
- Crisis Intervention: If a young user expresses suicidal thoughts or appears to be in distress, the platform may attempt to contact their parents. If parental contact is not possible, OpenAI may refer the situation to relevant authorities to ensure the teenager's safety.
These features are not meant to inhibit general use but to add a protective layer for younger users, ensuring interactions remain positive and safe.
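The escalation order in the crisis-intervention feature can be summarized as a small decision function. Again, a hedged sketch: the step names and the idea of surfacing crisis resources first are assumptions for illustration, not OpenAI's published protocol.

```python
def crisis_escalation(parent_reachable: bool) -> list[str]:
    """Illustrative sketch of the escalation path described above.

    Step names are hypothetical, not OpenAI's implementation.
    """
    # Surface help resources to the teen in all cases (an assumption).
    steps = ["show_crisis_resources_to_teen"]
    if parent_reachable:
        # The article says the platform may reach out to parents first.
        steps.append("notify_parent")
    else:
        # When parents cannot be reached, the article says OpenAI may
        # involve relevant authorities as a fallback.
        steps.append("refer_to_authorities")
    return steps
```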
Balancing Privacy with Safety
While these measures aim to protect teens from explicit content, requiring identification for age verification raises privacy concerns. OpenAI acknowledges that asking users to provide personal details can be sensitive, particularly for adults who may feel their privacy is compromised.
To address these concerns:
- The company says identification is used solely for age verification.
- It states that this data will not be retained long-term or shared with any other entity.
- OpenAI argues the tradeoff between privacy and safety is justified by the risks unmonitored AI interactions pose to minors.
A Broader Shift in AI Safety
OpenAI’s strategy reflects a broader shift in AI development and regulation, emphasizing the protection of vulnerable populations.
- As AI becomes increasingly part of daily life, it is crucial to keep children and teens safe.
- With age verification and personalized content for younger users, OpenAI sets a precedent for other AI companies.
- This initiative signals that the focus of technology should be not just innovation, but responsible and ethical use.
Challenges and Considerations
While the initiative is commendable, challenges remain:
- Some teenagers may try to circumvent the verification process.
- Parents may have different opinions on the level of supervision needed.
- Continuous updates to content filters and moderation tools are required to balance helpful AI functionality with safety.
Experts suggest that OpenAI’s layered approach—including age prediction, parental controls, and crisis intervention—is robust but will require ongoing recalibration as AI usage evolves.
Looking Ahead
OpenAI plans to gradually roll out the new verification and safety features, using user feedback and system performance to refine the platform.
- The company emphasizes it is not trying to limit ChatGPT’s capabilities.
- The goal is to create a safe environment where young users can explore AI’s potential without unnecessary risk.
“This will have implications for how tech companies manage AI interactions with minors and will likely set new industry standards for safety, transparency, and accountability,” said Paula Bernal, head of Trust and Security at Sensay.
Conclusion
OpenAI’s introduction of ID verification for some ChatGPT users represents a significant step toward teen safety online. The company is demonstrating a commitment to user protection through:
- Age prediction systems
- Content access controls
- Crisis intervention protocols
As AI continues to be integrated into education, social communication, and creative projects, protecting younger audiences will remain a priority.
OpenAI’s initiative shows the importance of combining innovation with responsibility, providing a platform that is both cutting-edge and safe. By prioritizing the safety of its youngest users, OpenAI is taking a major step toward a more secure AI experience that supports exploration, learning, and creativity while acknowledging teens’ vulnerabilities.



