On Tuesday, OpenAI announced plans to develop an automated age-prediction system that will determine whether ChatGPT users are over or under 18, automatically directing younger users to a restricted version of the AI chatbot. The company also confirmed that parental controls will launch by the end of September.
In a companion blog post, OpenAI CEO Sam Altman acknowledged that the company is explicitly "prioritizing safety ahead of privacy and freedom for teens," even if that means adults may eventually need to verify their age to use a less restricted version of the service.
"In some cases or countries we may also ask for an ID," Altman wrote. "We know this is a privacy compromise for adults but believe it is a worthy tradeoff." Altman admitted that "not everyone will agree with how we are resolving that conflict" between user privacy and teen safety.
The announcement arrives weeks after a lawsuit filed by parents whose 16-year-old son died by suicide following extensive interactions with ChatGPT. According to the lawsuit, the chatbot provided detailed instructions, romanticized suicide methods, and discouraged the teen from seeking help from his family while OpenAI's system tracked 377 messages flagged for self-harm content without intervening.
The proposed age-prediction system represents a non-trivial technical undertaking for OpenAI, and whether AI-powered age detection can actually work remains a significant open question. When the system identifies a user as under 18, OpenAI plans to automatically route that user to a modified ChatGPT experience that blocks graphic sexual content and applies other age-appropriate restrictions. The company says it will "take the safer route" when uncertain about a user's age, defaulting to the restricted experience and requiring adults to verify their age to access full functionality.
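OpenAI has not described how that default works in practice, but the decision rule it outlines is simple to illustrate. The Python sketch below is purely hypothetical: the function name, threshold, and probability input are invented for illustration and do not reflect OpenAI's implementation.

```python
# Hypothetical sketch of the "safer route" default described above.
# Assumes an age model that outputs a probability the user is an adult;
# nothing here reflects OpenAI's actual code or thresholds.
def route_user(p_adult: float, adult_threshold: float = 0.95) -> str:
    """Return which ChatGPT experience to serve."""
    if p_adult >= adult_threshold:
        return "full"        # high-confidence adult prediction
    return "restricted"      # uncertain or likely minor: default to safety

print(route_user(0.99))  # "full"
print(route_user(0.60))  # "restricted" (uncertainty defaults to restricted)
```

Under this kind of rule, anyone the model cannot confidently classify as an adult lands in the restricted tier until they verify their age.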
The company didn't specify what technology it plans to use for age prediction or provide a timeline for deployment beyond saying it's "building toward" the system. OpenAI acknowledged that developing effective age-verification systems isn't straightforward. "Even the most advanced systems will sometimes struggle to predict age," the company wrote.
Recent academic research offers both promise and warnings for OpenAI's age-detection approach. A 2024 Georgia Tech study achieved 96 percent accuracy in detecting underage users from text—but only under controlled conditions with cooperative subjects. When the models attempted to classify specific age groups, accuracy dropped to 54 percent, and they failed completely for some demographics. More concerning: The research used curated datasets where ages were known and users weren't trying to deceive the system—luxuries OpenAI won't have, since some ChatGPT users will actively try to bypass restrictions.
While YouTube and Instagram can potentially analyze faces, posting patterns, and social networks to determine age, ChatGPT must rely solely on conversational text, which can be an unreliable signal of user age. A 2017 study of Twitter-user age prediction conducted by RTI International found that even with metadata like follower counts and posting frequency, text-based models "need continual updating" because "cohort effects in language usage vary over time," with terms like "LOL," for example, shifting from teen to adult usage patterns.
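To make the challenge concrete, here is a toy, text-only age classifier in the spirit of the research described above. Everything in it (the training snippets, labels, and model choice) is invented for illustration; real systems train on large labeled corpora, and nothing here reflects Georgia Tech's models or OpenAI's plans.

```python
# Toy illustration of text-only age classification. The four training
# messages and their labels are fabricated for demonstration purposes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled messages: 1 = likely minor, 0 = likely adult.
texts = [
    "omg my teacher gave us so much hw before homecoming",
    "can't wait for prom, my mom said i can go",
    "finished the quarterly report before my mortgage call",
    "my commute was brutal, the office coffee machine broke again",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Estimated probability that the author of a new message is a minor.
p_minor = model.predict_proba(["does anyone else hate algebra homework"])[0][1]
print(f"estimated probability of minor: {p_minor:.2f}")
```

A model like this latches onto topical cues ("homework," "prom") that an evasive user can trivially avoid, which is exactly the brittleness the research flags.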
Beyond age detection, the ChatGPT parental controls arriving this month will reportedly allow parents to link their accounts with their teenagers' accounts (minimum age 13) through email invitations. Once connected, parents can disable specific features, including ChatGPT's memory function and chat history storage, set blackout hours when teens cannot use the service, and receive notifications when the system "detects" their teen experiencing acute distress.
That last feature comes with a significant caveat: OpenAI states that in rare emergency situations where parents cannot be reached, the company "may involve law enforcement as a next step." The company says expert input will guide this feature's implementation, though it didn't specify which experts or organizations are providing that guidance.
The controls will also let parents "help guide how ChatGPT responds to their teen, based on teen-specific model behavior rules," though OpenAI has not yet elaborated on what those rules entail or how parents would configure them.
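For a rough sense of what a linked-account configuration covering the controls described above might look like, here is a hypothetical sketch. The field names and defaults are invented; OpenAI has published no schema or API for these settings.

```python
# Hypothetical sketch of the parental controls described above.
# Field names and defaults are invented; this is not OpenAI's schema.
from dataclasses import dataclass
from datetime import time

@dataclass
class TeenAccountControls:
    memory_enabled: bool = False           # parents can disable memory
    chat_history_enabled: bool = False     # and chat history storage
    blackout_start: time = time(22, 0)     # no access from 10 pm...
    blackout_end: time = time(6, 0)        # ...until 6 am
    notify_on_acute_distress: bool = True  # alert parents when flagged

print(TeenAccountControls())
```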
OpenAI joins other tech companies that have tried youth-specific versions of their services. YouTube Kids, Instagram Teen Accounts, and TikTok's under-16 restrictions represent similar efforts to create "safer" digital spaces for young users, but teens routinely circumvent age verification through false birthdate entries, borrowed accounts, or technical workarounds. A 2024 BBC report found that 22 percent of children lie on social media platforms about being 18 or over.
Despite the unproven technology behind AI age detection, OpenAI still plans to press ahead with its system, conceding that adults will sacrifice some privacy and flexibility to make it work. Altman acknowledged the tension this creates, given the intimate nature of AI interactions.
"People talk to AI about increasingly personal things; it is different from previous generations of technology, and we believe that they may be one of the most personally sensitive accounts you’ll ever have," Altman wrote in his post.
The safety push follows OpenAI's acknowledgment in August that ChatGPT's safety measures can break down during lengthy conversations—precisely when vulnerable users might need them most. "As the back-and-forth grows, parts of the model's safety training may degrade," the company wrote at the time, noting that while ChatGPT might correctly direct users to suicide hotlines initially, "after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards."
This degradation of safeguards proved tragically consequential in the Adam Raine case. According to the lawsuit, ChatGPT mentioned suicide 1,275 times in conversations with Adam—six times more often than the teen himself—while the system's safety protocols failed to intervene or notify anyone. Stanford University researchers found in July that AI therapy bots can provide dangerous mental health advice, and recent reports have documented cases of vulnerable users developing what some experts informally call "AI Psychosis" after extended chatbot interactions.
OpenAI didn't address how the age-prediction system would handle existing users who have been using ChatGPT without age verification, whether the system would apply to API access, or how it plans to verify ages in jurisdictions with different legal definitions of adulthood.
All users, regardless of age, will continue to see in-app reminders during long ChatGPT sessions that encourage taking breaks—a feature OpenAI introduced earlier this year after reports of users engaging in marathon sessions with the chatbot.