On January 21, 2026, OpenAI announced an age-prediction system for its ChatGPT platform. The system estimates whether an account holder is under 18 from behavioral signals such as how long the account has existed, when it is most active, and the nature of its interactions, and automatically applies content filters and usage limits matched to the inferred age group. If it concludes that an account belongs to a minor, it blocks access to five categories of high-risk content: violence, dangerous challenges, inappropriate role-play, self-harm content, and extreme beauty ideals. When a user's age cannot be determined, a strict safeguard mode is enabled by default. Adults who are wrongly classified as minors can appeal through Persona, a third-party identity-verification service. Parents can additionally use parental controls to set usage time windows, manage memory-access permissions, and receive alerts about potential signs of psychological distress.
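The decision logic described above — block the five categories for confirmed minors, fall back to the strict safeguard mode when age is uncertain — can be sketched roughly as follows. This is a purely hypothetical illustration: OpenAI has not published its model, signal names, or thresholds, so the `AgeAssessment` fields, the `confidence_threshold` value, and the rule structure here are all assumptions for demonstration.

```python
from dataclasses import dataclass

# The five high-risk categories named in the announcement.
RESTRICTED_CATEGORIES = [
    "violence",
    "dangerous challenges",
    "inappropriate role-play",
    "self-harm",
    "extreme beauty ideals",
]

@dataclass
class AgeAssessment:
    predicted_minor: bool  # hypothetical classifier output
    confidence: float      # hypothetical score in [0.0, 1.0]

def apply_policy(assessment: AgeAssessment,
                 confidence_threshold: float = 0.8) -> list[str]:
    """Return the content categories to block for this account.

    Mirrors the described behavior: confirmed minors and uncertain
    cases both receive the strict safeguard mode; only accounts
    confidently assessed as adult are left unrestricted.
    """
    if assessment.predicted_minor:
        # Predicted minor: block all five restricted categories.
        return list(RESTRICTED_CATEGORIES)
    if assessment.confidence < confidence_threshold:
        # Age unclear: default to the strict safeguard mode.
        return list(RESTRICTED_CATEGORIES)
    # Confidently adult: no age-based restrictions.
    return []

# Example: a low-confidence adult prediction still triggers safeguard mode.
blocked = apply_policy(AgeAssessment(predicted_minor=False, confidence=0.5))
```

Note that misclassified adults would exit this restricted state only through the appeal path (Persona verification), not through the automated logic itself.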
