As governments worldwide increase their oversight of online safety for minors, OpenAI has announced a suite of new safeguards for ChatGPT.

A brand-new AI age prediction mechanism

This feature no longer relies solely on the date of birth the user entered; instead, it uses AI to analyze the user's behavioral patterns to estimate their true age. Once the system determines that a user is under 18, it forcibly enables additional safety filters to keep minors away from inappropriate content such as pornography, violence, or self-harm.
Prior to this, OpenAI announced the launch of "ChatGPT for Teens," designed specifically for users aged 13 to 18. Parents or guardians can link their accounts to a teenager's account via invitation, then choose whether to enable conversation memory, whether to retain chat history, and even set "blackout periods" to keep teenagers from using the AI chat tool for long stretches late at night.
The AI doesn't just look at birthdays; it also analyzes what users talk about and when they talk about it.
OpenAI's new age prediction model operates on a distinctive logic. It comprehensively analyzes multiple signals from an account, including the general topics the user discusses and behavioral characteristics such as the hours during which they use ChatGPT. This means that even if a user misrepresents their age at registration, the AI will still classify them as a teenager if their actual conversation content or usage hours match the patterns of a minor.
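OpenAI has not disclosed the model's internals, so as a purely illustrative sketch, a signal-based classifier of this kind might weigh a few behavioral features against the self-reported age. The feature names and weights below are assumptions, not OpenAI's actual system:

```python
from dataclasses import dataclass

# Hypothetical feature set; OpenAI has not published its real signals or weights.
@dataclass
class AccountSignals:
    minor_topic_ratio: float   # share of conversations on teen-typical topics (0..1)
    after_school_ratio: float  # share of sessions in typical after-school hours (0..1)
    stated_age: int            # age entered at registration

def predict_is_minor(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Toy linear scorer: behavioral signals can override the stated age."""
    score = 0.7 * s.minor_topic_ratio + 0.3 * s.after_school_ratio
    # A stated adult age lowers the score but does not decide the outcome on its own.
    if s.stated_age >= 18:
        score -= 0.2
    return score >= threshold
```

The key property the article describes is captured here: a user who registers as an adult can still be classified as a minor when the behavioral score is high enough.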
Once flagged as under 18 by the system, ChatGPT will automatically enable strict safety settings. These filters will block or blur the following content:
• Excessive violence and gore
• Content involving sex, romance, or role-playing
• Self-harm or dangerous behavior
• Topics related to extreme aesthetics, unhealthy eating habits, or body image anxiety
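As an illustrative sketch of how category-based filtering of this kind typically works (the category labels below mirror the list above but are assumptions, not OpenAI's real taxonomy):

```python
# Hypothetical category labels; not OpenAI's actual moderation taxonomy.
MINOR_BLOCKED_CATEGORIES = {
    "graphic_violence",
    "sexual_or_romantic_roleplay",
    "self_harm_or_dangerous_acts",
    "body_image_or_disordered_eating",
}

def should_block(detected_categories: set[str], is_minor: bool) -> bool:
    """Block a response when a minor's conversation hits any restricted category."""
    return is_minor and bool(detected_categories & MINOR_BLOCKED_CATEGORIES)
```

The same detected categories pass through for adult accounts; the restriction only activates when the account is flagged as a minor.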
This feature has already been rolled out globally, with availability in the EU to follow in the coming weeks.
What if I'm wrongly flagged? You'll need to upload identity documents for verification
Because the prediction is AI-based, misjudgments are possible. If an adult user is mistakenly identified as a minor and "downgraded" to restricted mode, OpenAI provides an appeal channel.
Users can find the "Verify" option in the settings menu of the web version of ChatGPT. The system will then redirect users to a third-party partner, Persona, for identity verification. The verification process requires users to take an online selfie and upload their ID card, driver's license, or passport.
OpenAI emphasizes that these documents and photos are used only by Persona for matching. OpenAI itself receives only a pass/fail result and the user's age, and Persona deletes the image data within 7 days to protect privacy.
Global efforts to strengthen online safety for minors
OpenAI is not the only company using AI to estimate user age. Last year, Meta also began using AI to identify users' ages, and in December last year the Australian government took a firm stance, legislating to ban users under 16 from social media services such as TikTok and Facebook. The UK is considering following suit. Clearly, relying on users to honestly fill in their birthdays is no longer enough to meet current regulatory requirements, and proactive AI-driven age detection has become the market trend.
Analysis of viewpoints
While OpenAI's "AI bone age measurement" is well-intentioned, it has also raised many privacy concerns.
First, estimating age from "conversation topics" means the AI must, to some degree, semantically label users' historical conversations. Although OpenAI says users can opt out of having their data used to train the model, this sense of being "monitored in real time" may still make privacy-conscious users uncomfortable.
Secondly, many adult users use ChatGPT for creative writing, including fiction and role-play. If their chat topics read as juvenile, or they habitually use the service in after-school afternoon hours, the AI may misclassify them as minors and lock down features, causing real inconvenience. Persona verification can unlock the account, but handing one's identity documents to a third-party platform is yet another privacy hurdle.
However, under the stringent European and American regulations on AI safety and child protection, mandatory AI age-rating mechanisms of this kind, built on the principle that it is better to over-block than to let harmful content through, will likely become increasingly common, eventually a standard feature of generative AI services.