By Jimmy Roussel, CEO, IDScan.net

OpenAI’s decision to introduce age prediction features within ChatGPT is a welcome and necessary development. It reflects a growing recognition that AI platforms are no longer neutral tools but environments where meaningful safeguards are required, particularly when it comes to protecting minors. As generative AI becomes embedded in education, entertainment, search, and everyday decision making, the responsibility to reduce harm must evolve at the same pace as the technology itself.

Age prediction is an attempt to address a long-standing problem at scale. Open platforms attract users of all ages, and traditional age gates have repeatedly failed to prevent underage access, in part because minors often lack official documentation such as driver's licences or passports. Asking users to self-declare their age has proven ineffective, especially in digital environments shaped by anonymity, curiosity, and social pressure. In that context, using behavioural signals to estimate whether a user may be underage is a logical step forward, and OpenAI deserves credit for taking action rather than standing still.

Prediction Is Not Protection

That said, age prediction should be understood for what it is. It is useful as an indicator but cannot be treated as a foolproof safeguard. Predictive models operate on probability rather than certainty, and that distinction matters when platforms provide access to vast amounts of sensitive, age-restricted, or potentially harmful information.

At IDScan.net, we process more than 35 million age verifications every month across industries with very different risk profiles. From global brands such as Apple, Chevrolet, IBM, and Shell to platforms operating in higher-risk categories where age assurance is critical, the lesson is consistent. When the consequences of getting it wrong are serious, estimation alone is not enough.

AI-driven age prediction systems rely on cues such as language patterns, behavioural signals, and interaction styles. While these models can be effective at flagging risk, they are also fragile. Younger users are often highly capable of adapting their behaviour to bypass controls: they learn to adjust prompts, mimic adult language, or exploit gaps in moderation systems. As predictive safeguards become more widely known, the incentive to evade them only increases.

There is also a danger of false confidence. A platform may assume risk has been mitigated because a model assigns a high likelihood that a user is an adult, but in reality that confidence may be misplaced. When errors occur, they do not exist in isolation. They expose minors to content and interactions they should not encounter, and they expose platforms to regulatory scrutiny, legal liability, and reputational damage.
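A back-of-the-envelope calculation shows why. The figures below are entirely hypothetical, chosen only to illustrate the mechanics: even a model that is right the overwhelming majority of the time still labels a meaningful share of minors as adults.

```python
# Hypothetical illustration of why a "high likelihood of adult" verdict
# can still mislead. None of these figures are real platform data.

minor_share = 0.10          # assume 10% of users are minors
true_positive_rate = 0.95   # model correctly flags 95% of minors
false_positive_rate = 0.05  # model wrongly flags 5% of adults as minors

# Among users the model labels "adult", what fraction are actually minors?
adults_labelled_adult = (1 - minor_share) * (1 - false_positive_rate)
minors_labelled_adult = minor_share * (1 - true_positive_rate)
slipped_through = minors_labelled_adult / (minors_labelled_adult + adults_labelled_adult)

print(f"Share of 'adult' verdicts that are really minors: {slipped_through:.2%}")
# ~0.58% -- small as a percentage, but on a platform with hundreds of
# millions of users it translates into a very large number of children.
```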

The Limits of Prediction in an Open AI Environment

These limitations become more pronounced when we consider the breadth of what AI systems can do. Unrestricted search and content generation capabilities mean that age-related risk extends far beyond explicit material. It includes exposure to misinformation, self-harm content, financial fraud guidance, and the unintentional sharing of personal data.

Age prediction on its own does little to address these risks if it is not connected to stronger, enforceable controls. Without a clear mechanism for intervention, platforms are left identifying risk without meaningfully reducing it.

This challenge exists against a broader backdrop that is becoming increasingly difficult to ignore. Identity theft among children is rising, precisely because their identities often go unchecked for years. At the same time, AI and deepfake technologies are making fraud more convincing and easier to scale, so platforms that engage large, mixed-age user bases are now operating in an environment where harm can be created quickly and detected too late.

The good news is that regulators are taking note and responding accordingly. Across multiple jurisdictions, there is growing pressure on digital platforms to demonstrate that they are taking proportionate and effective steps to protect minors. Increasingly, that expectation goes beyond predictive measures and towards demonstrable age assurance. The direction of travel is clear. Platforms will be expected not just to infer who their users might be, but to know when it matters most.

A Layered Approach to Safeguarding

None of this means privacy should be abandoned or that every user interaction requires intrusive checks. The real challenge lies in balance. Accessibility, trust, and safeguarding must coexist. The most effective strategies are layered. Age prediction can act as an early signal. When that signal indicates risk, robust age or identity verification can be introduced in a targeted and proportionate way.
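In practice, that layered flow can be as simple as a score and an escalation path. The sketch below is illustrative only; every name in it (estimate_age_score, request_document_verification, the thresholds) is a hypothetical placeholder rather than any particular vendor's API.

```python
# Illustrative sketch of a layered age-assurance flow. Every name and
# number here is a hypothetical placeholder, not a real API or policy.

ADULT_THRESHOLD = 0.90   # above this, treat the prediction as low risk
MINOR_THRESHOLD = 0.40   # below this, apply minor-safe settings outright

def estimate_age_score(user_signals: dict) -> float:
    """Stand-in for a behavioural age-prediction model (returns P(adult))."""
    return user_signals.get("predicted_adult_probability", 0.5)

def request_document_verification(user_id: str) -> bool:
    """Stand-in for a step-up document or identity check."""
    return False  # a real flow would call out to a verification service

def gate_session(user_id: str, user_signals: dict) -> str:
    score = estimate_age_score(user_signals)

    if score >= ADULT_THRESHOLD:
        return "allow"                    # low friction for the majority
    if score <= MINOR_THRESHOLD:
        return "apply_minor_safeguards"   # restrict without demanding ID

    # Ambiguous middle band: escalate to robust, targeted verification
    if request_document_verification(user_id):
        return "allow"
    return "apply_minor_safeguards"

print(gate_session("u123", {"predicted_adult_probability": 0.65}))
# -> "apply_minor_safeguards" (escalated, verification not completed)
```

The thresholds themselves matter less than the shape of the flow: prediction decides where friction is applied, and verification decides whether access is granted.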

This approach reduces friction for the majority of users while ensuring stronger protection where it is genuinely needed. It also aligns more closely with regulatory expectations and with public concern about child safety online.

OpenAI’s introduction of age prediction should be seen as the beginning of that journey, but there is no doubt that more can and must be done. Predictive models can help platforms scale awareness and identify potential risk, but they cannot carry the full weight of responsibility on their own.

As AI continues to reshape how people access information and interact online, the standard for protecting minors will continue to rise. Age prediction is a valuable tool in that effort, but real protection demands greater accuracy and certainty if we are to control what children can access online.
