Should AI Be Age-Restricted? Yes — But the Smart Answer Is Tiered Access

AI should not be a free-for-all for children. The strongest evidence supports 13 as a floor for independent use, 16 for broader general access, and 18 for AI companions.

Introduction: The Real Problem Is Not “AI.” It Is Unrestricted Access to High-Risk AI.

Children are already using AI for homework, advice, entertainment, and, increasingly, something more personal: companionship. That last category matters. General-purpose AI, classroom AI, and human-like companion bots do not carry the same level of risk, so they should not be governed as if they do. UNESCO’s school guidance, UNICEF’s child-rights framework, and the OECD’s education research all point in the same direction: protect children without blocking legitimate learning uses.

So yes, AI should be age-restricted. But not with one blunt global ban. The stronger, more defensible model is tiered: no independent consumer chatbot use under 13, restricted general-purpose access from 13 to 15, broader access from 16 onward, and the toughest limits for companion-style or emotionally manipulative AI. That is not settled world law yet. It is the clearest policy direction emerging from current platform rules, child-safety guidance, and regulator action.

Why Age Limits Are Justified

Adolescents are not helpless, but they are still developing executive control, self-regulation, and resistance to manipulation. NIMH’s guidance on the teen brain stresses that adolescence is a period of rapid brain development, especially in systems tied to learning, reward, and control. UNICEF’s updated guidance on AI and children argues that AI systems children interact with should be designed and governed around children’s safety, development, and rights, not treated as adult products by default.

That matters because modern AI is not just a search box. Some systems are built to feel personal, emotionally responsive, and endlessly available. Australia’s eSafety Commissioner warns that AI companions simulate personal relationships and can expose children to privacy risks, harmful content, and unhealthy dependence. Common Sense Media has gone further, concluding that social AI companions pose unacceptable risks for minors and should not be used by anyone under 18.

“Age assurance is essential” — but it must be “the least intrusive possible.”

That quote matters because it captures the real policy balance. Children need protection, but not through lazy, invasive systems that vacuum up more personal data than necessary. The right answer is not “no age checks.” It is proportionate, privacy-preserving age assurance matched to actual risk.

The Core Risks Are Not Theoretical

Here is where the case for restriction gets hard to ignore:

  • Emotional dependency is a real design risk. AI companions are built to feel attentive, validating, and always on. Common Sense Media found that nearly three in four teens had used AI companions, with half using them regularly.
  • Children can be exposed to inappropriate or harmful content. OpenAI now applies extra safeguards to accounts it predicts are under 18, and eSafety said this week that popular AI companion chatbots are failing to adequately protect Australian children from sexually explicit content.
  • AI can blur authority and trust. UNICEF warns that children may interact with AI systems not designed for them, making child-rights and safety protections necessary even when children are not the intended market.
  • The privacy tradeoff is real. Age assurance can help enforce protections, but European privacy regulators and NIST both make clear that current tools vary in performance and should be used carefully.

The Market Has Already Admitted Minors Need Different Rules

One of the clearest signs that unrestricted child access is indefensible is simple: the major AI companies are already drawing age lines.

  • OpenAI: ChatGPT is not meant for children under 13, and users under 18 need parental permission. OpenAI is also rolling out age prediction so under-18 accounts receive stricter safeguards automatically.
  • Google: Gemini can be enabled for children under 13 only through supervised accounts with parental controls, meaning even this younger access happens only in a managed environment.
  • Anthropic: Claude.ai requires users to be at least 18.
  • Character.AI: The platform says users must be 13+ in most places and 16+ in the EU.

That is not a coherent industry standard. It is something more revealing: companies themselves do not agree that the same access level is safe for every type of AI. That is exactly why governments should regulate by risk category, not by slogans.

The Strongest Age Model Is Tiered, Not One-Size-Fits-All

Here is the most defensible framework based on current evidence and policy direction:

Under 13: No independent consumer chatbot use

Supervised classroom or family-managed educational use can make sense in limited cases, but independent access is too early. UNESCO’s education guidance explicitly recommends 13 as the minimum age for classroom use of generative AI, and OpenAI uses 13 as its floor for ChatGPT. Google’s supervised under-13 model reinforces the same point: younger children should not be treated like standard consumer users.

Ages 13 to 15: Restricted access only

This is the right range for general-purpose AI with parental consent, default teen safeguards, and hard limits on sexual roleplay, self-harm content, emotionally manipulative companion features, and other high-risk interactions. OpenAI’s under-18 safeguards and the broader regulatory push for stronger minor protections support this middle tier.

Ages 16 to 17: Broader general access, but not adult-mode access

Sixteen is the strongest candidate for broader unsupervised access to general-purpose AI, not because 16-year-olds are adults, but because that is where policy momentum is heading for higher-autonomy online use. The European Parliament has called for 16 as the default threshold for access to social media, video-sharing platforms, and AI companions without parental authorization, while still recognizing 13 as a lower hard floor.

18+: AI companions, romantic bots, and other high-risk relational AI

This is where the evidence hardens. Anthropic already uses an 18+ rule for Claude.ai, and Common Sense Media recommends that AI companions not be used by anyone under 18. That does not mean every chatbot should be adult-only. It means systems designed to mimic intimacy, encourage attachment, or operate like synthetic relationships should not be sold to minors.

This framework is a synthesis of current evidence and policy trends, not a universally adopted law. But it is far stronger than pretending one flat age rule can do the whole job.
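To make the tiers concrete, here is a minimal sketch of how a platform might encode the framework above as an access check. It is an illustration only, assuming hypothetical category labels, tier names, and a verified age input; it is not any vendor’s actual policy logic.

    from enum import Enum

    class Category(Enum):
        EDUCATIONAL = "supervised-educational"  # classroom or family-managed tools
        GENERAL = "general-purpose"             # search, homework help, writing, coding
        COMPANION = "companion"                 # relational, intimacy-simulating bots

    def access_tier(age: int, category: Category, parental_consent: bool = False) -> str:
        """Map a verified age and product category to an access tier (hypothetical labels)."""
        if category is Category.COMPANION:
            # Companion-style AI: 18+ regardless of parental consent.
            return "allowed" if age >= 18 else "blocked"
        if age < 13:
            # Under 13: no independent consumer chatbot use; supervised educational use only.
            return "supervised-only" if category is Category.EDUCATIONAL else "blocked"
        if age < 16:
            # 13-15: restricted access, gated on parental consent plus teen safeguards.
            return "restricted-teen-safeguards" if parental_consent else "blocked"
        if age < 18:
            # 16-17: broader general access, but under-18 safety defaults stay on.
            return "teen-safeguards"
        return "full"

The point of the sketch is the shape of the decision, not the labels: risk category is checked before age, because the argument throughout is that companion-style AI follows different rules at every age.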

AI Age Restriction Is Already Happening Worldwide

This is not a future debate. It is already underway.

  • European Union: The European Commission has issued DSA guidelines on protecting minors online and introduced an age-verification app prototype. The European Parliament has called for 16 as the default digital age threshold, with 13 as a minimum floor and the same debate extending to AI companions.
  • Australia: eSafety’s regulatory guidance already uses age assurance to stop children accessing certain age-restricted online material, and eSafety has separately warned that AI companions pose risks to children and young people.
  • United Kingdom: The government opened a national consultation this month that explicitly includes potential age restrictions for AI chatbots.
  • China: Draft rules released in late 2025 target emotionally interactive AI, including protections for minors such as a dedicated minor mode and guardian consent requirements for emotional-companionship services.
  • United States: State-level action has already begun, especially around companion bots. New York’s companion-chatbot safeguards are already in effect, and companion-bot regulation is spreading faster than limits on general-purpose AI.

The pattern is obvious: regulators are not waiting for perfect certainty. They are moving first on the highest-risk uses.

How Soon Should Restrictions Arrive? Now.

The timing question is easier than the age question. Restrictions should start now, because companies are already rolling out age-aware systems and governments are already writing the rules. OpenAI’s age-prediction system is live, the EU’s minors-protection framework is active, the UK is consulting, Australia is enforcing age assurance for restricted material, and eSafety’s latest findings show the child-safety problem is current, not hypothetical.

What should not happen is a panicked overcorrection that treats every educational AI tool like an erotic companion bot. OECD and UNESCO both make clear that generative AI can support learning when guided by teachers, strong pedagogy, and age-appropriate safeguards. The goal is not to cut children off from AI. The goal is to stop handing children adult systems and pretending that counts as innovation.

What Smart Regulation Should Require

Any serious AI age-restriction policy should do five things:

  • Separate general-purpose AI from companion AI. The riskiest products need the strictest rules.
  • Set under-18 safety defaults by design. OpenAI’s teen safeguards point in the right direction.
  • Use privacy-preserving age assurance, not maximum data collection. That is the position of both the European Commission and the EDPB.
  • Require human escalation for self-harm, exploitation, or crisis signals. That logic is already showing up in state companion-bot laws and platform safety measures.
  • Protect educational access while blocking synthetic intimacy for minors. That is the line that best fits UNESCO, OECD, UNICEF, and Common Sense Media taken together.

Conclusion: Restrict Manipulation, Not Learning

AI should be age-restricted. But the right answer is not a lazy blanket ban and not a free-for-all. It is a tiered model built around real risk.

Under 13, no independent chatbot access. From 13 to 15, restricted general AI only. From 16, broader access to general-purpose AI. And for AI companions, romantic bots, and other systems designed to simulate emotional relationships, 18+ is the strongest line the current evidence supports. That protects education, respects development, and cuts off the most dangerous forms of child exposure before they become normal.