Washington: Meta has announced that it will begin using artificial intelligence tools to detect potentially underage users by studying visual indicators such as height, bone structure, facial proportions and overall appearance in photos and videos shared on its platforms.
According to reports, the AI system will also assess captions, bios, comments, interactions and broader content patterns to determine whether an account likely belongs to someone below 13, the minimum age required in many regions.
Meta clarified that the technology is not facial recognition software and is not designed to identify individuals. Instead, it estimates age ranges using what the company described as “general themes and visual cues.”
Accounts flagged as potentially underage could face suspension unless their users complete official age verification checks.
The rollout comes amid growing global pressure on social media companies to strengthen protections for children online. Governments and regulators in the United States, Europe and Australia have increasingly criticised platforms for relying on self-reported birth dates, arguing that the system fails to prevent minors from accessing age-inappropriate content.
Meta has already expanded AI-powered “Teen Account” protections across Facebook and Instagram in parts of Europe and the US, while Australia is preparing stricter regulations around minors’ access to social media platforms.
The development also signals a broader shift in how online moderation and identity verification are evolving. For years, social media companies relied heavily on manual moderation and user reports. Now, increasingly advanced AI systems are being trained to automatically infer age, behaviour and risk patterns.
However, the move is expected to spark fresh concerns over privacy, bias and accuracy. Critics have questioned whether physical appearance alone can reliably distinguish younger children from teenagers who are just above the minimum age threshold.