Meta is rolling out a series of updates to how it identifies and manages underage users across Instagram, Facebook, and Messenger, as the company faces growing scrutiny over child safety on its platforms.
The social media giant says it is expanding the use of artificial intelligence to detect accounts that may belong to users under the age of 13, even when those users have provided a false date of birth. The technology analyses text across posts, comments, bios, and captions for contextual clues — such as references to school grades or birthday celebrations — to flag potentially underage accounts.
Meta is also introducing visual analysis as an additional detection method. The company says this does not constitute facial recognition; instead, the AI examines general visual cues, such as height or bone structure, to estimate a user's approximate age range. Accounts identified as potentially underage will be deactivated until the holder can verify their age.
The company is additionally simplifying its reporting process, making it easier for users to flag underage accounts both within the app and via its Help Centre. AI models are now being used to supplement human review teams, with Meta claiming the automated system delivers faster and more consistent outcomes.
Meta also says it is extending its Teen Accounts feature — which restricts content and limits who can contact younger users — to all 27 European Union member states, as well as Brazil. The protections are also being introduced on Facebook in the US for the first time, with the UK and the EU to follow in June.
Parents in the US will begin receiving notifications this month with guidance on how to confirm their children's ages on both platforms. The company is also calling on legislators to require app stores to verify users' ages at the platform level, arguing that a centralised approach would be more consistent and privacy-conscious than compelling individual apps to manage verification independently.