
Meta is introducing stricter controls on how teenagers interact with its apps – including new parental permissions for Instagram and an expanded rollout of Teen Accounts to Facebook and Messenger.
More Built-In Restrictions for Younger Teens
Meta’s Teen Accounts are getting tougher. Originally launched in September 2024 for Instagram, these accounts are designed to give 13 to 15-year-olds a more protected experience by default. Now, new restrictions are being layered on top, and the whole setup is expanding to Facebook and Messenger for the first time.
Teen Accounts come with a suite of safety-first settings, including private profiles, stricter content filters, overnight notification pauses, and limited messaging capabilities. Teens can’t be messaged by strangers, and they get reminders to step away from the app after an hour. So far, Meta says the changes have been well received, with a reported 97 per cent of younger teens sticking to the default restrictions.
However, with growing pressure from regulators, charities and concerned parents, Meta says it’s now raising the bar even further.
Stricter Limits on Live Streaming and Messages
The biggest headline change is that teens under 16 will now need a parent’s permission to go live on Instagram. Meta says this is in response to widespread concerns from parents about the risks of strangers watching (or contacting) their children in real time.
There’s also a clampdown on direct messages. For example, Instagram’s existing tool to blur suspected nude images in DMs will remain on by default, and teens under 16 won’t be able to turn it off without a parent’s sign-off.
These updates are due to roll out in the coming months. According to Meta, the aim is to “give parents more peace of mind across Meta apps” and strengthen the platform’s age-appropriate protections.
Coming to Facebook and Messenger
Until now, Teen Accounts have been exclusive to Instagram but, from this week, Facebook and Messenger will be joining the club.
To begin with, teen users in the UK, US, Australia and Canada will be automatically moved into Teen Accounts, with more countries to follow. Much like on Instagram, the Facebook and Messenger versions will restrict who can interact with young users, limit what kind of content they see, and introduce features to encourage healthy screen-time habits.
Mock-up screenshots released by Meta show Facebook users receiving alerts that their account will soon become a Teen Account, along with messaging prompts like “Soon your settings will be updated automatically to protect you from unwanted contact.”
This shift is part of Meta’s broader attempt to create a consistent safety experience across its ecosystem, but it also hints at a more strategic goal: heading off regulation by acting before governments step in.
Regulation, Reputation and Parental Pressure
This latest move by Meta, therefore, appears to be motivated by a mix of factors, the main ones being:
– Public and political scrutiny is intensifying. In the UK, for example, the Online Safety Act now legally requires platforms to prevent children from encountering harmful and illegal content, and failing to do so could land companies like Meta with serious consequences from Ofcom, which now holds enforcement powers.
– Meta has faced reputational damage for years over teen safety, from whistleblower claims about Instagram’s impact on teenage mental health to reports of underage users being served inappropriate content by the algorithm.
– Meta appears to be listening to parents. A recent Ipsos survey commissioned by the company found that 94 per cent of US parents believe Teen Accounts are helpful. The company says these accounts were created “with parents in mind,” and the latest changes respond to their most common worries, particularly around unwanted contact and exposure to sensitive content.
Critics Say It’s Not Enough – or Still Too Vague
Despite the PR-friendly messaging, not everyone is convinced. For example, campaigners have argued that Meta still hasn’t proven whether Teen Accounts are actually making a difference. Some have commented on the apparent silence from Mark Zuckerberg about the effectiveness of Teen Accounts and have questioned whether teens are still being algorithmically recommended harmful content – something Meta hasn’t publicly clarified.
Matthew Sowemimo, head of child safety policy at the NSPCC, has welcomed the new measures but stressed that they must be paired with proactive content moderation. In his words, “dangerous content shouldn’t be there in the first place.”
There are also concerns about enforcement. Teen Accounts depend heavily on users being honest about their age, yet Ofcom research suggests 22 per cent of 8 to 17-year-olds claim to be over 18 on social media platforms. Meta says it is addressing this with AI tools and video selfies to verify age more reliably, though that approach raises ethical questions of its own.
Other critics have argued that Meta should take broader responsibility for the data-driven, commercialised practices that ultimately shape young users’ experiences across its social platforms.
What Are Other Platforms Doing?
Meta isn’t alone in having to respond to pressure on this issue. Other platforms taking similar measures (for similar reasons) include:
– Roblox, the online platform for creating and playing user-generated games, which has recently introduced parental controls that allow parents to block individual games or experiences.
– YouTube and TikTok, which have both added time limits and privacy defaults for teen users, though their approaches differ in terms of enforcement and transparency.
That said, few platforms have rolled out anything as broad as Teen Accounts across multiple services. The move could represent a kind of rebalancing in the social media landscape, with platforms now vying to be seen as the safest environment for young users, not just the most addictive. However, questions remain over how effectively any system can stop determined teens from sidestepping restrictions.
What This Means for Parents, Platforms and the Public
The expansion of Teen Accounts appears to be a signal that Meta is taking the issue of online safety more seriously – or at least wants to be seen as doing so.
For parents, it offers a more unified, manageable set of controls across Instagram, Facebook and Messenger. For regulators, it may buy Meta some goodwill, though it doesn’t exempt the company from scrutiny. Also, for competitors, it may set a new benchmark in platform-wide teen protections.
That said, as with all digital safety initiatives, the effectiveness will depend on execution, transparency, and the company’s willingness to be held accountable. Some may say that whether this is a genuine shift or just another layer of corporate risk management remains to be seen.
What Does This Mean For Your Business?
For a company often accused of doing too little, too late, this multi-app expansion suggests a more joined-up approach to protecting younger users. However, it also highlights the growing complexities in balancing child safety with platform growth, user freedom, and commercial goals.
The scale and scope of the changes, from parental controls on live streaming to AI-led age verification, appear to indicate that Meta is serious about creating a more ring-fenced space for teens. However, it remains to be seen whether these technical safeguards will hold up in practice. After all, digital workarounds are nothing new for today’s tech-savvy teenagers, and critics are right to question the real-world impact without full transparency on outcomes and data.
UK businesses in the digital and tech space should be paying close attention. As regulation like the Online Safety Act continues to tighten, the onus on companies to demonstrate proactive, meaningful protections will only increase. Platforms offering youth-focused content or services may now face growing pressure to match, or even exceed, what Meta is implementing. For agencies and developers, this could mean rethinking how parental consent, privacy defaults and content moderation are built into products from day one – not just bolted on later.
Meanwhile, parents may feel some relief from the growing consistency across Meta’s apps, but they shouldn’t be left holding the reins alone. As experts have pointed out, it’s not enough for tech firms to hand families the tools – they also need to take responsibility for the ecosystem they’ve built, including the algorithms and engagement models that still shape user experiences behind the scenes.
It seems, therefore, that the introduction of stricter Teen Account controls is a step in the right direction, but whether it sets a new gold standard or simply buys Meta more breathing room will depend on what comes next. For now, the move raises the bar, but also the questions.