Featured Article: Oz Social Media Ban For Kids

Australia’s government has enacted legislation prohibiting children under 16 from accessing social media platforms, aiming to protect them from online harms such as cyberbullying, exploitation, and exposure to inappropriate material. 

The Online Safety Amendment 

Under the new legislation, the Online Safety Amendment (Social Media Minimum Age) Bill 2024, passed by the Australian Parliament in November 2024, the ban will apply to social media platforms including Facebook, Instagram, TikTok, Snapchat, Reddit, and X (formerly Twitter). However, messaging services, gaming platforms, and sites with an educational function, such as YouTube, are exempt from these restrictions, reflecting their different usage and content dynamics. 

Toughest Laws 

Australia’s decision to enact what are now the world’s toughest social media regulations has ignited a global debate about the role of social media in young people’s lives and the responsibility of tech companies in safeguarding their well-being. 

Why Has Australia Taken This Step? 

The new legislation, which has been championed by Prime Minister Anthony Albanese, is seen as a necessary measure to protect children from the “harms” of social media. It addresses growing concerns about the impact of online platforms on young people’s mental and physical health, including issues like cyberbullying, exposure to inappropriate content, and the addictive nature of these apps. 

As Prime Minister Albanese put it, “Parents deserve to know we have their backs”, a remark highlighting the emotional toll on families struggling to manage their children’s online activity. A YouGov poll, for example, has revealed that 77 per cent of Australians support the ban, reflecting broad public support for tighter controls. 

The decision follows mounting evidence of the detrimental effects of social media on young users. A recent survey by the charity Stem4 revealed that 86 per cent of people aged 12 to 21 are worried about the negative impact of social media on their mental health. Specific concerns include cyberbullying, scams, predatory behaviour, and harmful content promoting self-harm or disordered eating. These issues have, in tragic cases, contributed to young people’s deaths by suicide, amplifying calls for decisive action. 

What Does the Law Actually Entail? 

The new legislation gives social media platforms 12 months to prevent under-16s from accessing their services, with non-compliance punishable by fines of up to AUD 50 million (around £25.7 million). The ban will apply to platforms like X (formerly Twitter), Instagram, TikTok, Snapchat, and Facebook, while sites like YouTube and LinkedIn have been excluded because of their nature or existing restrictions. 

Enforcement 

Enforcement will be overseen by the eSafety Commissioner, with age verification technology expected to play a crucial role. However, details about the specific mechanisms remain unclear, sparking concerns about feasibility and privacy. Critics argue that without robust and reliable technology, such as biometric checks or ID-based verification, children could easily bypass restrictions using virtual private networks (VPNs) or fake accounts. 

Unlike similar laws in other countries, Australia’s ban provides no exemptions for parental consent or existing users, making it the most stringent to date. 

The Global Context and Potential Impact 

With this move, Australia now joins a growing list of countries seeking to regulate social media access for young people. For example, Ireland and Spain already enforce a minimum age of 16, while France requires parental consent for under-15s to join such platforms. However, research has shown that children frequently circumvent these restrictions, raising doubts about their effectiveness. 

In the UK, for example, the issue of underage social media use has also drawn significant attention. A survey by Ofcom, the UK’s media regulator, found that 22 per cent of children aged 8 to 17 lie about their age to set up adult accounts. The lack of effective age verification has led to widespread exposure to harmful content. The Online Safety Act 2023, whose child-safety duties come into force in 2025, will require platforms to implement stricter age verification, though critics argue it does not go far enough. 

Could The UK Introduce Similar Legislation? 

In response to Australia’s ban on social media for under-16s, UK Technology Secretary Peter Kyle has indicated similar measures are “on the table” but has emphasised the need for careful consideration to avoid unintended consequences. 

The Online Safety Act 2023 already requires social media platforms in the UK to implement age restrictions and robust verification systems to protect children. The government is also exploring additional steps, including research into the impact of social media on young people, signalling that stricter regulations may follow. 

Critics have warned, however, that bans could push children to unregulated platforms or lead to falsified ages, complicating enforcement, while also raising concerns about limiting access to information and social connection. The UK government is, therefore, proceeding cautiously, consulting widely to balance online safety with preserving children’s digital freedoms. 

Responses from Tech Companies 

It’s perhaps no surprise that the new Australian law has met with fierce resistance from tech giants. Companies like Meta (owner of Facebook and Instagram), Snap (the parent company of Snapchat), and TikTok have criticised the legislation as vague and impractical. Meta argues that the law “ignores evidence” from child safety experts and fails to address its stated goal of protecting young users. 

LinkedIn, however, has taken a different stance, asserting that its professional networking platform is “too dull for kids” and does not attract underage users. By distancing itself from mainstream social media, LinkedIn is presumably hoping to avoid the logistical and financial burden of implementing age verification measures. 

TikTok Australia has also raised concerns about the government’s approach, warning of “unintended consequences” stemming from rushed implementation. The platform’s submission to lawmakers stressed the need for more research and collaboration to develop effective solutions. 

Challenges and Criticisms 

While many support the ban as a necessary step to protect children, others have labelled it a “blunt instrument” that oversimplifies a complex issue. Critics point out several challenges, including: 

– Privacy risks. The reliance on age verification technology raises significant privacy concerns. Biometric or ID-based systems could compromise users’ personal data, creating new vulnerabilities. 

– Ineffectiveness. Past attempts to restrict social media access have often been undermined by tech-savvy youths. VPNs, fake accounts, and shared logins enable children to bypass restrictions, potentially driving them towards less regulated corners of the internet. 

– Exclusion of young voices. Advocacy groups like the eSafety Youth Council have criticised the Australian government for excluding young people from the legislative process. They argue that teenagers, as primary stakeholders, should have a say in shaping policies that directly affect them. 

– Potential for social isolation. For many young people, social media serves as a primary mode of communication and community-building. Removing access could exacerbate feelings of isolation, particularly for those in remote or marginalised communities. 

– Impact on parents. The ban places significant responsibility on parents to enforce the rules, even as they grapple with the practicalities of managing their children’s online activity. 

A Growing Global Debate 

Australia’s legislation has undoubtedly set a precedent, prompting other nations to re-evaluate their own policies. Norway, for example, has already expressed interest in adopting similar measures, while France and the UK are monitoring the situation closely. The debate highlights the delicate balance between protecting young people and preserving their autonomy in an increasingly digital world. 

As the world watches Australia’s bold experiment, it’s clear that the conversation about children and social media is far from over. Whether other countries will follow suit remains to be seen, but the spotlight is firmly on the responsibilities of tech companies, governments, and parents in shaping a safer online future for the next generation. 

What Does This Mean For Your Business? 

Australia’s groundbreaking legislation banning under-16s from social media represents a bold attempt to address the pressing challenges of unregulated online access for young people. By setting the strictest age limits globally, the country has ignited a conversation about the risks of social media, the responsibilities of tech companies, and the role of governments in safeguarding children. 

Supporters view the move as a necessary step to combat issues like cyberbullying, exploitation, and harmful content, prioritising children’s well-being over corporate interests. However, it also presents significant challenges for social media companies, which must invest in robust age-verification systems and may lose a vital demographic that drives engagement and growth. Advertisers, too, are likely to feel the impact, particularly in industries targeting younger audiences. Businesses dependent on social media for branding and sales may need to rethink strategies, especially those aimed at families and younger consumers. 

Critics warn that the policy may push children to unregulated platforms, complicate enforcement, and raise privacy concerns while limiting access to digital spaces that play a role in communication and learning. Internationally, the legislation has sparked interest, with nations including the UK monitoring its progress while recognising the complexities of similar measures. 

Australia’s decision, therefore, challenges governments, tech companies, and society to rethink how children engage with social media. Its success or failure will influence global debates on online safety, shaping not only protections for young users but also the futures of businesses and advertisers online.