EU Parliament Declares the Internet “Strictly Unsafe” for Children

MEPs call for age limits, stricter platform rules and safeguards against addictive features, with Cyprus examining its own measures


RAFAELLA SPANOU


The European Parliament has approved, by a large majority, a resolution calling for far stricter protections for minors online. Adopted on 26 November 2025, the non-legislative resolution sets 16 as the minimum age for autonomous use of social networks, video-sharing platforms and AI applications. Teenagers aged 13 to 16 would be allowed access only with parental consent, enabling parents to monitor their children’s digital activity.

MEPs also supported the creation of a unified EU age-verification system and a digital identity wallet, emphasising accuracy and privacy protection. The resolution further proposes that senior executives of tech companies carry personal responsibility for serious or repeated breaches of EU rules, particularly those concerning child protection.

Measures against addictive design

The resolution targets harmful and addictive platform features, including endless scrolling, autoplay, exploitative games and reward-based engagement systems. It proposes restrictions on targeted advertising, influencer marketing, misleading design patterns and recommendation algorithms based on behavioural data.

It also calls for a ban on gambling-like mechanisms such as loot boxes and virtual currencies, and seeks protections against the economic exploitation of children, including “kidfluencers”. Additionally, it addresses risks linked to AI tools, such as deepfakes, chatbots and non-consensual digital undressing applications.

Christel Schaldemose, rapporteur and author of the resolution, stated that “Parliament stands united in protecting minors” and stressed that digital platforms should not be designed for children. The urgency is reflected in EU data: 97 per cent of young people use the internet daily, 78 per cent check their devices at least once per hour, and one in four shows signs of addictive behaviour. According to the 2025 Eurobarometer, over 90 per cent of Europeans consider child protection online an urgent priority.

Cyprus examines new measures

On 10 October, Cyprus announced its participation in the European Commission’s pilot project to develop a secure and interoperable age-verification solution aimed at shielding minors from harmful online content, especially on social media. The initiative was presented at the EU Informal Telecom Council in Denmark, where Deputy Minister for Research, Innovation and Digital Policy Nikodimos Damianou said child protection is “not only a regulatory requirement but a moral obligation”.

He noted that children are exposed daily to dangerous, addictive or illegal content, with serious consequences for mental health and learning. The government is considering setting a “digital age of maturity” that would prohibit social media access below a certain age. A ministerial declaration has also been signed promoting age-verification mechanisms and standards for a safe digital environment for minors.

The global picture

Several countries have already implemented or tested national measures, prompting the EU to act. European Commission President Ursula von der Leyen referenced Australia’s impending nationwide ban on social media access for children under 16 and announced plans to convene a panel of experts on child protection online.

From 7 November, Denmark introduced a ban on social media access for children under 15, with parental consent required for ages 13–14 following an assessment. The objective is to protect teenagers from platform-related risks and address social media addiction.

Australia will become the first country globally to enforce a full ban for under-16s from December. Platforms will be required to close existing accounts and implement strict age-verification checks, with non-compliant companies facing fines of up to AUD 49.5 million. Major platforms including Meta and Snapchat are preparing systems based on official documents, biometrics or behavioural analysis. Experts, however, have raised concerns about the effectiveness of such technologies and the potential risks to privacy and social development.

Other countries such as France, Germany and Italy have introduced stricter age requirements and parental consent rules, while Greece is examining restrictions for under-15s. These international efforts highlight growing concern over child safety online and the need for coordinated action between governments and digital platforms.

The impact of TikTok

During a seminar at the European Parliament in Brussels, attended by Politis, MEP Christel Schaldemose stressed that the problem lies not only in the content children see but also in the architecture of social platforms. “Social media is designed to keep users hooked. Children face endless scrolling and manipulation,” she said.

Lawyer Laure Boutron-Marmion, founder of the collective Algos Victima, presented serious cases of online harm to children in France, including that of Marie, a 16-year-old who died by suicide after being exposed to harmful TikTok videos. Boutron-Marmion has filed a lawsuit against the platform for incitement to suicide, relying on the Digital Services Act, which holds companies accountable for the consequences of their systems.

She noted that platforms promote content without consent, heightening risks for vulnerable minors, and stressed that adolescence is already a particularly fragile period. Apps, she said, are not designed for the wellbeing of users but for maximum engagement.

The EU’s Digital Services Act plays a crucial role in reducing these risks, while national actions reinforce legal and judicial responses. In France, a parliamentary committee is already examining the psychological impact of TikTok on children, offering a model for combined European and national approaches to child protection.
