This is not merely a technical regulation. It is a political choice that touches on how we understand childhood in a world where the digital sphere is no longer “parallel” but fully integrated into everyday life.
There is no doubt that the concern is real. In recent years, the public discussion around the use of social media by children and teenagers has intensified, not only in Cyprus but internationally. Issues such as overexposure, addiction, harm to mental health, exposure to inappropriate content and the pressure created by algorithms are no longer marginal concerns but central matters of public policy.
Within this context, establishing an age limit can be seen as an attempt to introduce clearer boundaries in an area where, for years, an informal tolerance prevailed. The fact that Cyprus is aligning itself with European initiatives on age verification indicates a willingness to address the issue more systematically rather than in a fragmented manner.
However, this is precisely where the more substantive discussion begins.
The crucial question is not only at what age a child opens an account. It is also what kind of digital environment they enter when that happens.
Experience over the past years shows that social media platforms are not neutral spaces. They are environments designed to keep the user active for as long as possible, to direct their attention and to reinforce specific behavioural patterns. For an adult, this may simply mean more time in front of a screen. For a child, however, it can mean something far deeper. Identity, self-image and their relationship with the world may be shaped within a framework that has not been designed with their protection in mind.
In this sense, age verification, useful as it may be, resembles more of an entry filter than a genuine guarantee of safety. It may delay access, but it does not change the terms of the game.
If anything, this initiative highlights that, for years, responsibility for managing the issue was transferred almost entirely to families. Parents were expected to set boundaries in an environment they often did not fully understand, while schools attempted to fill the gap in digital literacy without adequate tools. The state, for its part, tended to intervene mainly after the fact.
Today, there appears to be a step towards a more active role. The key question, however, is whether this step will become part of a broader strategy or remain a single regulation that may prove difficult to sustain in practice.
Teenagers do not disappear from the internet simply because a law says so. Technology offers ways to bypass restrictions, and the desire to participate in a digital world where “everyone is” is difficult to limit through prohibitions. This does not mean that boundaries have no value. It does mean that boundaries alone are not sufficient.
The core issue lies elsewhere. It lies in whether there will be real pressure on the platforms themselves to adopt safer design practices. It lies in whether digital literacy will be meaningfully integrated into education rather than treated as a supplementary activity. It lies in whether families will be supported with tools and guidance rather than being left alone in the face of a complex and constantly evolving landscape.
The age limit of 15 may serve as a useful starting point. It can function as a clear message that childhood is not compatible with uncontrolled digital exposure. It should not, however, be presented as the solution. Protecting children in the digital environment is not a matter of a single number. It is a matter of coherence, consistency and collective responsibility. And this is a discussion that is only just beginning.