Instagram to Alert Parents Over Teens’ Suicide-Related Searches

New parental control feature to launch next week in the US, UK, Australia and Canada.


Instagram will notify parents who have activated parental supervision on their teenager’s account if repeated searches related to suicide or self-harm are detected, the platform has announced.

The alerts will be sent by email or text message, depending on the communication method selected by parents during registration.

How the notification system works

Once parents are notified, the app will also provide a telephone number for emergency support, along with guidance, developed in cooperation with health professionals, on how to approach a conversation with their child.

The system will be rolled out next week in the United States, the United Kingdom, Australia and Canada, with plans to expand to additional countries at a later stage.

The Meta-owned platform acknowledged that notifications may occasionally be triggered without an actual risk being present. However, it stated that experts consider the measure an appropriate starting point for safeguarding young users.

Broader safety measures

The initiative forms part of a wider series of steps taken in recent years by social media platforms aimed at strengthening protections for minors.

At the same time, a trial is under way in Los Angeles involving Instagram and Google-owned YouTube. A young woman alleges that the platforms contributed to her depression, anxiety and body dysmorphic disorder.

The claimant, identified in court documents as Kaylee J.M., began using Instagram at the age of nine and YouTube at six. Her legal representatives argue that the companies sought to increase profits by fostering dependency among children, despite being aware of potential risks to mental health.

The platforms deny the allegations and state that the evidence in the case does not substantiate the claims.

Source: AFP
