More than one million users of ChatGPT have expressed suicidal thoughts to the artificial intelligence assistant, according to data released by its creator, OpenAI.
The US-based AI company estimates that around 0.15% of ChatGPT users send messages suggesting “a potential plan or intent to commit suicide.” Given that OpenAI reports 800 million weekly active users, this percentage translates to roughly 1.2 million people.
OpenAI also estimates that around 0.07% of active users each week, nearly 600,000 individuals, show signs of mental health crises associated with psychosis or mania.
The issue came to public attention following the death of a teenager from California, Adam Raine, whose parents recently filed a lawsuit against OpenAI, alleging that ChatGPT provided him with detailed advice on how to take his own life.
In the wake of the incident, OpenAI says it has strengthened parental controls and introduced additional safety mechanisms, encouraging users experiencing distress to seek immediate help from professional mental health services and crisis hotlines.
The company added that it has updated its AI model to better identify and respond to users showing signs of mental health struggles. To that end, it says it is working with more than 170 licensed mental health professionals to reduce the risk of responses that could inadvertently encourage harmful behaviour.