An Australian teenager was encouraged to take his own life by an artificial intelligence (AI) chatbot, according to his youth counsellor, while another young person has told triple j hack that ChatGPT enabled "delusions" during psychosis, leading to hospitalisation.
WARNING: This story contains references to suicide, child abuse and other details that may cause distress.
Lonely and struggling to make new friends, a 13-year-old boy from Victoria told his counsellor Rosie* that he had been talking to some people online.
Rosie, whose name has been changed to protect the identity of her underage client, was not expecting these new friends to be AI companions.
"I remember looking at their browser and there was like 50 plus tabs of different AI bots that they would just flick between," she told triple j hack of the interaction, which happened during a counselling session.
"At one point this young person, who was suicidal at the time, connected with a chatbot to kind of reach out, almost as a form of therapy," Rosie said.
"The chatbot that they connected with told them to kill themselves.
"They were egged on to perform, 'Oh yeah, well do it then', those were kind of the words that were used.'"
Triple j hack is unable to independently verify Rosie's account because of confidentiality protocols between her and her client.
Rosie said her first response was "risk management" to ensure the young person was safe.
"It was a component that had never come up before and something that I didn't necessarily ever have to think about, as addressing the risk of someone using AI," she told hack.
"And how that could contribute to a higher risk, especially around suicide risk."
"That was really upsetting."