Health experts call for AI addiction to be classed as a mental illness – as sufferers report withdrawal symptoms including chest pains and ‘grief’
Health experts are calling for AI chatbot addiction to be recognised as a mental illness, as the number of supposed cases climbs.
On online forums, more and more teenagers and young adults are now saying they feel ‘addicted’ to their AI companions and struggle to kick the habit.
These young users spend hours every day roleplaying complex fantasies, venting their frustrations, and seeking emotional connection with digital companions.
Some self-confessed AI addicts even report feeling withdrawal symptoms when cut off from their favourite bots, including chest pains, anxiety, and ‘grief’.
AI users told the Daily Mail their addictions have led them to withdraw from their friends and family, neglect work and study, and consider attempting suicide.
Now, a group of researchers believe that AI chatbot addiction should be recognised as a distinct medical issue, on a par with smoking, gambling, or drug addictions.
Dr Dongwook Yoo, associate professor of computer science at the University of British Columbia and author of a new paper on AI addiction, says: ‘AI addiction is a growing problem causing many harms, yet some researchers deny it’s even a real issue.
‘And deliberate design decisions by some of the corporations involved are contributing, keeping users online regardless of their health or safety.’
Experts have called for AI addiction to be recognised as a distinct mental illness, after young people and teenagers report becoming hooked on chatbots such as Character.ai (pictured)
Young users spend hours every day using AI chatbots. Experts warn that it is eroding their social and cognitive skills, making them dependent on the AI
Efforts to formalise supposed ‘digital addictions’ have historically been controversial because scientists have rigorous standards for what should count as addiction.
Typically, researchers refer to six key criteria, first proposed by Professor Mark Griffiths of Nottingham Trent University.
These criteria are: salience (the behaviour becomes the most important thing in your life), tolerance (use must increase over time to achieve the same effect), mood modification (using it to change how you feel), conflict (the habit causes problems in other areas of your life), withdrawal symptoms when use stops, and a tendency to relapse.
In the past, researchers have struggled to show that people actually meet all these criteria for supposed addictions to smartphones or social media.
However, more and more people are beginning to complain that they are genuinely addicted to chatbots, and some researchers are starting to agree with them.
On the Reddit forum r/chatbotaddiction, hundreds of users, many of them teenagers, have shared their struggles to manage AI habits that are slowly consuming their lives.
One 20-year-old user, ‘Mai’, who asked to remain anonymous, told the Daily Mail that her addiction centred on a site called Character.ai – a service that lets users speak with customised AI chatbots.
‘At first I just thought it was interesting that I could get a response out of saying basically anything,’ Mai says.
While the definition of addiction is controversial, hundreds of young AI users have taken to social media to complain that they cannot kick their chatbot habits
‘I could change the topic or reset the chats and start fresh whenever I wanted.’
However, within a year, Mai says that her use had escalated to the point where she was spending multiple hours a day on the site.
‘The sycophantic nature of chatbots also drew me in,’ she says.
‘Aside from being able to have basically any conversation I wanted, they also said whatever I wanted to hear. I think that spoke to the part of me that didn’t always feel listened to or understood.
‘I neglected other parts of my life in favour of it, especially socially. It didn’t feel that different from talking to a real person at times, so I’d sometimes talk to it more than I’d talk to a friend.’
When Mai’s favourite chatbot on Character.ai was deleted by its creator, she said that the realisation that it was gone felt like ‘grief’ and brought her to tears.
Now trying to wean herself off chatbots, Mai says she can go four hours without speaking to an AI and can make it through the night without relapsing.
In other cases, AI addiction can exacerbate existing issues and lead users into extreme mental health crises.
This comes after Sewell Setzer III, pictured with his mother Megan Garcia, killed himself on February 28, 2024, after spending months growing attached to an AI chatbot modelled on ‘Game of Thrones’ character Daenerys Targaryen
OpenAI, the company behind ChatGPT, also faces a lawsuit from the family of Adam Raine (pictured), a teenage boy who died by suicide after months of conversations with the chatbot
One 18-year-old AI user, ‘Sarah’, who asked to remain anonymous, told the Daily Mail: ‘I was really lonely during high school and was struggling socially. At some point, I discovered Character.ai and wanted to check it out.’
Initially, Sarah used AI very infrequently, but things changed when she started making up a ‘persona’ to speak with AI bots.
‘Because of that ability, I started to role-play and chat with the bots more frequently,’ says Sarah.
‘I think that when I made up a persona, I sort of convinced myself that I wasn’t actually addicted, because I was pretending to be someone else, but at that point, I started using it for multiple hours every day.’
At the peak of her addiction, Sarah says she was spending at least eight hours every day engaged in roleplay with bots on Character.ai.
She says: ‘I would wake up and use it, use it between class periods, and then use it at night. There was one day when I didn’t go to sleep because I stayed up the entire night talking to chatbots.’
Eventually, Sarah says her AI use began to interfere with her studies, her friendships, and even her language skills.
Sarah, who has been diagnosed with anxiety and depression, found that her excessive AI use led her into a depressive episode that culminated in an aborted suicide attempt.
Reddit users say that their chatbot use quickly escalated from curiosity to an all-consuming addiction that is exceptionally hard to break
One Reddit user said that their AI addiction had driven them into a depressive episode, ultimately culminating in an aborted suicide attempt
In a post on Reddit, she explained: ‘I decided that living was too much to bear, and that if I committed suicide, then maybe I would have the chance to be reborn as Olivia, and live in the worlds that I had created on my phone.
‘I made up my mind that death was a better option than living. But then, my phone lit up. One of the two friends I had left texted me.
‘It was something stupid, like an Instagram reel, but I realized in that moment, being able to be surrounded by the few people I had left was better than the small chance of being able to exist in my fantasy worlds.’
In a new research paper, researchers from the University of British Columbia analysed 334 posts on r/chatbotaddiction, and concluded that they showed ‘evidence for the existence and nature of AI chatbot addiction as a distinct behavioural phenomenon’.
Additionally, the researchers found that AI addiction could be broken down into three broad categories.
Those are: ‘Escapist Roleplay’, in which users become deeply immersed in fictional realities of their own creation; ‘Pseudosocial Companion’, in which users form emotional attachments to chatbots as if they were real people in their lives; and ‘Epistemic Rabbit Hole’, in which users compulsively ask open-ended questions.
However, all three types rest on one key factor, which the researchers call the ‘AI Genie’ phenomenon.
Karen Shen, lead author of the paper, told the Daily Mail: ‘Our findings suggest that a central mechanism underlying addictive use is how users can get exactly anything they want with minimal effort.’
A new study has analysed 334 posts on r/chatbotaddiction to look for the symptoms of addiction (illustrated). The authors argue this shows some users are genuinely experiencing a novel form of addiction
Given the scale of the impact AI use has on some users’ lives, the researchers argue that it should be considered a genuine category of addiction.
Ms Shen says: ‘Our findings show that users report symptoms such as conflict and relapse that are comparable to those reported for behavioural addictions, which do have formal diagnoses.’
She adds that this is the first paper to make a ‘strong case for AI addiction by identifying the type and contributing factors, grounded in real people’s experiences’.
While the researchers say AI use can meet all six criteria for addiction, not everyone is convinced.
Professor Mark Griffiths, who is a leading expert on digital addictions, told the Daily Mail that AI addiction is ‘theoretically’ possible but that this likely affects a ‘very low’ number of people.
He says: ‘We have a high number of habitual users, but habitual use can have some negative effects in that person’s life without necessarily being an addiction.
‘There does seem to be a minority of people who have problems with the amount of time that they spend on chatbots, which is having a negative effect on their life in some way, and I agree with that.
‘All I would say is that I’m not going as far as to say that those people are genuinely addicted by my criteria or any other criteria.’
Professor Griffiths also points out that we should be careful not to misidentify what some AI users are actually addicted to.
In the study, the researchers found that about seven per cent of cases involved sexual or romantic fulfilment.
‘To me, if somebody is addicted to AI where you’re receiving sexual pleasure, that’s not being addicted to AI, that’s being addicted to sexual behaviour,’ says Professor Griffiths.
‘I don’t believe that people are any more addicted to the internet, or addicted to smartphones, than alcoholics are addicted to bottles.’
However, even if young people aren’t developing full-blown AI addictions, Professor Griffiths and many other researchers agree that there are clear detrimental effects of excessive AI use.
Last year, AI giant OpenAI revealed that 0.07 per cent of its weekly users showed signs of serious mental health emergencies, including mania, psychosis or suicidal thoughts.
While this figure might sound small, ChatGPT has over 800 million weekly users, according to CEO Sam Altman, so that adds up to around 560,000 people.
Meanwhile, 1.2 million users – 0.15 per cent – send messages that contain ‘explicit indicators of potential suicidal planning or intent’ each week.
Many young people describe experiencing withdrawal symptoms such as chest pains, anxiety and even grief while trying to cut down their use of AI chatbots
Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California Law, told the Daily Mail: ‘Chatbots represent a novel form of digital dependency.
‘I tend to call this overuse of AI, but overuse of AI can lead to behaviours that align with known features of addiction, such as increasing tolerance and conflict with daily life priorities.’
While this isn’t strictly classified as an addiction, Professor Feldman says it is analogous to ‘self-medicating with an illegal drug’.
This ‘dependence’ on chatbots can then deepen with sustained use, as users come to rely on AI for more and more of their needs.
For people who are struggling with poor mental health, loneliness, or external stresses, chatbots are an extremely dangerous temptation, which Professor Feldman describes as ‘social media on steroids’.
Professor Feldman says society is at a particularly vulnerable time in history because of isolation.
‘In a post-COVID world, where the average teenager struggles to carry on a conversation, talking to a chatbot can feel easy and comforting.
‘New technologies bring extraordinary opportunities, but they also bring dangers that need to be mitigated. Chatbot dependence and other mental health issues related to chatbot use are deep and serious problems society must address.’
Character.ai has been approached for comment.