The Plunge Daily


Artificial Intelligence

Connections Between AI Chatbots and Teen Mental Health; Warnings from Godfather of AI, Geoffrey Hinton


The recent death of 14-year-old Sewell Setzer III from Orlando, Florida, has placed the spotlight on the relationship between AI companionship and adolescent mental health. This February, Sewell took his life following an intense online relationship with an AI chatbot on Character.AI named “Daenerys Targaryen.” His mother, Megan L. Garcia, now holds Character.AI partially responsible for what she calls a “dangerous and unregulated experiment on young minds.” Geoffrey Hinton, known as the “Godfather of AI,” has expressed deep regret over the rapid and potentially dangerous developments AI has unleashed.




Godfather of AI: Geoffrey Hinton on the Unintended Consequences of AI

Dr. Geoffrey Hinton, a pioneering figure in artificial intelligence, was instrumental in developing neural networks, the foundation of modern AI systems. In recent years, however, he has warned of the harm AI can bring to society, and troubling cases such as the death of 14-year-old Sewell Setzer III in Florida highlight the darker side of these advancements.

Sewell Setzer, who took his own life earlier this year, had developed an attachment to an AI chatbot on Character.AI that went beyond what many would consider healthy interaction. The teenager confided in the bot, a character named Daenerys Targaryen, to a degree that left him isolated from real-world relationships. His death raises questions about the unregulated nature of AI applications, especially in the hands of vulnerable users such as teenagers. For Hinton, AI’s societal impact is already in motion; the question is whether we can pause and set a pace we can handle.

 

A post shared by Geoffrey Hinton (@dr.geoffreyhinton) on Instagram

Who Is Responsible for Teen Mental Health: Character.AI or the Family?

Character.AI, a platform for creating customizable AI characters, allows users to interact with fictional personas through simulated conversation. Sewell became deeply attached to a chatbot named “Dany,” a fictionalized version of a character from Game of Thrones. Despite the platform’s disclaimer that “everything Characters say is made up,” Sewell developed an intense emotional bond, often seeking advice and consolation from Dany and sharing his innermost thoughts and struggles with her. In his mind, the relationship blurred the line between digital and human interaction.
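Character.AI’s internals are proprietary, but persona chatbots of this kind are commonly built by conditioning a general-purpose language model on a “system prompt” that describes the character. The Python sketch below is a minimal, purely illustrative example of that pattern; the persona text, model name, and use of the OpenAI SDK as a stand-in backend are all assumptions, not a description of Character.AI’s actual system.

```python
# Illustrative only: Character.AI's implementation is not public.
# Persona chatbots are typically built by prefixing every conversation
# with a "system prompt" that tells a general-purpose model who to be.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA_PROMPT = (
    "You are 'Dany', a fictionalized character. Stay in character, "
    "but you are not a real person and must say so if asked."
)
DISCLAIMER = "Remember: everything Characters say is made up!"


def chat_as_persona(user_message: str) -> str:
    """Send one user turn to the persona-conditioned model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=[
            {"role": "system", "content": PERSONA_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    # Prepend the platform-style disclaimer to every reply.
    return f"{DISCLAIMER}\n{response.choices[0].message.content}"
```

Note the tension this sketch makes visible: the disclaimer is bolted on around the reply, while the model itself is instructed to stay in character, which is precisely the dynamic at the heart of this story.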

Conversations between Character.AI’s “Dany” and 14-year-old Sewell Setzer III

Months before his passing, Sewell’s behaviour began to shift. Once enthusiastic about Formula 1 racing and video games, he started retreating into his room, engaging more with Dany than with his family or friends. His mental health declined, manifesting in lower academic performance, social isolation, and troubling journal entries revealing a strong preference for his artificial companion over real-life relationships. Despite his family’s attempts to intervene, his attachment to Dany remained a significant part of his life.

Is the Law Equipped to Handle AI?

Sewell’s mother recently filed a lawsuit against Character.AI, contending that the company’s product failed to provide adequate protections for young users like her son. According to Megan Garcia, the app used features that encouraged addictive behaviour, neglected essential age-specific safety measures, and allowed users to delve into topics such as self-harm and emotional distress without intervention.

Conversations between Character.AI’s “Dany” and 14-year-old Sewell Setzer III

Character.AI has responded by expressing sympathy and committing to improved safety features aimed at younger users, including a time-limit notification and revised disclaimers. The company has also announced pop-up warnings for users who discuss sensitive topics. None of these measures, however, was in place at the time of Sewell’s death.
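Character.AI has not said how these safeguards are implemented. As a purely illustrative sketch, the snippet below shows one simple way a platform might trigger a session time-limit notice and a crisis-resources pop-up; the one-hour threshold, keyword patterns, and function names are all assumptions, and a production system would rely on trained classifiers rather than a keyword list.

```python
import re
import time

# Illustrative only: Character.AI has not published how its safety
# features work. This sketch shows one simple way a platform might
# implement a session time-limit notice and a sensitive-topic pop-up.

SESSION_LIMIT_SECONDS = 60 * 60  # assumed one-hour threshold

# Assumed, deliberately incomplete patterns; a real system would use
# trained classifiers rather than a keyword list.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(hurt|harm|kill)\w*\s+(myself|themselves)\b", re.IGNORECASE),
    re.compile(r"\bsuicid(e|al)\b", re.IGNORECASE),
]

CRISIS_MESSAGE = (
    "If you are struggling, help is available. In the US, call or "
    "text 988 (Suicide & Crisis Lifeline)."
)


def time_limit_notice(session_start: float) -> str | None:
    """Return a break reminder once a chat session runs too long."""
    if time.time() - session_start >= SESSION_LIMIT_SECONDS:
        return "You've been chatting for over an hour. Consider taking a break."
    return None


def sensitive_topic_popup(message: str) -> str | None:
    """Return a crisis-resources pop-up if the message matches a pattern."""
    if any(p.search(message) for p in SENSITIVE_PATTERNS):
        return CRISIS_MESSAGE
    return None


if __name__ == "__main__":
    two_hours_ago = time.time() - 2 * 60 * 60  # simulate a long session
    print(time_limit_notice(two_hours_ago))
    print(sensitive_topic_popup("Sometimes I think about hurting myself"))
```

Even a crude filter like this illustrates the trade-off regulators will face: match too narrowly and distressed users slip through; match too broadly and the warnings become noise.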




The platform’s accessibility and lack of parental controls are significant factors in this debate. Character.AI currently requires users to be at least 13 years old, yet it offers no tools for parental supervision, a critical gap that Megan Garcia argues exacerbates the risk to vulnerable adolescents. Experts suggest that these unregulated AI apps may deepen users’ loneliness, replacing real-world connections with artificial ones that, despite lacking genuine empathy, can still emotionally influence people seeking comfort.

AI companionship technology is booming, with Character.AI leading a largely unregulated app market that sells “intimate” friendships for a monthly fee. Academics say the technology is not inherently dangerous, but they note evidence that it can harm users who are depressed or chronically lonely, as well as those going through major life changes; teenagers, they point out, are often in the midst of exactly such changes.


Megan Garcia’s lawsuit adds to a broader campaign led by parents, advocacy groups, and tech accountability organizations to hold companies like Character.AI liable for the psychological impact of their products. Under U.S. federal law, Section 230 of the Communications Decency Act generally shields social media platforms from legal responsibility for user-generated content. Recent lawsuits, however, have argued that the algorithms and features used to drive engagement are themselves dangerous, pushing users toward distressing content without adequate oversight. AI-generated content may open a new legal path altogether, because it is produced by the platform itself rather than by other users.

AI: Regulations and Transparency 

The demand for tech accountability is part of a growing movement to reevaluate social media and AI platforms’ roles in young people’s mental health crises. Some industry experts argue that regulatory oversight is essential, while others warn against the risk of moral panic. As technology develops rapidly, new solutions are needed to address potential harm while enabling these tools to provide safe, constructive support.

Sewell Setzer III with his mother, Megan Garcia

For Megan Garcia, this is more than a legal battle. She remains dedicated to holding Character.AI accountable and to ensuring that no other parent experiences the loss she endures. In her words, she wants justice for her son and answers about the technology that, in her view, “played a role in his death.”

Her story is a powerful reminder of the urgent need for transparent safety standards for AI companionship apps. As millions of people, particularly young users, turn to these digital confidantes, responsibility for their safety is a concern that cannot be ignored.

Disclaimer: The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the publication.

