Thu. Oct 24th, 2024

US Teen Commits Suicide After Communicating With AI Character

Although the boy understood that he was communicating with an AI, he still developed an emotional attachment to the chatbot.

In Orlando, Florida, 14-year-old Sewell Setzer committed suicide. His mother, Megan Garcia, believes the AI service Character.AI led to her son's death and has sued its developers.

The New York Times reports on the case.

As the newspaper points out, the service lets users create a customized chatbot that impersonates a particular character or person. Sewell chose Daenerys Targaryen from the Game of Thrones saga. For months, the teenager actively communicated with the heroine, calling her "Dany". He shared his experiences and thoughts with her, and in particular mentioned that he wanted to take his own life. Although Sewell understood that he was communicating with artificial intelligence, he nevertheless developed an emotional attachment to the chatbot. Some messages were romantic or even sexual in nature, but most of the communication was friendly.

The newspaper writes that neither the boy's parents nor his friends knew about his attachment to the AI character. They noticed that the teenager was increasingly withdrawing into himself and spending more and more time on his device, and his performance at school worsened significantly. When Sewell's parents took him to a specialist, he was diagnosed with anxiety and disruptive mood dysregulation disorder. Despite the prescribed therapy sessions, the young man preferred to communicate with the AI character until his death.

Journalists point out that after the tragedy, the boy's mother decided to sue Character.AI. In the preliminary text of the lawsuit, she noted that the developer's technology is "dangerous and untested" and is designed to "trick customers into expressing their private thoughts and feelings." Ms. Garcia also argues that the company's chatbot played a direct role in driving her son to suicide.

The newspaper notes that the head of trust and safety at Character.AI, Jerry Ruoti, said the company takes the safety of its users very seriously and is looking for ways to develop the platform. According to him, the rules currently prohibit "promoting or depicting self-harm and suicide," and more safety features for underage users will be introduced in the future.

At the same time, a recent statement from the company says that a "safety feature" will be introduced: if a user writes phrases related to self-harm or suicide, a window will automatically appear directing them to the US National Suicide Prevention Lifeline.

Prepared by: Nina Petrovich

By Natasha Kumar

Natasha Kumar has been a reporter on the news desk since 2018. Before that, she wrote about young adolescence and family dynamics for Styles and was the legal affairs correspondent for the Metro desk. Before joining The Times Hub, Natasha Kumar worked as a staff writer at the Village Voice and as a freelancer for Newsday, The Wall Street Journal, GQ and Mirabella. To get in touch, contact me at natasha@thetimeshub.in or 1-800-268-7116.