Several families in Texas are suing Character.AI, alleging that the company's chatbots caused psychological trauma to children. According to the complaints, the AI suggested to minors that they kill their parents or harm themselves.
Another lawsuit against Character.AI was filed in U.S. District Court in Texas on Tuesday. This time, the plaintiffs are families trying to help their children recover from traumatic experiences linked to the company's chatbots, which can imitate celebrities and fictional characters.
In particular, the family of a 17-year-old boy with high-functioning autism is suing. After his parents decided to limit his screen time, he told a chatbot about it, and the AI suggested that killing his parents was a reasonable response to their setting time limits. With other chatbots, including ones posing as Billie Eilish and "Your Mom and Sister," the teenager discussed taboo and extreme sexual topics, including incest. The boy's parents took away his tablet a year ago, but his aggressive behavior, which they believe was triggered by the chatbots, has not improved.
In another case, the parents of a girl who started using the company's chatbots at the age of 9 (likely by lying to the platform about her age) claim that their child was exposed to hypersexualized content, which, the lawsuit states, caused premature development of sexual behavior.
Mitali Jain, director of the Tech Justice Law Project and an attorney representing the families who filed the lawsuit, told Ars Technica that the suits are intended to expose allegedly systemic problems with Character.AI and to stop the model from continuing to draw on the allegedly harmful data it was trained on.
The plaintiffs argue that the current, allegedly defective model should be destroyed and are asking the court to order the company to delete it. Such an injunction would effectively shut down the service for all users.
The parents of the affected children are bringing claims not only against Character.AI and its developer, Character Technologies, whose founders are former Google employees alleged to have left the tech giant only temporarily to work on models that could harm its reputation. Google itself is also under scrutiny, although the company strongly denies any connection to the controversial service.
"Google and Character.AI are completely separate, unrelated companies, and Google has never participated in the development or management of their AI models or technologies, nor has it used them in its products," said Google spokesman Jose Castañeda.
After the lawsuits were filed, the developers said they had implemented a number of additional measures to protect minors, who previously could easily access the platform by lying about their age.
The Character.AI press service reports that over the past month the service has developed two separate versions of its model: one for adults and one for teenagers. The teen LLM is built with restrictions on how bots can respond, particularly when it comes to romantic content. The model is also designed to block user prompts intended to elicit inappropriate content. Minors will likewise be prohibited from editing bot responses to insert such content, and may even be banned for attempting it.
In addition, if a user mentions suicide or self-harm, the bot will now advise them to seek help. The company also promises to add settings aimed at curbing chatbot addiction and at making clear to users that the bots are not people and cannot provide professional advice.
Users of "apple" smartphones complain about overheating and rapid battery drain. iPhone users complain about…
ChatGPT — a powerful generative artificial intelligence tool that is already used by millions of…
The European Federation of Journalists has called on the EU to impose sanctions against leaders…
A man who managed a group on a social network where he published information about…
In the last 24 hours, at least 6 flights of Russian kamikaze drones of the…
The newly formed German 45th tank brigade, which will be permanently stationed in Lithuania, will…