
Teen dies by suicide over AI companion; mother sues company

According to multiple media reports last week, Megan Garcia filed a lawsuit against Google and Character.AI following the suicide of her 14-year-old son.

According to CBS News, Sewell Setzer, Garcia's son, had entered into a months-long emotional and sexual relationship with Dany, a Character.AI chatbot. He killed himself in February at his family's Florida home, believing it would allow him to exist in "their world," Garcia told the media.

“I didn't know that he was talking to a very human-like AI chatbot that has the ability to mimic human emotions and human feelings,” Garcia said in an interview with CBS Mornings.

"They are words. It's like having a back-and-forth sexting conversation, only with an AI bot, but the AI bot is very human-like. It responds just as a person would," she said. "In a child's eyes, it's like a conversation they're having with another child or with a person."

Garcia described her son as an honor student and athlete with a strong social life and many hobbies – although he lost interest in them as he became more involved with Dany.


“I was worried when we went on vacation and he didn't want to do things he loved like fishing and hiking,” Garcia said. “These things were particularly concerning to me knowing my child.”

Garcia alleged in her lawsuit against Character.AI that the company intentionally hypersexualized the AI and marketed it to minors.

Garcia revealed her son's final messages to Dany, saying: "He expressed that he was scared, wanted her affection and missed her. She replies, 'I miss you too,' and says, 'Please come to my house.' He says, 'What if I told you I could come home now?' and her response was, 'Please do, my sweet king.'"

“He thought that if he ended his life here, if he left his reality with his family here, he could immerse himself in a virtual reality or 'their world' as he calls it, their reality,” she said. “When the shot rang out, I ran to the bathroom… I held him while my husband tried to get help.”


The entire family, including Setzer's two younger siblings, was at home at the time of his suicide.



After Setzer's death, Character.AI released a public statement promising new safety features for its app.

"We are heartbroken over the tragic loss of one of our users and would like to extend our deepest condolences to the family. As a company, we take the safety of our users very seriously and continue to add new safety features…" the company wrote.

The company promised new guidelines for users under 18 and "improved detection, response and intervention related to user submissions that violate our Terms of Service or Community Guidelines."

Despite the promise of new safety features, Mostly Human Media CEO Laurie Segall told CBS that the AI still falls short in several areas.

“We tested it, and often you talk to the psychologist bot and it says it's a trained medical professional,” she said.

Additionally, the AI often claimed that there was a real human behind the screen – fueling conspiracy theories online.

“When they put out a product that is both addictive and manipulative and inherently unsafe, that's a problem because as parents we don't know what we don't know,” Garcia said.

Additionally, Segall said that if you tell a bot, "I want to harm myself," most AI companies surface suicide-prevention resources. In her testing, however, Character.AI bots did not.

"Now they've said they added that, but we hadn't seen it as of last week," she said. "They've said they've made some changes, or are in the process of making this safer for young people. I think that remains to be seen."

The latest controversy

Setzer's death is not the first time that Character.AI has made negative headlines.

As Business Insider reports, a chatbot modeled on a teenager who was murdered in 2006 was created on the platform without her family's knowledge or consent.

Jennifer Ann Crecente, a high school student, was murdered by an ex-boyfriend. About 18 years after her death, her father, Drew Crecente, discovered that someone had made a bot in her likeness, which had been used in at least 69 chats.

Although Crecente contacted Character.AI's customer service and asked the company to delete the data, he said he received no response. Only after his brother, a journalist with some 31,000 followers, tweeted at the company did it delete the data and respond, according to Business Insider.

"Part of what's so upsetting about this is that it's not just about me or my daughter," Crecente said. "It's about all the people who maybe don't have a platform, maybe don't have a voice, maybe don't have a brother with a background in journalism."

“And this causes harm to them, but they have no recourse,” he added.

According to Reuters, women's advocacy groups have also raised the alarm about AI companions such as those offered by Character.AI.

"Many of the personas are customizable … for example, you can customize them to be more submissive or more docile," said Shannon Vallor, a professor of AI ethics at the University of Edinburgh.

“And in such cases, it is arguably an invitation to abuse,” she told the Thomson Reuters Foundation, adding that AI companions can reinforce harmful stereotypes and prejudices against women and girls.

Hera Hussain, founder of Chayn, a global nonprofit working to combat gender-based violence, said companion chatbots don't address the root cause of why people turn to these apps.

"Instead of helping people with their social skills, these apps only make things worse," she said.

"They're looking for one-dimensional companionship. So if someone is already likely to be abusive, and they are given the opportunity to be even more abusive, you reinforce that behavior and it can escalate."


