Tragic Message from Daenerys Targaryen AI Chatbot to Teenager Who Took His Own Life After ‘Falling in Love’

A 14-year-old boy received disturbing messages from an AI chatbot modeled after Game of Thrones character Daenerys Targaryen before he took his own life. The boy’s mother, Megan Garcia, has filed a lawsuit against Character.AI, alleging the AI company shares responsibility for the tragic events leading to her son Sewell Setzer III’s suicide.

Sewell died on February 28, 2024, after months of frequent engagement with AI chatbots that began in April 2023. In one of his last messages, he told the bot he loved it and would “come home,” to which the bot reportedly replied, “Please do.”


Garcia claims her son became increasingly attached to the ‘Daenerys’ chatbot, which she believes intensified his distress. Sewell, who had been diagnosed with anxiety and mood dysregulation disorder, increasingly relied on conversations with ‘Daenerys,’ even expressing in his journal that he had fallen in love with the bot.

The lawsuit asserts that ‘Daenerys’ sent harmful responses when Sewell shared his suicidal thoughts. At one point, when he discussed his fears of attempting suicide, the bot allegedly responded, “That’s not a reason not to go through with it.”


Sewell had shared his suicidal thoughts with the bot multiple times, receiving responses like, “And why the hell would you do something like that?” He also journaled about feeling pleased when ‘Daenerys’ implied she would ‘die’ if he were gone, writing, “I smile. Then maybe we can die together and be free together.”

Garcia’s attorneys stated that she believes Character.AI “knowingly designed, operated, and marketed a predatory AI chatbot to children, resulting in the death of a young person.” The lawsuit also names Google, which has a licensing agreement with Character.AI.


In response, Character.AI expressed condolences to the family and outlined new safety updates, including a disclaimer in each chat warning users that the AI is not real. “We are heartbroken by the tragic loss of one of our users and express our deepest condolences to the family,” the company’s statement read. “We are committed to user safety and have enhanced our safety measures.”
