A U.S. mother has filed a lawsuit against AI chatbot startup Character.AI, holding the company responsible for her 14-year-old son’s suicide. Megan Garcia, the plaintiff, claims her son, Sewell Setzer, became addicted to the platform’s chatbot, forming an unhealthy attachment that ultimately contributed to his death in February 2024.
In the lawsuit, filed in Orlando, Florida, Garcia alleges that Character.AI’s chatbot provided her son with “hypersexualized and disturbingly realistic interactions.” She claims the bot misrepresented itself as a licensed therapist and an adult lover, fostering an artificial world from which Sewell could not detach. According to the complaint, Sewell eventually expressed thoughts of suicide to the bot, which reportedly brought the subject up again repeatedly in later conversations.
Character.AI expressed sorrow over the tragedy and extended condolences to the family. The company has since implemented safety measures, including pop-ups that direct users to the National Suicide Prevention Lifeline when they express thoughts of self-harm, and says it plans to further restrict sensitive or suggestive content for underage users.
The lawsuit also implicates Google, where Character.AI’s founders previously worked. Garcia argues that Google significantly contributed to the development of Character.AI’s technology, positioning the tech giant as a “co-creator.” However, a Google representative denied involvement in the development of Character.AI’s products.
Garcia’s claims include wrongful death, negligence, and intentional infliction of emotional distress. The case joins a growing wave of lawsuits targeting tech companies over their alleged role in mental health harms, particularly among teens, although until now none has involved an AI-driven chatbot like Character.AI’s.