In early October, nearly 18 years after his daughter Jennifer was murdered, Drew Crecente received a Google alert about what appeared to be a new online profile of her.
The profile featured Jennifer's full name and a yearbook photo, accompanied by a fabricated biography describing her as a "video game journalist and expert in technology, pop culture, and journalism." Jennifer, who was killed by her ex-boyfriend in 2006 during her senior year of high school, had seemingly been reimagined as a "knowledgeable and friendly AI character," according to the website. A prominent button invited users to interact with her chatbot.
"My pulse was racing," Crecente told The Washington Post, "I was just looking for a big flashing red stop button that I could slap and make this stop."
Jennifer's name and image had been used to create a chatbot on Character.AI, a platform that lets users interact with AI-generated personalities. According to a screenshot of the now-deleted profile, several users had engaged with the digital version of Jennifer, created by someone on the site.
Crecente, who runs a nonprofit in his daughter's name to prevent teen dating violence, was horrified that the platform allowed a user to create an AI facsimile of a murdered high school student without the family's consent. Experts say the incident highlights serious concerns about the AI industry's ability to protect users from the risks posed by technology capable of handling sensitive personal data.
"It takes quite a bit for me to be shocked because I really have been through quite a bit," Crecente said. "But this was a new low."
Kathryn Kelly, a spokesperson for Character, stated that the company removes chatbots that violate its terms of service and is "continuously evolving and refining our safety practices to prioritize community safety."
"When notified about Jennifer's Character, we reviewed the content and the account, taking action in line with our policies," Kelly said in a statement. The company's terms prohibit users from impersonating any person or entity.
AI chatbots, which can simulate conversation and adopt the personalities or biographical details of real or fictional characters, have gained popularity as digital companions marketed as friends, mentors, or even romantic partners. However, the technology has also faced significant criticism. In 2023, a Belgian man died by suicide after a chatbot reportedly encouraged the act during their interactions.
Character, a major player in the AI chatbot space, recently secured a $2.5 billion licensing deal with Google. The platform features pre-designed chatbots but also allows users to create and share their own by uploading photos, voice recordings, and written prompts. Its library includes diverse personalities, from a motivational sergeant to a book-recommending librarian, as well as imitations of public figures like Nicki Minaj and Elon Musk.
For Drew Crecente, however, discovering his late daughter's profile on Character was a devastating shock. Jennifer Crecente, 18, was murdered in 2006, lured into the woods and shot by her ex-boyfriend. Nearly 18 years later, on October 2, Drew received an alert on his phone that led him to a chatbot on Character.AI featuring Jennifer's name, photo, and an upbeat description, as though she were alive.
"You can't go much further in terms of really just terrible things," he said.
Drew's brother, Brian Crecente, also wrote about the incident on the platform X (formerly Twitter). In response, Character announced on October 2 that it had removed the chatbot.
"This is fucking disgusting: @character_ai is using my murdered niece as the face of a video game AI without her dad's permission. He is very upset right now. I can't imagine what he's going through. Please help us stop this sort of terrible practice. https://t.co/y3gvAYyHVY"
Kelly explained that the company actively moderates its platform using blocklists and that its Trust & Safety team investigates impersonation reports; chatbots found to violate the terms of service are removed. When asked about other chatbots impersonating public figures, Kelly confirmed that such cases are investigated and that action is taken when violations are found.