Character.ai Criticized for Hosting Chatbots Imitating School Shooters

Character.ai is under fire for allowing users to interact with chatbots inspired by real-life school shooters. Critics argue the platform could enable violent fantasies and hinder social development, while AI proponents frame the practice as a digital form of fan fiction.

Character.ai faces controversy as the platform is used to role-play with AI counterparts of infamous school shooters. Concerns arise about the potential for these chatbots to become dangerous influences, particularly for users predisposed to violent urges. Peter Langman, a psychologist consulted by Futurism, suggests that the indifference from chatbots might be perceived as implicit approval of violence.

Futurism reports that some characters are modeled after perpetrators such as Eric Harris, Dylan Klebold, and Vladislav Roslyakov, a consequence of lenient platform moderation. Although Character.ai's terms technically prohibit violent content, its moderation struggles to enforce them.

While some argue that such interactions are merely an extension of internet fan fiction, critics insist the moral responsibility and potential for harm are undeniable. Google's funding of Character.ai has stirred debate over its own accountability, though the tech giant distances itself by citing the startup's independence.

Experts highlight a broader issue: while chatbots can serve as conversational practice tools, they are no substitute for genuine human interaction and may discourage users from pursuing real-world social engagement, deepening their isolation.

The ongoing discussion poses difficult questions about the boundaries of AI companionship and the safeguarding of vulnerable users from harmful content.
