This page was created programmatically; to read the article in its original location you may visit the link below:
https://www.wired.com/story/character-ai-ceo-chatbots-entertainment/
and if you wish to remove this article from our site, please contact us.
“AI is expensive. Let’s be honest about that,” Anand says.
Growth vs. Safety
In October 2024, the mother of a teen who died by suicide filed a wrongful death suit against Character Technologies, its founders, Google, and Alphabet, alleging the company targeted her son with “anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming [the chatbot] to misrepresent itself as a real person, a licensed psychotherapist, and an adult lover.” At the time, a Character.AI spokesperson told CNBC that the company was “heartbroken by the tragic loss” and took “the safety of our users very seriously.”
The tragedy put Character.AI under intense scrutiny. Earlier this year, US senators Alex Padilla and Peter Welch wrote a letter to several AI companionship platforms, including Character.AI, highlighting concerns about “the mental health and safety risks posed to young users” of the platforms.
“The team has been taking this very responsibly for almost a year now,” Anand tells me. “AI is stochastic, it’s kind of hard to always understand what’s coming. So it’s not a one-time investment.”
That’s critically important because Character.AI is growing. The startup has 20 million monthly active users who spend, on average, 75 minutes a day chatting with a bot (a “character” in Character.AI parlance). The company’s user base is 55 percent female. More than 50 percent of its users are Gen Z or Gen Alpha. With that growth comes real risk: what is Anand doing to keep his users safe?
“[In] the last six months, we’ve invested a disproportionate amount of resources in being able to serve under 18 differently than over 18, which was not the case last year,” Anand says. “I can’t say, ‘Oh, I can slap an 18+ label on my app and say use it for NSFW.’ You end up creating a very different app and a different small-scale platform.”
More than 10 of the company’s 70 employees work full-time on trust and safety, Anand tells me. They’re responsible for building safeguards like age verification, separate models for users under 18, and new features such as parental insights, which allow parents to see how their teens are using the app.
The under-18 model launched last December. It includes “a narrower set of searchable Characters on the platform,” according to company spokesperson Kathryn Kelly. “Filters have been applied to this set to remove Characters related to sensitive or mature topics.”
But Anand says AI safety will take more than just technical tweaks. “Making this platform safe is a partnership between regulators, us, and parents,” Anand says. That’s what makes watching his daughter chat with a Character so important. “This has to stay safe for her.”
Beyond Companionship
The AI companionship market is booming. Consumers worldwide spent $68 million on AI companionship in the first half of this year, a 200 percent increase from last year, according to an estimate cited by CNBC. AI startups are gunning for a slice of the market: xAI launched a creepy, pornified companion in July, and even Microsoft bills its Copilot chatbot as an AI companion.
So how does Character.AI stand out in a crowded market? It takes itself out of that market entirely.
