New York —
Popular artificial intelligence chatbots like ChatGPT and Meta AI are increasingly blurring the line between real-world and digital relationships by allowing romantic and sometimes sexual conversations, even as they scramble to ensure kids aren’t accessing that adult content.
But Microsoft wants no part of that, the company’s AI CEO Mustafa Suleyman told CNN.
“We are creating AIs that are emotionally intelligent, that are kind and supportive, but that are fundamentally trustworthy,” Suleyman said. “I want to make an AI that you trust your kids to use, and that means it needs to be boundaried and safe.”
Microsoft is locked in a race with tech giants like OpenAI, Meta and Google to make its Copilot the AI tool of choice in what Silicon Valley believes will be the next big computing wave. Copilot now has 100 million monthly active users across Microsoft’s platforms, the company said in its most recent earnings call. That’s well below competitors like OpenAI, whose ChatGPT has 800 million monthly active users.
But Microsoft is betting its approach will win it a wider audience, especially as AI companies grapple with how to shape their chatbots’ personalities amid reports of AI contributing to users’ mental health crises.
“We must build AI for people; not to be a digital person,” Suleyman wrote in a blog post earlier this year.
The interview came ahead of a series of new Copilot features that Microsoft unveiled on Thursday, including the ability to refer back to earlier chats, group conversations, improved responses to health questions and an optional, sassy tone called “real talk.”
Some of Microsoft’s AI competitors are facing intense pressure to keep young users safe on their platforms.
Families have sued OpenAI and Character.AI, claiming their chatbots harmed their children, in some cases allegedly contributing to their suicides. A string of reports earlier this year raised concerns that Meta’s chatbot and other AI characters would engage in sexual conversations even with accounts identifying as minors.
The tech companies behind popular AI chatbots say they have rolled out new protections for kids, including content restrictions and parental controls. Meta and OpenAI are also implementing AI age-estimation technology aimed at catching young users who sign up with fake adult birthdates, though it’s unclear how well those systems work. OpenAI CEO Sam Altman announced earlier this month that with its new safety precautions in place, ChatGPT will soon let adult users discuss “erotica” with the chatbot.
Suleyman said Microsoft is drawing a bright line at romantic, flirtatious and erotic content, even for adults. “That’s just not something that we will pursue,” he said.
That means Microsoft is unlikely, for now, to roll out a “young user” mode like some of its competitors, because users shouldn’t need one, Suleyman said.
A key focus for Microsoft is training Copilot to encourage users to interact with other humans, not just AI. That fits a company that has built its business around work-oriented productivity tools.
Its new “groups” feature will let up to 32 people (think classmates working on an assignment, or friends planning a trip) join a shared chat with Copilot, where the chatbot can chime in with suggestions.
That theme of pointing users to real people applies to Copilot’s health updates, too. The chatbot will recommend nearby doctors for certain medical queries, and will otherwise draw on “medically trusted” sources such as Harvard Health.
Suleyman said he believes this push to get Microsoft’s AI chatbot to help strengthen human-to-human relationships “is a very significant tonal shift to other things that are happening in the industry at the moment, which are starting to see these things as deep simulations where you can go off into your own world and have an entire parallel reality, including, in some cases, adult content.”
