An artificial intelligence feature on iPhones is generating false news alerts, fueling concerns about the technology's potential to spread misinformation.
Last week, a recently launched Apple feature that uses AI to summarize users' notifications misrepresented BBC News app alerts about the broadcaster's coverage of the PDC World Darts Championship semi-final, falsely claiming that British darts player Luke Littler had won the championship.
The mistake came a day before the tournament's final, which Littler went on to win.
Within hours of that error, another notification generated by Apple's AI falsely claimed that tennis great Rafael Nadal had come out as gay.
The BBC has been trying for about a month to get Apple to fix the problem. The British public broadcaster complained to Apple in December after its AI feature generated a false headline suggesting that Luigi Mangione, the man arrested in connection with the killing of UnitedHealthcare CEO Brian Thompson in New York, had killed himself, which was untrue.
Apple did not immediately respond to CNBC's request for comment. On Monday, Apple told the BBC that it is working on an update to address the problem by adding a clarification that shows when Apple Intelligence is responsible for the text displayed in a notification. Currently, AI-generated news alerts appear to come directly from the source.
“Apple Intelligence features are still in beta, and we are continuously enhancing them with input from users,” the company said in a statement shared with the BBC. Apple added that it is encouraging users to report a concern if they come across an “unexpected notification summary.”
The BBC is not the only news organization affected by Apple Intelligence's inaccurate summaries of news alerts. In November, the feature sent an AI-generated notification that falsely summarized a New York Times app alert, claiming that Israeli Prime Minister Benjamin Netanyahu had been arrested.
This error was highlighted on the social media platform Bluesky by Ken Schwencke, a senior editor at the investigative journalism site ProPublica.
CNBC has reached out to the BBC and the New York Times for comment on Apple's proposed fix for its AI feature's misinformation problem.
AI's misinformation problem
Apple pitches its AI-generated notification summaries as an efficient way to aggregate and rewrite previews of news app alerts into a single, consolidated notification on a user's lock screen.
The feature, Apple says, is designed to help users scan their notifications for key details and cut down on the overwhelming barrage of updates many smartphone users are familiar with.
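As a rough illustration of what that kind of consolidation involves, the short Python sketch below joins several pending alerts into one prompt for a summarization model. It is a minimal, hypothetical example, not Apple's actual pipeline; the app names, alert texts and function names are invented for illustration.

# Illustrative sketch only: not Apple's implementation, just the general idea of
# collapsing several app notifications into one prompt for a summarization model.
from dataclasses import dataclass

@dataclass
class Notification:
    app: str       # name of the app that sent the alert
    headline: str  # preview text of the alert

def build_summary_prompt(notifications: list[Notification]) -> str:
    # Join the pending alerts into a single instruction for a hypothetical summarizer.
    alerts = "\n".join(f"- [{n.app}] {n.headline}" for n in notifications)
    return ("Condense the following alerts into one short lock-screen summary. "
            "Do not add facts that are not stated:\n" + alerts)

pending = [
    Notification("News app", "Player A beats Player B to reach the final"),
    Notification("News app", "The final will be played on Friday evening"),
]
print(build_summary_prompt(pending))

The risk arises in the next step, when a model has to compress a prompt like this into just a few words.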
However, this has led to what AI experts call "hallucinations," meaning AI-generated outputs that contain false or misleading information.
“I suspect that Apple is not alone in encountering difficulties with AI-generated content. We have already witnessed numerous instances of AI services confidently providing inaccuracies, often referred to as ‘hallucinations’,” said Ben Wood, chief analyst at tech-focused market research firm CCS Insight, in comments to CNBC.
In Apple's case, because the AI tries to consolidate notifications and condense them into a bare-bones summary, it has combined the wording in a way that misrepresents events, while presenting the result confidently as fact.
“Apple faced the additional challenge of compressing content into very brief summaries, which resulted in misleading outputs,” Wood noted. “Apple will surely aim to remedy this as quickly as possible, and I’m certain competitors will closely observe how it responds.”
Generative AI works by trying to produce the most plausible answer to a user's question or prompt, drawing on the vast amounts of data its underlying large language models have been trained on.
Sometimes the AI might not know the answer. But because it is built to always return a response to a user's prompt, it can end up effectively making things up.
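The toy Python example below, which is purely illustrative and not how any production system works, shows the underlying issue: a decoder always picks some continuation, even when the model's own probabilities indicate it is essentially guessing.

# Toy illustration, not a real system: decoding always returns *some* continuation,
# even when the model's probabilities show it is effectively guessing.
def decode(prob_table: dict[str, float]) -> str:
    # Greedy decoding: pick the most probable continuation, however weak its lead.
    return max(prob_table, key=prob_table.get)

# Hypothetical next-phrase probabilities after the words "Luke Littler has ..."
uncertain_model = {
    "won the championship": 0.34,  # barely ahead of the alternatives
    "reached the final": 0.33,
    "been knocked out": 0.33,
}
print(decode(uncertain_model))  # prints "won the championship", stated as fact despite ~34% confidence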
It remains unclear exactly when Apple’s solution to the flaw in its notification summarization feature will be implemented. The iPhone manufacturer has indicated that an update is anticipated within “the coming weeks.”