G. Mudalige, Jadetimes Staff
G. Mudalige is a Jadetimes news reporter covering Technology & Innovation
Apple is under mounting pressure to withdraw its AI-driven news summarization feature, part of its Apple Intelligence suite, after multiple incidents of inaccurate and misleading news summaries surfaced. The feature, launched on the latest iPhone models, aims to condense breaking news notifications but has instead generated false claims, raising serious concerns about misinformation and trust in journalism. Critics argue that Apple’s handling of the situation has been inadequate, with prominent media figures and journalism organizations urging the tech giant to take immediate action to protect the integrity of news reporting.
The issue came to light when the BBC first reported errors in AI-generated summaries from Apple’s system in December, where news alerts misrepresented the broadcaster's content. One false summary claimed that Luigi Mangione, accused of killing UnitedHealthcare CEO Brian Thompson, had shot himself. Despite repeated complaints from the BBC, Apple only responded this week, stating it is working on clarifying that these summaries are AI-generated. However, the company’s response has not appeased critics, who believe the technology is not ready for public use.
Alan Rusbridger, former editor of The Guardian and a member of Meta's Oversight Board, has called on Apple to remove the feature altogether, describing it as "out of control." He expressed concern about the risks posed by such technology, especially when trust in news media is already fragile. He highlighted the potential damage caused by major corporations using news content as a testing ground for generative AI tools without considering the broader societal implications.
Journalism bodies, including the National Union of Journalists (NUJ) and Reporters Without Borders (RSF), have echoed these sentiments. The NUJ emphasized that the public must have access to accurate news and warned that Apple's AI feature could undermine trust in journalism. Laura Davison, NUJ General Secretary, called for swift action to prevent further misinformation, particularly at a time when reliable information is crucial. RSF also criticized Apple’s response as insufficient, arguing that merely clarifying the use of AI in notifications shifts the burden onto users to verify the accuracy of news, further complicating the already challenging information landscape.
Apple’s AI news alerts have produced several notable blunders in recent weeks. On Friday, the system incorrectly reported that Luke Littler had won the PDC World Darts Championship hours before the final had even begun. In another instance, it falsely claimed that Spanish tennis star Rafael Nadal had come out as gay. These inaccuracies have added fuel to the ongoing debate about the reliability of AI in journalism and the need for stricter controls on its use in public-facing applications.
The BBC is not the only media outlet affected by these errors. In November, a ProPublica journalist flagged similar issues with Apple’s AI summaries of New York Times alerts, which falsely reported that Israeli Prime Minister Benjamin Netanyahu had been arrested. Another inaccurate summary appeared on January 6, the anniversary of the Capitol riot. The New York Times has declined to comment on the matter, but the recurring errors point to a broader problem with the current state of AI-generated content.
Apple has stated that its AI summaries are optional and part of its beta phase, which is intended to improve with user feedback. The feature is available on iPhone 16 models, iPhone 15 Pro and Pro Max handsets running iOS 18.1 and above, as well as on select iPads and Macs. The company has assured users that a software update will arrive in the coming weeks to clarify when notifications are generated by AI. Apple encouraged users to report any unexpected notifications to help refine the system further.
Despite these assurances, critics remain unconvinced. RSF argued that Apple’s planned updates do not address the core issue of inaccurate news summaries and instead place the onus on users to discern fact from fiction. The organization warned that such an approach risks exacerbating public confusion and further eroding trust in reliable news sources.
Apple is not the only tech giant facing challenges with generative AI tools. Google’s AI Overviews feature, which summarizes search results, drew criticism last year for producing erratic responses. Google defended the feature, attributing the errors to isolated incidents, but the episode highlighted the broader challenges of deploying AI tools in complex real-world scenarios.
As generative AI tools become increasingly prevalent, companies must grapple with the ethical implications of their use. The backlash against Apple underscores the importance of ensuring that such tools are reliable and do not contribute to the spread of misinformation. With media organizations, journalists, and rights groups pushing for accountability, Apple faces growing pressure to either refine its AI systems or withdraw them entirely to safeguard public trust in news.