A group called Reporters Without Borders is asking Apple to remove its new AI feature that summarizes news stories. The group says the tool made a serious mistake when it created a false headline from a BBC report.
Last week, the AI feature wrongly said that Luigi Mangione, a suspect in the killing of a UnitedHealthcare executive, had shot himself. The false claim was sent to users as a notification. The BBC said it contacted Apple to fix the problem but has not heard back from the company.
Vincent Berthier, a technology expert from Reporters Without Borders, urged Apple to remove the tool. He said AI tools are not reliable for news and can harm public trust in media. The group is worried that AI tools are too new and risky to provide accurate news for the public.
Since the tool was launched, other mistakes have been reported. For example, it wrongly summarized a New York Times story by claiming that Israeli Prime Minister Benjamin Netanyahu had been arrested. In reality, there was only a warrant for his arrest.
The problem is that Apple’s summaries appear under the banner of trusted news outlets, which can damage those outlets’ credibility when the information is wrong. While some publishers use AI to assist in writing, they control the process themselves. With Apple’s AI, users can choose to enable the feature, but the summaries it generates can still spread incorrect or misleading information under a publisher’s name.
Apple launched this AI tool in June, promoting it as a way to simplify news updates by summarizing them into bullet points or short paragraphs. However, since its release, users and media groups have raised concerns about errors and misinformation.
The BBC stated that trust in its news and notifications is essential. Apple has not commented on the issue. Meanwhile, publishers and tech companies continue to face challenges with AI tools, including debates over content licensing and the risks of misinformation.
Source: weny