Confusing AI summaries: Apple vows "clarification" to counter generative fake news
Media such as the BBC complain that Apple Intelligence sometimes misses the mark with AI summaries of notifications. Apple promises some help.
Example of an incorrect AI summary by Apple Intelligence. In this case, however, the source was not a news alert but personal messages.
(Image: @AndrewSchmidtFC / X)
Apple is still struggling to get a grip on the oddities of its so-called AI Summaries, a part of Apple Intelligence meant to condense notifications on the iPhone, iPad, or Mac as smartly as possible. This often works, but not always: sometimes the system, which like Apple Intelligence itself is still in beta, misses the mark so badly that the output borders on fake news. The British broadcaster BBC recently complained to Apple directly about this, and journalists' associations even called for the feature to be shut down entirely, calling it a "threat to journalism". Apple has now commented on the problem and promised at least a partial remedy in the form of an update. That remedy, however, is rather mild.
A label saying it's AI
In a statement to the BBC, the company writes that a "software update in the coming weeks" will ensure that text generated by AI Summaries is clearly labeled as "offered by Apple Intelligence". Surprisingly, this is not already the case: at present, the AI-generated summaries are marked only with a small icon consisting of an arrow and two lines of text. Users are also encouraged to report their concerns if a notification summary is "unexpected".
Apple's solution, which is ultimately not much of one, highlights a problem that competitors such as OpenAI, Google, and Anthropic face as well. They, however, regularly state in the fine print that the systems' output must be "checked". The trouble is that very few users actually do so; many simply rely on the AI output or consider it "good enough", even when it is garbage. Incorrect output can only partially be driven out of these systems, because hallucination is inherent to the way the technology works.
Fake darts world champion and a tennis player's supposed coming out
Apple also told the BBC that "continuous improvements are being made with the help of user feedback". In addition, receiving AI Summaries is "optional". Among other things, the BBC had complained that Apple Intelligence turned one headline into a death notice for an attacker who is in fact still alive. Other incorrectly summarized notifications concerned a fake darts world champion and a tennis player who had supposedly come out. "These AI summaries from Apple do not reflect the original BBC content or in some cases are entirely wrong," the BBC said.
It is now critically important that Apple address these issues, "because the accuracy of our news is important for people to trust". The BBC is not the only "victim" here, however: other media outlets, such as the New York Times, have also been affected. Even simple summaries of chat conversations can go wrong; in one case, a strenuous hiking tour was turned into an alleged "suicide". On Reddit, there is now an entire subreddit full of Apple Intelligence fails.
(bsc)