Tim Cook: Apple Intelligence not free from AI hallucinations

With iOS 18, Apple is throwing itself and its customers headfirst into generative AI. Even Tim Cook will not claim that this works "100 percent" reliably.

GenAI in the iOS 18 Messages app on an iPhone. (Image: Apple)

This article was originally published in German and has been automatically translated.

iPhone users should be prepared for Apple's upcoming AI models to hallucinate as well, potentially spitting out misinformation and other harmful content. The manufacturer admitted as much shortly after unveiling its AI package "Apple Intelligence", which is coming to a wide range of devices with iOS 18, iPadOS 18, and macOS 15 in the fall.

In an interview with the Washington Post, Apple CEO Tim Cook acknowledged that he is by no means 100 percent certain that the company's own generative AI will never hallucinate. Apple has designed the technology to be as safe as possible, he said, and he is confident that its AI models will generate content of "very high quality". "Honestly, I would say it's not quite 100 percent," Cook told the paper.

In a longer write-up on its new Foundation Models, the company refers to test runs in which the AI models produced "harmful content, sensitive topics and false information" in several cases in response to targeted inputs ("adversarial prompts"). According to Apple, the rate of such rule violations is comparatively low: for its server-side AI models, unwanted responses occurred in 6.6 percent of requests, while for other models such as GPT-4-Turbo the figure is just over 20 percent. Apple says it is continuing to actively probe the safety of its own models.

Hallucinations are a ubiquitous problem with AI models, ranging from invented locations and false information to outright problematic answers.

Apple's Foundation Models were trained not only on licensed content but also on publicly accessible content from the "open web" – apparently without obtaining specific permission. Website operators who want to opt out must block "Applebot", the company's web crawler.
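
The opt-out relies on the long-established robots.txt mechanism, which Applebot honors. A minimal sketch of a site-wide block (operators can also use narrower Disallow rules for individual paths):

    # robots.txt – deny Apple's crawler access to the entire site
    User-agent: Applebot
    Disallow: /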

Apple's AI models do not run purely on-device either; some requests are processed in the cloud, on the company's own servers with in-house chips. Apple has gone to great lengths to ensure that user data is handled as securely as possible – nothing is stored, and Apple itself has no access. The company has now published the first concrete details on the architecture of this "Private Cloud Compute", and external security researchers are expected to be able to verify some of these promises soon. It is still unclear to what extent users can prevent their data from ending up on Apple servers for AI functions – and how visible the data transfer actually is in the operating system.

With "Apple Intelligence", ChatGPT is the first external AI model to be integrated into the operating systems: Before data is sent to the OpenAI servers, users are first asked for consent. According to Cook, the provider has taken data protection precautions - for example, the IP address should not be tracked during requests. The Apple systems also allow ChatGPT functions to be used without logging into an account. Replies are marked with a note that they may contain incorrect information. Apple is apparently planning to work with other AI companies in the future.

(lbe)