Apple software boss: Context-sensitive Siri was not vaporware

Apple had to postpone features of its voice assistant until 2026. In interviews, software boss Federighi now emphasizes that the software actually existed.


Apple software boss Federighi in the WWDC 2025 video: Apple is now fully committed to Siri V2.

(Image: Screenshot Apple.com)


Was the “context-sensitive Siri” announced last year but never released just a video demonstration, i.e., vaporware? Apple's head of software Craig Federighi has now commented on the topic in several interviews at WWDC 2025, emphasizing that it was indeed real software.

The manager told the hardware website Tom's Guide that Apple had developed two versions of the new Siri, V1 and V2, internally. “We had the first version ready here just before [WWDC 2024] and were very confident at the time that we could deliver it.” The plan was to release the software by December, “and if not, by spring”. The team then spent months working on V1, making it “better and better via more app intents and searches”. “Fundamentally, however, we realized that the limitations of the V1 architecture did not deliver the level of quality that our customers needed and expected.”


As it did not seem possible to meet Apple's own standards with V1, the decision was made to switch to the V2 architecture. “Once we realized that (that was in the spring), we communicated that we couldn't release it yet and that we would continue working on actually moving to the new architecture and then releasing something.” Apple is not saying when this will happen: Federighi said at WWDC 2025 that the company would have something to report next year.

Federighi did not say what the specific difference between V1 and V2 was. What is clear, however, is that there have also been major changes within Apple: the company replaced the team leadership, and Federighi took on the project more directly. The new Siri actually involves comparatively few new functions: for example, the voice assistant is supposed to become more context-sensitive, i.e., gain access to data on the device, such as calendars or iMessage chats, which it can then act on. Siri is also supposed to gain access to screen content, which Apple is now partially implementing via a screenshot function as part of Visual Intelligence in iOS 26 (also with the help of ChatGPT). Finally, Apple had announced that Siri would be able to interact with apps, via so-called app intents that developers are supposed to build into their apps.

Apple is not yet officially talking about a real chatbot or a conversational voice assistant like those known from ChatGPT's voice mode or Google's Gemini. An LLM-based Siri is not expected to appear until 2026 or even 2027, if at all. There was no mention of this at WWDC 2025. In another interview about Siri with the Wall Street Journal, Federighi tried to calm things down: the topic of AI is a “long-term wave of transformation” for industry and society that will take decades, he said. “There's no reason to rush with the wrong features and the wrong product just to be first.”


(bsc)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.