iPhone 15 Pro and Pro Max: Visual Intelligence now active
Apple has made good on its announcement: an AI feature that was previously exclusive to the iPhone 16 is now available on older devices.
Visual Intelligence in action: here, transferring an event to the calendar.
(Image: Apple)
Visual Intelligence is Apple's name for a feature that lets you query information about your surroundings using the iPhone camera. Capabilities such as creating calendar entries, translations and text summaries have since been added. Until now, however, the feature was limited to the iPhone 16 models released last fall, probably because Apple wanted to differentiate them from their predecessors: Visual Intelligence could only be triggered via the new Camera Control button, which the 15 Pro and 15 Pro Max lack. With iOS 18.4, that changes. As previously announced, Apple has now delivered the feature to the older devices after all, so the restriction was evidently not down to the hardware.
ChatGPT also helps out when needed
With Visual Intelligence, you can also run Google searches, have text read aloud and send queries to ChatGPT. On the iPhone 15 Pro and 15 Pro Max, the feature can now be assigned to the Action button: in the system settings, select "Visual Intelligence" under "Action Button", and the feature is available on the button, no Camera Control required.
Alternatively, for example if you need the Action button for other tasks, you can use Control Center: a matching control now appears in the "Apple Intelligence & Siri" section and can be moved to a more prominent position.
New basic features planned
The iPhone 16e could already access Visual Intelligence in these two ways before iOS 18.4, so support for the iPhone 15 Pro and 15 Pro Max is a logical step. With iOS 18.4, Apple also released its AI system Apple Intelligence, on which Visual Intelligence partly relies, in German for the first time.
In the USA, Visual Intelligence can now be used to make restaurant reservations or order from delivery services, and it is safe to assume that the feature set will grow further. However, the function does not come close to the kind of live camera assistant that Google and OpenAI have already demonstrated, nor is that its aim. What is annoying is that Visual Intelligence cannot analyze existing images: subjects must either have just been photographed or be directly in front of the lens.
(bsc)