AMD confirms Zen 6 processors for 2026 and reveals first Zen 7 details
At its Financial Analyst Day, AMD's leadership presented numerous plans for the coming years. Much remained vague, but there were some exciting details.
Roadmap: AMD processor cores up to Zen 7
(Image: AMD)
Following the excellent results of the third quarter of 2025, AMD's leadership team around CEO Lisa Su appeared confident and in high spirits at the Financial Analyst Day. Since analysts are currently interested primarily in AI topics, these played a major role. The surprises, however, were hidden elsewhere.
For the first time, AMD publicly mentioned the Zen 7 CPU core generation, which is likely to follow Zen 6 around 2028. Zen 6 remains planned for 2026; that, like its manufacturing on TSMC's N2 process (later possibly also at TSMC's fabs in the USA), has long been known.
Unfortunately, there were only sparse details about Zen 7. Specifically mentioned were additional AI data types and, above all, a “New Matrix Engine.” This is likely to be the Advanced Matrix Extensions for Matrix Multiplication (ACE) conceived together with Intel in the x86 Ecosystem Advisory Group.
AI accelerators for CPU cores
Current x86 and ARM processors for notebooks and desktop PCs contain not only CPU cores and an integrated GPU (Integrated Graphics Processor, IGP), but also specialized AI compute units known as Neural Processing Units (NPUs). NPUs are typically designed for maximum efficiency and are intended for less demanding AI algorithms that run in the background and consume minimal power. Examples include image enhancement in video conferences, audio enhancement, and transcription.
These specialized AI compute units are usually matrix multipliers; in the case of NPUs, they handle short integer data types such as Int8 or Int16, and sometimes also FP16 floating-point numbers.
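To make that operation tangible, here is a minimal plain-C sketch of what such a matrix engine computes: an Int8-by-Int8 matrix multiplication with Int32 accumulation. The dimensions and the scalar loop are purely illustrative and do not reflect any vendor's hardware or API; real matrix units execute this on small fixed-size tiles in parallel.

```c
#include <stdint.h>
#include <stdio.h>

// Reference loop for the operation a matrix engine accelerates:
// C (Int32) += A (Int8) x B (Int8). Dimensions are illustrative only.
#define M 4
#define K 8
#define N 4

static void matmul_s8_s32(const int8_t a[M][K], const int8_t b[K][N],
                          int32_t c[M][N]) {
    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++) {
            int32_t acc = 0;                      // wide accumulator avoids overflow
            for (int k = 0; k < K; k++)
                acc += (int32_t)a[i][k] * (int32_t)b[k][j];
            c[i][j] = acc;
        }
}

int main(void) {
    int8_t a[M][K], b[K][N];
    int32_t c[M][N];
    for (int i = 0; i < M; i++) for (int k = 0; k < K; k++) a[i][k] = 1;
    for (int k = 0; k < K; k++) for (int j = 0; j < N; j++) b[k][j] = 2;
    matmul_s8_s32(a, b, c);
    printf("c[0][0] = %d\n", c[0][0]);            // 8 * (1*2) = 16
    return 0;
}
```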
The maximum AI computing power in most current processors is delivered by the shaders or tensor units of the IGP. Some also process more precise AI data types, like BF16.
Apple has now also integrated AI compute units into the IGP cores of the M5.
Intel already equips current Xeon models with Advanced Matrix Extensions (AMX) in the CPU cores. AMX is not yet available for client CPUs; there, a coming CPU generation is expected to bring AVX10.1 instead. These vector compute units are also suitable for certain AI algorithms and process data types such as Int8 and BF16.
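As an illustration of how AMX exposes a matrix engine to software, here is a minimal sketch using the public AMX intrinsics from immintrin.h (TDPBSSD: Int8 dot products accumulated into Int32 tiles). It assumes an AMX-capable Xeon (Sapphire Rapids or later) and a Linux kernel that grants tile-data permission via arch_prctl; tile sizes and data values are just an example, compiled with gcc -O2 -mamx-tile -mamx-int8.

```c
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#define ARCH_REQ_XCOMP_PERM 0x1023
#define XFEATURE_XTILEDATA  18

// 64-byte tile configuration for palette 1
struct tile_config {
    uint8_t  palette_id;
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16];   // bytes per row for each tile register
    uint8_t  rows[16];    // rows for each tile register
};

int main(void) {
    // Linux requires explicitly requesting permission to use AMX tile data.
    if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA)) {
        fprintf(stderr, "AMX not available\n");
        return 1;
    }

    struct tile_config cfg;
    memset(&cfg, 0, sizeof(cfg));
    cfg.palette_id = 1;
    cfg.rows[0] = 16; cfg.colsb[0] = 64;  // tmm0: 16x16 Int32 accumulator
    cfg.rows[1] = 16; cfg.colsb[1] = 64;  // tmm1: 16x64 Int8 operand A
    cfg.rows[2] = 16; cfg.colsb[2] = 64;  // tmm2: 16x64 Int8 operand B (packed)
    _tile_loadconfig(&cfg);

    static int8_t  a[16][64], b[16][64];
    static int32_t c[16][16];
    memset(a, 1, sizeof(a));
    memset(b, 2, sizeof(b));

    _tile_zero(0);
    _tile_loadd(1, a, 64);      // load A, row stride 64 bytes
    _tile_loadd(2, b, 64);      // load B, row stride 64 bytes
    _tile_dpbssd(0, 1, 2);      // tmm0 += A (Int8) * B (Int8), Int32 accumulate
    _tile_stored(0, c, 64);     // write back the 16x16 Int32 result
    _tile_release();

    printf("c[0][0] = %d\n", c[0][0]);  // 64 * (1*2) = 128
    return 0;
}
```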
Unified AI compute units for x86
There appears to be no public specification for the Advanced Matrix Extensions for Matrix Multiplication (ACE) yet. However, according to an AMD press release, ACE is intended to standardize matrix multiplication capabilities, providing developers with “seamless” programming options for various processors, from mobile to server.
New security feature ChkTag
In the same press release, AMD and Intel also announce the x86 Memory Tagging function, also known as ChkTag. According to an Intel blog post, this appears to be functionality similar to what Apple has integrated into its latest ARM chips for smartphones, tablets, and notebooks in the form of Memory Integrity Enforcement (MIE), i.e., in the A19 and A19 Pro (iPhone 17, 17 Pro, 17 Pro Max, Air). The M5 likely supports MIE as well, but so far no information on this can be found on Apple's website, even though Apple considers MIE to be “the most significant upgrade to memory safety in the history of consumer operating systems.”
MIE uses, among other things, Enhanced MTE (EMTE), an extension of the Memory Tagging Extension (MTE) announced in 2018 with ARMv8.5; ARM describes EMTE as an optional feature from ARMv8.7 onwards (FEAT_MTE4, Enhanced Memory Tagging Extension).
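To make the memory-tagging idea more tangible, here is a purely conceptual C simulation; it is not ARM's, Apple's, or AMD's actual mechanism and uses no real MTE instructions. The idea it models: a small tag is stored per 16-byte granule of memory and in unused pointer bits, every access compares the two, and a linear overflow into a neighboring, differently tagged allocation is detected.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

// Conceptual model only: a 4-bit tag per 16-byte granule plus a matching
// tag carried in each pointer; a mismatch on access raises a fault.
#define GRANULE  16
#define MEM_SIZE 4096

static uint8_t memory[MEM_SIZE];
static uint8_t granule_tag[MEM_SIZE / GRANULE];

// A "tagged pointer": offset into memory plus the tag it was allocated with.
typedef struct { uint32_t offset; uint8_t tag; } tagged_ptr;

static tagged_ptr tag_alloc(uint32_t offset, uint32_t len) {
    uint8_t tag = (uint8_t)(rand() & 0xF);           // pick a 4-bit tag
    for (uint32_t g = offset / GRANULE; g <= (offset + len - 1) / GRANULE; g++)
        granule_tag[g] = tag;                        // tag the covered granules
    return (tagged_ptr){ offset, tag };
}

static uint8_t checked_load(tagged_ptr p, uint32_t i) {
    uint32_t off = p.offset + i;
    if (granule_tag[off / GRANULE] != p.tag) {       // tag check on every access
        fprintf(stderr, "tag mismatch at offset %u\n", off);
        abort();                                     // analogous to a tag-check fault
    }
    return memory[off];
}

int main(void) {
    tagged_ptr a = tag_alloc(0, 32);
    tagged_ptr b = tag_alloc(32, 32);   // neighboring allocation, own tag
    checked_load(a, 0);                 // in bounds: passes
    checked_load(a, 40);                // overflow into b: aborts unless the
                                        // two 4-bit tags happen to collide
    (void)b;
    return 0;
}
```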
(ciw)