AMD shows off huge chip packages: Epyc Venice and Instinct MI455X
For the first time, AMD CEO Lisa Su shows bare versions of the next server processors and AI accelerators. Both are huge.
Bare AMD Instinct MI455X.
(Image: AMD)
With the Instinct MI455X AI accelerator, AMD has designed one of the world's largest chip packages. A total of 12 compute and I/O chiplets plus 12 memory stacks form a package roughly the size of a hand. AMD CEO Lisa Su jokingly called it “pretty darn big” during her CES presentation.
The 12 compute and I/O chiplets are manufactured with 3- and 2-nanometer technology, presumably by world market leader TSMC. If AMD follows the same approach as with its processors, the compute dies use 2-nm structures and the less scalable I/O chiplets 3-nm technology.
The chips contain 320 billion transistors, 70 percent more than in the predecessor Instinct MI355X and roughly three times as many as in Nvidia's current fastest AI accelerator, the B300, also known as Blackwell Ultra. Nvidia's AI accelerators are popular primarily because of their software support, even if the hardware is not necessarily the best. On top of that, AMD adds 12 stacks of High Bandwidth Memory (HBM4) with a total capacity of 432 GB and a combined transfer rate of 19.6 TB/s.
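A quick back-of-the-envelope check of the per-stack figures, assuming capacity and bandwidth are distributed evenly across the twelve HBM4 stacks (an assumption for illustration, not an AMD specification):

```python
# Per-stack HBM4 figures for the Instinct MI455X,
# assuming an even split across all twelve stacks.
stacks = 12
total_capacity_gb = 432
total_bandwidth_tb_s = 19.6

print(f"Capacity per stack:  {total_capacity_gb / stacks:.0f} GB")             # 36 GB
print(f"Bandwidth per stack: {total_bandwidth_tb_s / stacks * 1000:.0f} GB/s")  # ~1633 GB/s
```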
(Image: AMD)
Significant manufacturing advantage over Nvidia
The lead in the sheer number of transistors is made possible by modern manufacturing technology: Nvidia still relies on TSMC's 4NP process, an improved variant of the now-dated 5-nm generation. From a manufacturing standpoint, AMD's Instinct MI455X is therefore one to two generations ahead. Very roughly, TSMC's 3-nm generation offers almost 30 percent higher transistor density than 5 nm, and the 2-nm generation adds another 15 percent on top. That allows significantly more transistors on the same chip area.
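Compounding those rough generational figures gives an idea of the overall density advantage. This is a simplified estimate using the logic-density numbers quoted above; in practice SRAM and I/O structures scale considerably worse than logic:

```python
# Rough compounding of the quoted TSMC density gains (logic only, simplified).
gain_5nm_to_3nm = 1.30   # ~30 percent higher transistor density
gain_3nm_to_2nm = 1.15   # another ~15 percent on top

combined = gain_5nm_to_3nm * gain_3nm_to_2nm
print(f"5 nm -> 2 nm density factor: {combined:.2f}x")  # roughly 1.5x
```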
In addition, AMD's chiplet design lets it spread out over more total silicon. That approach costs transistors of its own, however, for example for the communication links between the chiplets.
(Image: Florian Müssig / heise medien)
The boundaries between the centrally located compute chiplets are barely visible anymore. Previously, AMD used eight of them, sitting on two base I/O dies. The chiplets at the top and bottom are new and could contain interfaces like PCI Express.
The Instinct MI455X is the top model of the upcoming 400 series. AMD plans to use it in Helios, its first in-house server design, which partners such as HPE are also adopting. The Instinct MI440X variant is intended for standard servers, while the Instinct MI430X focuses on traditional high-performance computing (HPC) with high FP64 precision. That variant will also be used in German supercomputers.
(Image: AMD)
Epyc Venice also grows
AMD positions its next Epyc generation, codenamed Venice, as the CPU companion to the MI400 series. Su also showed the largest version with 256 CPU cores for the first time. The bare processor reveals a design that differs significantly from previous generations.
(Image: AMD)
The previously monolithic I/O die is now split in two, which simplifies scaling. The top models get the full configuration with 16 DDR5 memory channels and plenty of PCI Express 6.0 lanes; for this, AMD is introducing the new CPU socket SP7. Sixteen MRDIMMs at DDR5-12800 achieve a transfer rate of 1.6 TB/s.
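The 1.6 TB/s figure follows directly from the channel count and data rate, assuming each of the 16 channels is 64 bits (8 bytes) wide, as is usual for DDR5:

```python
# Peak memory bandwidth of a full Epyc Venice configuration,
# assuming 16 channels, each 64 bits (8 bytes) wide, at DDR5-12800.
channels = 16
transfers_per_second = 12_800_000_000   # 12800 MT/s
bytes_per_transfer = 8                  # 64-bit channel width

bandwidth_tb_s = channels * transfers_per_second * bytes_per_transfer / 1e12
print(f"Peak bandwidth: {bandwidth_tb_s:.2f} TB/s")  # ~1.64 TB/s
```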
For more compact systems, AMD can use a single I/O die with eight memory channels. TSMC manufactures the I/O dies with 3-nm technology, while the compute dies use 2-nm structures. The compute dies sit much closer to the I/O dies than before. Apparently, the Infinity Fabric data links no longer run through the organic substrate; instead, AMD and TSMC are likely using interconnect bridges beneath the chiplets.
256 CPU cores on one substrate
The number of compute dies is shrinking, but each will now house 32 CPU cores. It is unclear whether this design uses area-optimized Zen 6c cores or regular Zen 6 cores; either way, the maximum is said to be 256 cores.
The website Chips and Cheese, which specializes in chip analysis, estimates that an Epyc Venice I/O die measures over 350 mm² and a compute chiplet around 165 mm². That adds up to roughly 2000 mm² of silicon for a single processor. Rumor has it that a single processor will draw 700 to 1400 watts of electrical power.
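Those estimates add up as follows, assuming two I/O dies and eight compute chiplets (256 cores divided by 32 cores per chiplet); the area figures are Chips and Cheese's estimates, not AMD specifications:

```python
# Rough silicon budget for Epyc Venice, based on Chips and Cheese's estimates.
compute_chiplets = 256 // 32        # 8 chiplets at 32 cores each
io_dies = 2

io_die_area_mm2 = 350               # estimated area per I/O die
compute_die_area_mm2 = 165          # estimated area per compute chiplet

total_mm2 = io_dies * io_die_area_mm2 + compute_chiplets * compute_die_area_mm2
print(f"Total silicon: ~{total_mm2} mm^2")  # ~2020 mm^2
```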
The eight mini-dies along the sides still raise questions. They might hide so-called deep trench capacitors to stabilize the power supply; alternatively, they could handle signal routing or lead to external interfaces.
heise medien is an official media partner of CES 2026.
(mma)