Computex

NVLink Fusion: Nvidia opens the NVLink interface to other chip manufacturers

Until now, Nvidia has used the fast NVLink interface only in its own computing accelerators. In the future, other companies will also be allowed to integrate it.

Nvidia CEO Jensen Huang with an NVLink distributor for server racks.

(Image: chh / c't)

At the Computex IT trade fair in Taiwan, AI remains the focus of Nvidia CEO Jensen Huang's product announcements in 2025. The NVLink interface used in AI servers may in future also be integrated into chips from other manufacturers. Nvidia uses the high-speed interface to connect computing accelerators with one another and to exchange data between accelerators and system processors. NVLink connects, for example, Nvidia's Grace CPU with its 72 ARM cores to the current AI accelerator B300 (Blackwell). The latter provides 18 links at 100 GByte/s each, for a cumulative transfer rate of 1.8 TByte/s.
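
As a quick back-of-the-envelope check of those figures, the arithmetic is simply link count times per-link rate. The following minimal sketch uses only the numbers quoted above; the variable names are our own and purely illustrative.

```python
# Back-of-the-envelope check of the NVLink figures quoted above.
# Link count and per-link rate are from the article; names are illustrative.

NVLINK_LINKS = 18        # links per Blackwell accelerator
GBYTE_PER_LINK = 100     # GByte/s per link

aggregate = NVLINK_LINKS * GBYTE_PER_LINK
print(f"NVLink aggregate: {aggregate} GByte/s = {aggregate / 1000:.1f} TByte/s")
```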

By opening the ecosystem to other manufacturers under the name NVLink Fusion, Nvidia makes rack servers conceivable that pair its computing accelerators with a customer-specific processor. Conversely, Nvidia's ARM server CPU Grace could be combined with AI accelerators from other manufacturers. The individual nodes are connected via NVLink switches that sit between the accelerators. Partners that want to build NVLink Fusion into their application-specific chips (ASICs) include Alchip, Astera Labs, Marvell and MediaTek. On the CPU side, Nvidia is working with Fujitsu and Qualcomm, which fuels speculation that Qualcomm will introduce its own server processors. NVLink is not relevant for desktop PCs and notebooks.

Several manufacturers have already announced that they will incorporate NVLink Fusion into their own products.

(Image: chh / c't)

Nvidia also announced RTX Pro servers containing the server version of the RTX Pro 6000 Blackwell workstation card presented in March. This Blackwell graphics card with 24,064 shader cores carries 96 GB of GDDR7 RAM, making it suitable for rendering tasks and AI applications, among other things. Unlike the more expensive Blackwell accelerators such as the B100 and B200, however, it lacks the NVLink interface, so the cards in the servers have to communicate with one another over comparatively slow PCI Express 5.0 connections with 16 lanes each.
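
To put "comparatively slow" into numbers, here is a rough sketch of a PCIe 5.0 x16 link's per-direction throughput. It assumes the nominal 32 GT/s per lane and 128b/130b line coding; these constants are our assumptions, not figures from the article, and real-world throughput is further reduced by protocol overhead.

```python
# Rough per-direction throughput of a PCIe 5.0 x16 link, for comparison
# with the 1.8 TByte/s NVLink figure above. Assumes nominal 32 GT/s per
# lane and 128b/130b line coding; protocol overhead is ignored.

GT_PER_LANE = 32          # PCIe 5.0 raw signalling rate in GT/s
ENCODING = 128 / 130      # 128b/130b line coding efficiency
LANES = 16

gbyte_s = GT_PER_LANE * ENCODING / 8 * LANES   # GByte/s per direction
print(f"PCIe 5.0 x16: ~{gbyte_s:.0f} GByte/s per direction")   # ~63 GByte/s
```

Even under these optimistic assumptions, the x16 link lands roughly a factor of 30 below the 1.8 TByte/s NVLink figure.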

The RTX Pro servers (left) communicate with each other via ConnectX-8 SuperNICs (right in the hands of Jensen Huang).

(Image: chh / c't)

The individual servers exchange data with one another via InfiniBand at 800 Gbit/s. For this, Nvidia installs the ConnectX-8 SuperNIC network chip, which also acts as a PCI Express switch and provides 48 PCIe 6.0 lanes for the RTX Pro 6000 cards. RTX Pro server providers include Asrock Rack, Dell, HPE, Lenovo, Mitac and Supermicro.
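
For a sense of scale, the quoted inter-server rate can be converted into the byte-based units used elsewhere in the article; a trivial sketch, ignoring line coding and protocol overhead:

```python
# Convert the quoted 800 Gbit/s InfiniBand rate per ConnectX-8 SuperNIC
# into GByte/s; line coding and protocol overhead are ignored.

IB_GBIT_S = 800
print(f"Per server link: ~{IB_GBIT_S / 8:.0f} GByte/s")   # ~100 GByte/s
```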

(chh)

This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.