SUSE AI Factory with Nvidia: running AI in-house made easier

At SUSECon, SUSE is introducing the "SUSE AI Factory with Nvidia". It is intended to make it easier for companies to operate AI themselves.


By Udo Seidel

At its in-house conference SUSECon in Prague, SUSE announced a closer alliance with Nvidia. The product SUSE AI Factory with Nvidia is the first step. The name says it all: the turnkey solution is intended to make it easier for companies to operate AI themselves.

The foundation consists of the well-known components SUSE AI, the Kubernetes platform Rancher Prime, and SUSE Linux Enterprise Server. In addition, there are numerous building blocks from Nvidia AI Enterprise, including the NIM microservices with optimized models, NeMo for creating and monitoring AI agents, Nvidia's Run:ai platform, CUDA-X, and the well-known Kubernetes operators for GPU usage, NIM, and NeMo.

The product is made turnkey by certified blueprints that combine these components into a ready-to-use technology stack. Users can thus set up, update, and shut down Kubernetes clusters for various AI use cases as usual. Currently, three blueprints are available: Nvidia RAG (Retrieval-Augmented Generation), Nvidia AI-Q, and NeMoClaw. More are expected to follow as SUSE makes the product available in the course of 2026.


SUSE AI Factory with Nvidia is part of the “Infrastructure for AI” initiative. With products, partnerships, and integrations, SUSE aims to significantly simplify the deployment of AI infrastructure.

In addition, there is a counterpart called “AI for Infrastructure.” Here too, SUSE is deepening its cooperation with Nvidia – and with Switch, a third partner joins. The data center operator is an early adopter of Nvidia hardware such as the Grace Blackwell server and now uses the Omniverse-DSX Blueprint to create digital twins of data centers. This allows complex thermal, electrical, and mechanical systems to be simulated, so that problems and errors can be caught before construction begins. In the future, SUSE AI, Rancher Prime, and SLES will be part of the Switch blueprint. Here, the integration of SUSE products with Nvidia's DGX hardware, carried out as part of the “Infrastructure for AI” initiative, pays off.

The Nuremberg-based software manufacturer has also done its homework elsewhere: in February, SUSE released a Model Context Protocol (MCP) server for AI integration of its SUSE Multi-Linux Manager. According to Rick Spencer, General Manager Engineering, corresponding counterparts exist for Rancher Prime, Observability, Security, and SLES.
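For readers unfamiliar with MCP: the protocol lets AI assistants call tools exposed by a server over JSON-RPC 2.0 messages. As a rough illustration, the sketch below builds the two messages every MCP session starts with – the `initialize` handshake and a `tools/list` discovery call. It shows only the generic MCP wire format; the client name is hypothetical, and the specific tools SUSE's server exposes are not documented here.

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request string, the framing MCP uses on the wire."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# An MCP session opens with an "initialize" handshake in which the client
# announces its protocol version and capabilities.
init = jsonrpc_request(1, "initialize", {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": {"name": "demo-client", "version": "0.1"},  # hypothetical client
})

# Afterwards the client can ask the server which tools it offers.
list_tools = jsonrpc_request(2, "tools/list")

print(init)
print(list_tools)
```

In practice an assistant would send these messages over stdio or HTTP to the MCP server and dispatch follow-up `tools/call` requests based on the advertised tool list.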

(afl)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.