Now official: RHEL 10 is here – with AI for admins

The images for RHEL 10 were already available before the announcement – now they are official. New features include post-quantum crypto and AI for admins.

A red men's hat in the style of a comic graphic. (Image: Generated with Midjourney by iX)

By Udo Seidel

Red Hat's flagship has now been formally released. No major leaps from the previous version are to be expected; most of the innovations are in the area of AI. Three aspects of RHEL 10 are of particular interest. The first is image mode, which Red Hat announced last year and has now integrated. Its technical basis is the bootc project, i.e. bootable containers with a kernel and OSTree. With image mode, admins can keep their systems up to date much more easily and without interruption.
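A minimal sketch of what image mode means in practice: the operating system is defined and built like any OCI container image. The base image path, registry and package choice below are illustrative assumptions, not taken from the article.

```shell
# Hypothetical Containerfile for a bootable RHEL 10 container (image mode).
# The base image reference is an assumption; adjust to your registry entitlement.
cat > Containerfile <<'EOF'
FROM registry.redhat.io/rhel10/rhel-bootc:latest
RUN dnf -y install httpd && systemctl enable httpd
EOF

# Build and push like any other container image; hosts provisioned from it
# update by pulling the new image and rebooting into it.
podman build -t quay.io/example/rhel10-bootc:latest .
podman push quay.io/example/rhel10-bootc:latest
```

On a host already running in image mode, `bootc upgrade` fetches the new image and stages it for the next boot, which is what makes the update path largely interruption-free.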

In this context, the new "Security Select Add-On" could also be of interest: roughly speaking, customers can request patches for specific CVEs – even ones that Red Hat would not fix as part of normal support.

Aspect number two concerns preparing for the future; the key word is post-quantum cryptography. The current assumption is that the known asymmetric encryption methods will no longer be secure enough by 2029, and that five years later quantum computers should be able to crack them completely. The US National Institute of Standards and Technology (NIST) has already published the first encryption standards that address this problem.

RHEL 10 includes a family of these algorithms for OpenSSL, GnuTLS, NSS and OpenSSH. SSH support is not yet complete: it already works for OpenSSH connections, but is not yet part of libssh. To use these algorithms system-wide, install the crypto-policies-pq-preview and crypto-policies-scripts packages and then activate them via "update-crypto-policies --set DEFAULT:TEST-PQ".
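Put together, the steps above look like this on a RHEL 10 system; the package names and policy string are as stated by Red Hat, while the verification step is a general crypto-policies convention (running services need a restart to pick up the new policy):

```shell
# Install the post-quantum preview policy modules.
dnf install -y crypto-policies-pq-preview crypto-policies-scripts

# Activate the PQ test policy on top of DEFAULT, system-wide.
update-crypto-policies --set DEFAULT:TEST-PQ

# Verify which policy is active.
update-crypto-policies --show
```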

The last new aspect of RHEL 10 comes from the area of AI. Last year, Red Hat showed the integration of Lightspeed into its Ansible product; now RHEL 10 follows suit. Red Hat Enterprise Linux Lightspeed gives the admin access to generative AI: instead of reading help pages or knowledge base articles, the admin can query knowledge, instructions, recommendations and experiences from Red Hat and its customers, à la ChatGPT.

Finally, the expanded hardware support is worth mentioning. RHEL 10 will be available as a developer preview for RISC-V in summer 2025. Red Hat has teamed up with SiFive for this; its HiFive Premier P550 is the reference platform.


Regarding AI, there is also a new member of the Red Hat product family – the Inference Server. For those familiar with the scene, this is no surprise: in January 2025, Red Hat completed the acquisition of Neural Magic, the company known as the driving force behind the vLLM project. vLLM – short for Virtual Large Language Model – began in the Sky Computing Lab at the University of California, Berkeley. It is an open-source library that has become particularly well known in the field of inference for generative AI, and it is particularly fast thanks to its clever use of GPUs.

The Red Hat AI Inference Server is the enterprise version of vLLM. The models are hosted on Hugging Face under Red Hat AI. The Inference Server can be operated under RHEL AI as well as with OpenShift AI. Interestingly, Red Hat even supports the product on non-RHEL and non-OpenShift environments. In this way, the company wants to live up to its own claim in the field of AI: any model, any accelerator, any cloud – i.e. free choice of models, accelerators and infrastructures or platforms.
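Since vLLM exposes an OpenAI-compatible HTTP API, interacting with a vLLM-based server could look like the following sketch. The model name, host and port are assumptions for illustration, not details from the article:

```shell
# Serve a model locally via vLLM (model name is illustrative).
vllm serve RedHatAI/granite-3.1-8b-instruct --port 8000 &

# Query the OpenAI-compatible chat completions endpoint.
curl http://localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "RedHatAI/granite-3.1-8b-instruct",
        "messages": [{"role": "user", "content": "How do I check the SELinux status?"}]
      }'
```

The same OpenAI-compatible interface is what lets existing client tooling talk to the server regardless of which model or accelerator sits behind it.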

Speaking of LLMs: the "L" – for "large" – makes it difficult to use existing computing capacity effectively at the first attempt. Red Hat believes that AI will only be partially successful without a simple option for scaling – and that scaling must also be affordable and effective. Together with other major players in the AI environment, Red Hat has therefore launched the llm-d project, where the "d" stands for distributed. The idea is to make vLLM available in a highly distributed manner. Formally, the project is a step towards better managing and channeling the various tasks and making it easier for others to contribute.

The technical groundwork is already well advanced. The foundation is Kubernetes for all components, which means simple installation and scaling are anchored in the basis of llm-d. The entry point for the user is the inference gateway, behind which sit an inference scheduler and an inference pool. There is much more to discover in the project documentation. In addition to Red Hat, the founders of llm-d include Google Cloud, IBM Research and Nvidia; AMD, Cisco, Intel, Lambda and Mistral AI are also involved.



This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.