Proxmox VE 9.1 Uses OCI Images for LXC Containers

The new version 9.1 of Proxmox Virtual Environment creates LXC containers from OCI images, stores vTPM state in qcow2 format, and improves SDN monitoring.

Screenshot of Proxmox VE 9.1
By Michael Plura

The Vienna-based Proxmox Server Solutions GmbH has released Proxmox Virtual Environment (VE) 9.1. It is based on Debian GNU/Linux 13.2 "Trixie" with updated packages and bug fixes. Instead of Debian's standard kernel, Proxmox VE 9.1 ships a customized Linux kernel 6.17.2-1. QEMU, slightly updated to version 10.1.2, handles the emulation and virtualization of virtual machines (VMs).

LXC, which provides Linux containers, is now included in version 6.0.4. OpenZFS 2.3.4 is responsible for mass storage access, and Ceph 19.2.3 "Squid" provides distributed storage.

Open Container Initiative (OCI) images can now be used directly as templates for LXC containers. Both full system containers and lean application containers, for example for microservices, are supported. This allows containers to be integrated quickly and easily into Proxmox VE 9.1 via the vendor-neutral OCI standard.

Reportedly, this also resolves problems with Docker containers imported into Proxmox VE LXC containers. The entire OCI import feature for LXC containers is still a tech preview and should therefore be approached with some caution (see also the thread on Proxmox VE 9.1 in the Proxmox forum).
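For command-line use, the following Python sketch outlines one conceivable workflow: pull an OCI image into the node's template cache with skopeo and create an application container from it with pct create. Because the feature is a tech preview, the assumption that an OCI archive in the template cache is accepted as an ostemplate, as well as the image name, VMID, and network settings, are illustrative only.

```python
#!/usr/bin/env python3
"""Sketch: create an LXC container on a Proxmox VE 9.1 node from an OCI image.

Assumption (the feature is a tech preview): an OCI archive placed in the
node's template cache can be passed to `pct create` like a regular template.
Image, VMID, storage and network values below are examples.
"""
import subprocess

IMAGE = "docker://docker.io/library/nginx:latest"  # upstream OCI image
ARCHIVE = "/var/lib/vz/template/cache/nginx.tar"   # template cache of storage "local"
VMID = "200"                                       # example container ID

# Pull the image as an OCI archive (skopeo must be installed on the node).
subprocess.run(["skopeo", "copy", IMAGE, f"oci-archive:{ARCHIVE}"], check=True)

# Create an application container from the archive; the options mirror a
# normal template-based `pct create` call.
subprocess.run([
    "pct", "create", VMID, "local:vztmpl/nginx.tar",
    "--hostname", "nginx-oci",
    "--memory", "512",
    "--net0", "name=eth0,bridge=vmbr0,ip=dhcp",
    "--storage", "local-lvm",
    "--unprivileged", "1",
], check=True)

print(f"Container {VMID} created; start it with: pct start {VMID}")
```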

With Proxmox VE 9.1, virtual machines that require a TPM, such as Windows, can be stored completely, including the current TPM state, in qcow2 format across different storage types. Proxmox VE 9.1 also offers finer control over nested virtualization: a new vCPU flag selectively enables the feature for individual VMs. This gives administrators more flexibility and control without necessarily having to assign the full host CPU type to the guest.
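A minimal sketch of the corresponding per-VM configuration via the qm CLI, again in Python, is shown below. The tpmstate0 option itself already exists; storing that state in qcow2 format is what 9.1 adds. The release announcement does not name the new nested-virtualization vCPU flag, so the "+vmx" flag used here is a placeholder, not the documented syntax.

```python
#!/usr/bin/env python3
"""Sketch: per-VM settings related to the 9.1 changes, applied with qm.

The `--tpmstate0` option is long-standing; the qcow2 backing is the new part.
The CPU flag "+vmx" is a placeholder for the new nested-virtualization flag,
whose real name should be taken from the Proxmox VE 9.1 documentation.
"""
import subprocess

VMID = "101"  # example Windows VM

# Attach a v2.0 vTPM state volume; with 9.1 this state can be kept in qcow2
# format across different storage types.
subprocess.run(["qm", "set", VMID,
                "--tpmstate0", "local:1,version=v2.0"], check=True)

# Enable nested virtualization for this guest only, instead of passing the
# full host CPU type through ("+vmx" is a placeholder flag name).
subprocess.run(["qm", "set", VMID,
                "--cpu", "x86-64-v2-AES,flags=+vmx"], check=True)
```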


The GUI now offers more detailed status reporting for the SDN stack, including currently connected guests on local bridges and VNets, as well as IP and MAC addresses learned in EVPN zones. Additionally, fabrics are now visible in the resource tree, displaying important network information such as routes, neighbors, and interfaces.

Proxmox VE 9.1 offers initial integration of Intel TDX (Trust Domain Extensions) to isolate guest memory from the host on suitable Intel CPUs. This also requires guest support and currently neither works with Windows nor supports live migration. With Intel TDX and AMD SEV, Proxmox VE 9.1 thus provides basic support for the most important vendor-specific confidential computing technologies.
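As a rough illustration, the sketch below enables AMD SEV for a guest via qm, using the amd-sev option available in recent releases; the equivalent call for the new Intel TDX integration is only hinted at as a hypothetical comment, since the announcement does not spell out the option name.

```python
#!/usr/bin/env python3
"""Sketch: confidential-computing options per VM via qm.

`--amd-sev` exists in current Proxmox VE releases; the Intel TDX line is a
hypothetical placeholder until the 9.1 documentation confirms the syntax.
"""
import subprocess

# AMD SEV: encrypt guest memory on EPYC hosts with SEV enabled in firmware.
subprocess.run(["qm", "set", "102", "--amd-sev", "type=std"], check=True)

# Intel TDX (hypothetical option name; guest support required, no Windows,
# no live migration yet according to the release notes):
# subprocess.run(["qm", "set", "103", "--intel-tdx", "1"], check=True)
```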


Bulk actions can now be performed not only on individual nodes but also at the datacenter level, allowing many virtual guests to be started, shut down, suspended, or migrated centrally. These new datacenter-wide actions are accessible via a right-click on the datacenter in the resource tree.

All improvements and new features, as well as potential issues when upgrading from Proxmox VE 9.0 to 9.1, are described in detail in the Proxmox roadmap. Proxmox VE 9.1 is open-source software, available for download immediately, and can be used free of charge. Access to the Enterprise repository starts at 115 euros (net) per year; professional support costs between 355 and 1,060 euros (net) per CPU socket per year.

(mho)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.