Linux: Criticism, reasons and consequences of the CVE flood in the kernel

Around 55 new kernel CVEs per week pose problems even for the big names in the Linux industry and call for more collaboration and new tools.

(Image: laptop showing a CVE list with several "Linux" entries and a penguin sitting in front of it; created with AI in Bing Image Creator by heise online / dmk)

By Thorsten Leemhuis

Administrators and developers of all kinds have been complaining loudly for several months now: whereas for years they only had to react to four to six security vulnerabilities per week in the Linux kernel, since February there have been more than ten times as many. In recent talks and discussions on the flood of CVE assignments, prominent kernel developers have emphasized that the new situation is actually a good thing. They also suggested that distributors complaining about it are effectively admitting they had not taken kernel security seriously before, because they ignored important fixes that were not explicitly labeled as security fixes.

Some larger companies have apparently taken this to heart and are considering various efforts to make the new situation more bearable for themselves and their customers. This should eventually benefit hobby admins as well. However, since it requires a huge amount of work on several fronts, it could take a while, as a closer look reveals.

The developers of the Linux kernel themselves brought about this change. For decades they cared little about labels such as CVE (Common Vulnerabilities and Exposures); time and again they even concealed CVE IDs or removed them from patch descriptions to disguise the security aspect of changes. But because external parties were increasingly obtaining CVEs for questionable kernel vulnerabilities, the developers felt forced to become a CNA (CVE Numbering Authority) at the beginning of the year in order to control the allocation of CVEs for Linux themselves, as other open source projects had done before them.


Since February, the responsible Linux developers have assigned around 3375 CVE identifiers, including all kinds of vulnerabilities that had already been patched in previous years, owing to the rules that applied when the CNA was founded. Following objections, the responsible parties have since withdrawn around one hundred of these CVEs. However, many external observers, as well as quite a few developers of the official kernel, criticize that many of the remaining CVEs are also unjustified because no real vulnerability was patched.

Greg Kroah-Hartman, the main driving force behind the whole effort and the second most important kernel developer, always responds to these accusations in the same way: the guidelines for CVE allocation oblige him and the other responsible parties to assign a CVE identifier to any potential vulnerability they fix. He also points out that people use Linux in a wide variety of ways, which makes it very difficult to assess whether a seemingly harmless fix is actually security-relevant in certain areas of use.

For those interested in the details, the slides and video of a recent talk by Kroah-Hartman are worth a look; in it he explains these aspects in more detail and provides numerous other insights. For example, he notes that the kernel, with around 55 CVEs per week, is not even the front-runner: WordPress, for example, publishes over 110 per week. He also goes into the procedure: the CVEs are only assigned a few days or weeks after the corrections have been incorporated into new kernel versions intended for users.

The responsible parties currently only assign a CVE to a vulnerability if three developers involved in the public review process vote in favor. The actual authors of the changes and the maintainers of the respective kernel code are not involved; they are often not security experts anyway, and the maintainers are frequently already at or beyond the limit of their capacity, as Kroah-Hartman recently emphasized elsewhere. As he has for many years, he still advises users to simply run the latest version of the most recent stable or longterm kernel series in order to avoid known vulnerabilities.
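As a rough illustration of that advice, the following Python sketch compares the running kernel with the series that kernel.org currently lists as stable or longterm. It fetches the public https://www.kernel.org/releases.json feed; the field names used here ("releases", "moniker", "version", "iseol") are assumptions about that feed's layout and should be checked against the real data before relying on it.

```python
# Sketch: compare the running kernel with the stable/longterm releases
# listed by kernel.org. The releases.json field names are assumptions.
import json
import platform
import urllib.request

RELEASES_URL = "https://www.kernel.org/releases.json"  # public kernel.org feed


def fetch_releases(url: str = RELEASES_URL) -> list[dict]:
    """Download the release list published by kernel.org."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    # "releases" is assumed to hold one entry per maintained series.
    return data.get("releases", [])


def base_version(release: str) -> str:
    """Reduce something like '6.6.52-generic' to '6.6' for series matching."""
    numeric = release.split("-", 1)[0]
    return ".".join(numeric.split(".")[:2])


def main() -> None:
    running = platform.release()          # e.g. '6.6.30-foo'
    series = base_version(running)
    maintained = [
        r for r in fetch_releases()
        if r.get("moniker") in ("stable", "longterm") and not r.get("iseol")
    ]
    for rel in maintained:
        if base_version(rel.get("version", "")) == series:
            print(f"Running {running}; latest in this series is {rel['version']}")
            break
    else:
        print(f"Running {running}; series {series} is not in the current "
              "stable/longterm list - it may be EOL or a vendor kernel.")


if __name__ == "__main__":
    main()
```

Note that a lagging version string does not by itself prove anything: distribution kernels routinely backport fixes, which is precisely why simply counting versions or CVEs is of limited use.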

In one presentation slide, Kroah-Hartman also quotes Kees Cook, whose commitment over more than a decade has made the Linux kernel significantly more secure. According to Cook, users looking for the most secure kernel only have the choice between two approaches: either use the latest version of a current series, or examine each individual CVE entry more closely and apply the corrections for those vulnerabilities that are relevant to the kernel used in the specific area of application. Both involve a lot of work and inconvenience, for example when reboots are difficult or impossible; according to Cook, everyone has to determine for themselves which of the two approaches is easier in their respective environment.

Cook's comments come from an open discussion on the new kernel CVE situation that recently took place at the Linux Plumbers Conference. The panel included kernel developers from numerous large and small companies that use Linux internally or in products. The discussion touched on very different areas and left plenty of questions open. Nevertheless, a few things are already foreseeable.

One idea from developers at several companies already sounded quite concrete: defining a handful of threat models. Interested parties could then divide up the work of going through the CVEs along open source lines and determine for each one whether it is relevant to the respective threat model.

One such threat model could be "cloud providers": modern ARM64 and x86-64 server hardware with fully trusted users, but virtual machines running software that is not trusted at all. Companies such as Amazon/AWS, Google, IBM and Microsoft could then jointly determine which CVEs are relevant for this threat model instead of each doing so in-house. The results of these and similar analyses could then flow back into the JSON files that already centrally bundle all details on the CVEs.
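The following Python sketch hints at what such a shared triage could look like technically. It assumes a hypothetical local directory of kernel CVE records in the CVE JSON 5 format and that each record lists the affected source files (read here from an assumed "programFiles" field under containers.cna.affected); the path-based "cloud provider" filter is purely illustrative and not the actual workflow discussed at the conference.

```python
# Sketch: tag kernel CVE records against a simple "cloud provider" threat
# model based on which source paths a record touches. The record layout
# (containers.cna.affected[].programFiles) is an assumption about how the
# kernel CNA publishes its JSON; adjust to the real schema before use.
import json
from pathlib import Path

# Illustrative filter: paths a KVM-based cloud host plausibly cares about.
CLOUD_RELEVANT_PREFIXES = (
    "arch/x86/kvm/", "arch/arm64/kvm/", "virt/", "drivers/vhost/",
    "drivers/virtio/", "net/", "mm/", "kernel/",
)


def affected_files(record: dict) -> list[str]:
    """Collect source file paths from a CVE JSON 5 record (assumed layout)."""
    files: list[str] = []
    for entry in record.get("containers", {}).get("cna", {}).get("affected", []):
        files.extend(entry.get("programFiles", []))
    return files


def relevant_for_cloud(files: list[str]) -> bool:
    return any(f.startswith(CLOUD_RELEVANT_PREFIXES) for f in files)


def triage(directory: str) -> None:
    for path in sorted(Path(directory).glob("CVE-*.json")):
        record = json.loads(path.read_text())
        cve_id = record.get("cveMetadata", {}).get("cveId", path.stem)
        verdict = "review" if relevant_for_cloud(affected_files(record)) else "skip"
        print(f"{cve_id}: {verdict}")


if __name__ == "__main__":
    triage("cve-records")   # hypothetical local checkout of the CVE records
```

The point of the sketch is not the crude filter itself, but that machine-readable records make this kind of shared, automatable pre-sorting feasible at all; the final relevance call would still need human review.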

Other threat models mentioned included automotive and enterprise Linux. The latter, if it ever materializes, is probably better suited to Linux distributions that are clearly delimited in terms of hardware support and areas of application; in other words, those that Amazon/AWS, Google, Meta, Microsoft, Oracle, Red Hat or Suse offer or use in-house. Even for them, classifying the CVEs would be a huge effort. For kernels compiled with significantly more features and drivers, such as the Debian kernel, the effort would grow considerably again, possibly to the point where nobody would volunteer to do the necessary work or pay someone to do it.

A related aspect was only touched on in the discussion round but came up in conversations around it: the lack of open source tools that check whether the code affected by a CVE is even contained in the deployed kernel, either compiled in directly or in a module that could easily be removed or blocked locally. At least Google, Oracle and Suse have been using such tools internally for some time, as became known on the sidelines of the conference. It is probably only a matter of time before someone publishes such a tool under an open source license so that everyone can pull in the same direction in the future.
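A very rough sketch of what such a check could do is shown below: given the source files a fix touches, it guesses the corresponding object names and looks them up in the running kernel's modules.builtin list and /proc/modules. This file-name heuristic ignores config options, renamed objects and multi-file modules, so it is at best a starting point and certainly not how the internal tools mentioned above work.

```python
# Sketch: rough check whether code touched by a CVE fix is even present in
# the running kernel, either built in or as a currently loaded module.
# Heuristic only (source file stem == object name); it misses many cases.
import platform
from pathlib import Path


def builtin_objects() -> set[str]:
    """Object names compiled directly into the running kernel."""
    path = Path("/lib/modules") / platform.release() / "modules.builtin"
    names: set[str] = set()
    if path.exists():
        for line in path.read_text().splitlines():
            # e.g. 'kernel/fs/ext4/ext4.ko' -> 'ext4'
            names.add(Path(line.strip()).stem)
    return names


def loaded_modules() -> set[str]:
    """Currently loaded modules according to /proc/modules."""
    proc = Path("/proc/modules")
    if not proc.exists():
        return set()
    return {line.split()[0] for line in proc.read_text().splitlines() if line.strip()}


def check(affected_sources: list[str]) -> None:
    builtin = builtin_objects()
    loaded = loaded_modules()
    for src in affected_sources:
        name = Path(src).stem                  # 'drivers/net/tun.c' -> 'tun'
        if name in builtin:
            print(f"{src}: built into this kernel - fix likely relevant")
        elif name in loaded:
            print(f"{src}: loaded as module '{name}' - fix likely relevant")
        else:
            print(f"{src}: not found built-in or loaded - possibly not deployed")


if __name__ == "__main__":
    # Hypothetical example paths; in practice they would come from the CVE record.
    check(["drivers/net/tun.c", "fs/ext4/inode.c", "drivers/gpu/drm/nouveau/nouveau_bo.c"])
```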

The topic of live patching was only touched on, as there are still unanswered questions there too. It might in theory be conceivable to apply all, or at least a large chunk, of the CVE fixes at runtime, but this would be hard to pull off in practice: creating and verifying such complex live patches is too labor-intensive and time-consuming. In the end, it would probably be possible to postpone reboots for a few weeks, but not for many months.

One participant in the panel discussion, however, pointed out that the many CVEs are a huge problem in areas where updates are difficult and expensive due to certification requirements, for example when Linux is used in hospitals. Kroah-Hartman explained that legislators in the US and EU have recognized the problem and are working on solutions, but that this will take time.

A recording of the discussion can be found in the conference's live stream; however, a few minutes of audio are missing shortly after the start.

(anw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.