HashiCorp's vision of AI in the infrastructure-as-code world
With Infragraph, HashiCorp has introduced a new project for a comprehensive, continuously updated inventory of IT infrastructure, which is also intended to provide AI training data.
(Image: jam/heise Medien)
- Dr. Udo Seidel
On the second day of HashiConf 2025, HashiCorp CTO Armon Dadgar presented the previously internal Infragraph project to the public. Technically, it is a complete inventory of a company's own infrastructure, from machines and operating systems down to individual packages. The software obtains the relevant information from the various data sources through so-called connectors that use the available interfaces. Infragraph can be thought of as a complete, self-updating CMDB (Configuration Management Database).
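HashiCorp has not published a connector API yet, so the following is a purely hypothetical sketch of the idea: several connectors each pull records from one data source, and their results are merged into a single, id-keyed inventory. All class and function names here are invented for illustration and are not part of Infragraph.

```python
from dataclasses import dataclass
from typing import Iterable, Protocol


@dataclass(frozen=True)
class Resource:
    """One inventory record: a machine, an OS, a package, a cloud resource."""
    id: str
    kind: str
    attributes: dict


class Connector(Protocol):
    """Anything that can pull resources from one data source."""
    def fetch(self) -> Iterable[Resource]: ...


class StaticConnector:
    """Toy connector that simply returns a fixed list of resources."""
    def __init__(self, resources: list[Resource]) -> None:
        self._resources = resources

    def fetch(self) -> Iterable[Resource]:
        return list(self._resources)


def build_inventory(connectors: Iterable[Connector]) -> dict[str, Resource]:
    """Merge all connector results into one inventory keyed by resource id.

    Re-running this periodically is what would make the CMDB 'self-updating'.
    """
    inventory: dict[str, Resource] = {}
    for connector in connectors:
        for resource in connector.fetch():
            inventory[resource.id] = resource  # later sources overwrite earlier ones
    return inventory
```

In a real system, each connector would wrap the API of a concrete source (a cloud provider, Ansible, watsonx, and so on); the merge step is where conflicting or stale records from different sources would have to be reconciled.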
Infragraph is still the project's internal code name. Interested parties can apply for the non-public beta program.
Possible data sources include the various cloud providers, of course, the HCP (HashiCorp Cloud Platform) product range, Red Hat Ansible and OpenShift, as well as IBM watsonx Orchestrate, Concert, Turbonomic, and Cloudability. This last group gives the project particular weight, as it brings the various members of the IBM family into play.
Data source for AI
Why is HashiCorp now entering the CMDB and inventory field? The short answer: artificial intelligence. More precisely, it is HashiCorp's vision of the role AI should play for infrastructure from Day 0 to Day N. The keyword is "autonomous operations", powered by artificial intelligence. This can only work, however, if the AI is trained on the right data and that data is kept constantly up to date. Project Infragraph is a fundamental building block for this; others are the MCP (Model Context Protocol) servers for Terraform, Vault, and Vault Radar, which are already available in public beta.
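As an illustration of how such an MCP server is consumed, an MCP-capable client is typically pointed at it with a JSON configuration along the following lines. This is a sketch based on the common MCP client convention of launching servers as local processes; the exact image name and options for HashiCorp's Terraform MCP server may differ.

```json
{
  "mcpServers": {
    "terraform": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "hashicorp/terraform-mcp-server"]
    }
  }
}
```

The client then starts the server on demand and talks to it over stdio, giving an AI agent structured access to Terraform registry and workspace information instead of free-text scraping.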
However, Armon Dadgar placed all of this in a broader context: HashiCorp's vision of AI, especially the agent-based variant. First, there are the corresponding applications; this is where the aforementioned MCP servers from HashiCorp come in. Second is the infrastructure itself, i.e., the various gateways from Solo.io, Microsoft, Docker, and others. These two components are already up and running.
The situation is different for the third component: the specific data of the respective environment. Here, the seemingly eternal problems apply: fragmentation, missing or outdated information, scaling challenges, and a lack of communication on both the technical and the human level. Often enough, this is compounded by insufficient visualization of the connections and dependencies between the various infrastructure and platform components.
Without reliable and up-to-date data, however, AI cannot be trained correctly and efficiently for one's own IT environment. This is where Project Infragraph comes in: it aims to provide the necessary data, including a consumable visualization of the various dependencies. That is very ambitious, but HashiCorp is starting with the aforementioned non-public beta. In addition, integration with other products from the IBM family is also visible outside the AI context. Substantial news can be expected at the latest at HashiConf 2026, taking place from October 26 to 28 in Atlanta.
(kbe)