Framework for AI agents: MCP Gateway links AI with data sources and apps

The MCP Gateway implements a protocol for the interactions of AI agents with data sources and tools. Meanwhile, Solo.io CEO Idit Levine is planning AI agents that build complex software from voice input.


(Image: Solo.io)

By Udo Seidel

At KubeCon EU in London, the US software company Solo.io announced the MCP Gateway. It is designed to implement the open-source Model Context Protocol and connect AI models in cloud environments with data sources and external tools; AI agents can then be built on top of this interaction. Behind the MCP Gateway is the kgateway project, a Kubernetes-native implementation of an API gateway that acts as a kind of ingress controller for the container orchestration platform.
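
To illustrate the kind of integration the gateway mediates, the following is a minimal sketch of an MCP tool server written with the official Model Context Protocol Python SDK. The server name, the tool, and its stubbed logic are illustrative assumptions; the MCP Gateway and kgateway configuration themselves are not shown.

# Minimal MCP tool server using the official Model Context Protocol Python SDK.
# The tool name and the log-fetching logic are illustrative placeholders; in a
# real deployment the MCP Gateway would sit between servers like this and the model.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("cluster-tools")

@mcp.tool()
def get_pod_logs(namespace: str, pod: str) -> str:
    """Return recent log lines for a pod (placeholder implementation)."""
    # A real tool would query the Kubernetes API here; this returns a stub instead.
    return f"logs for {pod} in {namespace} (stub)"

if __name__ == "__main__":
    # Serve the tool over stdio so an MCP-capable client or gateway can call it.
    mcp.run()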

The MCP Gateway is also under the auspices of the Cloud Native Computing Foundation (CNCF), which has seen increasing demand for certifications from its cloud-native courses. Solo.io also handed over kagent, its open-source framework for AI agents, to the CNCF. It is designed to facilitate the use of autonomous AI agents for automating work processes and consists of three components.

First, the framework comprises the agents themselves, which analyze data, make decisions, and execute or initiate tasks. Built-in tools form the second component; they display log files, pods, or metrics, for example, and must follow the MCP standard. The third component is a programming interface for building and launching AI agents, based on Microsoft's AutoGen framework, which is also open source.
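
Because kagent's programming interface builds on AutoGen, the following sketch shows what a simple agent pair defined directly with AutoGen's classic Python API can look like. It is not kagent's own interface; the model configuration, names, and task message are assumptions for illustration.

# A minimal two-agent setup with Microsoft's open-source AutoGen framework
# (classic pyautogen 0.2-style API). The model name and API key handling are
# assumptions; kagent layers its own Kubernetes-oriented interface on top.
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_API_KEY"}]}

# The assistant agent analyzes the request and proposes actions or code.
assistant = AssistantAgent(name="assistant", llm_config=llm_config)

# The user proxy forwards the task and can execute generated code locally.
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "scratch", "use_docker": False},
)

# Kick off the conversation with a concrete task for the assistant.
user_proxy.initiate_chat(
    assistant,
    message="Summarize the error rate from the attached pod metrics.",
)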

As Solo.io founder and CEO Idit Levine explained in an interview with iX, kagent and the MCP Gateway are just the first steps toward a larger goal. In the future, she wants users to describe in natural language the tasks a piece of software should perform and an AI agent to turn that description into a working program. This would still involve familiar tools such as a development environment, version control, and a compiler, but the AI agent would have to understand the original language input, ask clarifying questions if necessary, and convert it into code.


The AI agent would also have to select the appropriate programming language for the software and carry out the necessary functional testing and quality assurance, tasks that until now have been performed by humans. AI should soon be able to handle them almost completely. It will still take some time before artificial intelligence can create complex software from voice input, but Levine announced that Solo.io will release further software in the coming months to move closer to this goal.

(wpl)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.