GenAI in Business: Leveraging the Existing .NET Foundation
Generative AI with .NET, from SDKs and streaming to tools and agents: an overview of OpenAI, Azure, and the new Microsoft Agent Framework.
- Rainer Stropek
Generative AI delivers real value not in isolated experiments and prototypes, but when existing software products, platforms, and business processes are deliberately enhanced with GenAI.
In many companies, C# and .NET form the foundation of ERP-adjacent systems, custom applications, and standard products. Around this platform, extensive expertise, stable build and deployment pipelines, and proven operational and security concepts exist. These investments are long-term and cannot be easily replaced.
Instead of completely redeveloping systems for GenAI, existing applications should be made more intelligent through assistance functions, context-aware support, semi-automated workflows, or Copilot-like extensions. For this, Large Language Models (LLMs) must be seamlessly integrable into existing architectures.
Context is Key: GenAI as an Experiment or in Production
This is where the difference between an experiment and a production system becomes apparent. A proof of concept can be quickly implemented in Python. For production systems, however, different criteria apply: integration into existing authentication mechanisms, access to internal services and databases, reuse of existing libraries, clean logging, reproducible builds, and clearly regulated operations. Each additional language and each new toolchain increases complexity, and with it effort and risk.
From this perspective, .NET is highly relevant for GenAI applications. Not because .NET is fundamentally better suited for AI, but because it significantly simplifies the transition from experiment to production application. Existing teams can work with familiar tools, existing infrastructure remains usable, and governance and security requirements can be more easily met.
At the same time, .NET is rarely the first choice for AI-related research, model training, or early exploration. In these areas, Python and increasingly TypeScript dominate. New frameworks and APIs usually appear there first.
However, for production GenAI systems in companies, .NET is often the obvious platform, provided suitable SDKs are available. No one wants to work at the protocol level with HTTP requests, JSON parsing, and asynchronous event streams just to connect an LLM.
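To illustrate the friction a good SDK removes, the following sketch shows what a single chat request to the OpenAI REST API looks like at the protocol level, without any SDK. The model name is only an example, and the API key is read from an environment variable:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

using HttpClient http = new();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
    "Bearer", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

// Hand-built JSON request body for the Chat Completions endpoint
var payload = JsonSerializer.Serialize(new
{
    model = "gpt-4o-mini", // example model name
    messages = new[] { new { role = "user", content = "Say hello." } }
});

HttpResponseMessage response = await http.PostAsync(
    "https://api.openai.com/v1/chat/completions",
    new StringContent(payload, Encoding.UTF8, "application/json"));

// Manually digging the answer out of the JSON response
using JsonDocument doc = JsonDocument.Parse(
    await response.Content.ReadAsStringAsync());
string answer = doc.RootElement
    .GetProperty("choices")[0]
    .GetProperty("message")
    .GetProperty("content")
    .GetString()!;
Console.WriteLine(answer);
```

Error handling, retries, token streaming via server-sent events, and tool calls would all have to be built on top of this by hand, which is exactly the work an SDK takes over.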
Well-maintained .NET SDKs are therefore not a luxury but an enabler. They determine whether existing development teams can integrate GenAI quickly, securely, and maintainably, or whether unnecessary friction arises. How well the current SDK landscape meets this demand today is shown by looking at the available providers and abstractions.
The SDK Landscape for Large Language Models from a .NET Perspective
Anyone wanting to use an LLM with .NET today faces a much more diverse SDK landscape than a year ago. However, not all available SDKs are equivalent, and not every one is suitable for production applications.
The following figure illustrates the relationship between base SDKs, LLM proxies, the agent layer, the presentation layer, and MCP. The sections below cover these building blocks in more detail and examine how well the SDK landscape is positioned from a .NET perspective.
For a long time, official SDKs for Large Language Models were almost exclusively available for Python and TypeScript. This is starting to change. Several major model providers now offer .NET SDKs that are suitable for production applications.
The first serious C# SDKs for the OpenAI API did not originate from OpenAI itself but from Microsoft, as part of the close strategic partnership between the two companies. Those early packages still had weaknesses, however. After a transition period, OpenAI took over responsibility: today, the official C# SDK is maintained directly by OpenAI and released as a regular NuGet package. Microsoft additionally provides a companion package that simplifies authentication, configuration, and integration with OpenAI models in Azure AI Foundry, among other things, but it is optional. The core functionality for accessing OpenAI models now clearly lies in the official OpenAI SDK itself.
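A minimal sketch of the official OpenAI .NET SDK (the `OpenAI` NuGet package) shows how little ceremony a chat call needs; the model name is an example, and the API key comes from an environment variable:

```csharp
using OpenAI.Chat;

// ChatClient comes from the official OpenAI NuGet package.
ChatClient client = new(
    model: "gpt-4o-mini", // example model name
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

// A plain string is implicitly converted to a user message.
ChatCompletion completion = client.CompleteChat("Say hello in one sentence.");
Console.WriteLine(completion.Content[0].Text);
```

Compared with the raw HTTP approach, serialization, authentication headers, and response parsing are handled by the SDK, and streaming and tool calls are available through the same client.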
Anthropic provides an official C# SDK, which is currently still marked as beta but is already well usable. With its GenAI SDK, Google offers an official .NET integration for Gemini models, for both Google AI and Vertex AI. Official .NET SDKs maintained by the manufacturer also exist today for other LLM providers.
However, the OpenAI SDK has the greatest maturity and adoption. Its associated GitHub repository has several thousand stars, and the NuGet package has nearly 35 million downloads.
Community SDKs and Code Generation: tryAGI as a Pragmatic Approach
In addition to official SDKs, there are various community and open-source SDKs. The tryAGI project plays a special role here: it systematically generates C# SDKs for a variety of LLM providers based on their OpenAPI specifications.
This way, SDKs are available for many models that have no official .NET libraries, including providers like Mistral and platforms like Ollama. The approach is pragmatic: the SDKs are consistently structured, quickly available, and generally cover the basic API functions reliably. They have limitations, however: generated SDKs follow the respective API specification quite strictly, with little or no manual refinement, so advanced API features are sometimes awkward to use or missing entirely. Long-term maintenance also depends not on the model provider but on community engagement.
For simple use cases, such SDKs are often perfectly adequate. However, for more complex, interactive, or long-term operational systems, one should carefully check whether a community SDK meets one's own requirements.
Proxies and Compatibility Layers: Abstraction Instead of Vendor Lock-in
Another category includes proxies and compatibility layers that attempt to bundle different model providers behind a unified API. A prominent example is LiteLLM, which provides an OpenAI-compatible API and forwards requests to various models.
For .NET developers, this approach can be attractive because they can use OpenAI SDKs. Often, it is sufficient to change the base URL and configure different model names. The same applies to locally operated models via Ollama, which also provides OpenAI-compatible endpoints.
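A brief sketch of this configuration swap, here against Ollama's default local endpoint (the URL, model name, and placeholder key are illustrative; a LiteLLM proxy would be addressed the same way):

```csharp
using System.ClientModel;
using OpenAI;
using OpenAI.Chat;

// Point the official OpenAI SDK at an OpenAI-compatible endpoint
// (here: Ollama's default local address).
OpenAIClientOptions options = new()
{
    Endpoint = new Uri("http://localhost:11434/v1")
};

// Ollama ignores the API key, but the SDK requires a credential.
ChatClient client = new("llama3.1", new ApiKeyCredential("unused"), options);

ChatCompletion completion = client.CompleteChat("Why is the sky blue?");
Console.WriteLine(completion.Content[0].Text);
```

The application code stays identical; only the endpoint, credential, and model name change, which is exactly what makes this pattern attractive for switching providers.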
This abstraction makes switching between models and providers easier, but it comes at a price. Compatibility layers typically settle on the lowest common denominator of the supported APIs, so newer or provider-specific features are often only partially supported or missing entirely, and noticeable limitations can appear with newer API variants or advanced functions.