Software as a tool, not an end in itself: A plea for more domain expertise
Too many teams discuss technologies instead of problems. It's time to refocus on domain expertise.
- Golo Roden
We don't develop software because it's beautiful. We develop it to solve domain problems. This distinction sounds trivial, but it gets lost in the daily work of many teams. For years, I've observed how discussions in development teams unfold: it's about frameworks, architectural styles, the latest technology. It's about whether to opt for microservices or a monolith, whether to use this or that tool, whether code coverage is high enough. What is surprisingly rarely discussed: the domain problem that the software is actually supposed to solve.
This is not by chance or negligence. It's a structural issue in our industry. We've become so accustomed to technical debates that we no longer notice how far we've drifted from the actual task. This article is an attempt to make that visible – and a plea to refocus on what's essential.
The Technology Tunnel Vision
Why do we prefer talking about technology over domain expertise? The answer is uncomfortable but understandable: technology is tangible. It can be compared, measured, evaluated. You can read benchmarks, study documentation, work through tutorials. All of this happens in a world that developers know and control.
Domain issues are different. They are often vaguely formulated, contradictory, and require conversations with people who speak a different language – not in a linguistic sense, but in terms of mindsets and priorities. A domain expert isn't interested in whether the system runs on Kubernetes. They want to know if it makes their work easier. These are different worlds, and building a bridge between them is exhausting.
Furthermore, there's a structural problem: the industry rewards technical expertise more than domain knowledge. Those who are familiar with the latest technology are considered competent. Those who understand the domain of an insurance or logistics company are rarely invited to conferences. This shapes where developers focus their energy. And so, teams emerge that are technically up-to-date but don't truly understand the problem they are solving.
Architectural Styles as Matters of Faith
Nowhere is the technology tunnel vision more apparent than in the debate around architectural styles. Take the discussion about microservices versus monolith. In the 2010s, it was almost a natural law that microservices were the better choice. Anyone who built a monolith had to justify themselves. Those who adopted microservices were considered modern.
But what is actually the domain-based justification for microservices? At its core, it's about being able to develop, deploy, and scale parts of a system independently. This makes sense when different teams work on different parts, when these parts have different lifecycles, when they are used with different intensity. It makes less sense when a small team builds a manageable system in which everything is closely interconnected.
Nevertheless, teams opt for microservices without asking these questions. The decision is not based on the domain but on what is currently considered best practice. The result is distributed systems with all their complexity (network communication, eventual consistency, debugging across service boundaries, etc.), without this complexity being justified by a domain benefit.
The architecture should follow the domain, not the other way around. If the domain problem suggests a clear separation of responsibilities, microservices can be the right answer. If the issue is manageable and the parts are closely related, a well-structured monolith is often the better choice. But this deliberation happens too rarely. Instead, the architectural style becomes a matter of faith, detached from the reality of the problem.
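What this deliberation can look like in practice: the following is a minimal sketch (module names and rules invented for illustration) of a monolith whose internal structure follows the domain. Billing and shipping live in one deployable unit but talk only through narrow interfaces, so extracting a service later, should the domain ever demand it, remains a refactoring rather than a rewrite.

```python
# A hypothetical modular monolith: one deployable unit, with internal
# boundaries that follow the domain. All names are invented.

class BillingModule:
    """Owns everything about invoicing; no other module touches its data."""

    def invoice_total_for(self, order_id: str) -> int:
        # In a real system this would read from billing's own storage.
        return 4200  # total in cents


class ShippingModule:
    """Owns shipment handling; depends on billing only via its interface."""

    def __init__(self, billing: BillingModule) -> None:
        self._billing = billing

    def release_shipment(self, order_id: str) -> str:
        # Cross-module calls go through the explicit interface, never
        # through shared tables or internal classes. If shipping ever
        # becomes its own service, this call turns into a network call;
        # the boundary already exists.
        if self._billing.invoice_total_for(order_id) > 0:
            return f"shipment for {order_id} released"
        return f"shipment for {order_id} on hold"


if __name__ == "__main__":
    shipping = ShippingModule(BillingModule())
    print(shipping.release_shipment("order-17"))
```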
Principles Without Context
It's similar with design principles. DRY, SOLID, Clean Code: every developer knows these terms. They are taught in books, demanded in code reviews, asked about in job interviews. But they are typically treated as if they were universal laws that apply always and everywhere.
Take DRY (Don't Repeat Yourself). The principle states that every piece of information in the system should be represented only once. That sounds plausible. Duplication leads to inconsistencies, makes changes difficult, increases the risk of errors. So much for the theory.
In practice, the dogmatic application of DRY often leads to another problem: incorrect abstractions. Two parts of the code look similar, so they are combined. But all too often, they only look similar by chance; from a domain perspective, they have nothing to do with each other. Now they are coupled: if one part needs to change, the shared abstraction must be adjusted, which also affects the other part. The supposed simplification becomes a complexity trap.
Whether duplication is acceptable cannot be answered without domain context. If two pieces of code represent the same domain concept, they should be merged. If they represent different concepts that are coincidentally implemented identically, they should remain separate. However, this distinction requires an understanding of the domain, and that is often precisely what's missing.
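A contrived sketch of that distinction (the domain rules are invented): two calculations that happen to be textually identical today, yet belong to different domain concepts and will therefore change for different reasons.

```python
# Two rules that look identical today: both deduct 10 percent.
# Merging them into one generic helper would satisfy DRY, but the
# rules belong to different domain concepts. (Rules invented for
# illustration.)

def loyalty_discount(order_total: float) -> float:
    """Marketing's rule: loyal customers get 10 percent off."""
    return order_total * 0.90


def damaged_goods_deduction(order_total: float) -> float:
    """Warehouse's rule: slightly damaged goods are reduced by 10 percent."""
    return order_total * 0.90


# When marketing later switches to a tiered discount model, only
# loyalty_discount changes. Had both rules been folded into one shared
# abstraction, that change would have forced a flag, an extra
# parameter, or a painful split: the complexity trap described above.
```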
The same applies to SOLID, to Clean Code rules, to any principle. They are heuristics, not laws. Their usefulness depends on the context. A class with more than 200 lines is not automatically bad. A method with three parameters is not automatically better than one with five. It depends on what the class or method does, what domain concept it represents, how it is used. Those who apply principles without context optimize for technical metrics instead of domain clarity.
The Wrong Measure of Quality
Speaking of metrics: the technology tunnel vision is also evident in quality measurement. Test coverage is the most prominent example. High coverage is considered a sign of good quality. Teams set goals: 80 percent, 90 percent, sometimes 100 percent. Tools visualize coverage, dashboards show trends, code reviews demand tests for every new line.
But what does test coverage actually measure? It measures how much code is executed by tests. It doesn't measure whether the right things are being tested. It doesn't measure whether the tests cover meaningful scenarios. And it certainly doesn't measure whether the software solves the domain problem.
It's possible to achieve 100 percent coverage and still have software that misses the mark. The tests verify that the code does what it does, but no one has ever verified whether what it does is the right thing. The requirements were misunderstood, the domain was never grasped in depth, the conversations with domain experts did not take place. The code is technically flawless and, from a domain perspective, worthless.
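To make this concrete, here is a deliberately contrived sketch (rule and numbers invented): a function with full line and branch coverage whose tests merely confirm the misunderstanding baked into the code.

```python
# Suppose the domain rule is: shipping is free from 50 euros upward
# (inclusive). The developer misread it as "above 50 euros".
# (Rule and numbers invented for illustration.)

def shipping_cost(order_total: float) -> float:
    return 0.0 if order_total > 50 else 4.95  # bug: should be >= 50


def test_shipping_cost() -> None:
    # Both branches execute: 100 percent line and branch coverage.
    assert shipping_cost(60) == 0.0
    assert shipping_cost(20) == 4.95
    # The decisive case, an order of exactly 50 euros, is never
    # asserted, because the tests encode the same misunderstanding as
    # the code. Coverage is perfect; the domain rule is violated.


test_shipping_cost()
```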
It's similar with static analysis, linting rules, complexity metrics. They all measure technical properties of the code. They can help avoid certain problems. But they cannot measure what truly matters: whether the software solves the right problem in the right way. Relying on these metrics means confusing technical cleanliness with domain quality.
Bringing Back Domain Expertise
How can the focus be shifted? There are approaches that can help: Domain-Driven Design (DDD), Event Storming, Domain Storytelling, and the like. What they have in common is that they put domain expertise at the center. They demand that developers talk to domain experts, that the code speaks the language of the domain, that architectural decisions are derived from the domain context.
But there's a trap here: even these approaches can become ends in themselves. I recently wrote about how Domain-Driven Design met this fate. The core observation: DDD has become academicized. What began as a simple idea ("understand the domain and speak the language of the business") has become a catalog of patterns that developers discuss instead of talking to domain experts.
This is telling. Even an approach that explicitly promotes domain focus has been transformed by us as an industry into something technical. We discuss whether something is an aggregate or an entity instead of asking what domain experts call the concept. We draw Bounded Context diagrams instead of deriving boundaries from the domain. We memorize patterns instead of understanding the domain. The pull of technology is so strong that it even co-opts approaches that are directed against it.
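What "speaking the language of the domain" means at the code level, shown with a hypothetical insurance example (all names invented): the second version can be read aloud to a domain expert and confirmed or rejected; the first cannot.

```python
# Technical vocabulary: the code may be correct, but no domain expert
# can verify it.
def process_record(data: dict) -> bool:
    return data["value"] > data["threshold"]


# Ubiquitous language: the same check, named after the concepts an
# underwriter actually uses. (Insurance terms invented for illustration.)
def exceeds_coverage_limit(claim_amount: float, coverage_limit: float) -> bool:
    """A domain expert can confirm or reject this sentence directly."""
    return claim_amount > coverage_limit
```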
The solution doesn't lie in new methods or better tools. It lies in a change of attitude. We must accept that the difficult work cannot be replaced by technology: the conversations with domain experts, the struggle for understanding, the tolerance of uncertainty. We must find the courage to talk less about frameworks and more about problems. We must admit to ourselves that technical elegance is not a value in itself if it doesn't serve the domain.
Software as a Tool, Not a Work of Art
Software development is applied problem-solving. We are not paid to write beautiful code. We are paid to solve problems. This sounds trivial, but it has far-reaching consequences.
It means that the best architecture is not the most elegant one, but the one that best addresses the domain problem. It means that principles and patterns are tools, not goals. It means that technical debt is sometimes acceptable if it enables a faster solution to an urgent domain problem. Furthermore, it means that we should not measure our success by coverage numbers or clean code metrics, but by whether the software fulfills its purpose.
This requires a rethink. It requires us to leave our comfort zone and engage with what is uncomfortable: communication with people who think differently than we do. It requires humility: the insight that as developers, we are not the experts in the domain, even if we like to pretend we are. And it requires pragmatism: the willingness to accept technically suboptimal solutions if they fit the domain better.
I am not advocating for ignoring technical quality. Good code is important. Clean architecture is important. Tests are important. But all of this is a means to an end, not an end in itself. If we forget this, we build technically sophisticated systems that nobody needs. We optimize for metrics that mean nothing. We have debates that achieve nothing.
The question we should ask ourselves in every project, with every decision, is simple: Does this help to solve the domain problem better? If yes, we continue. If no, we should pause and ask ourselves whether we are currently serving technology or domain expertise. The answer to that determines whether we practice software development as a craft or as an end in itself. (rme)