Distributed applications with cloud-native technologies

Software development gains new freedoms through microservices and containers: autonomous teams, a free choice of languages and frameworks, and greater resilience.


(Image: generated with Midjourney by iX)

By Matthias Haeussler
iX-tract
  • Distributed applications enable heterogeneous environments with different systems and architectures. The advantages are platform independence, availability and scalability.
  • The article shows the various options for configuration, architecture and modular design using different technologies and frameworks. The focus is on Kubernetes.
  • Communication in the modular network can be monitored, analyzed and controlled using service meshes with or without a sidecar model.
  • The twelve principles for good cloud apps postulated by Adam Wiggins also apply to a large extent to distributed applications.

Distributed applications and distributed systems in the broader sense are becoming increasingly popular, mainly thanks to developments in microservices and container technology. The accompanying technological advances enable independent development in autonomous teams, the freedom to choose languages and frameworks, and the improvement of resilience through scaling and load balancing.

Matthias Haeussler

Matthias Haeussler is Chief Technologist at Novatec Consulting GmbH with a focus on cloud native. He lectures on distributed systems and is a regular speaker at international IT conferences.

This article examines various approaches to implementing distributed application architectures with modern, cloud-native software technologies. These include, on the one hand, frameworks tied to specific programming languages, which are particularly widespread in the Java ecosystem, and on the other hand, platforms such as Kubernetes and service meshes – both in the traditional sidecar style and in newer variants that work without a sidecar.


First of all, it is important to clarify why a distributed application architecture makes sense at all and which components are necessary for successful implementation. The principles of the 12-factor app published by Adam Wiggins back in 2011 (see box "Concept of the 12-factor app") also serve as a guide.

Concept of the 12-factor app

In 2011, Adam Wiggins published the concept of the 12-factor app. The co-founder of PaaS provider Heroku used it to describe a proven approach (best practices) for the development of applications in order to operate them efficiently in the cloud and make optimum use of its possibilities. The 12-factor app defines fundamental principles that should make an application platform-independent, scalable and granularly configurable. In addition, the principles not only include cloud-specific aspects, but also general principles of good software development, such as the use of a central version control system for each component and a clean separation of code and dependencies. By following the twelve principles, developers ensure that their applications are well adapted to the cloud infrastructure and benefit from the advantages of the platform.

The following factors are particularly noteworthy:

  • Factor 3: "Config" (separation of configuration and code): The separation of configuration and code enables flexible adaptation of the application without recompilation on the one hand and deployment in different environments with different configurations on the other.
  • Factor 6: "Processes" (statelessness and scalability): Stateless processes make it easier to scale and manage the application, as the processes do not need to store any information about their previous state.
  • Factor 7: "Port binding" (binding to ports and network communication): Using standardized network protocols simplifies the interaction between components and enables easy integration into different environments.
  • Factor 11: "Logs" (logs as streams): Treating logs as streams enables more efficient error analysis and troubleshooting when managing distributed applications.

The principles of the 12-factor app are used beyond Heroku in systems such as Cloud Foundry, Spring Boot, Docker and Kubernetes to successfully operate modern applications in a dynamic and agile environment.
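To make factors 3, 7 and 11 a little more tangible, the following minimal Java sketch reads its configuration and port from environment variables, binds to that port with the JDK's built-in HTTP server and writes its log to stdout. The variable names GREETING and PORT and their fallback values are illustrative assumptions, not something the 12-factor app prescribes.

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;

public class TwelveFactorDemo {
    public static void main(String[] args) throws Exception {
        // Factor 3: configuration comes from the environment, not from the code
        String greeting = System.getenv().getOrDefault("GREETING", "Hello");
        // Factor 7: the service binds to the port the environment dictates
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));

        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = (greeting + " from a stateless process\n").getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });

        // Factor 11: logs go to stdout as a stream; the platform collects them
        System.out.println("listening on port " + port);
        server.start();
    }
}

The same binary can thus run unchanged in development, test and production – only the environment variables differ.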

The traditional monolithic approach to software architecture is considered by many to be outdated. Supporters of distributed systems and microservice architectures in particular often speak disparagingly of the "big ball of mud". However, even the distributed approaches that are considered modern are not free of problems, as Peter Deutsch summarized as early as 1994 in his "Fallacies of distributed computing", which have not lost their validity to this day.

In particular, splitting an application into different modules leads to a network dependency between the components, which in turn has an impact on latency times, configuration and error handling. Nevertheless, it makes sense in certain scenarios – and is sometimes even unavoidable.
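What such a network dependency means in code can be illustrated with a small sketch: a call to another module gets an explicit timeout and a fallback so that latency spikes or failures of the remote component do not propagate unchecked. The example uses the JDK's java.net.http client; the pricing-service address and the fallback value are hypothetical.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class RemoteCallWithTimeout {
    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))   // fail fast if the peer is unreachable
            .build();

    static String fetchPrice(String productId) {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://pricing-service/prices/" + productId)) // hypothetical module
                .timeout(Duration.ofMillis(500))     // bound the latency added by the network hop
                .build();
        try {
            return CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
        } catch (Exception e) {
            // the caller decides: retry, use a cached value or degrade gracefully
            return "n/a";
        }
    }
}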

In the following, we will take a closer look at the advantages and associated aspects. The fundamental aim of a distributed application should be to deliver benefits to both users and development teams. These lie primarily in non-functional requirements such as availability, reliability and scalability.

Such a system should feel like a single unit to its users – in line with the requirement Andrew Tanenbaum formulated in his book "Distributed Systems". People who use Google Maps are not interested in how many containers run behind it or which programming languages are used; only reliability and functionality count.

Heterogeneity plays a central role in the theory of distributed systems, for example with regard to parallelism and concurrency. As Figure 1 shows, higher efficiency can be achieved by processing heterogeneous tasks in parallel.

Parallel processing of tasks in a distributed system (Fig. 1).
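A deliberately simple Java sketch shows the idea behind Figure 1: two heterogeneous tasks run concurrently instead of sequentially, so the total time approaches that of the slower task rather than the sum of both. The task names are purely illustrative.

import java.util.concurrent.CompletableFuture;

public class ParallelTasks {
    public static void main(String[] args) {
        // two unrelated (heterogeneous) tasks, started in parallel
        CompletableFuture<String> renderImage =
                CompletableFuture.supplyAsync(() -> "image rendered");
        CompletableFuture<String> computeReport =
                CompletableFuture.supplyAsync(() -> "report computed");

        // join() waits for both results; the work itself ran concurrently
        System.out.println(renderImage.join() + ", " + computeReport.join());
    }
}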

Heterogeneity is also reflected in dependencies on operating systems, runtimes, frameworks and the like (see Figure 2). In all these cases, it is not possible to package the application into a single monolithic artifact, which makes a distributed approach indispensable.

Heterogeneity in operating systems and technologies (Fig. 2).

Finally, extensibility should be mentioned. A distributed architecture offers the advantage that new components can be integrated into an existing system as independent modules without any significant impact on the existing modules. There is therefore no need to recompile or repackage the existing components.

The resilience factor is mainly about making the application highly available and keeping it resistant to unforeseeable events, such as fluctuations in the number of users or failures of subsystems or network segments. The failure of such a component should be controllable and never lead to a failure of the entire application. Scaling individual components provides fail-safety through redundancy: if one instance fails, enough other instances remain available so that the service continues without interruption (see Figure 3). Scaling serves not only redundancy but also load distribution, for example to spread the load evenly across the individual instances when the number of users rises and thus maintain the desired performance of the overall system (see Figure 4).

Reliability through redundancy. If an instance fails, the service can be guaranteed by redundant instances (Fig. 3).

Fluctuations in the number of users can be balanced out through balanced load distribution (Fig. 4).
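The principle behind Figures 3 and 4 can be sketched in a deliberately naive form: requests are distributed round-robin across redundant instances, and an instance that fails is simply skipped on the next attempt. In a Kubernetes environment this job is normally done by the platform itself, for example by a Service or an ingress controller; the class and the instance list below are purely illustrative.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class RoundRobinClient {
    private final List<String> instances;            // addresses of redundant replicas
    private final AtomicInteger counter = new AtomicInteger();

    RoundRobinClient(List<String> instances) {
        this.instances = instances;
    }

    String call(Function<String, String> request) {
        for (int attempt = 0; attempt < instances.size(); attempt++) {
            // pick the next instance in round-robin order
            String target = instances.get(Math.floorMod(counter.getAndIncrement(), instances.size()));
            try {
                return request.apply(target);         // first healthy instance answers
            } catch (RuntimeException e) {
                // instance failed: fall through and try the next one (redundancy in action)
            }
        }
        throw new IllegalStateException("all instances unavailable");
    }
}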

In the event of a sudden increase in the number of users or – worse – a denial-of-service attack, the load can quickly grow to a point where scaling alone no longer absorbs it. To protect the application, an upstream network component can block or at least throttle incoming traffic. Patterns such as circuit breakers or bulkheads are typically used in such cases.
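The circuit-breaker pattern can be reduced to a few lines for illustration: after a configurable number of consecutive failures, calls are rejected immediately for a cool-down period and answered from a fallback instead of putting further load on the struggling component. The following Java sketch is a simplified illustration; in production one would rather rely on a library such as Resilience4j or on the policies of a service mesh.

import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

public class SimpleCircuitBreaker {
    private final int failureThreshold;
    private final Duration openInterval;
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    SimpleCircuitBreaker(int failureThreshold, Duration openInterval) {
        this.failureThreshold = failureThreshold;
        this.openInterval = openInterval;
    }

    synchronized <T> T call(Supplier<T> action, Supplier<T> fallback) {
        if (openedAt != null) {
            if (Instant.now().isBefore(openedAt.plus(openInterval))) {
                return fallback.get();                 // circuit open: fail fast
            }
            openedAt = null;                           // half-open: allow a trial call
        }
        try {
            T result = action.get();
            consecutiveFailures = 0;                   // success closes the circuit
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now();              // too many failures: open the circuit
            }
            return fallback.get();
        }
    }
}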

Another aspect of resilience is uninterrupted availability during an application update. To ensure this, various deployment and zero-downtime techniques are available – including blue/green deployments and canary releases.

Both variants basically work according to the principle that a new version of the application is deployed while the old one is still running. Whereas with blue/green deployments, the changeover takes place in a single step, a canary deployment introduces the new version gradually and selectively, initially for a limited group of users, before the new release goes fully into production operation.
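The difference between the two variants can also be expressed in a few lines: with blue/green a switch flips all traffic from the old to the new version at once, while a canary routes only a small, configurable share of requests to the new version. The following Java sketch shows just the canary decision; in practice an ingress controller or a service mesh makes this choice, and the backend names app-v1 and app-v2 are placeholders.

import java.util.concurrent.ThreadLocalRandom;

public class CanaryRouter {
    private final double canaryShare;                  // e.g. 0.05 = 5 percent of traffic

    CanaryRouter(double canaryShare) {
        this.canaryShare = canaryShare;
    }

    String selectBackend() {
        // a small random share of requests goes to the new version
        return ThreadLocalRandom.current().nextDouble() < canaryShare
                ? "app-v2"                             // canary release
                : "app-v1";                            // stable release
    }
}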


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.