Cloud Native: Reusable CI/CD pipelines with GitLab
To provide developers with reusable building blocks for test and build pipelines, GitLab offers two concepts: Job Templates and GitLab Components.
- Dennis Sobczak
In many organizations, software teams develop their own CI/CD pipelines to handle recurring tasks such as code checkout, testing, scanning, build and deployment. This individualized approach often leads to unnecessary overhead in configuration and maintenance, costing time that would be better invested in developing new features. Although the tasks and workflows are similar between teams, each team reinvents the wheel instead of taking advantage of existing solutions. This practice can lengthen release cycles and reduce the efficiency of teams.
To address these challenges, it would make sense to establish company-wide standards for CI/CD pipelines. This could be done by providing reusable standard building blocks such as job templates and GitLab components that are centrally cataloged, documented and shared across teams. Such modules should be generic and flexible so that they can be easily installed, exchanged or extended by different teams without complex dependencies or a multitude of parameters. A company-wide standard would increase efficiency and ensure that all teams can fall back on proven and stable solutions, which ultimately accelerates development and increases the quality of the software.
The first approach and prerequisites
The first step is to record the technologies used – programming language, resulting artifacts, configuration files – and to fully define the necessary steps of the CI/CD pipelines, including each step's context and boundaries.
The following scenarios are based on a Java application that is available in the form of a Maven project in a GitLab repository. The technology stack therefore comprises: Java, Maven, Docker and GitLab as a CI/CD system. The Java source code should first be checked out, tested (unit tests) and then compiled, i.e. run through the "test" and "build" stages.
The directory structure of the application:
- app/
- .mvn/wrapper
- src # Java source code of the application
- .gitignore
- mvnw # Maven wrapper
- pom.xml
The "naive" approach
Before the two GitLab concepts – Job Templates and GitLab Components – are used, a "naive" test and build pipeline will first illustrate the procedure.
The Maven CLI tool with the command mvn test is used to execute the unit tests. The command mvn package triggers the build of the application. Both steps require a Docker container image in which the Maven CLI tool is included and usable – provided the wrapper file mvnw is not in the application's repository. Otherwise, the script ./mvnw should be executed with the same arguments instead of mvn.
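With the wrapper present, the two steps therefore look like this:
./mvnw test      # runs the unit tests via the Maven wrapper
./mvnw package   # builds the application via the Maven wrapper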
The Dockerfile is defined as follows:
# syntax=docker/dockerfile:1
FROM docker.io/library/debian:bookworm-slim
LABEL maintainer="<maintainer-name>"
# tzdata required
ENV TZ=Europe/Berlin
# Install Maven and clean up the apt cache to keep the image small
RUN apt-get update && apt-get install -y --no-install-recommends maven && rm -rf /var/lib/apt/lists/*
# Create a dedicated non-root user with a fixed UID
RUN useradd -u 10001 noadmin
USER noadmin
The following aspects must be observed for the Dockerfile:
- A concrete version of the Dockerfile syntax must be pinned via the syntax directive in the first line.
- The Docker container image (obtained here from the Docker Hub) must be lightweight. It should only contain programs and libraries that are necessary for the successful execution of the actual task so that the download from the registry is quick and any attack surfaces for hackers remain small.
- Attention should also be paid to the vulnerabilities reported for the base image on Docker Hub.
- The Maven CLI is installed and usable.
- The time zone must be set so that timestamps are displayed correctly.
- The Docker container image is "rootless". A dedicated user must therefore be created who executes commands and starts processes.
- This user is assigned a fixed UID, which is referenced in the security context of the Kubernetes deployment resources (such as the GitLab runners) to increase security at runtime – see the excerpt below.
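A minimal excerpt from a Kubernetes pod specification, for example for a GitLab runner, that picks up the UID from the Dockerfile above (a sketch):
securityContext:
  runAsNonRoot: true
  runAsUser: 10001   # UID of the noadmin user from the Dockerfile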
It is advisable to build the container image in advance so that it can be obtained from a container image repository.
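With Docker, for example, this can be done as follows; registry, image name and tag are placeholders to be adapted:
docker build -t registry.example.com/ci/maven-build:1.0.0 .
docker push registry.example.com/ci/maven-build:1.0.0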
Once this preliminary work has been completed, the initial GitLab pipeline can be defined as follows in .gitlab-ci.yml in the project root of the application:
# Define the stages and their order
stages:
  - test
  - build

# The code is already checked out
run_maven_test:
  stage: test
  image:
    name: <custom container image> # built from the Dockerfile above
    entrypoint: [""]
  script:
    - echo "Run Unit Tests with Maven"
    - mvn test

run_maven_build:
  stage: build
  image:
    name: <custom container image> # built from the Dockerfile above
    entrypoint: [""]
  script:
    - echo "Run Build with Maven"
    - mvn package
After checking the pipeline file into the application's repository, the jobs move into the queue. As soon as a GitLab runner is available, the defined steps run sequentially.
The approach described has an immediately recognizable disadvantage: the workflow cannot be cleanly imported into other projects via a reference such as an include. The simplest workaround in this case is copy and paste. However, this scatters the same code across repositories, and you end up having to maintain it in multiple places.
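For comparison: with a central template project, a single reference in the consuming project would suffice – the project path and file name here are hypothetical:
include:
  - project: 'platform/ci-templates'
    ref: 'main'
    file: '/templates/maven.yml'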
The more stages and therefore steps a GitLab pipeline contains, the more time-consuming such additional activities become. For example, the stages "secrets-scanning" (for detecting unintentionally checked-in secrets), "cve-scanning" (scanning dependencies for known CVEs), "build-container-image" (for building a Docker container image) and "dependency-update" (for updating dependencies) are to be added to the existing pipeline, using the tools Gitleaks, Aquasec Trivy, Kaniko Executor and Renovate.
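The extended stage sequence, with Gitleaks as an exemplary job for the first new stage, could be sketched as follows; the image name and CLI flags are assumptions and should be verified against the current Gitleaks documentation:
stages:
  - secrets-scanning
  - cve-scanning
  - test
  - build
  - build-container-image
  - dependency-update

run_gitleaks_scan:
  stage: secrets-scanning
  image:
    name: zricethezav/gitleaks:latest # assumed image name
    entrypoint: [""]
  script:
    - gitleaks detect --source . --verbose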
From July 1 to 4, 2025, interested parties will find a packed line-up with more than 200 highlights at the Cloud Native Festival CloudLand, including the topic of platform engineering. Visitors can expect a colorful mix of predominantly interactive sessions, hands-on sessions and workshops, accompanied by a comprehensive supporting program that invites active participation.
Spread across up to ten streams shaped by topics from the communities around the cloud hyperscalers AWS, Azure and Google, there are sessions on:
- Cloud-native software engineering
- Architecture
- AI & ML
- Data & BI
- DevOps
- Public Cloud
- Security & Compliance
- Organization & Culture
- Sovereign Cloud
- Compute, Storage & Network
Tickets for the festival are still available until May 6 at a special early bird price.