Loading container images: Keeping Docker Hub Rate Limits under control

Downloading container images from Docker Hub is restricted by pull rate limits. These were to be tightened, but there are suitable workarounds.


(Image: created with Dall-E by iX)

By Nicholas Dille

The company Docker recently announced that it would be tightening the rate limits for downloading (pulling) container images from Docker Hub. This change was to apply to certain customer accounts and unauthenticated users and would take effect on April 1, 2025. Docker justified the measure with the need to make the operation of Docker Hub profitable.

Nicholas Dille

Nicholas Dille is a Senior DevOps Engineer at the Haufe Group, a digital media group in Freiburg. He deals with containerization, virtualization and automation in heterogeneous environments. He has been active as a blogger, speaker, author and trainer for twenty years and has been one of the Docker Captains since 2017. Microsoft honored him as MVP 2010-2023.

The company's desire to cover its costs is understandable. On the other hand, announcements of this kind have a global impact. Due to the widespread use of Docker Hub, countless developers around the world are affected by the pull rate limits on the platform. It is therefore good news that the announced tightening of the rate limits was withdrawn at the last moment and the old conditions continue to apply. Nevertheless, it is worth checking the available options in order to be prepared for future changes to the rate limit and perhaps also to be more economical with container image downloads in general.

There are various options for circumventing these restrictions; the use case and the environment determine how complex a suitable solution needs to be. Basically, the following situations should be considered when selecting an approach: Depending on their working methods and tasks, developers may hit the pull rate limits on their own machines. If they run build processes locally, container image pulls quickly add up and their work is impaired by the limits. As soon as changes are checked by central CI/CD pipelines, container image pulls occur in the core infrastructure and lead to annoying aborts when the limits are reached. Central infrastructure such as Kubernetes clusters or individual Docker hosts also accesses Docker Hub and can be affected by the pull limits.

As a result, developers can be slowed down and central services cannot be deployed and updated. The effects are particularly painful if there is no budget for higher rate limits, as is often the case with open source projects.

As a rule, developers do not use a Docker account and therefore download container images anonymously, without authenticating as users. In that case the rate limit is applied per IPv4 address. In companies the limit is quickly reached because many developers share the same Internet access and therefore appear under a single IPv4 address. For private individuals, the easiest remedy is to create a free account. This increases the number of permitted container image pulls to 100 per hour, which usually allows working with container images without interruptions.
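Switching from anonymous to authenticated pulls is a one-time step per machine. A minimal sketch, in which the account name "exampleuser" and the DOCKER_TOKEN variable are placeholders:

```shell
# Skip gracefully where no Docker daemon is available.
docker info >/dev/null 2>&1 || exit 0

# Authenticate against Docker Hub so that pulls are counted per
# account instead of per shared IPv4 address. "exampleuser" and
# DOCKER_TOKEN are placeholders for a real account and access token.
echo "$DOCKER_TOKEN" | docker login --username exampleuser --password-stdin

# All subsequent pulls on this machine count against the account:
docker pull nginx:latest
```

Using an access token with --password-stdin instead of typing the password keeps the credential out of the shell history and works in scripts.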

Companies of any size can remove the pull limits through paid subscription plans. However, the maintenance of Docker accounts per employee causes effort in license management, in IT and also for the developers. Each employee concerned must create a personal account, which must then be added to a team or business plan. Only in the business plan can this maintenance be avoided by using single sign-on.

When using Docker Desktop, the necessary plans may already be available. As Docker Desktop does not carry out license control, a Docker account is not mandatory for the developer. However, it may be necessary to set up the accounts at the developer's workstation.

If CI/CD jobs or central infrastructure such as Kubernetes clusters are affected by the pull limits for container images, the following options are available to remedy the situation. In each case, adjustments to the infrastructure are necessary.

In principle, changes to the central infrastructure take time and must be communicated within the organization. The changes must be planned, tested and rolled out in an orderly manner. At the same time, communication with the affected user groups is essential so that the new options are known and necessary adjustments can be made in good time.

Option 1: Use public mirrors of Docker Hub

There are a few providers that operate a public mirror of Docker Hub and make it available free of charge, including Amazon Elastic Container Registry (ECR) Public, which also offers a search function, and Google Container Registry (GCR).
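Using such a mirror only changes the image reference. A sketch with the publicly documented namespaces for the official nginx image; verify the path for each image before relying on it:

```shell
# Skip gracefully where no Docker daemon is available.
docker info >/dev/null 2>&1 || exit 0

# Pull the official nginx image via public mirrors instead of
# Docker Hub.

# Amazon ECR Public mirrors the Docker official images:
docker pull public.ecr.aws/docker/library/nginx:1.27

# Google's cache of frequently requested Docker Hub images:
docker pull mirror.gcr.io/library/nginx:1.27
```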

Although this initially sounds like a simple workaround, there is still a high risk that rate limits will be introduced at a later date or that the fair use guidelines will be exceeded.

Option 2: Mirroring individual images

If a company already operates a private container registry, the required official container images can be mirrored in it. In this case, special care is required to recognize new image tags and updates to existing image tags and to create copies in the private container registry. A helpful tool for this is regctl image copy, which avoids the cumbersome download and upload with docker save and docker load.
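A mirroring step with regctl can look like the following sketch, in which "registry.example.com" is a placeholder for the private registry:

```shell
# Skip gracefully if regctl is not installed.
command -v regctl >/dev/null 2>&1 || exit 0

# Copy an image from Docker Hub into a private registry. regctl
# transfers the manifest and layers directly between the two
# registries, without the detour through a local Docker daemon.
# "registry.example.com" is a placeholder for a private registry.
regctl image copy nginx:1.27 registry.example.com/mirror/nginx:1.27
```

Run regularly, for example from a scheduled CI job, this keeps the mirrored tag up to date; multi-architecture images are copied with all their platform variants.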

The disadvantage of this procedure is that the mirrored container images must be checked regularly for updates. This requires maintaining a list of mirrored container images, and new developer requirements require manual intervention. In addition, requests are regularly sent to Docker Hub to check whether the mirrored images are up to date. Although these requests do not count as rate-limited pulls, they can still be avoided, as the third option shows, which is clearly superior to mirroring individual container images:

Option 3: Private pull-through proxy

The most convenient solution is the pull-through proxy. This is configured as a registry mirror in the Docker daemon and is contacted instead of Docker Hub each time an image is accessed. This makes its use completely transparent for developers. Each downloaded container image is cached and can be delivered to many users in the company without having to download it from Docker Hub each time. The proxy should use an access token from a paid account to remove the pull rate limits.
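On a Docker host, the proxy is registered via the registry-mirrors key in the daemon configuration. A sketch, in which the mirror URL is a placeholder for the company-internal proxy:

```shell
# Configure the pull-through proxy as a registry mirror in
# /etc/docker/daemon.json; the URL is a placeholder for the
# company-internal proxy. Then restart the Docker daemon.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry-mirror.example.com"]
}
EOF
sudo systemctl restart docker
```

After the restart, every unqualified pull such as docker pull nginx is answered from the mirror first; only cache misses reach Docker Hub.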

The pull-through proxy can be used for CI/CD jobs and central infrastructure as well as on the developers' workstations.

Some examples are the GitLab Dependency Proxy or JFrog remote Docker repositories. Depending on the tool used, the behavior can be adapted: for example, the retention time for cached images, or how long the proxy should avoid contacting Docker Hub again after an update check.
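With the GitLab Dependency Proxy, for example, images are pulled through a group-level prefix, which GitLab provides in CI jobs as a predefined variable. A sketch; outside a CI job the block exits without doing anything:

```shell
# Only meaningful inside a GitLab CI job, where the predefined
# variable below is set; exit quietly elsewhere.
[ -n "${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX:-}" ] || exit 0

# Pull through the group's dependency proxy instead of directly
# from Docker Hub. The prefix has the form
#   <gitlab-host>/<group>/dependency_proxy/containers
docker pull "$CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX/nginx:1.27"
```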

The image pull rate limits pose a particular challenge for open source projects, as there is often no sponsor available to cover hosting costs. Although there are several alternatives to Docker Hub, they often only offer a limited amount of storage space for container images. It is therefore advisable to apply to one of the open source programs, such as the Docker-Sponsored Open Source Program or GitLab for Open Source. As a rule, these programs require an annual renewal of support, which makes the future somewhat uncertain.

Even though the tightening of the rate limits for Docker Hub has been suspended and the old rates continue to apply, companies and development teams should fundamentally address the issue. They should select a solution that is suitable for the respective situation. It is always an advantage to be prepared and to be able to tackle any necessary changes without time pressure.



This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.