Exascale computer Jupiter: container village instead of data center hall
Europe's first exascale-class supercomputer is to be green and quick to set up and dismantle. That is why it is being built at the FZJ in a container village.
(Image: Forschungszentrum Jülich)
- Bernd Schöne
Europe's fastest computer is taking shape at Forschungszentrum Jülich. Not only its performance but also its configuration is unusual: the manufacturer delivers it fully assembled in containers. The Modular Data Center (MDC) is the first data center of a German research institute to be built from containers. Thanks to heat recovery, Jupiter is already considered one of the greenest computers in the world.
In Europe, scientists are eagerly awaiting the continent's most powerful HPC behemoth, the Jupiter exascale computer. West of Cologne, on the grounds of Forschungszentrum Jülich, everything is now ready – what is missing is Jupiter itself. The name was not chosen at random: it is short for "Joint Undertaking Pioneer for Innovative and Transformative Exascale Research". Jupiter is also the highest god of the Roman pantheon, counterpart of the Greek Zeus, and the largest planet in our solar system.
At double precision (FP64), Jupiter reaches the exclusive exascale class: HPC computers that can perform more than one quintillion (10^18) floating-point operations per second. For training AI models, an accuracy of 8 bits is usually sufficient; the computing power then skyrockets to around 70 ExaFlops. This would make Jupiter the fastest AI computer in the world. Computing time on such machines is coveted and regularly oversubscribed by a factor of two. From 2025, a scientific selection process will govern access, as Jupiter will then belong to research. For now, however, it is construction workers who are taking care of it.
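These performance classes can be checked with simple arithmetic. The sketch below is a back-of-the-envelope estimate; the per-superchip peak values are round-number assumptions for illustration, not official Jupiter specifications:

```python
# Back-of-the-envelope check of the performance classes mentioned in the
# article. The per-chip peak values are illustrative assumptions, not
# official figures for Jupiter's Grace Hopper superchips.

EXAFLOP = 1e18  # 1 ExaFlop/s = 10^18 floating-point operations per second

n_superchips = 24_000   # planned Grace Hopper superchips (from the article)
fp64_per_chip = 45e12   # assumed FP64 peak per superchip, Flop/s
fp8_per_chip = 3e15     # assumed FP8 peak per superchip (AI precision)

fp64_total = n_superchips * fp64_per_chip
fp8_total = n_superchips * fp8_per_chip

print(f"FP64 aggregate: {fp64_total / EXAFLOP:.2f} ExaFlop/s")
print(f"FP8  aggregate: {fp8_total / EXAFLOP:.0f} ExaFlop/s")
```

With these assumed values the FP64 aggregate lands just above one ExaFlop/s, while the 8-bit aggregate comes out around 70 ExaFlop/s – the order-of-magnitude jump the article describes.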
(Image: Forschungszentrum Jülich)
Half a billion euros
In June 2022, the European supercomputing initiative EuroHPC JU decided that the first European exascale computer would be built in Jülich and set the budget at half a billion euros. Half of this will be paid by the EU, while the other half will be shared by the Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW). Of the 500 million euros, 273 million will go to the manufacturer, the ParTec-Eviden Supercomputer Consortium, for the hardware including software, service and support over five years. The rest will be used for electricity and other expenses, such as personnel.
The original plan was to celebrate the commissioning with the Federal Minister and State Secretary in November. However, the pandemic and one or two technical hurdles delayed construction. Easter 2025 is now the date being discussed in Jülich. Sometime in the first half of the year, according to those responsible, the complete computer should undergo final acceptance. By then at the latest, everyone will know how many of Nvidia's Grace Hopper superchips, which combine Hopper GPUs and Grace CPUs, Jupiter has been fitted with in order to achieve the contractually agreed exascale performance. Around 6000 nodes with almost 24,000 Grace Hopper superchips are planned.
(Image: Forschungszentrum Jülich)
There are currently two smaller expansion stages. The first, the Jupiter Exascale Development Instrument (JEDI), entered the TOP500 in June at number 189 with 19,584 cores and 4.5 PFlops. The second preliminary version, the JUPITER Exascale Transition Instrument (JETI), made it into the new, 64th edition of the TOP500 last week in 18th place with almost 400,000 cores and 83.14 PFlops. Both are located in the current supercomputer center with its offices, seminar rooms and the huge data center from the mainframe era. The scientists can use them to test and get to know the new technology.
Containers: faster and cheaper
Jupiter itself was actually supposed to move into a smart new data center behind the current one. However, after the pandemic and rising construction costs, the new building became too expensive and the multi-storey design was scrapped. It was not only the costs that spoke against such a conventional building, but also the tight time frame until commissioning. "That's why we decided not to build a classic data center, but a modular building in a container design. It's faster and cheaper," says Benedikt von St. Vieth, responsible for the construction of the new computer in Jülich.
(Image: Forschungszentrum Jülich)
This container village, known as the Modular Data Center (MDC), is made up of around 50 containers manufactured and supplied by the IT company Eviden of the Atos Group. Each element consists of a double container similar in shape and dimensions to a shipping container and holds 20 racks; in total, the racks house around 6000 servers. Each module has its own transformers for the power supply and its own cooling system. According to the manufacturer, this highly unusual design is above all flexible: old servers can be swapped for new ones more quickly, and containers with completely different hardware, such as neuromorphic computers or quantum computers, can be added to the container village quickly and easily.
(Image: Forschungszentrum Jülich)
Alongside the actual exascale computer – the Jupiter Booster module, which provides the floating-point performance – a second module, the Jupiter Cluster, is planned. It is due to be built in 2027, around two years after the completion of Jupiter Booster. Its approximately 1300 nodes will then be reserved primarily for vector calculations. Here, too, the exact number is still open. BullSequana XH3000 systems from the French company Eviden/Atos are also planned for the nodes of the Jupiter Cluster, albeit equipped with the European Rhea1 chip from SiPearl. Jupiter completely dispenses with x86 processors from AMD or Intel. The Jupiter Cluster is expected to provide 5 PFlops in three years.
(Image: Forschungszentrum Jülich)
The 85 × 42 m concrete slab is already finished. The pipes for the waste heat and the first containers are also in place; they will form the entrance area of the container village. The data hall, which will house the storage towers, was also delivered at the beginning of November. It consists of three units: the 21-PByte ExaFLASH flash module, the high-capacity ExaSTORE with a parallel cluster file system, and a tape module for backups and archives. The UPS (uninterruptible power supply) has also arrived, and the first BullSequana XH3515-HMQ blades are due to follow before Christmas – installed, naturally, in their containers ready for connection.
(Image: Forschungszentrum Jülich)
The greenest supercomputer
Those responsible will only know how much electricity their supercomputer draws once it is completed. Consumption is calculated at 9 to 11 MW during normal operation, with the maximum power draw likely to reach 17 MW. To this end, the power supply on the campus has already been upgraded: the grid operator Westnetz has replaced the previous redundant 2 × 40 MVA transformers with 2 × 60-80 MVA units. Nevertheless, Jupiter will be one of the "greenest" HPC behemoths in the world, as it gets a great deal of computing power out of every megawatt. Firstly, according to the contract with the energy supplier, the electricity comes from green, renewable sources, and secondly, the waste heat is put to good use.
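How "green" that is can be expressed in the Green500 metric of GFlops per watt. A minimal sketch, assuming a sustained performance of roughly one ExaFlop/s at FP64 (an assumption, since the final Linpack figure is not yet known) and the power figures from the article:

```python
# Rough energy-efficiency estimate in the Green500 metric (GFlops/W).
# The sustained performance of 1 ExaFlop/s is an assumption; the power
# figures (9-11 MW normal, 17 MW maximum) are taken from the article.

sustained_flops = 1e18   # assumed sustained FP64 performance, Flop/s
power_normal_w = 11e6    # upper end of normal operation, W
power_peak_w = 17e6      # stated maximum power draw, W

gflops_per_watt_normal = sustained_flops / 1e9 / power_normal_w
gflops_per_watt_peak = sustained_flops / 1e9 / power_peak_w

print(f"~{gflops_per_watt_normal:.0f} GFlops/W at 11 MW")
print(f"~{gflops_per_watt_peak:.0f} GFlops/W at 17 MW")
```

Under these assumptions the machine lands in the region of 60 to 90 GFlops per watt, which is why every megawatt of draw matters so much at this scale.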
Water pipes transport the waste heat to the nearby office buildings. In terms of heating, the supercomputer is something like an expensive instantaneous water heater for the institute buildings. The waste heat leaves the computer at over 40 °C – hot enough to supply heating systems on the campus. Cooled down to 36 °C, the water flows back into the computer.
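The temperature figures also hint at the scale of the cooling loop. Using the standard relation Q = ṁ · c_p · ΔT, one can estimate the water flow needed to carry away the waste heat; the sketch below assumes the article's 11 MW normal-operation load and the roughly 4 K spread between supply (just over 40 °C) and return (36 °C):

```python
# Estimate of the cooling-water flow needed to remove Jupiter's waste
# heat, via Q = m_dot * c_p * delta_T. Heat load and temperature spread
# are taken from the article; treating the spread as exactly 4 K is a
# simplifying assumption.

c_p = 4186.0         # specific heat capacity of water, J/(kg*K)
heat_load_w = 11e6   # waste heat to remove during normal operation, W
delta_t_k = 4.0      # assumed supply/return temperature spread, K

m_dot = heat_load_w / (c_p * delta_t_k)   # required mass flow, kg/s
print(f"required flow: ~{m_dot:.0f} kg/s (~{m_dot * 3.6:.0f} m^3/h)")
```

That works out to several hundred kilograms of water per second circulating between the container village and the campus buildings – a substantial district-heating-scale installation rather than a conventional chiller loop.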
For emergencies, heat exchangers like those on the roof of the previous data center are also planned; they release the heat into the environment. That works, but it wastes energy. Hot-water cooling with subsequent thermal use in a building is the only feasible way to exploit the waste heat instead of simply discharging it.
Whether the new container construction can also cope with heavy rain and other climatic adversities remains to be seen. Most supercomputers around the world are located in well-protected rooms with strict access control. The container village, on the other hand, is being built on the edge of a forest.
(vbr)