The European High Performance Computing Joint Undertaking (EuroHPC JU)

Our supercomputers

To date, the EuroHPC JU has procured eight supercomputers, located across Europe.

LUMI (Finland)

Sustained performance: 375 petaflops
Peak performance: 550 petaflops
Compute partitions: GPU partition (LUMI-G), x86 CPU partition (LUMI-C), data analytics partition (LUMI-D), container cloud partition (LUMI-K)
Central Processing Unit (CPU): The LUMI-C partition features 64-core next-generation AMD EPYC™ CPUs
Graphics Processing Unit (GPU): LUMI-G is based on next-generation AMD Instinct™ GPUs
Storage capacity:

LUMI’s storage system consists of three components: a 7-petabyte partition of ultra-fast flash storage; a more traditional 80-petabyte capacity tier, based on the Lustre parallel filesystem; and a 30-petabyte data management service based on Ceph. In total, LUMI offers 117 petabytes of storage with a maximum I/O bandwidth of 2 terabytes per second.
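As a quick sanity check, the tier sizes and bandwidth above can be combined in a few lines of Python (a sketch; the tier labels are only illustrative, and decimal units, 1 PB = 1000 TB, are assumed):

```python
# LUMI storage tiers, in petabytes (figures quoted in the text above)
tiers = {
    "flash (Lustre)": 7,           # ultra-fast flash partition
    "capacity (Lustre)": 80,       # traditional capacity storage
    "data management (Ceph)": 30,  # Ceph-based data management service
}

total_pb = sum(tiers.values())
print(total_pb)  # 117 (petabytes in total, as stated)

# Time to stream the entire 117 PB at the peak I/O bandwidth of 2 TB/s
# (assumes the peak bandwidth is sustained for the whole transfer)
hours = total_pb * 1000 / 2 / 3600
print(hours)  # 16.25 (hours)
```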

Applications: AI, especially deep learning, and traditional large-scale simulations combined with massive-scale data analytics within a single research problem
TOP500 ranking: #1 in EU; #3 globally (June 2022)
Green500 ranking: #1 in EU; #3 globally (June 2022)
Other details:

LUMI occupies over 150 m² of floor space, about the size of a tennis court. The system weighs nearly 150,000 kilograms (150 metric tons).

Leonardo (Italy)

Sustained performance: 249.47 petaflops
Peak performance: 323.40 petaflops
Compute partitions: Booster, a hybrid CPU-GPU module delivering 240 petaflops, and Data-Centric, delivering 9 petaflops and featuring DDR5 memory and local NVM for data analysis
Central Processing Unit (CPU): Intel Ice Lake (Booster), Intel Sapphire Rapids (Data-Centric)
Graphics Processing Unit (GPU): NVIDIA Ampere architecture-based GPUs, delivering 10 exaflops of FP16 Tensor Core AI performance
Storage capacity: Leonardo is equipped with over 100 petabytes of state-of-the-art storage capacity and 5 PB of high-performance storage
Applications: The system targets modular computing, scalable computing applications, data-analysis applications, visualization and interactive computing applications, and urgent and cloud computing
Other details: Leonardo will be hosted on the premises of the Tecnopolo di Bologna. The area devoted to the EuroHPC Leonardo system includes 890 m² of data hall and 350 m² for data storage, plus electrical, cooling and ventilation systems, offices and ancillary spaces

MareNostrum 5 (Spain)

Sustained performance: 205 petaflops
Peak performance: 314 petaflops
Compute partitions: GPP (General Purpose partition), ACC (Accelerated partition), NGT GPP (Next Generation Technology General Purpose partition) and NGT ACC (Next Generation Technology Accelerated partition), plus additional smaller partitions for pre- and post-processing
Central Processing Unit (CPU): The GPP and ACC partitions both rely on Intel Sapphire Rapids CPUs; NGT ACC is based on Intel Emerald Rapids and NGT GPP on NVIDIA Grace
Graphics Processing Unit (GPU): The ACC partition is based on NVIDIA Hopper, whereas the NGT ACC partition is built on Intel Rialto Bridge
Storage capacity: MareNostrum 5's storage provides 248 PB of net capacity based on SSD/flash and hard disks, with an aggregate performance of 1.2 TB/s on writes and 1.6 TB/s on reads. A long-term, tape-based archive will provide an additional 402 PB. Spectrum Scale and Spectrum Archive will be used as the parallel filesystem and tiering solution, respectively
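For a sense of scale, the capacity and bandwidth figures above imply roughly the following (a back-of-the-envelope sketch, assuming decimal units, 1 PB = 1000 TB, and that the aggregate bandwidths are sustained):

```python
# MareNostrum 5 storage figures from the text above
disk_pb = 248     # SSD/flash + hard-disk net capacity, PB
tape_pb = 402     # long-term tape archive, PB
write_tb_s = 1.2  # aggregate write bandwidth, TB/s
read_tb_s = 1.6   # aggregate read bandwidth, TB/s

print(disk_pb + tape_pb)  # 650 (PB combined once the archive is in place)

# Days needed to fill (write) or scan (read) the full disk tier
fill_days = disk_pb * 1000 / write_tb_s / 86400
scan_days = disk_pb * 1000 / read_tb_s / 86400
print(round(fill_days, 1), round(scan_days, 1))  # 2.4 1.8
```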
Applications:

Thanks to its heterogeneous configuration, MareNostrum 5 is ideally suited to a wide range of applications, with a special focus on medical applications and drug discovery, as well as digital twins (of the Earth and the human body), energy and more. Its large general-purpose partition provides an environment well suited to most current applications solving scientific and industrial problems, while the accelerated partition provides an excellent environment for large-scale simulations, AI and deep learning.

Other details:

MareNostrum 5 is located in BSC's new facilities, next to the chapel that hosts previous systems. The data centre has a total power capacity of 20 MW and a cooling capacity of 17 MW, with a PUE below 1.08.

Vega (Slovenia)

Sustained performance: 6.92 petaflops
Peak performance: 10.05 petaflops
Compute partitions: CPU partition (960 nodes, 256 GB memory/node, 20% with double memory, HDR100) and GPU partition (60 nodes, HDR200)
Central Processing Unit (CPU): 122,880 cores across 1,920 AMD EPYC 7H12 CPUs
Graphics Processing Unit (GPU): 240 NVIDIA A100 cards
Storage capacity: High-performance NVMe Lustre (1 PB), large-capacity Ceph (23 PB)
Applications: Traditional computational workloads, AI, big data/HPDA, large-scale data processing
TOP500 ranking: #38 in EU; #131 globally (June 2022)
Green500 ranking: #67 in EU; #247 globally (June 2022)
Other details: Wide bandwidth for data transfers to other national and international computing centres (up to 500 Gbit/s). Data processing throughput of 400 GB/s from high-performance storage and 200 Gb/s from large-capacity storage
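The 500 Gbit/s external link can be translated into a transfer time for a petabyte-sized dataset (a sketch assuming full line rate, decimal units and no protocol overhead):

```python
# Up to 500 Gbit/s to other computing centres (from the text above)
link_gbit_s = 500

# One decimal petabyte expressed in bits (1 PB = 10^15 bytes)
petabyte_bits = 1_000_000_000_000_000 * 8

# Transfer time at full line rate, ignoring protocol overhead
seconds = petabyte_bits / (link_gbit_s * 1_000_000_000)
print(seconds, seconds / 3600)  # 16000.0 seconds, i.e. about 4.4 hours
```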

MeluXina (Luxembourg)

Sustained performance: 12.81 petaflops
Peak performance: 18.29 petaflops
Compute partitions: Cluster, Accelerator - GPU, Accelerator - FPGA, Large Memory
Central Processing Unit (CPU): AMD EPYC
Graphics Processing Unit (GPU): NVIDIA Ampere A100
Storage capacity: 20 petabytes of main storage with an all-flash scratch tier at 400 GB/s, and a 5-petabyte tape library expandable to 100 petabytes
Applications: Traditional computational, AI and big data/HPDA workloads
TOP500 ranking: #12 in EU; #48 globally (June 2022)
Green500 ranking: #6 in EU; #15 globally (June 2022)
Other details: Modular Supercomputer Architecture with a Cloud Module for complex use cases and persistent services, an aggregate 476 TB of RAM, an InfiniBand HDR interconnect in a Dragonfly+ topology, and high-speed links to the GÉANT network and the public Internet

Karolina (Czechia)

Sustained performance: 9.59 petaflops
Peak performance: 12.91 petaflops
Compute partitions:

The supercomputer consists of six main parts:

  • a universal part for standard numerical simulations, consisting of approximately 720 compute servers with a theoretical peak performance of 3.8 PFlop/s,
  • an accelerated part of 70 servers, each equipped with 8 GPU accelerators, delivering 11 PFlop/s for standard HPC simulations and up to 150 PFlop/s for artificial intelligence computations,
  • a part designated for processing large datasets, providing up to 24 TB of shared memory and a performance of 74 TFlop/s,
  • 36 servers with a performance of 131 TFlop/s dedicated to cloud services,
  • a high-speed network connecting all parts, as well as individual servers, at speeds of up to 200 Gb/s,
  • data storage providing space for more than 1 PB of user data, including high-speed storage with a throughput of 1 TB/s for simulations and for computations in advanced data analysis and artificial intelligence.
Central Processing Unit (CPU): More than 100,000 CPU cores and 250 TB of RAM
Graphics Processing Unit (GPU): More than 3.8 million CUDA cores and 240,000 tensor cores across NVIDIA A100 Tensor Core GPU accelerators, with a total of 22.4 TB of ultra-fast HBM2 memory
Storage capacity: More than 1 petabyte for user data, with high-speed storage delivering 1 TB/s
Applications: Traditional computational workloads, AI, big data
TOP500 ranking: #20 in EU; #79 globally (June 2022)
Green500 ranking: #5 in EU; #14 globally (June 2022)
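The GPU totals above are mutually consistent: 70 accelerated servers with 8 GPUs each gives 560 A100s, and the per-device counts multiply out (a sketch using NVIDIA's published A100 40 GB per-device figures, which are not stated in the text itself):

```python
# Cross-check of the GPU totals, using per-A100 figures
# (6912 CUDA cores, 432 tensor cores, 40 GB HBM2 per NVIDIA A100 40GB)
gpus = 70 * 8                  # 70 accelerated servers x 8 GPUs each
cuda_cores = gpus * 6912
tensor_cores = gpus * 432
hbm2_tb = gpus * 40 / 1000     # 40 GB per GPU, in decimal TB

print(gpus)          # 560
print(cuda_cores)    # 3870720  (~3.8 million, as stated)
print(tensor_cores)  # 241920   (~240,000, as stated)
print(hbm2_tb)       # 22.4     (TB of HBM2, as stated)
```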

Discoverer (Bulgaria)

Sustained performance: 4.51 petaflops
Peak performance: 5.94 petaflops
Compute partitions: One partition providing 1,128 nodes, 4.44 petaflops
Central Processing Unit (CPU): AMD EPYC 7H12 (64-core, 2.6 GHz, 280 W; codename Rome)
Graphics Processing Unit (GPU): None
Storage capacity: 2 petabytes
Applications: Traditional computational workloads
TOP500 ranking: #32 in EU; #113 globally (June 2022)
Green500 ranking: #65 in EU; #241 globally (June 2022)
Other details: Dragonfly+ topology with 200 Gbps (InfiniBand HDR) bandwidth per link
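A back-of-the-envelope check of the peak figure is possible from the node count and clock above, under two assumptions not stated in the text: dual-socket nodes, and 16 double-precision FLOPs per core per cycle (two 256-bit AVX2 FMA units on the EPYC 7H12):

```python
# Rough peak estimate for a CPU-only EPYC 7H12 machine
nodes = 1128
sockets_per_node = 2   # assumption: dual-socket nodes
cores = nodes * sockets_per_node * 64
clock_ghz = 2.6        # base clock from the spec above
flops_per_cycle = 16   # assumption: 2 x 256-bit AVX2 FMA units, FP64

peak_pflops = cores * clock_ghz * flops_per_cycle / 1e6  # GFLOPS -> PFLOPS
print(cores)                  # 144384
print(round(peak_pflops, 2))  # 6.01 -- in the same range as the listed 5.94 peak
```

The small gap between this estimate and the listed 5.94 petaflops likely reflects differing node or clock assumptions in the official figure.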

Deucalion (Portugal)

Sustained performance: 7.22 petaflops
Peak performance: 10 petaflops
Compute partitions: ARM partition (1,632 nodes, 3.8 petaflops); x86 partition (500 nodes, 1.62 petaflops); accelerated partition (33 nodes, 1.72 petaflops)
Central Processing Unit (CPU): A64FX (ARM partition), AMD EPYC (x86 partition)
Graphics Processing Unit (GPU): NVIDIA Ampere
Storage capacity: 430 TB high-speed NVMe partition, 10.6 PB high-speed parallel filesystem partition

Applications: Traditional computational workloads, AI, big data
Other details:

Deucalion will be installed at the Portuguese Foundation for Science and Technology (FCT) Minho Advanced Computing Centre (MACC), in close collaboration with the municipality of Guimarães in the north of Portugal, as part of a fully sustainable computing infrastructure aimed at promoting new advances in the digital and green transitions.