LUMI
LUMI is a pre-exascale EuroHPC supercomputer located in Kajaani, Finland. It is a Cray EX supercomputer supplied by Hewlett Packard Enterprise (HPE) and hosted by CSC – IT Center for Science.
More technical information regarding LUMI can be found here.
Compute partitions: GPU partition (LUMI-G), x86 CPU partition (LUMI-C), data analytics partition (LUMI-D), container cloud partition (LUMI-K)
Central Processing Unit (CPU): The LUMI-C partition features 64-core third-generation AMD EPYC™ 7763 (Milan) CPUs
Graphics Processing Unit (GPU): LUMI-G is based on AMD Instinct™ MI250X GPUs
Storage capacity: LUMI's storage system consists of three components: a 7-petabyte partition of ultra-fast flash storage, a more traditional 80-petabyte capacity tier based on the Lustre parallel filesystem, and a 30-petabyte data management service based on Ceph. In total, LUMI offers 117 petabytes of storage and a maximum I/O bandwidth of 2 terabytes per second (see the capacity check after this table).
Applications: AI, especially deep learning, and traditional large-scale simulations combined with massive-scale data analytics within a single research problem
TOP500 ranking: #1 in EU; #5 globally (May 2024 ranking)
Green500 ranking: #6 in EU; #12 globally (May 2024 ranking)
Other details: LUMI occupies over 150 m² of floor space, about the size of a tennis court. The system weighs nearly 150,000 kilograms (150 metric tons).
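The three storage tiers quoted above add up to the stated total, a simple consistency check using only this table's figures:

  7\,\mathrm{PB}\ (\text{flash}) + 80\,\mathrm{PB}\ (\text{Lustre}) + 30\,\mathrm{PB}\ (\text{Ceph}) = 117\,\mathrm{PB}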
For information about pay per use conditions, please contact the hosting site directly: CSC – IT Center for Science
LEONARDO
Leonardo is a pre-exascale EuroHPC supercomputer located in Bologna, Italy. It is supplied by Atos, based on the BullSequana XH2000 supercomputer platform, and hosted by CINECA.
Compute partitions: GPU partition (Booster) delivering 240 petaflops; x86 CPU partition (Data-Centric) delivering 9 petaflops and featuring DDR5 memory and local NVMe
Central Processing Unit (CPU): Intel Ice Lake (Booster), Intel Sapphire Rapids (Data-Centric)
Graphics Processing Unit (GPU): 13,824 custom "Da Vinci" GPUs (based on the NVIDIA Ampere architecture) delivering up to 10 exaflops of FP16 Tensor Core AI performance (see the cross-check after this table)
Storage capacity: Leonardo is equipped with over 100 petabytes of state-of-the-art hard disk drives and 5 petabytes of full-flash NVMe storage.
Applications: The system targets modular and scalable computing applications, data analysis, as well as interactive, urgent and cloud computing applications
TOP500 ranking: #2 in EU; #7 globally (May 2024 Ranking)
Green500 ranking: #14 in EU; #28 globally (May 2024 Ranking)
Other details: Leonardo is hosted on the premises of the Tecnopolo di Bologna. The area devoted to the EuroHPC Leonardo system includes 1,240 m² of computing room floor space and 900 m² of ancillary space.
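A rough, hedged cross-check of the FP16 figure above, assuming the custom Ampere-architecture GPUs deliver at least the standard A100's ~624 TFLOPS of sparse FP16 Tensor Core throughput (the custom parts are clocked higher, which would close the gap to the quoted figure):

  13\,824 \times 624\,\mathrm{TFLOPS} \approx 8.6\ \mathrm{exaflops}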
For information about pay per use conditions, please contact the hosting site directly: CINECA
MARENOSTRUM 5
MareNostrum 5 is a pre-exascale EuroHPC supercomputer located in Barcelona, Spain. The system is supplied by Bull SAS combining Bull Sequana XH3000 and Lenovo ThinkSystem architectures. MareNostrum 5 is hosted by Barcelona Supercomputing Center (BSC).
More technical information regarding MareNostrum 5 can be found here.
Compute partitions: GPP (General Purpose partition), ACC (Accelerated partition), NGT GPP (Next Generation Technology General Purpose partition) and NGT ACC (Next Generation Technology Accelerated partition), plus additional smaller partitions for pre- and post-processing.
Central Processing Unit (CPU): The GPP and ACC partitions both rely on Intel Sapphire Rapids CPUs. The NGT ACC is based on Intel Emerald Rapids and the NGT GPP on NVIDIA Grace.
Graphics Processing Unit (GPU): The ACC partition is based on NVIDIA Hopper, whereas the NGT ACC partition is built on Intel Rialto Bridge.
Storage capacity: MareNostrum 5 provides 248 PB of net storage capacity based on SSD/flash and hard disks, with an aggregated performance of 1.2 TB/s on writes and 1.6 TB/s on reads. A long-term tape-based archive will provide 402 PB of additional capacity (see the combined figure after this table). Spectrum Scale and Archive are used as the parallel filesystem and tiering solution, respectively.
Applications: MareNostrum 5's heterogeneous configuration makes it well suited to a broad range of applications, with a special focus on medical applications, drug discovery and digital twins (of the Earth and the human body), as well as energy. Its large general-purpose partition provides an environment well suited to most current applications that solve scientific and industrial problems, while the accelerated partition provides an excellent environment for large-scale simulations, AI and deep learning.
TOP500 ranking: #3 in EU; #8 globally (May 2024 Ranking)
Green500 ranking: #7 in EU; #15 globally (May 2024 Ranking)
Other details: MareNostrum 5 is located in BSC's new facilities, next to the Chapel that hosts previous systems. The data centre has a total power capacity of 20 MW and a cooling capacity of 17 MW, with a PUE below 1.08.
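Adding the online and archive capacities quoted above gives the total addressable capacity (a simple sum of this table's figures):

  248\,\mathrm{PB}\ (\text{SSD/flash and disk}) + 402\,\mathrm{PB}\ (\text{tape archive}) = 650\,\mathrm{PB}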
For information about pay per use conditions, please contact the hosting site directly: Barcelona Supercomputing Center (BSC)
MELUXINA
MeluXina is a petascale EuroHPC supercomputer located in Bissen, Luxembourg. It is supplied by Atos, based on the BullSequana XH2000 supercomputer platform and hosted by LuxProvide.
More technical information regarding MeluXina can be found here.
Compute partitions: Accelerator - GPU (500 AI PetaFlops; see the estimate after this table), Cluster (3 PetaFlops peak), Accelerator - FPGA, and Large Memory Modules
Central Processing Unit (CPU): AMD EPYC
Graphics Processing Unit (GPU): NVIDIA Ampere A100
Storage capacity: 20 petabytes of main storage with an all-flash scratch tier delivering over 600 GB/s, plus tape archival capabilities
Applications: AI, Digital Twins, Traditional Computational workloads, Quantum simulation
TOP500 ranking: #22 in EU; #89 globally (May 2024 Ranking)
Green500 ranking: #18 in EU; #39 globally (May 2024 Ranking)
Other details: Modular Supercomputer Architecture; Cloud Module for complex use cases and persistent services; InfiniBand HDR interconnect; high-speed links to the RESTENA NREN and GÉANT network, the Luxembourg Internet Exchange and the public Internet
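A hedged back-of-the-envelope reading of the 500 AI PetaFlops figure above, assuming it is quoted on the basis of the standard NVIDIA A100's ~624 TFLOPS of sparse FP16 Tensor Core throughput per GPU (the exact basis is not stated here):

  500\,\mathrm{PFLOPS} \div 0.624\,\mathrm{PFLOPS\ per\ GPU} \approx 800\ \text{A100 GPUs}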
For information about pay per use conditions, please contact the hosting site directly: LuxProvide
KAROLINA
Karolina is a petascale EuroHPC supercomputer located in Ostrava, Czechia. It is supplied by Hewlett Packard Enterprise (HPE), based on HPE Apollo 2000 Gen10 Plus and HPE Apollo 6500 systems. Karolina is hosted by IT4Innovations National Supercomputing Center.
More technical information regarding Karolina can be found here.
Compute partitions: The supercomputer consists of 6 main parts.
Central Processing Unit (CPU): More than 100,000 CPU cores and 250 TB of RAM
Graphics Processing Unit (GPU): More than 3.8 million CUDA cores and 240,000 Tensor Cores across NVIDIA A100 Tensor Core GPU accelerators, with a total of 22.4 TB of superfast HBM2 memory (see the per-GPU breakdown after this table)
Storage capacity: More than 1 petabyte of user data on high-speed storage with a throughput of 1 TB/s
Applications: Traditional Computational, AI, Big Data
TOP500 ranking: #37 in EU; #135 globally (May 2024 Ranking)
Green500 ranking: #17 in EU; #36 globally (May 2024 Ranking)
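The aggregate GPU figures above are consistent with roughly 560 A100 accelerators, assuming the per-device A100 specifications of 6,912 FP32 CUDA cores, 432 Tensor Cores and 40 GB of HBM2 (the same per-GPU figures quoted for Vega below):

  560 \times 40\,\mathrm{GB} = 22.4\,\mathrm{TB}, \qquad 560 \times 6\,912 \approx 3.87\ \text{million CUDA cores}, \qquad 560 \times 432 = 241\,920\ \text{Tensor Cores}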
For information about pay per use conditions, please contact the hosting site directly: IT4Innovations National Supercomputing Center
DISCOVERER
Discoverer is a petascale EuroHPC supercomputer located in Sofia, Bulgaria. It is supplied by Atos, based on the BullSequana XH2000 supercomputer and hosted by Sofia Tech Park.
More technical information regarding Discoverer can be found here.
Compute partitions: One partition providing 1,128 nodes, 4.44 petaflops
Central Processing Unit (CPU): AMD EPYC 7H12, 64-core, 2.6 GHz, 280 W (code name Rome)
Graphics Processing Unit (GPU): None
Storage capacity: 2 petabytes
Applications: Traditional Computational, HPC as a Service / Federated HPC Supercomputing services
TOP500 ranking: #59 in EU; #188 globally (May 2024 Ranking)
Green500 ranking: #87 in EU; #280 globally (May 2024 Ranking)
Other details: Topology - Dragonfly+ with 200 Gbps (InfiniBand HDR) bandwidth per link
For information about pay per use conditions, please contact the hosting site directly: Sofia Tech Park
VEGA
Vega is a petascale EuroHPC supercomputer located in Maribor, Slovenia. It is supplied by Atos, based on the BullSequana XH2000 supercomputer platform, and hosted by IZUM.
Compute partitions: CPU partition: 960 nodes with 2 CPUs and 256 GB of memory per node (20% with 1 TB per node), 1x HDR100; GPU partition: 60 nodes with 2 CPUs and 512 GB of memory, 2x HDR100 and 4x NVIDIA A100 per node
Central Processing Unit (CPU): 2,040 AMD EPYC 7H12 CPUs (64 cores, 2.6-3.3 GHz), 130,560 cores across the CPU and GPU partitions (see the consistency check after this table)
Graphics Processing Unit (GPU): 240x NVIDIA A100 with 40 GB HBM2 (+4 on GPU login nodes), 6,912 FP32 CUDA cores and 432 Tensor Cores per GPU
Storage capacity: High-performance NVMe Lustre (1 PB), large-capacity Ceph (23 PB)
Applications: Traditional Computational, AI, Big Data/HPDA, Large-scale data processing
TOP500 ranking: #68 in EU; #226 globally (May 2024 Ranking)
Green500 ranking: #97 in EU; #304 globally (May 2024 Ranking)
Other details: 6x 100 Gbit/s bandwidth for data transfers to other national and international computing centres; data-processing throughput of more than 400 GB/s with the high-performance storage and 200 GB/s with the large-capacity storage
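The figures above are internally consistent, using only the numbers quoted in this table (the 9.6 TB of aggregate HBM2 is derived, not quoted):

  (960 + 60) \times 2 = 2\,040\ \text{CPUs}, \qquad 2\,040 \times 64 = 130\,560\ \text{cores}, \qquad 240 \times 40\,\mathrm{GB} = 9.6\,\mathrm{TB}\ \text{of HBM2}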
For information about pay per use conditions, please contact the hosting site directly: IZUM
DEUCALION
Deucalion is a petascale EuroHPC supercomputer located in Guimarães, Portugal. It is supplied by Fujitsu Technology Solutions, combining Fujitsu PRIMEHPC (ARM partition) and Atos BullSequana (x86 partitions) architectures. Deucalion is hosted by MACC.
Compute partitions: ARM partition: 1,632 nodes, 3.8 PFlops; x86 partition: 500 nodes, 1.62 PFlops; Accelerated partition: 33 nodes, 1.72 PFlops (see the combined figure after this table)
Central Processing Unit (CPU): A64FX (ARM partition), AMD EPYC (x86 partitions)
Graphics Processing Unit (GPU): NVIDIA Ampere
Storage capacity: 430 TB high-speed NVMe partition, 10.6 PB parallel file system partition
Applications: Traditional Computational, AI, Big Data
TOP500 ranking: #67 in EU; #219 globally (May 2024 Ranking)
Green500 ranking: #35 in EU; #80 globally (May 2024 Ranking)
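Summing the peak figures of the three partitions listed above gives the combined performance (a simple check using only this table's numbers):

  3.8 + 1.62 + 1.72 = 7.14\ \mathrm{PFlops}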
For information about pay per use conditions, please contact the hosting site directly: MACC
*ARM partition only
JUPITER
JUPITER will be the first EuroHPC exascale supercomputer. The system will be located at the Forschungszentrum Jülich campus in Germany and operated by the Jülich Supercomputing Centre. It will be based on Eviden’s BullSequana XH3000 direct liquid cooled architecture.
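For reference, exascale here means a sustained performance of at least one exaflop:

  1\ \mathrm{EFLOP/s} = 10^{18}\ \text{floating-point operations per second}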
Compute partitions: Booster Module (highly scalable, GPU-accelerated); Cluster Module (general purpose, high memory bandwidth)
Central Processing Unit (CPU): The Cluster Module will utilise the SiPearl Rhea1 processor (ARM, HBM), integrated into the BullSequana XH3000 platform.
Graphics Processing Unit (GPU): The Booster Module will utilise NVIDIA technology, integrated into the BullSequana XH3000 platform.
Storage capacity: JUPITER will provide a 20-petabyte partition of ultra-fast flash storage. The spinning disk and backup infrastructure capacity will be procured separately and is subject to change.
Applications: JUPITER will be designed to tackle the most demanding simulations and compute-intensive AI applications in science and industry. Applications will include training large neural networks like language models in AI, simulations for developing functional materials, creating digital twins of the human heart or brain for medical purposes, validating quantum computers, and high-resolution simulations of climate that encompass the entire Earth system.
Green500 ranking: JEDI module: #1 in EU; #1 globally (May 2024 Ranking)
*Expected sustained performance
All systems display the real performance of the combined partitions and are ordered according to the latest TOP500 ranking.