Compute partitions: GPU partition (LUMI-G), x86 CPU partition (LUMI-C), data analytics partition (LUMI-D), container cloud partition (LUMI-K)
Central Processing Unit (CPU): The LUMI-C partition features 64-core next-generation AMD EPYC™ CPUs
Graphics Processing Unit (GPU): LUMI-G is based on the future-generation AMD Instinct™ GPU
LUMI’s storage system consists of three components: a 7-petabyte partition of ultra-fast flash storage, combined with a more traditional 80-petabyte capacity storage based on the Lustre parallel filesystem, and a 30-petabyte data management service based on Ceph. In total, LUMI has a storage capacity of 117 petabytes and a maximum I/O bandwidth of 2 terabytes per second.
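As a quick sanity check, the three component capacities quoted above do add up to the stated 117-petabyte total; a minimal sketch (variable names are illustrative only):

```python
# Capacities of LUMI's three storage components, in petabytes (figures from the text above)
flash_pb = 7      # ultra-fast flash partition
capacity_pb = 80  # traditional capacity storage (Lustre)
ceph_pb = 30      # Ceph-based data management service

total_pb = flash_pb + capacity_pb + ceph_pb
print(total_pb)  # 117, matching the stated total
```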
Applications: AI, especially deep learning, and traditional large-scale simulations combined with massive-scale data analytics in solving one research problem
TOP500 ranking: #1 in EU; #3 globally (November 2022)
Green500 ranking: #3 in EU; #7 globally (November 2022)
LUMI takes up over 150 m² of space, about the size of a tennis court. The system weighs nearly 150,000 kilograms (150 metric tons).
Compute partitions: Booster, a hybrid CPU-GPU module delivering 240 petaflops; Data-Centric, delivering 9 petaflops and featuring DDR5 memory and local NVM for data analysis
Central Processing Unit (CPU): Intel Ice Lake (Booster), Intel Sapphire Rapids (Data-Centric)
Graphics Processing Unit (GPU): NVIDIA Ampere architecture-based GPUs, delivering 10 exaflops of FP16 tensor-core AI performance
Storage capacity: Leonardo is equipped with over 100 petabytes of state-of-the-art storage capacity and 5 PB of high-performance storage
Applications: The system targets modular computing, scalable computing applications, data-analysis computing applications, visualization applications, interactive computing applications, and urgent and cloud computing
TOP500 ranking: #2 in EU; #4 globally (November 2022)
Green500 ranking: #7 in EU; #14 globally (November 2022)
Other details: Leonardo is hosted on the premises of the Tecnopolo di Bologna. The area devoted to the EuroHPC Leonardo system includes 890 m² of data hall, 350 m² of data storage, electrical, cooling and ventilation systems, offices and ancillary spaces
MareNostrum 5 is a pre-exascale EuroHPC supercomputer to be located in Barcelona, Spain. The system is supplied by Bull SAS combining Bull Sequana XH3000 and Lenovo ThinkSystem architectures. MareNostrum 5 is hosted by Barcelona Supercomputing Center (BSC).
Compute partitions: GPP (General Purpose Partition), ACC (Accelerated Partition), NGT GPP (Next Generation Technology General Purpose Partition) and NGT ACC (Next Generation Technology Accelerated Partition). Additional smaller partitions for pre- and post-processing.
Central Processing Unit (CPU): The GPP and ACC partitions both rely on Intel Sapphire Rapids CPUs. NGT ACC is based on Intel Emerald Rapids, and NGT GPP is based on NVIDIA Grace.
Graphics Processing Unit (GPU): The ACC partition is based on NVIDIA Hopper, whereas the NGT ACC partition is built on Intel Rialto Bridge.
Storage capacity: MareNostrum storage provides 248 PB of net capacity based on SSD/flash and hard disks, with an aggregated performance of 1.2 TB/s on writes and 1.6 TB/s on reads. A long-term archive storage solution based on tapes will provide 402 PB of additional capacity. Spectrum Scale and Archive will be used as the parallel filesystem and tiering solution, respectively.
Thanks to its heterogeneous configuration, MareNostrum 5 suits a wide range of applications, with a special focus on medical applications and drug discovery as well as digital twins (Earth and the human body), energy, and more. Its large general-purpose partition provides an environment well suited to most current applications solving scientific and industrial problems. In addition, the accelerated partition provides an excellent environment for large-scale simulations, AI and deep learning.
MareNostrum 5 is located in BSC’s new facilities, next to the Chapel which hosts previous systems. The datacenter has a total power capacity of 20 MW and a cooling capacity of 17 MW, with a PUE below 1.08.
Compute partitions: CPU partition: 960 nodes, 256 GB memory/node (20% with double memory), HDR100; GPU partition: 60 nodes, HDR200
Central Processing Unit (CPU): 122,800 cores, 1920 CPUs, AMD EPYC 7H12
Graphics Processing Unit (GPU): 240 NVIDIA A100 cards
Storage capacity: High-performance NVMe Lustre (1 PB), large-capacity Ceph (23 PB)
Applications: Traditional Computational, AI, Big Data/HPDA, large-scale data processing
TOP500 ranking: #44 in EU; #140 globally (November 2022)
Other details: Wide bandwidth for data transfers to other national and international computing centres (up to 500 Gbit/s). Data processing throughput of 400 GB/s from high-performance storage and 200 Gb/s from large-capacity storage
Compute partitions: Cluster, Accelerator - GPU, Accelerator - FPGA, Large Memory
Central Processing Unit (CPU): AMD EPYC
Graphics Processing Unit (GPU): NVIDIA Ampere A100
Storage capacity: 20 petabytes of main storage with an all-flash scratch tier at 400 GB/s, and a 5-petabyte tape library expandable to 100 petabytes
Applications: Traditional Computational, AI and Big Data/HPDA workloads
TOP500 ranking: #14 in EU; #52 globally (November 2022)
Other details: Modular Supercomputer Architecture with a Cloud Module for complex use cases and persistent services, an aggregated 476 TB of RAM, InfiniBand HDR interconnect in Dragonfly+ topology, and high-speed links to the GÉANT network and the public Internet
The supercomputer consists of 6 main parts:
Central Processing Unit (CPU): More than 100,000 CPU cores and 250 TB of RAM
Graphics Processing Unit (GPU): More than 3.8 million CUDA cores / 240,000 tensor cores across NVIDIA A100 Tensor Core GPU accelerators, with a total of 22.4 TB of superfast HBM2 memory
Storage capacity: More than 1 petabyte of high-speed storage for user data, with a throughput of 1 TB/s
Applications: Traditional Computational, AI, Big Data
TOP500 ranking: #24 in EU; #85 globally (November 2022)
Compute partitions: One partition providing 1128 nodes, 4.44 petaflops
Central Processing Unit (CPU): AMD EPYC 7H12, 64-core, 2.6 GHz, 280 W (codename Rome)
Graphics Processing Unit (GPU): None
Storage capacity: 2 petabytes
TOP500 ranking: #38 in EU; #123 globally (November 2022)
Other details: Topology: Dragonfly+ with 200 Gbps (IB HDR) bandwidth per link
Deucalion is a petascale EuroHPC supercomputer currently being built in Guimarães, Portugal. It is supplied by Fujitsu, combining a Fujitsu PRIMEHPC (ARM partition) with an Atos Bull Sequana (x86 partitions). Deucalion is hosted by MACC.
Compute partitions: ARM partition: 1632 nodes, 3.8 PFlops; x86 partition: 500 nodes, 1.62 PFlops; Accelerated partition: 33 nodes, 1.72 PFlops
Central Processing Unit (CPU): A64FX (ARM partition), AMD EPYC (x86 partitions)
Graphics Processing Unit (GPU): NVIDIA Ampere
Storage capacity: 430 TB high-speed NVMe partition, 10.6 PB high-speed parallel file system partition
Applications: Traditional Computational, AI, Big Data
Deucalion will be installed at the Portuguese Foundation for Science and Technology (FCT) Minho Advanced Computing Centre (MACC), in close collaboration with the municipality of Guimarães, in the north of Portugal. It is part of a fully sustainable computing infrastructure aimed at promoting new advances in the digital and green transitions.