Discover EuroHPC JU

The European High Performance Computing Joint Undertaking (EuroHPC JU) is a legal and funding entity created in 2018 and based in Luxembourg.

The EuroHPC JU allows the European Union and the EuroHPC JU participating countries to coordinate their efforts and pool their resources with the aim of making Europe a world leader in supercomputing. This will boost Europe's scientific excellence and industrial strength, and support the digital transformation of its economy while ensuring its technological sovereignty.

More precisely, the EuroHPC JU aims to:

  • develop, deploy, extend, and maintain a world-leading supercomputing and data infrastructure in Europe. The objective is to reach exascale capabilities by 2022/2024. Exascale supercomputers are capable of more than a billion billion operations per second, compared with the roughly ten billion operations per second of an ordinary laptop. Another objective is to build 'hybrid' machines that blend the best of quantum and classical HPC technologies, with the first state-of-the-art pilot quantum computers arriving by 2025.
  • support the development and uptake of innovative and competitive supercomputing technologies and applications, based on a supply chain that will reduce Europe's dependency on foreign computing technology. A specific focus will be given to greener, more energy-efficient HPC technologies. Synergies with broader technology sectors and markets, such as autonomous vehicles, extreme-scale big data, and applications based on edge computing or artificial intelligence, will be encouraged.
  • widen the use of HPC infrastructures among public and private users wherever they are located in Europe, and support the development of key HPC skills, education and training for European science and industry. One of the objectives is to create a network of national HPC Competence Centres to ease access to European HPC opportunities in different industrial sectors and deliver tailored solutions. Another flagship objective is to set up the first pan-European Master of Science programme in HPC to develop HPC talent in Europe.
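To make the exascale comparison above concrete, here is a small back-of-the-envelope sketch; both figures are the rough orders of magnitude quoted in the text, not measured values:

```python
# Rough orders of magnitude quoted above; both figures are approximations.
EXASCALE_OPS_PER_SEC = 1e18  # exascale: more than a billion billion ops/s
LAPTOP_OPS_PER_SEC = 1e10    # ordinary laptop: ~ten billion ops/s

speedup = EXASCALE_OPS_PER_SEC / LAPTOP_OPS_PER_SEC
seconds_per_year = 86400 * 365.25

print(f"Exascale is ~{speedup:.0e}x an ordinary laptop")
print(f"One exascale-second is ~{speedup / seconds_per_year:.1f} years of laptop time")
```

In other words, a single second of exascale computation corresponds to several years of continuous work on an ordinary laptop.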


The EuroHPC Joint Undertaking is composed of public and private members:

Public members:

  • the European Union (represented by the Commission),
  • Member States and Associated Countries that have chosen to become members of the Joint Undertaking: Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Norway, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden and Turkey.

Private members:


Map of Europe presenting the 30 EuroHPC participating countries


The EuroHPC Joint Undertaking is jointly funded by its members with a budget of around EUR 7 billion for the period 2021-2027.

Most of this funding comes from the current EU long-term budget, the Multiannual Financial Framework (MFF 2021-2027), with a contribution of EUR 3 billion, distributed as follows:

  • EUR 1.9 billion from the Digital Europe Programme (DEP) to support the acquisition, deployment, upgrading and operation of the infrastructures, the federation of supercomputing services, and the widening of HPC usage and skills;
  • EUR 900 million from Horizon Europe (H-E) to support research and innovation activities for developing a world-class, competitive and innovative supercomputing ecosystem across Europe;
  • EUR 200 million from Connecting Europe Facility-2 (CEF-2) to improve the interconnection of HPC, quantum computing, and data resources, as well as the interconnection with the Union’s common European data spaces and secure cloud infrastructures.

The EU contribution is matched by a similar amount from the participating countries. Additionally, private members are contributing an amount of EUR 900 million.
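The budget arithmetic above can be checked in a few lines; the figures are as stated in the text, and the participating countries' matched contribution is taken as equal to the EU's EUR 3 billion:

```python
# EU funding streams under the MFF 2021-2027, in billions of euros (as listed above).
eu_streams = {"DEP": 1.9, "Horizon Europe": 0.9, "CEF-2": 0.2}

eu_total = sum(eu_streams.values())   # EUR 3 billion from the EU budget
countries = eu_total                  # matched by the participating countries
private = 0.9                         # contribution from private members

total = eu_total + countries + private
print(f"Overall budget: ~EUR {total:.1f} billion")  # ~6.9, i.e. "around EUR 7 billion"
```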

The Joint Undertaking provides financial support in the form of procurement or research and innovation grants to participants following open and competitive calls.


Today the EuroHPC JU has procured seven supercomputers, located across Europe: LUMI in Finland, Leonardo in Italy, MeluXina in Luxembourg, Vega in Slovenia, Karolina in the Czech Republic, Discoverer in Bulgaria and Deucalion in Portugal. 


Visual of the LUMI machine, based on an HPE Cray EX supercomputer


The LUMI system will be an HPE Cray EX supercomputer supplied by Hewlett Packard Enterprise (HPE) and located in Finland.

Sustained performance: 375 petaflops
Peak performance: 552 petaflops
Compute partitions: GPU partition (LUMI-G), x86 CPU-partition (LUMI-C), data analytics partition (LUMI-D), container cloud partition (LUMI-K)
Central Processing Unit (CPU): The LUMI-C partition will feature 64-core next-generation AMD EPYC™ CPUs
Graphics Processing Unit (GPU): LUMI-G based on the future generation AMD Instinct™ GPU
Storage capacity: LUMI’s storage system will consist of three components: a 7-petabyte partition of ultra-fast flash storage; a more traditional 80-petabyte capacity tier based on the Lustre parallel filesystem; and a 30-petabyte data management service based on Ceph. In total, LUMI will have 117 petabytes of storage and a maximum I/O bandwidth of 2 terabytes per second
Applications: AI, especially deep learning, and traditional large-scale simulations combined with massive-scale data analytics in solving a single research problem
Other details:

LUMI occupies over 150 m² of floor space, which is about the size of a tennis court. The system weighs nearly 150,000 kilograms (150 metric tons)
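LUMI's headline 117-petabyte figure is simply the sum of the three storage tiers described above, which can be sanity-checked as follows:

```python
# LUMI storage tiers in petabytes, as described in the text.
lumi_storage_pb = {
    "ultra-fast flash": 7,
    "Lustre capacity tier": 80,
    "Ceph data management": 30,
}

total_pb = sum(lumi_storage_pb.values())
print(f"Total LUMI storage: {total_pb} PB")  # 117 PB, matching the stated total
```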



Visual of LEONARDO, based on a BullSequana XH2000 from Atos

© Atos

Leonardo will be supplied by Atos, based on the BullSequana XH2000 supercomputer, and located in Italy.

Sustained performance:  249.4 petaflops
Peak performance: 322.6 petaflops
Compute partitions: Booster, a hybrid CPU-GPU module delivering 240 petaflops; Data-Centric, delivering 9 petaflops and featuring DDR5 memory and local NVM for data analysis
Central Processing Unit (CPU): Intel Ice-Lake (Booster), Intel Sapphire Rapids (data-centric)
Graphics Processing Unit (GPU): NVIDIA Ampere architecture-based GPUs, delivering 10 exaflops of FP16 AI performance
Storage capacity: Leonardo is equipped with over 100 petabytes of state-of-the-art storage capacity and 5PB of High Performance storage
Applications: The system targets modular computing, scalable computing applications, data-analysis applications, visualisation and interactive computing applications, and urgent and cloud computing
Other details: Leonardo will be hosted on the premises of the Tecnopolo di Bologna. The area devoted to the EuroHPC Leonardo system includes an 890 m² data hall, 350 m² for data storage, electrical, cooling and ventilation systems, and offices and ancillary spaces




MeluXina supercomputer


MeluXina is supplied by Atos, based on the BullSequana XH2000 supercomputer platform and located in Luxembourg. 

Sustained performance: Committed 10 petaflops HPL (Accelerator - GPU Module), 2+ petaflops HPL (Cluster Module)
Peak performance: Expected 15+ petaflops HPL and ~500 petaflops AI (Accelerator - GPU Module), 3+ petaflops HPL (Cluster Module)
Compute partitions: Cluster, Accelerator - GPU, Accelerator - FPGA, Large Memory
Central Processing Unit (CPU): AMD EPYC
Graphics Processing Unit (GPU): NVIDIA Ampere A100
Storage capacity: 20 petabytes main storage with an all-flash scratch tier at 400GB/s, and a 5 petabytes tape library expandable to 100 petabytes
Applications: Traditional Computational, AI and Big Data/HPDA workloads
TOP500 ranking: #10 in EU; #36 globally (June 2021)
Green500 ranking: #1 in EU; #4 globally (June 2021)
Other details: Modular Supercomputer Architecture with a Cloud Module for complex use cases and persistent services, an aggregated 476 TB of RAM, InfiniBand HDR interconnect in a Dragonfly+ topology, and high-speed links to the GÉANT network and the public Internet




Vega HPC

© Atos

Vega is supplied by Atos, based on a BullSequana XH2000 supercomputer, and located in Slovenia.

Sustained performance: 6.9 petaflops
Peak performance: 10.1 petaflops
Compute partitions: CPU partition: 960 nodes, 256 GB memory/node (20% with double memory), HDR100; GPU partition: 60 nodes, HDR200
Central Processing Unit (CPU): 122,800 cores, 1,920 AMD EPYC 7H12 CPUs
Graphics Processing Unit (GPU): 240 Nvidia A100 cards
Storage capacity: High-performance NVMe Lustre (1PB), large-capacity Ceph (23PB)
Applications: Traditional Computational, AI, Big Data/HPDA, Large-scale data processing
TOP500 ranking: #32 in EU; #106 globally (June 2021)
Other details: Wide bandwidth for data transfers to other national and international computing centres (up to 500 Gbit/s). Data-processing throughput of 400 GB/s from high-performance storage and 200 Gb/s from large-capacity storage



Karolina supercomputer


Karolina is supplied by Hewlett Packard Enterprise (HPE), based on HPE Apollo 2000 Gen10 Plus and HPE Apollo 6500 systems, and located in the Czech Republic.

Sustained performance: 9.13 petaflops
Peak performance: 15.7 petaflops
Compute partitions:

The supercomputer will consist of six main parts:

  • a universal part for standard numerical simulations, consisting of approximately 720 compute servers with a theoretical peak performance of 3.8 PFlop/s,
  • an accelerated part with 70 servers, each equipped with 8 GPU accelerators, providing a performance of 11 PFlop/s for standard HPC simulations and up to 150 PFlop/s for artificial intelligence computations,
  • a part designated for large-dataset processing, providing shared memory of up to 24 TB and a performance of 74 TFlop/s,
  • 36 servers with a performance of 131 TFlop/s dedicated to providing cloud services,
  • a high-speed network connecting all parts as well as individual servers at speeds of up to 200 Gb/s,
  • data storage providing space for more than 1 PB of user data, including high-speed storage with a throughput of 1 TB/s for simulations and for computations in advanced data analysis and artificial intelligence.
Central Processing Unit (CPU): More than 100,000 CPU cores and 250 TB of RAM
Graphics Processing Unit (GPU): More than 3.8 million CUDA cores / 240,000 tensor cores of NVIDIA A100 Tensor Core GPU accelerators with a total of 22.4 TB of superfast HBM2 memory
Storage capacity: More than 1 petabyte of user data with high-speed data storage with a speed of 1 TB/s
Applications: Traditional Computational, AI, Big Data
TOP500 ranking: #20 in EU; #69 globally (June 2021)
Green500 ranking: #6 in EU; #15 globally (June 2021)
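As a sanity check, the per-partition figures listed for Karolina can be added up; converted to petaflops they account for roughly the machine's quoted 15.7-petaflops peak (a small gap is expected, since the listed values are rounded):

```python
# Karolina partitions converted to petaflops (1 PFlop/s = 1000 TFlop/s).
partitions_pflops = {
    "universal": 3.8,
    "accelerated (HPC mode)": 11.0,
    "large-dataset": 74 / 1000,   # 74 TFlop/s
    "cloud": 131 / 1000,          # 131 TFlop/s
}

accounted = sum(partitions_pflops.values())
print(f"Listed partitions: ~{accounted:.2f} PFlop/s of the 15.7 PFlop/s peak")
```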




Discoverer supercomputer

© Atos


Discoverer is supplied by Atos, based on a BullSequana XH2000 supercomputer and located in Bulgaria.

Sustained performance: 4.5 petaflops
Peak performance: 6 petaflops
Compute partitions: One partition of 1,128 nodes, 4.44 petaflops
Central Processing Unit (CPU): AMD EPYC 7H12, 64 cores, 2.6 GHz, 280 W (code name Rome)
Graphics Processing Unit (GPU): None
Storage capacity: petabytes
Applications: Traditional Computational
TOP500 ranking: #27 in EU; #91 globally (June 2021)
Other details: Dragonfly+ topology with 200 Gbps (InfiniBand HDR) bandwidth per link




© Fujitsu

Deucalion will be supplied by Fujitsu and located in Portugal. It will combine a Fujitsu PRIMEHPC (ARM partition) with an Atos BullSequana (x86 partitions).

Sustained performance: 7.22 petaflops
Peak performance: 10 petaflops
Compute partitions: ARM partition: 1,632 nodes, 3.8 petaflops; x86 partition: 500 nodes, 1.62 petaflops; accelerated partition: 33 nodes, 1.72 petaflops
Central Processing Unit (CPU): A64FX (ARM partition), AMD EPYC (x86 partitions)

Graphics Processing Unit (GPU): NVIDIA Ampere
Storage capacity: 430 TB high-speed NVMe partition, 10.6 PB high-speed parallel file system partition

Applications: Traditional Computational, AI, Big Data
Other details:

Deucalion will be installed at the Portuguese Foundation for Science and Technology (FCT) Minho Advanced Computing Centre (MACC), in close collaboration with the municipality of Guimarães in the north of Portugal, as part of a fully sustainable computing infrastructure aimed at promoting new advances in the digital and green transitions.


Benefits of supercomputing

Supercomputing is a critical tool for understanding and responding to complex challenges and transforming them into innovation opportunities.

Benefits for citizens

Supercomputing is starting to play a key role in medicine: discovering new drugs, and developing and targeting therapies to the individual needs and conditions of patients with cancer, cardiovascular disease, Alzheimer’s disease or rare genetic disorders. Today, supercomputers are actively involved in the quest for COVID-19 treatments, testing drug-candidate molecules and repositioning existing drugs for new diseases. Supercomputing is also crucial to understanding how epidemics and diseases emerge and evolve.

Supercomputing is of critical importance for anticipating severe weather: it can provide accurate simulations predicting the evolution of weather patterns, as well as the size and paths of storms and floods. This is key to activating early-warning systems that save human lives and reduce damage to property and public infrastructure.

Supercomputers are also key to monitoring the effects of climate change. They do so by improving our knowledge of geophysical processes, monitoring the evolution of the Earth's resources, reducing the environmental footprint of industry and society, and supporting sustainable agriculture through numerical simulations of plant growth.

Supercomputers are also vital for national security, defence and sovereignty, as they are used to strengthen cybersecurity and fight cybercrime, in particular to protect critical infrastructures.

Benefits for industry

Supercomputing enables industrial sectors like automotive, aerospace, renewable energy and health to innovate, become more productive and to scale up to higher value products and services.

Supercomputing has a growing impact on industries and businesses by significantly reducing product design and production cycles, accelerating the design of new materials, minimising costs, increasing resource efficiency and shortening and optimising decision processes.

It paves the way to novel industrial applications: from safer and greener vehicles to more efficient photovoltaics, sustainable buildings and optimised turbines for electricity production.

In particular, the use of supercomputing over the cloud will make it easier for SMEs that lack the financial means for in-house investment and skills to develop and produce better products and services.

Benefits for science

Supercomputing is at the heart of the digital transformation of science. It enables deeper scientific understanding and breakthroughs in nearly every scientific field. 

The applications of supercomputing in science are countless: from fundamental physics (advancing the frontiers of knowledge of matter or exploring the universe) to material sciences (designing new critical components for the pharmaceutical or energy sectors) and earth science (modelling the atmospheric and oceanic phenomena at planetary level).

Many recent breakthroughs would not have been possible without access to the most advanced supercomputers. For example, the 2013 Nobel Prize in Chemistry rewarded the development of powerful computer programs to understand and predict complex chemical processes, while for the 2017 Nobel Prize in Physics, supercomputers performed the complex calculations that detected hitherto theoretical gravitational waves.