Ongoing projects

MareNostrum Experimental Exascale Platform (MEEP)

MEEP logo with mosaic bird

 

Project coordinator:  Barcelona Supercomputing Center (BSC)
Start date: 1 January 2020
Duration: 3 years
Total budget: € 10 million
Participating organisations: 
  1. University of Zagreb, Faculty of Electrical Engineering and Computing, CROATIA
  2. The Scientific and Technological Research Council of Turkey, TURKEY
Website:  https://meep-project.eu/ 

MEEP is an exploratory platform: a flexible, FPGA-based (field-programmable gate array) emulation environment for developing, integrating, testing and co-designing hardware and software for exascale supercomputers and other hardware targets, based on European-developed intellectual property (IP).

MEEP provides two important functions:

  • an evaluation platform for pre-silicon IP and ideas, at speed and scale;
  • a software development and experimentation platform to enable software readiness for new hardware.

Compared with the limitations of software simulation, MEEP accelerates software development and software maturity; IP can be tested and validated before moving to silicon, saving time and money.

The objectives of MEEP are to leverage and extend projects like the European Processor Initiative (EPI) and the Performance Optimisation and Productivity Centre of Excellence (POP CoE).

The ultimate goal of the project is to create an open full-stack ecosystem that can be used for academic purposes and integrated into a functional accelerator or cores for traditional and emerging HPC applications. 

EuroCC

EuroCC logo in white on a black background

 

Project coordinator:  High-Performance Computing Centre Stuttgart (HLRS)
Start date: 1 September 2020
Duration: 2 years
Total budget: € 57 million
Participating organisations: 
  1. Universität Stuttgart (USTUTT) GERMANY,
  2. Gauss Centre for Supercomputing (GCS) GERMANY,
  3. Institute of Information and Communication Technologies at Bulgarian Academy of Sciences (IICT-BAS) BULGARIA,
  4. Universität Wien (UNIVIE) AUSTRIA,
  5. University of Zagreb University Computing Centre (SRCE) CROATIA,
  6. Computation-based Science and Technology Research Center, The Cyprus Institute (CaSToRC-CyI) CYPRUS,
  7. IT4Innovations National Supercomputing Center, VSB – Technical University of Ostrava (IT4I) CZECH REPUBLIC,
  8. Technical University of Denmark (DTU) DENMARK,
  9. University of Tartu HPC Center (UTHPC) ESTONIA,
  10. CSC – IT Center for Science Ltd (CSC) FINLAND,
  11. National Infrastructures for Research and Technology S.A. (GRNET S.A.) GREECE,
  12. Kormányzati Informatikai Fejlesztési Ügynökség (KIFÜ) HUNGARY,
  13. National University of Ireland, Galway – Irish Centre for High-End Computing (ICHEC) IRELAND,
  14. CINECA – Consorzio Interuniversitario ITALY,
  15. Vilnius University (LitGrid-HPC) LITHUANIA,
  16. Riga Technical University (RTU) LATVIA,
  17. UNINETT Sigma2 AS (Sigma2) NORWAY,
  18. Norwegian Research Centre AS (NORCE) NORWAY,
  19. SINTEF AS NORWAY,
  20. Academic Computer Centre Cyfronet AGH (CYFRONET) POLAND,
  21. Fundação para a Ciência e a Tecnologia (FCT) PORTUGAL,
  22. National Institute for Research-Development in Informatics – ICI Bucharest (ICIB) ROMANIA,
  23. Academic and Research Network of Slovenia (ARNES) SLOVENIA,
  24. Barcelona Supercomputing Center – Centro Nacional de Supercomputación (BSC) SPAIN,
  25. Uppsala University (UU) SWEDEN,
  26. Eidgenössische Technische Hochschule Zürich (ETH Zurich) SWITZERLAND,
  27. The Scientific and Technological Research Council of Turkey (TUBITAK) TURKEY,
  28. The University of Edinburgh (EPCC) UNITED KINGDOM,
  29. TERATEC FRANCE,
  30. SURFSARA BV THE NETHERLANDS,
  31. Centre de recherche en aéronautique a.s.b.l. (Cenaero) BELGIUM,
  32. Luxinnovation GIE (LXI) LUXEMBOURG,
  33. Center of Operations of the Slovak Academy of Sciences (CC SAS) SLOVAK REPUBLIC,
  34. University of Ss. Cyril and Methodius, Faculty of computer science and engineering (UKIM) REPUBLIC OF NORTH MACEDONIA,
  35. Háskóli Íslands – University of Iceland (UICE)  ICELAND,
  36. University of Donja Gorica (UDG) MONTENEGRO

Website: https://www.eurocc-project.eu/

EuroCC aims to build a European network of 33 national HPC competence centres to bridge the existing HPC skills gaps while promoting cooperation across Europe.

To do so, each participating country is tasked with establishing a single National Competence Centre (NCC) for HPC. These NCCs will coordinate activities in all HPC-related fields at the national level and serve as a contact point for customers from industry, science, (future) HPC experts, and the general public alike. 

Each of the 33 national competence centres will act locally to map available HPC competencies and identify existing knowledge gaps. The centres will coordinate HPC expertise at the national level and ease access to European HPC opportunities for research and scientific users, public administration, and industry, delivering tailored solutions for a wide variety of users.

CASTIEL

CASTIEL Logo

 

Project coordinator: High-Performance Computing Centre Stuttgart (HLRS)
Start date: 1 September 2020
Duration: 2 years
Total budget:  € 2 million
Participating organisations: 
  1. Universität Stuttgart (USTUTT) GERMANY,
  2. Gauss Centre for Supercomputing e.V. (GCS) GERMANY,
  3. CINECA Consorzio Interuniversitario ITALY,
  4. TERATEC FRANCE,
  5. Barcelona Supercomputing Center – Centro Nacional De Supercomputación (BSC) SPAIN,
  6. Partnership for Advanced Computing in Europe AISBL (PRACE) BELGIUM
Website: https://www.castiel-project.eu/

The Coordination and Support Action (CSA) CASTIEL promotes interaction and exchange between National Competence Centres (NCCs) in HPC-related topics addressed through the EuroCC project.

CASTIEL emphasises training, industrial cooperation, business development, and awareness-raising around HPC-related technologies and expertise. As a hub for information exchange and training, CASTIEL promotes networking among NCCs and strengthens the exchange of ideas by developing best practices. The identification of synergies, challenges, and possible solutions is achieved through close cooperation of the NCCs at a European level.

 

FF4EuroHPC

 

FF4EuroHPC Logo

 

Project coordinator: University of Stuttgart 
Start date: 1 September 2020
Duration: 3 years
Total budget: € 9.9 million
Participating organisations: 
  1. Scapos AG, GERMANY
  2. Teratec, FRANCE
  3. CINECA, Consorzio Interuniversitario, ITALY
  4. CESGA, Centro de Supercomputación de Galicia, SPAIN 
  5. Arctur, SLOVENIA
Website: https://www.ff4eurohpc.eu/

FF4EuroHPC aims at boosting the innovation potential and competitiveness of SMEs by facilitating access to HPC-related technologies and expertise. 

Whether it is running high-resolution simulations, performing large-scale data analyses, or incorporating AI applications into SMEs' workflows, FF4EuroHPC connects businesses with cutting-edge technologies, helping them develop unique products, open up innovative business opportunities and become more competitive.

Two open calls will be offered through the project, targeting the highest-quality experiments involving innovative, agile SMEs. Proposals will address business challenges from European SMEs in varied application domains. Experiments successful in the open calls will be carried out on HPC systems, clustered in two tranches. An experiment is an end-user-relevant case study demonstrating the use of HPC and the benefits it brings to the value chain, from product design to the end user. Experiments must address SME business problems by using HPC and complementary technologies such as High-Performance Data Analytics (HPDA) and Artificial Intelligence (AI). 

LIGATE

LIGATE logo

Project coordinator:  Dompé farmaceutici S.p.A. (Dompé)
Start date: 1 January 2021
Duration: 3 years
Total budget: € 5.9 million
Participating organisations:
  1. Politecnico di Milano, ITALY
  2. CINECA Consorzio Interuniversitario, ITALY
  3. Kungliga Tekniska Högskolan, SWEDEN
  4. Università degli Studi di Salerno (UNISA), ITALY 
  5. Universität Innsbruck, AUSTRIA 
  6. E4 Computer Engineering SpA, ITALY 
  7. Chelonia SA, SWITZERLAND 
  8. tofmotion GmbH, AUSTRIA
  9. Vysoka Skola Banska - Technicka Univerzita Ostrava, CZECH REPUBLIC
  10. Universität Basel, SWITZERLAND 
Website: https://www.ligateproject.eu/

Today, the digital revolution is having a dramatic impact on the pharmaceutical industry and the entire healthcare system. The implementation of machine learning, extreme-scale computer simulations, and big data analytics in the drug design and development process offers an excellent opportunity to lower the risk of investment and reduce both time to patent and time to patient.

LIGATE aims to integrate and co-design best-in-class European open-source components together with European Intellectual Property (whose development has already been co-funded by previous Horizon 2020 projects). It will help Europe retain worldwide leadership in Computer-Aided Drug Design (CADD) solutions that exploit today’s high-end supercomputers and tomorrow’s exascale resources, while fostering European competitiveness in this field. The project will enhance the CADD technology of the drug discovery platform EXSCALATE.

The proposed LIGATE solution will deliver the results of a drug design campaign with both high speed and high accuracy. This predictability, together with the full automation of the solution and the availability of exascale systems, will allow a complete in silico drug discovery campaign to run in less than one day, enabling a prompt response to, for example, a worldwide pandemic crisis. The platform will also support European projects in repurposing drugs, natural products and nutraceuticals with therapeutic indications to answer high unmet medical needs such as rare, metabolic, neurological and cancer diseases, as well as emerging non-infective pandemics.  

Since the evolution of HPC architectures is heading toward specialization and extreme heterogeneity, including in future exascale architectures, the LIGATE solution also focuses on code portability, with the possibility of deploying the CADD platform on any available type of architecture in order to avoid hardware lock-in.

The project plans to make the platform available and open in order to support the discovery of novel treatments to fight virus infections and multidrug-resistant bacteria. The project will also make the outcome of a final simulation available to the research community. 

 

SCALABLE

Logo of SCALABLE
Project coordinator:  CS GROUP - FRANCE 
Start date: 1 January 2021
Duration: 3 years
Total budget: € 3 million
Participating organisations:
  1. Friedrich-Alexander-Universität Erlangen-Nürnberg, GERMANY
  2. Centre européen de recherche et de formation avancée en calcul scientifique, FRANCE
  3. Forschungszentrum Jülich, GERMANY
  4. Neovia Innovation, FRANCE 
  5. Vysoka Skola Banska - Technicka Univerzita Ostrava, CZECH REPUBLIC
  6. AIRBUS OPERATIONS GMBH, GERMANY
  7. RENAULT SAS FRANCE 
Website: http://scalable-hpc.eu/

In SCALABLE, eminent industrial and academic partners team up to improve the performance, scalability, and energy efficiency of an industrial LBM-based computational fluid dynamics (CFD) software package. The project will directly benefit European industry while contributing to fundamental research.

Lattice Boltzmann methods (LBM) have evolved into trustworthy alternatives to conventional CFD. In several engineering applications they have been shown to be roughly an order of magnitude faster than Navier-Stokes approaches in fair comparisons and comparable scenarios. In the context of EuroHPC, LBM is especially well suited to exploit advanced supercomputer architectures through vectorization, accelerators, and massive parallelization.
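
The vectorization-friendliness mentioned above comes from LBM's locality: each time step is a shift of populations along fixed lattice directions ("streaming") plus a purely local relaxation ("collision"), both plain data-parallel array operations. Below is a minimal, illustrative D2Q9 sketch on a small periodic grid; the grid size, relaxation time and initial state are invented for the example and have nothing to do with waLBerla or LaBS.

```python
# Illustrative-only sketch: one D2Q9 lattice Boltzmann (BGK) stream-and-collide
# step in NumPy. All parameters below are assumptions for demonstration.
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their standard weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.6                       # BGK relaxation time (assumed)
nx, ny = 32, 32                 # small periodic grid (assumed)

def equilibrium(rho, ux, uy):
    """Second-order expansion of the Maxwell-Boltzmann equilibrium."""
    feq = np.empty((9, nx, ny))
    usq = ux**2 + uy**2
    for i, (cx, cy) in enumerate(c):
        cu = cx*ux + cy*uy
        feq[i] = w[i] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    return feq

def lbm_step(f):
    """One stream-and-collide step on a fully periodic domain."""
    # streaming: shift each population along its lattice velocity
    for i, (cx, cy) in enumerate(c):
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    # macroscopic moments (density and velocity)
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision: relax toward local equilibrium
    f += (equilibrium(rho, ux, uy) - f) / tau
    return f, rho

# start from rest with a small density perturbation in the centre
rho0 = np.ones((nx, ny)); rho0[nx//2, ny//2] += 0.01
f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))
for _ in range(10):
    f, rho = lbm_step(f)
print(abs(rho.sum() - rho0.sum()) < 1e-8)  # mass is conserved
```

Every operation above is an element-wise or shift operation on whole arrays, which is exactly the pattern that vector units, GPUs and distributed-memory parallelism handle well.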

SCALABLE will make the most of two existing CFD tools (waLBerla and LaBS, the ProLB software), breaking the silos between the scientific computing world and the physical flow modelling world, to deliver improved efficiency and scalability on the upcoming European exascale systems. The public-domain research code waLBerla has demonstrated superb performance and outstanding scalability, reaching more than a trillion lattice cells already on petascale systems. waLBerla excels thanks to its unique, architecture-specific automatic generation of optimized compute kernels, together with carefully designed parallel data structures.

 

eFlows4HPC

eFlows4HPC logo
Project coordinator:  Barcelona Supercomputing Center
Start date: 1 January 2021
Duration: 3 years
Total budget: € 7.6 million
Participating organisations:
  1. CIMNE - Centre Internacional de Mètodes Numèrics a l'Enginyeria, SPAIN  
  2. Forschungszentrum Jülich, GERMANY 
  3. Universidad Politécnica de Valencia, SPAIN 
  4. Bull SAS, FRANCE 
  5. DtoK Lab S.r.l. ITALY 
  6. Fondazione Centro Euro-Mediterraneo sui Cambiamenti Climatici, ITALY 
  7. Institut national de recherche en informatique et en automatique (INRIA) FRANCE 
  8. Scuola Internazionale Superiore di Studi Avanzati di Trieste, ITALY 
  9. Instytut Chemii Bioorganicznej Polskiej Akademii Nauk, POLAND  
  10. Universidad de Málaga SPAIN 
  11. Istituto nazionale di geofisica e vulcanologia ITALY 
  12. Alfred-Wegener-Institut, Helmholtz-Zentrum für Polar- und Meeresforschung (AWI) GERMANY 
  13. Eidgenössische Technische Hochschule Zürich, SWITZERLAND 
  14. SIEMENS AKTIENGESELLSCHAFT GERMANY 
  15. NGI - Norges Geotekniske Institutt, NORWAY 
Website: https://eflows4hpc.eu

Today, developers lack tools that enable the development of complex workflows involving HPC simulation and modelling with data analytics (DA) and machine learning (ML). The eFlows4HPC project aims to deliver a workflow software stack and an additional set of services to enable the integration of HPC simulation and modelling with big data analytics and machine learning in scientific and industrial applications.   

The software stack will allow developers to build innovative adaptive workflows that use computing resources efficiently, together with innovative data management solutions. To attract first-time users of HPC systems, the project will provide HPC Workflows as a Service (HPCWaaS), an environment for sharing, reusing, deploying and executing existing workflows on HPC systems. The workflow technologies, used together with machine learning and big data libraries, leverage previous open-source European initiatives. Specific optimizations of applications for accelerators (FPGAs, GPUs) and for the processors developed by the European Processor Initiative (EPI) will be performed. To demonstrate the workflow software stack, use cases from three thematic pillars have been selected. 
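
As a purely illustrative sketch of the kind of adaptive simulation-plus-analytics workflow described above (generic Python, not the eFlows4HPC software stack or the HPCWaaS API; the toy "simulation", its dynamics and the convergence rule are all invented):

```python
# Hypothetical example of an adaptive HPC workflow: run a simulation stage,
# analyse the output, and rerun with more steps if not yet converged.
def simulate(params):
    """Stand-in for an HPC simulation step: produce a toy time series."""
    value, series = params["x0"], []
    for _ in range(params["steps"]):
        value = 0.5 * value + 1.0      # toy dynamics with fixed point 2.0
        series.append(value)
    return series

def analyse(series):
    """Stand-in for the data-analytics/ML step: summarize the trajectory."""
    return {"final": series[-1], "mean": sum(series) / len(series)}

def workflow(params):
    """Adaptive workflow: double the simulated steps if not converged."""
    summary = analyse(simulate(params))
    if abs(summary["final"] - 2.0) > 1e-3:          # convergence check
        params = {**params, "steps": params["steps"] * 2}
        summary = analyse(simulate(params))
    return summary

print(workflow({"x0": 0.0, "steps": 10}))
```

In a real workflow stack, the `simulate` and `analyse` stages would be scheduled as tasks on HPC resources rather than called as local functions; the point here is only the shape of an adaptive workflow.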

Pillar I focuses on the construction of Digital Twins for the prototyping of complex manufactured objects, integrating state-of-the-art adaptive solvers with machine learning and data mining, contributing to the Industry 4.0 vision. 

Pillar II develops innovative adaptive workflows for climate and the study of Tropical Cyclones (TC) in the context of the CMIP6 experiment, including in-situ analytics. 

Pillar III explores the modelling of natural catastrophes, in particular earthquakes and their associated tsunamis, shortly after such an event is recorded. Leveraging two existing workflows, Pillar III will work on integrating them with the eFlows4HPC software stack and on producing policies for urgent access to supercomputers. The results from the three pillars will be demonstrated to the target community CoEs to foster adoption and receive feedback.

ACROSS

 

ACROSS logo

 

Project coordinator:  Fondazione LINKS - Leading Innovation & Knowledge for Society
Start date: 1 March 2021
Duration: 3 years
Total budget: € 8.8 million
Participating organisations:
  1. Bull SAS, FRANCE
  2. Vysoka Skola Banska - Technicka Univerzita Ostrava, CZECH REPUBLIC
  3. CINECA, Consorzio Interuniversitario ITALY 
  4. GE Avio Aero, ITALY  
  5. European Centre for Medium-Range Weather Forecasts (ECMWF), UNITED KINGDOM 
  6. Consorzio Interuniversitario Nazionale per l’Informatica (CINI) ITALY 
  7. Institut National de Recherche en Informatique et en Automatique (INRIA), FRANCE 
  8. SINTEF AS, NORWAY 
  9. NEUROPUBLIC AE PLIROFORIKIS & EPIKOINONION, GREECE 
  10. Stichting DELTARES, NETHERLANDS 
  11. Max-Planck-Gesellschaft, GERMANY  
  12. MORFO DESIGN SRL, ITALY
Website: https://www.acrossproject.eu

Supercomputers have been used extensively to solve complex scientific and engineering problems, boosting the capability to design more efficient systems. The pace at which data are generated by scientific experiments and large simulations (e.g., multiphysics, climate, weather forecasting) poses new challenges for efficiently and effectively analysing massive data sets. Artificial Intelligence (AI), and more specifically Machine Learning (ML) and Deep Learning (DL), have recently gained momentum for boosting simulation speed. ML/DL techniques become part of the simulation process, used to detect patterns of interest early from less accurate simulation results.   

To address these challenges, the ACROSS project will co-design and develop an HPC, Big Data and AI convergent platform, supporting applications in the aeronautics, climate and weather, and energy domains. To this end, ACROSS will leverage the next generation of pre-exascale infrastructures, while remaining ready for exascale systems, together with effective mechanisms to easily describe and manage complex workflows in these three domains. Energy efficiency will be achieved through massive use of specialized hardware accelerators, monitoring of running systems, and smart job-scheduling mechanisms. 

ACROSS will combine traditional HPC techniques with AI (specifically ML/DL) and Big Data analytics to enhance the application test-case outcomes (e.g., improving the existing operational systems for global numerical weather prediction and climate simulations, developing an environment for user-defined in-situ data processing, improving and innovating the existing turbine aero-design system, and speeding up the design process). The performance of ML/DL will be accelerated by dedicated hardware devices. 

ACROSS will promote cooperation with other EU initiatives (e.g. EPI) and future EuroHPC projects to foster the adoption of exascale-level computing among test case domain stakeholders.

HEROES

 

 

HEROES logo

 

 

Project coordinator:  UCit
Start date: 1 March 2021
Duration: 2 years
Total budget: € 890 000
Participating organisations:
  1. Neovia Innovation, FRANCE  
  2. HPCNow! CONSULTING SL, SPAIN 
  3. Do IT Systems S.r.l., ITALY 
  4. Barcelona Supercomputing Centre (BSC), SPAIN  
Website: https://heroes-project.eu/

Bridging the gap between HPC & AI/ML user communities and HPC centres is key to unleashing Europe’s innovation potential. Much effort is being invested in building the European technologies able to deliver centralised, petascale/exascale HPC & ML. With access from the edge of the network increasing, it is equally important to make such resources easily and responsibly consumable. 

The HEROES project aims to develop an innovative software solution addressing both the industrial and scientific user communities. The platform will allow end users to submit their complex simulation and ML workflows to both HPC data centres and cloud infrastructures, and will give them the ability to choose the best option to achieve their goals on time, within budget and with the best energy efficiency. 

Although the project will in the future be able to create value across multiple industrial sectors, the initial focus of the demonstrator will be on workflows of strategic importance in renewable energy and in manufacturing applications where HPC is involved in the design of more energy-efficient products (for example, energy-efficient vehicles).

HEROES’ major innovations reside in its platform-selection decision module and its application of marketplace concepts to HPC. The consortium involves four European SMEs which bring HPC to their clients and face this market demand every day. A major supercomputing research centre complements the project with specific expertise in energy management and resource optimisation. HEROES is supported by Meteosim, UL Renewables, EDF and Dallara, which will advise on workflows relevant to such use cases. At the end of the project, its outcomes will be commercialised and will further reinforce Europe’s capacity to lead and nurture innovation in HPC. 
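
The platform-selection decision module described above can be pictured as a constrained scoring problem: filter out candidate platforms that violate the user's time and budget limits, then rank the rest on cost and energy. A hypothetical sketch (the platform names, numbers and weighting scheme are invented for illustration and are not the HEROES design):

```python
# Illustrative-only sketch of choosing between HPC centres and cloud options
# under time/budget constraints, with an energy-aware score. All data invented.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    hours: float      # estimated wall-clock time
    cost: float       # estimated price in EUR
    energy: float     # estimated energy in kWh

def pick(options, max_hours, max_cost, weight_cost=1.0, weight_energy=1.0):
    """Return the feasible option with the lowest weighted cost+energy score."""
    feasible = [o for o in options if o.hours <= max_hours and o.cost <= max_cost]
    if not feasible:
        return None
    return min(feasible, key=lambda o: weight_cost * o.cost + weight_energy * o.energy)

options = [
    Option("hpc-centre-a", hours=4.0, cost=120.0, energy=30.0),
    Option("cloud-b",      hours=2.0, cost=300.0, energy=55.0),
    Option("cloud-c",      hours=9.0, cost=80.0,  energy=25.0),
]
best = pick(options, max_hours=6.0, max_cost=250.0)
print(best.name)  # "hpc-centre-a": cloud-c is too slow, cloud-b too expensive
```

A production decision module would of course estimate runtime, price and energy per workflow and per platform; the sketch only shows the filter-then-rank structure.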

OPTIMA

 

OPTIMA logo

 

Project coordinator:  Telecommunication Systems Research Institute 
Start date: 1 March 2021
Duration: 33 months
Total budget: € 4.1 million
Participating organisations:
  1. Cyberbotics SARL, SWITZERLAND  
  2. Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V., GERMANY 
  3. Exapsys | Exascale Performance Systems, GREECE 
  4. Institute of Communications and Computer Systems, GREECE 
  5. M3E S.r.l. ITALY 
  6. Maxeler IoT-Labs B.V., NETHERLANDS 
  7. Forschungszentrum Jülich, GERMANY 
  8. EnginSoft SpA, ITALY 
  9. Appentra Solutions SL, SPAIN 
Website: https://optima-hpc.eu/

To support the expanding processing-power demands of emerging HPC applications within a pragmatic energy envelope, future HPC systems will incorporate accelerators. One promising approach is the utilization of FPGAs: since these devices can be reconfigured at any time to implement tailor-made application accelerators, their energy efficiency and/or performance is in most cases much higher than that of CPUs and GPUs. 

OPTIMA is an SME-driven project aiming to port and optimize a number of industrial applications, as well as a set of open-source libraries used in at least 3 different application domains, to two novel FPGA-populated HPC systems, utilizing several innovative programming environments. The applications and libraries are expected to execute on those heterogeneous HPC systems with significantly higher energy efficiency, as measured by the Energy Delay Product (EDP) metric; in particular, the energy efficiency of the OPTIMA applications and libraries on the targeted FPGA-based HPC systems is expected to be more than 10x better than on CPU-based systems and more than 3x better than on GPU-based ones. 
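
For readers unfamiliar with the metric: EDP is simply energy multiplied by runtime, so it penalizes both slow and power-hungry executions, and a lower value is better. A small illustration with invented numbers (not OPTIMA measurements):

```python
# Hypothetical illustration of the Energy Delay Product (EDP = energy * runtime).
# All figures below are invented for the example.
def edp(energy_joules, runtime_seconds):
    """Energy Delay Product: penalizes both high energy use and long runtimes."""
    return energy_joules * runtime_seconds

# invented figures for one workload on three platforms
cpu  = edp(energy_joules=3000.0, runtime_seconds=100.0)   # 300000 J*s
gpu  = edp(energy_joules=1200.0, runtime_seconds=50.0)    #  60000 J*s
fpga = edp(energy_joules=400.0,  runtime_seconds=60.0)    #  24000 J*s

# "10x better than CPU" means the FPGA's EDP is at most one tenth of the CPU's
print(cpu / fpga)   # 12.5 -> 12.5x better than the CPU baseline
print(gpu / fpga)   # 2.5  -> 2.5x better than the GPU baseline
```

Note that the FPGA in this made-up example wins on EDP despite being slower than the GPU, because its energy use is so much lower; that trade-off is exactly what EDP captures.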

The main outcomes of OPTIMA will be that: 

  • the participating SMEs will gain a significant advantage since they will be able to execute their applications much more efficiently than the competition, 
  • it will further demonstrate that Europe is at the forefront of developing efficient FPGA-populated HPC systems and applications/libraries that take advantage of them, 
  • the open-source libraries as well as the open-source applications developed within OPTIMA will allow third parties to easily target FPGA-based HPC systems for their application developments, 
  • there will be an open-to-use HPC infrastructure supported by a specially formed sustainability body.   

NextSim

NextSim logo
Project coordinator:  Barcelona Supercomputing Center (BSC) 
Start date: 1 March 2021
Duration: 3 years
Total budget: € 3.9 million
Participating organisations:
  1. Universidad Politécnica de Madrid, SPAIN 
  2. CIMNE - Centre Internacional de Mètodes Numèrics a l'Enginyeria, SPAIN 
  3. Office National d'Etudes et de Recherches Aérospatiales (ONERA), FRANCE 
  4. Deutsches Zentrum für Luft- und Raumfahrt, GERMANY 
  5. Centre européen de recherche et de formation avancée en calcul scientifique, FRANCE 
  6. Airbus Operations SAS, FRANCE  
Website: Under construction

NextSim partners, as fundamental European players in Aeronautics and Simulation, recognise that there is a need to increase the capabilities of current Computational Fluid Dynamics tools for aeronautical design by re-engineering them for extreme-scale parallel computing platforms. 

The backbone of NextSim is the recognition that, today, the capabilities of leading-edge emerging HPC architectures are not fully exploited by industrial simulation tools. Current state-of-the-art industrial solvers do not take sufficient advantage of the immense capabilities of new hardware architectures, such as streaming processors or many-core platforms. A combined research effort focusing on algorithms and HPC is the only way to make it possible to develop and advance simulation tools that meet the needs of the European aeronautical industry. 

NextSim will focus on the development of the numerical flow solver CODA (featuring Finite Volume and high-order discontinuous Galerkin schemes), which will be the new reference solver for aerodynamic applications within the AIRBUS group and will have a significant impact on the aeronautical market. To demonstrate this market impact, AIRBUS has defined a series of market-relevant problems. The numerical simulation of these problems is still a challenge for the aeronautical industry, and solving them at the required accuracy and an affordable computational cost is not yet possible with current industrial solvers. 

Following this idea, three additional working areas are proposed in NextSim: algorithms for numerical efficiency, algorithms for data management, and the efficient implementation of those algorithms on the most advanced HPC platforms. 

Finally, NextSim will provide access to project results through the “mini-apps” concept: small pieces of software, seeking synergies with open-source components, that demonstrate the use of the novel mathematical methods and algorithms developed in CODA and will be freely distributed to the scientific community.  

DCoMEX

DCoMEX logo

 

Project coordinator:  National Technical University of Athens – NTUA 
Start date: 1 April 2021
Duration: 3 years
Total budget: € 2.9 million
Participating organisations:
  1. Eidgenössische Technische Hochschule Zürich, SWITZERLAND
  2. University of Cyprus, CYPRUS
  3. Technische Universität München, GERMANY
  4. National Infrastructures for Research and Technology, GREECE 
Website: http://mgroup.ntua.gr/dcomex

DCoMEX aims to provide unprecedented advances in the field of Computational Mechanics by developing novel numerical methods enhanced by Artificial Intelligence, along with a scalable software framework that enables exascale computing. A key innovation of the project is the development of AI-Solve, a novel scalable library of AI-enhanced algorithms for the solution of the large-scale sparse linear systems at the core of computational mechanics. The methods fuse physics-constrained machine learning with efficient block-iterative methods and incorporate experimental data at multiple levels of fidelity to quantify model uncertainties.

Efficient deployment of these methods on exascale supercomputers will provide scientists and engineers with unprecedented capabilities for predictive simulations of mechanical systems in applications ranging from bioengineering to manufacturing. DCoMEX exploits the computational power of modern exascale architectures to provide a robust and user-friendly framework that can be adopted in many applications. This framework comprises the AI-Solve library integrated into two complementary computational mechanics HPC libraries: the first a general-purpose multiphysics engine, the second a Bayesian uncertainty quantification and optimisation platform.

DCoMEX’s potential will be demonstrated through detailed simulations in two case studies: (i) patient-specific optimization of cancer immunotherapy treatment, and (ii) design of advanced composite materials and structures at multiple scales. The software and methods developed in the project can be further customized and can facilitate developments in critical European industrial sectors such as medicine, infrastructure, materials, automotive and aeronautics design.
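
The AI-enhanced algorithms described above target preconditioned iterative solvers for sparse systems. As a sketch of that solver family, the example below implements a standard preconditioned conjugate gradient in which a plain Jacobi (diagonal) preconditioner stands in for the learned, physics-informed preconditioners the project describes; the toy 1D Poisson matrix is an assumption for illustration, not a DCoMEX problem:

```python
# Illustrative preconditioned conjugate gradient (PCG) for a sparse SPD system.
# The Jacobi preconditioner is a placeholder where an ML-derived operator
# could be plugged in; matrix and sizes are invented for the example.
import numpy as np

def pcg(A, b, precond, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient; `precond(r)` approximates A^{-1} r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it + 1
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# 1D Poisson matrix (tridiagonal, SPD) as a stand-in sparse system
n = 100
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
jacobi = lambda r: r / np.diag(A)   # a learned operator could replace this map
x, iters = pcg(A, b, jacobi)
print(np.linalg.norm(A @ x - b) < 1e-8)  # True: solved to tolerance
```

The appeal of learning the preconditioner is that only the `precond` callable changes; the iteration itself, and hence its parallel structure, stays the same.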

 

MICROCARD

 

MICROCARD logo

 

 

Project coordinator:  Université de Bordeaux
Start date: 1 April 2021
Duration: 42 months
Total budget: € 5.8 million
Participating organisations:
  1. Université de Strasbourg, FRANCE 
  2. Simula Research Laboratory AS, NORWAY 
  3. Università degli Studi di Pavia, ITALY 
  4. USI: Università della Svizzera italiana, SWITZERLAND 
  5. KIT - Karlsruher Institut für Technologie, GERMANY 
  6. Konrad-Zuse-Zentrum für Informationstechnik Berlin, GERMANY
  7. MEGWARE Computer Vertrieb und Service GmbH, GERMANY
  8. NumeriCor GmbH, AUSTRIA 
  9. Orobix srl, ITALY 
Website: https://microcard.eu

Cardiovascular diseases are the most frequent cause of death worldwide and half of these deaths are due to cardiac arrhythmia, a disorder of the heart's electrical synchronization system. Numerical models of this complex system are highly sophisticated and widely used, but to match observations in aging and diseased hearts they need to move from a continuum approach to a representation of individual cells and their interconnections. This implies a different, harder numerical problem and a 10,000-fold increase in problem size. Exascale computers will be needed to run such models. 

The MICROCARD project will develop an exascale application platform for cardiac electrophysiology simulations that is usable for cell-by-cell simulations. The platform will be co-designed by HPC experts, numerical scientists, biomedical engineers, and biomedical scientists, from academia and industry. They will develop numerical schemes suitable for exascale parallelism, problem-tailored linear-system solvers and preconditioners, and a compiler to translate high-level model descriptions into optimized, energy-efficient system code for heterogeneous computing systems. The code will be resilient to hardware failures and will use an energy-aware task placement strategy. 

The platform will be applied to real-life use cases in the biomedical domain and will be made accessible to a wide range of users, both as code and through a web interface. In addition, the project will accelerate the development of the parallel segmentation and (re)meshing software necessary to create the extremely large and complex meshes needed from the available large volumes of microscopy data. The platform will be adaptable to similar biological systems such as nerves, and components of the platform will be reusable in a wide range of applications. 

DEEP-SEA

 

 

DEEP-SEA logo

 

 

Project coordinator:  Forschungszentrum Jülich
Start date: 1 April 2021
Duration: 3 years
Total budget: € 15 million
Participating organisations:
  1. Bull SAS, FRANCE
  2. Commissariat à l'énergie atomique et aux énergies alternatives (CEA), FRANCE
  3. Barcelona Supercomputing Centre (BSC), SPAIN 
  4. Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V, GERMANY
  5. Idryma Technologias Kai Erevnas, GREECE
  6. Bayerische Akademie der Wissenschaften, GERMANY
  7. Technische Universität München, GERMANY
  8. Katholieke Universiteit Leuven, BELGIUM
  9. Eidgenössische Technische Hochschule Zürich, SWITZERLAND
  10. Technische Universität Darmstadt, GERMANY
  11. Kungliga Tekniska Högskolan (KTH), SWEDEN
  12. European Centre for Medium-Range Weather Forecasts, UNITED KINGDOM
Website: https://www.deep-projects.eu/

DEEP-SEA (“DEEP – Software for Exascale Architectures”) will deliver the programming environment for future European exascale systems, adapting all levels of the software (SW) stack – including low-level drivers, computation and communication libraries, resource management, and programming abstractions with associated runtime systems and tools – to support highly heterogeneous compute and memory configurations and to allow code optimisation across existing and future architectures and systems. At node level, the European Processor Initiative (EPI) will integrate general-purpose CPUs and accelerators within the package and combine DDR and HBM memories. Consequently, DEEP-SEA will implement data placement policies for deep memory hierarchies, improving application performance on future EPI-based platforms. At system level, CPUs and accelerators (e.g., various EPI chip configurations, or GPUs) are efficiently integrated following the Modular Supercomputer Architecture (MSA). The DEEP-SEA SW stack will enable dynamic resource allocation, application malleability and programming composability, and will include tools to map applications to the MSA. The result is a SW environment that enables applications to run on the best-suited hardware in a scalable and energy-efficient manner.

Targeting a high Technology Readiness Level (TRL), the project builds upon SW developments from previous EU projects and international open-source packages widely used in the HPC community, extending them with a focus on compute and memory heterogeneity. This enables close collaborations within the HPC community and with the Centres of Excellence (CoEs). The DEEP-SEA SW elements will be extended in a collaborative co-design approach with EU applications, considering the relations and dependencies between the various levels of the stack. Ambitious and highly relevant EU applications will therefore drive the co-design, evaluate the DEEP-SEA software stack, and demonstrate its benefits for users of European compute centres.

RED-SEA

Project coordinator:  Bull SAS
Start date: 1 April 2021
Duration: 3 years
Total budget: € 8 million
Participating organisations:
  1. Idryma Technologias Kai Erevnas, GREECE
  2. Commissariat à l'énergie atomique et aux énergies alternatives (CEA), FRANCE
  3. Forschungszentrum Jülich, GERMANY
  4. EXTOLL GmbH, GERMANY
  5. Eidgenössische Technische Hochschule Zürich, SWITZERLAND
  6. Universitat Politècnica de València, SPAIN
  7. Universidad de Castilla-La Mancha, SPAIN
  8. Istituto Nazionale di Fisica Nucleare – INFN, ITALY
  9. Exapsys | Exascale Performance Systems, GREECE
  10. eXact lab SRL, ITALY
Website: https://redsea-project.eu/

To enable Exascale computing, next-generation interconnection networks must scale to hundreds of thousands of nodes and provide the features that HPC, HPDA, and AI applications need to reach Exascale, while benefiting from new hardware and software trends.

To achieve this goal, RED-SEA gathers Europe's key R&D competences, bringing together top academic centres and the leading European industrial forces in this domain. The project will build on BXI, the European interconnect already in production and featured in Top500 systems, with the aim of adapting it to the challenges of the coming years.

RED-SEA will pave the way to the next generation of European Exascale interconnects, including the next generation of BXI, as follows:

i. specify the new architecture using hardware-software co-design and a set of applications representative of the new terrain of converging HPC, HPDA, and AI;

ii. test, evaluate, and/or implement the new architectural features at multiple levels, according to the nature of each of them, ranging from mathematical analysis and modelling, to simulation, or to emulation or implementation on FPGA testbeds;

iii. enable seamless communication within and between resource clusters, through the development of a high-performance, low-latency gateway bridging seamlessly with Ethernet;

iv. add efficient network resource management, improving congestion resiliency, virtualisation, adaptive routing, and collective operations;

v. open the interconnect to new kinds of applications and hardware, with enhancements for end-to-end network services – from programming models to reliability, security, low latency, and new processors;

vi. leverage open standards and compatible APIs to develop innovative reusable libraries and fabric management solutions.
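The adaptive routing mentioned above can be illustrated with a minimal congestion-aware port selector. This is a sketch of the general idea only, not the BXI routing algorithm:

```python
# Minimal sketch of congestion-aware adaptive routing: among the output
# ports that make forward progress toward the destination, forward the
# packet through the least-occupied queue. Illustrative only; not the
# actual BXI algorithm.

def select_port(candidate_ports, queue_depth):
    """candidate_ports: ports on a minimal path; queue_depth: port -> occupancy."""
    return min(candidate_ports, key=lambda p: queue_depth[p])

queue_depth = {0: 12, 1: 3, 2: 7}           # packets waiting at each port
print(select_port([0, 1, 2], queue_depth))  # → 1, the least-congested port
```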

RED-SEA will work together with the other projects resulting from the EuroHPC-01-2019 call, especially IO-SEA (topic b) and DEEP-SEA (topic d).

IO-SEA

Project coordinator:  Commissariat à l'énergie atomique et aux énergies alternatives (CEA)
Start date: 1 April 2021
Duration: 3 years
Total budget: € 7,9 million
Participating organisations:
  1. Bull SAS, FRANCE
  2. Forschungszentrum Jülich, GERMANY
  3. European Centre for Medium-Range Weather Forecasts, UNITED KINGDOM
  4. Seagate Systems (UK) Limited, UNITED KINGDOM
  5. National University of Ireland Galway, IRELAND
  6. Vysoka Skola Banska - Technicka Univerzita Ostrava, CZECH REPUBLIC
  7. Kungliga Tekniska Högskolan (KTH), SWEDEN
  8. Masaryk University, CZECH REPUBLIC
  9. Johannes Gutenberg-Universitat Mainz, GERMANY
Website: https://iosea-project.eu/

IO-SEA aims to provide a novel data management and storage platform for exascale computing, based on hierarchical storage management (HSM) and on-demand provisioning of storage services. The platform will make efficient use of storage tiers spanning NVMe and NVRAM at the top all the way down to tape-based technologies. System requirements are driven by data-intensive use cases in a very strict co-design approach. The project introduces the concepts of ephemeral data nodes and data accessors, which allow users to operate the system flexibly through various well-known data access paradigms, such as POSIX namespaces, S3/Swift interfaces, MPI-IO and other data formats and protocols. These ephemeral resources eliminate the problem of treating storage resources as static and unchanging system components – an untenable proposition for data-intensive exascale environments. The methods and techniques are applicable to exascale-class data-intensive applications and workflows that need to be deployed in highly heterogeneous computing environments.
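The HSM idea can be sketched as a simple tiering rule: datasets migrate between tiers according to how recently they were accessed. The tier names and thresholds below are illustrative assumptions, not IO-SEA's actual policy:

```python
# Hedged sketch of hierarchical storage management: pick a storage tier
# from the age of a dataset's last access. Tiers and thresholds are
# illustrative assumptions, not IO-SEA's real policy.

import time

TIERS = [                    # (tier, max age of last access in seconds)
    ("nvme", 3600),          # hot: accessed within the last hour
    ("disk", 30 * 86400),    # warm: accessed within the last month
    ("tape", float("inf")),  # cold: everything else
]

def target_tier(last_access, now=None):
    age = (now or time.time()) - last_access
    for tier, max_age in TIERS:
        if age <= max_age:
            return tier

now = time.time()
print(target_tier(now - 60, now))            # → nvme
print(target_tier(now - 7 * 86400, now))     # → disk
print(target_tier(now - 365 * 86400, now))   # → tape
```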

Critical aspects of intelligent data placement are considered for extreme volumes of data. This ensures that the right resources among the storage tiers are used, and that they are accessed by data nodes as close as possible to the compute nodes – optimising performance, cost and energy at extreme scale. Advanced I/O instrumentation and monitoring features will be developed to that effect, leveraging the latest advancements in AI and machine learning to systematically analyse telemetry records and make smart decisions on data placement. These ideas, coupled with in-storage computation, remove unnecessary data movements within the system.

The IO-SEA project (EuroHPC-2019-1 topic b) has connections to the DEEP-SEA (topic d) and RED-SEA (topic c) projects. It leverages technologies developed by the SAGE, SAGE2 and NEXTGenIO projects, and strengthens the TRL of the developed products and technologies.

exaFOAM

Project coordinator:  ESI GROUP
Start date: 1 April 2021
Duration: 3 years
Total budget: € 5,4 million
Participating organisations:
  1. CINECA, Consorzio Interuniversitario, ITALY
  2. E4 Computer Engineering SpA, ITALY 
  3. Politecnico di Milano, ITALY
  4. Fakultet strojarstva i brodogradnje, Sveučilište u Zagrebu, CROATIA
  5. Upstream CFD GmbH, GERMANY
  6. Technische Universität Darmstadt, GERMANY
  7. Universität Stuttgart, GERMANY
  8. Barcelona Supercomputing Centre (BSC), SPAIN
  9. Wikki Gesellschaft für numerische Kontinuumsmechanik mbH, GERMANY
  10. National Technical University of Athens - NTUA, GREECE
  11. Universidade do Minho, PORTUGAL
Website: https://exafoam.eu/

Computational Fluid Dynamics (CFD) has become a mature technology in engineering design, contributing strongly to industrial competitiveness and sustainability across a wide range of sectors (e.g. transportation, power generation, disaster prevention). Future growth depends upon the exploitation of massively parallel HPC architectures; however, this is currently hampered by performance scaling bottlenecks.

The ambitious exaFOAM project aims to overcome these limitations through the development and validation of a range of algorithmic improvements across the entire CFD process chain (preprocessing, simulation, I/O, post-processing). Their effectiveness will be demonstrated via a suite of HPC Grand Challenge and Industrial Application Challenge cases. All developments will be implemented in the open-source CFD software OpenFOAM, one of the most successful open-source projects in the area of computational modelling, with a large industrial and academic user base.

To ensure success, the project mobilises a highly capable consortium of 12 beneficiaries consisting of experts in HPC CFD algorithms and industrial applications, including universities, HPC centres, SMEs and the code release authority OpenCFD Ltd (openfoam.com) as a linked third party to the PI. Project management will be facilitated by a clear project structure, and quantified objectives enable tracking of project progress.

Special emphasis will be placed on ensuring a strong impact of the exaFOAM project. The project has been conceived to address all expected impacts set out in the Work Programme. All developed code and validation cases will be released as open source to the community, in coordination with the OpenFOAM Governance structure. The involvement of 17 industrial supporters and stakeholders from outside the consortium underscores the industrial relevance of the project outcomes. A well-structured, multi-channel plan for dissemination and exploitation of the project outcomes further reinforces the expected impact.

SparCity

SPARCITY

 

Project coordinator:  Koç Üniversitesi
Start date: 1 April 2021
Duration: 3 years
Total budget: € 2,6 million
Participating organisations:
  1. Sabancı University, TURKEY
  2. Simula Research Laboratory AS, NORWAY 
  3. INESC ID - Instituto de Engenharia de Sistemas e Computadores, Investigação e Desenvolvimento em Lisboa, PORTUGAL
  4. Ludwig-Maximilians-Universität München, GERMANY
  5. Graphcore AS, NORWAY
Website: http://sparcity.eu/

Perfectly aligned with the vision of the EuroHPC Joint Undertaking, the SparCity project aims at creating a supercomputing framework that will provide efficient algorithms and coherent tools specifically designed for maximising the performance and energy efficiency of sparse computations on emerging HPC systems, while also opening up new usage areas for sparse computations in data analytics and deep learning. The framework enables comprehensive application characterization and modeling, performing synergistic node-level and system-level software optimizations. By creating a digital SuperTwin, the framework is also capable of evaluating existing hardware components and addressing what-if scenarios on emerging architectures and systems in a co-design perspective. To demonstrate the effectiveness, societal impact, and usability of the framework, the SparCity project will enhance the computing scale and energy efficiency of four challenging real-life applications that come from drastically different domains, namely, computational cardiology, social networks, bioinformatics and autonomous driving. By targeting this collection of challenging applications, SparCity will develop world-class, extreme scale and energy-efficient HPC technologies, and contribute to building a sustainable exascale ecosystem and increasing Europe’s competitiveness.
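A concrete example of the sparse computations SparCity targets is sparse matrix-vector multiplication (SpMV) over the compressed sparse row (CSR) format, whose irregular memory accesses make it a classic optimisation target:

```python
# Sparse matrix-vector multiply (SpMV) in CSR format -- the kind of sparse
# kernel whose performance and energy efficiency SparCity aims to improve.

def spmv_csr(row_ptr, col_idx, values, x):
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):                      # one output entry per row
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]    # irregular gather from x
    return y

# 3x3 matrix [[2, 0, 1], [0, 3, 0], [4, 0, 5]] stored in CSR:
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
values  = [2.0, 1.0, 3.0, 4.0, 5.0]
print(spmv_csr(row_ptr, col_idx, values, [1.0, 1.0, 1.0]))  # → [3.0, 3.0, 9.0]
```

The indirect access `x[col_idx[k]]` is exactly what defeats caches and prefetchers on conventional hardware, which is why node- and system-level co-design matters for such kernels.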

ADMIRE

Project coordinator:  Universidad Carlos III De Madrid
Start date: 1 April 2021
Duration: 3 years
Total budget: € 7,9 million
Participating organisations:
  1. Johannes Gutenberg-Universitat Mainz, GERMANY
  2. Barcelona Supercomputing Centre (BSC), SPAIN 
  3. Technische Universität Darmstadt, GERMANY
  4. DataDirect Networks (DDN), FRANCE
  5. Institut National de Recherche en informatique et automatique, FRANCE
  6. ParaTools, SAS, FRANCE
  7. Forschungszentrum Jülich, GERMANY
  8. Consorzio Interuniversitario Nazionale per l’Informatica (CINI), ITALY
  9. CINECA, Consorzio Interuniversitario, ITALY
  10. E4 Computer Engineering SpA, ITALY 
  11. Instytut Chemii Bioorganicznej Polskiej Akademii Nauk, POLAND 
  12. Kungliga Tekniska Högskolan (KTH), SWEDEN
Website: https://www.admire-eurohpc.eu/

The growing need to process extremely large data sets is one of the main drivers for building exascale HPC systems today. However, the flat storage hierarchies found in classic HPC architectures no longer satisfy the performance requirements of data-processing applications. Uncoordinated file access in combination with limited bandwidth make the centralised back-end parallel file system a serious bottleneck. At the same time, emerging multi-tier storage hierarchies come with the potential to remove this barrier. But maximising performance still requires careful control to avoid congestion and balance computational with storage performance. Unfortunately, appropriate interfaces and policies for managing such an enhanced I/O stack are still lacking.

The main objective of the ADMIRE project is to establish this control by creating an active I/O stack that dynamically adjusts computation and storage requirements through intelligent global coordination, malleability of computation and I/O, and the scheduling of storage resources along all levels of the storage hierarchy. To achieve this, we will develop a software-defined framework based on the principles of scalable monitoring and control, separated control and data paths, and the orchestration of key system components and applications through embedded control points.
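The global coordination idea can be sketched as a tiny feedback rule: a coordinator grants each application an I/O bandwidth share, throttling demands proportionally whenever the shared back end would be oversubscribed. Names and numbers are illustrative assumptions, not ADMIRE's actual interfaces:

```python
# Hypothetical control-point logic: scale per-application I/O demands so
# the shared back-end parallel file system is never oversubscribed.
# Entirely illustrative; not ADMIRE's real API.

def rebalance(demand, capacity):
    """demand: app -> requested MB/s; returns granted MB/s per app."""
    total = sum(demand.values())
    if total <= capacity:
        return dict(demand)              # no congestion: grant full demand
    scale = capacity / total             # congestion: proportional throttling
    return {app: d * scale for app, d in demand.items()}

granted = rebalance({"climate": 600, "genomics": 200}, capacity=400)
print(granted)  # → {'climate': 300.0, 'genomics': 100.0}
```

A real coordinator would also exploit node-local tiers and application malleability rather than simply throttling, but the feedback structure is the same.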

Our software-only solution will allow the throughput of HPC systems and the performance of individual applications to be substantially increased – and consequently energy consumption to be decreased – by taking advantage of fast and power-efficient node-local storage tiers using novel, European ad-hoc storage systems and in-transit/in-situ processing facilities. Furthermore, our enhanced I/O stack will offer quality-of-service (QoS) and resilience. An integrated and operational prototype will be validated with several use cases from various domains, including climate/weather, life sciences, physics, remote sensing, and deep learning.

TEXTAROSSA

TEXTAROSSA Logo

 

Project coordinator:  Agenzia nazionale per le nuove tecnologie, l'energia e lo sviluppo economico sostenibile (ENEA)
Start date: 1 April 2021
Duration: 3 years
Total budget: € 6 million
Participating organisations:
  1. Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V., GERMANY
  2. Consorzio Interuniversitario Nazionale per l’Informatica (CINI), ITALY
  3. Institut National de Recherche en Informatique et en Automatique (INRIA), FRANCE
  4. Bull SAS, FRANCE
  5. E4 Computer Engineering SpA, ITALY 
  6. Barcelona Supercomputing Centre (BSC), SPAIN 
  7. Instytut Chemii Bioorganicznej Polskiej Akademii Nauk, POLAND 
  8. Istituto Nazionale di Fisica Nucleare – INFN, ITALY
  9. Consiglio Nazionale delle Ricerche (CNR), ITALY
  10. In Quattro Srl, ITALY
Website: https://textarossa.eu/

To achieve high performance and high energy efficiency on near-future exascale computing systems, a technology gap needs to be bridged: increasing the efficiency of computation through extremely efficient hardware and new arithmetics, and providing methods and tools for the seamless integration of reconfigurable accelerators in heterogeneous HPC multi-node platforms. TEXTAROSSA aims to tackle this gap by applying a co-design approach to heterogeneous HPC solutions, supported by the integration and extension of IPs, programming models and tools derived from European research projects led by TEXTAROSSA partners. The main directions for innovation are: i) enabling mixed-precision computing through the definition of IPs, libraries and compilers supporting novel data types (including Posits), used also to boost the performance of AI accelerators; ii) implementing new multilevel thermal management and two-phase liquid cooling; iii) developing improved data movement and storage tools through compression; iv) ensuring secure HPC operation through HW-accelerated cryptography; v) providing RISC-V-based IP for fast task scheduling and IPs for low-latency intra-/inter-node communication. These technologies will be tested on Integrated Development Vehicles mirroring and extending the European Processor Initiative ARM64-based architecture, and on an OpenSequana testbed. To drive the technology development and assess the impact of the proposed innovations, TEXTAROSSA will use a selected but representative set of HPC, HPDA and AI demonstrators covering challenging domains such as general-purpose numerical kernels, High Energy Physics (HEP), Oil & Gas and climate modelling, as well as emerging domains such as High Performance Data Analytics (HPDA) and High Performance Artificial Intelligence (HPC-AI).
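The mixed-precision pattern behind direction i) can be sketched in a few lines: store data in 16-bit floats but accumulate in 64-bit. TEXTAROSSA also explores Posit arithmetic, for which Python has no standard support, so IEEE half precision stands in here:

```python
# Mixed-precision sketch: 16-bit storage, 64-bit accumulation. IEEE half
# precision (struct format 'e') stands in for the novel data types
# (e.g. Posits) that TEXTAROSSA actually investigates.

import struct

def to_half(x):
    """Round a float to IEEE 754 half precision and back."""
    return struct.unpack("e", struct.pack("e", x))[0]

data = [0.1] * 1000
stored = [to_half(v) for v in data]   # 16-bit storage (lossy: 0.1 -> ~0.09998)
acc = 0.0
for v in stored:                      # 64-bit accumulation avoids further loss
    acc += v
print(abs(acc - 100.0))               # → ~0.0244, error from fp16 storage only
```

Halving the storage width halves memory traffic, which is where most of the energy goes in bandwidth-bound kernels; the wide accumulator keeps the rounding error from compounding.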

The TEXTAROSSA consortium also includes three leading Italian universities (Politecnico di Milano, Università degli studi di Torino and Università di Pisa, linked third parties of CINI), CINECA (ITALY, in-kind third party of ENEA) and Universitat Politècnica de Catalunya (UPC, SPAIN, in-kind third party of INRIA).

MAELSTROM

Maelstrom

 

Project coordinator:  European Centre for Medium-Range Weather Forecasts
Start date: 1 April 2021
Duration: 3 years
Total budget: € 4,3 million
Participating organisations:
  1. 4Cast GmbH & Co. KG, GERMANY
  2. E4 Computer Engineering SpA, ITALY 
  3. Eidgenössische Technische Hochschule Zürich, SWITZERLAND
  4. Forschungszentrum Jülich, GERMANY
  5. Meteorologisk institutt, NORWAY
  6. Université du Luxembourg, LUXEMBOURG
Website: Under construction

To develop Europe’s computer architecture of the future, MAELSTROM will co-design bespoke compute system designs for optimal application performance and energy efficiency, a software framework to optimise usability and training efficiency for machine learning at scale, and large-scale machine learning applications for the domain of weather and climate science.

For the MAELSTROM compute system designs, the applications will be benchmarked across a range of computing systems with regard to energy consumption, time-to-solution, numerical precision and solution accuracy.
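Such a benchmarking workflow can be sketched as a toy harness that runs every workload under every configuration and records time-to-solution. All names are illustrative; real MAELSTROM benchmarks also meter energy, which is platform specific and omitted here:

```python
# Toy benchmarking harness in the spirit of MAELSTROM: cross every workload
# with every candidate configuration and record time-to-solution.
# Illustrative only; energy metering is platform specific and omitted.

import time

def benchmark(workloads, configs):
    results = {}
    for wname, workload in workloads.items():
        for cname, params in configs.items():
            start = time.perf_counter()
            workload(**params)                   # run the workload
            results[(wname, cname)] = time.perf_counter() - start
    return results

workloads = {"reduce": lambda n: sum(range(n))}  # stand-in for an ML training run
configs = {"small": {"n": 10_000}, "large": {"n": 1_000_000}}
results = benchmark(workloads, configs)
for key, seconds in sorted(results.items()):
    print(key, f"{seconds:.6f}s")
```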

Customised compute systems will be designed and optimised for application needs, to strengthen Europe’s high-performance computing portfolio and to pull recent hardware developments, driven by general machine learning applications, toward the needs of weather and climate applications.

The MAELSTROM software framework will enable scientists to apply and compare machine learning tools and libraries efficiently across a wide range of computer systems. A user interface will link application developers with compute system designers, and automated benchmarking and error detection of machine learning solutions will be performed during the development phase. Tools will be published as open source.

The MAELSTROM machine learning applications will cover all important components of the workflow of weather and climate predictions, including the processing of observations, the assimilation of observations to generate initial and reference conditions, model simulations, as well as the post-processing of model data and the development of forecast products. For each application, benchmark datasets with up to 10 terabytes of data will be published online for training and machine learning tool development at the scale of the fastest supercomputers in the world. MAELSTROM machine learning solutions will serve as blueprints for a wide range of future machine learning applications on supercomputers.

eProcessor

eProcessor logo
Project coordinator:  Barcelona Supercomputing Center (BSC)
Start date: 1 April 2021
Duration: 3 years
Total budget: € 8 million
Participating organisations:
  1. Chalmers tekniska högskola, SWEDEN
  2. Idryma Technologias Kai Erevnas, Foundation for Research and Technology – Hellas, GREECE
  3. Università degli Studi di Roma "La Sapienza", ITALY
  4. Cortus, FRANCE
  5. christmann informationstechnik + medien GmbH & Co. KG, GERMANY
  6. Universität Bielefeld, GERMANY
  7. EXTOLL GmbH, GERMANY
  8. Thales SA, FRANCE
  9. Exascale Performance Systems – EXAPSYS P.C., GREECE
Website: https://eprocessor.eu

The eProcessor ecosystem combines open-source software (SW) and hardware (HW) to deliver the first completely open-source European full-stack ecosystem based on a new RISC-V CPU coupled to multiple diverse accelerators, targeting both traditional HPC and mixed-precision workloads for High Performance Data Analytics (HPDA): AI, ML, DL and bioinformatics. eProcessor will be extendable (open source), energy-efficient (low power), extreme-scale (high performance), optimised for both HPC and embedded applications, and extensible (easy to add on-chip and/or off-chip components) – hence the “e”, which acts as a wildcard in the project name.

eProcessor combines cutting-edge research, utilising SW/HW co-design to achieve sustained processor and system performance for (sparse and mixed-precision) HPC and HPDA workloads, by combining a high-performance, low-power out-of-order processor core with novel, adaptive on-chip memory structures and management, as well as fault tolerance features. The SW/HW co-design approach spans the full stack, from applications to runtimes, tools, the OS, and the CPU and accelerators.

eProcessor is a true full-stack (SW and HW) research project, leveraging and extending the work done in multiple European projects such as the European Processor Initiative, the Low-Energy Toolset for Heterogeneous Computing, the MareNostrum Experimental Exascale Platform, POP2 CoE, Tulipp, EuroEXA and ExaNeSt. By doing so, the project improves the Technology Readiness Level and works with industrial partners that provide a direct path to commercialisation. This can only be done with a combination of SW simulation, HW emulation using FPGAs, and real ASIC prototypes that demonstrate the full-stack feasibility of the hardware and software. Finally, while the applications used span the whole range from IoT to HPC, the ASIC implementation will be in a technology node that can easily be adopted for a near-future novel HPC implementation.

REGALE

Project coordinator:  Institute of Communication and Computer Systems (ICCS)
Start date: 1 April 2021
Duration: 3 years
Total budget: € 7,5 million
Participating organisations:
  1. Technische Universität München (TUM), GERMANY
  2. Bull SAS, FRANCE
  3. Université Grenoble Alpes, FRANCE
  4. Barcelona Supercomputing Centre (BSC), SPAIN
  5. Ryax Technologies, FRANCE
  6. National Technical University of Athens - NTUA, GREECE
  7. ANDRITZ HYDRO GMBH, AUSTRIA
  8. Alma Mater Studiorum - Università di Bologna, ITALY
  9. CINECA, Consorzio Interuniversitario, ITALY
  10. Bayerische Akademie der Wissenschaften, GERMANY
  11. E4 Computer Engineering SpA, ITALY 
  12. Electricité de France (EDF), FRANCE
  13. SCiO Private Company, GREECE
  14. TWT GMBH SCIENCE & INNOVATION, GERMANY
  15. GIOUMPITEK MELETI SCHEDIASMOS YLOPOIISI KAI POLISI ERGON PLIROFORIKIS ETAIREIA PERIORISMENIS EFTHYNIS,GREECE
Website: https://www.iccs.gr/en/

With exascale systems almost at our door, we must now turn our attention to making the most of these large investments for societal prosperity and economic growth. REGALE aspires to pave the way of next-generation HPC applications to exascale systems. To accomplish this, the project defines an open architecture, builds a prototype system and incorporates in this system the appropriate sophistication to equip supercomputing systems with the mechanisms and policies for effective resource utilisation and execution of complex applications.

REGALE brings together leading supercomputing stakeholders, prestigious academics, top European supercomputing centres and end users from five critical target sectors, covering the entire value chain in system software and applications for extreme-scale technologies.

TIME-X

Project coordinator:  Katholieke Universiteit Leuven
Start date: 1 April 2021
Duration: 3 years
Total budget: € 3 million
Participating organisations:
  1. Sorbonne Université, FRANCE
  2. Forschungszentrum Jülich, GERMANY
  3. Technische Universität Hamburg (TUHH), GERMANY
  4. Technische Universität Darmstadt, GERMANY
  5. USI: Università della Svizzera italiana, SWITZERLAND
  6. Bergische Universität Wuppertal, GERMANY
  7. Technische Universität München (TUM), GERMANY
  8. Ecole nationale des ponts et chaussées, FRANCE
  9. Université de Genève, SWITZERLAND
Website: https://www.time-x.eu/

Recent successes have established the potential of parallel-in-time integration as a powerful algorithmic paradigm to unlock the performance of Exascale systems. However, these successes have mainly been achieved in a rather academic setting, without an overarching understanding. TIME-X will take the next leap in the development and deployment of this promising new approach for massively parallel HPC simulation, enabling efficient parallel-in-time integration for real-life applications. The project will:

  1. Provide software for parallel-in-time integration on current and future Exascale HPC architectures, delivering substantial improvements in parallel scaling;
  2. Develop novel algorithmic concepts for parallel-in-time integration, deepening our mathematical understanding of their convergence behaviour and including advances in multi-scale methodology;
  3. Demonstrate the impact of parallel-in-time integration, showcasing its potential on problems that, to date, cannot be tackled with full parallel efficiency, in three diverse and challenging application fields with high societal impact: weather and climate, medicine and drug design, and electromagnetics.

To realise these ambitious yet achievable goals, the inherently interdisciplinary TIME-X Consortium unites top researchers from numerical analysis and applied mathematics, computer science and the selected application domains. Europe is leading research in parallel-in-time integration, and TIME-X unites all relevant actors at the European level for the first time in a joint strategic research effort. A strategic investment from the European Commission would enable the necessary next step: advancing parallel-in-time integration from an academic/mathematical methodology into a widely available technology with a convincing proof of concept, maintaining European leadership in this rapidly advancing field and paving the way for industrial adoption.
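The parallel-in-time idea can be illustrated with a minimal Parareal sketch for the scalar test equation y' = λy (a standard textbook setting, not code from TIME-X): a cheap coarse integrator sweeps sequentially, while the expensive fine solves on each time slice are independent of one another and could run in parallel.

```python
# Minimal Parareal sketch for y' = lam * y: a cheap coarse solver corrects
# expensive fine solves that are independent across time slices and could
# therefore run in parallel. Textbook illustration, not TIME-X software.

import math

lam, T, N = -1.0, 1.0, 10           # ODE y' = lam*y on [0, T], N time slices
dT = T / N

def coarse(y, dt):                  # one backward-Euler step (cheap)
    return y / (1 - lam * dt)

def fine(y, dt, m=20):              # m small steps (expensive, parallelisable)
    h = dt / m
    for _ in range(m):
        y = y / (1 - lam * h)
    return y

U = [1.0] * (N + 1)
for n in range(N):                  # serial coarse sweep for the initial guess
    U[n + 1] = coarse(U[n], dT)
for _ in range(5):                  # Parareal iterations
    F = [fine(U[n], dT) for n in range(N)]       # independent fine solves
    G_old = [coarse(U[n], dT) for n in range(N)]
    for n in range(N):              # cheap sequential coarse correction
        U[n + 1] = coarse(U[n], dT) + F[n] - G_old[n]
print(abs(U[-1] - math.exp(lam * T)))  # small error vs the exact solution
```

After k iterations the method is exact on the first k slices, so a handful of iterations recovers the accuracy of the serial fine solver while exposing slice-level parallelism.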