
Visit us

Meet representatives of our projects at booth A-1416 to learn more about our work, and perhaps find new partners for your own projects.

Stay tuned! We will hold a prize draw at the booth and have a useful gadget to give away; more information to come!

ASPIDE

exAScale ProgramIng models for extreme Data processing

www.aspide-project.eu

Twitter: @ASPIDE_PROJECT

Extreme Data is an incarnation of the Big Data concept, distinguished by the massive amounts of data that must be queried, communicated and analysed in (near) real time using a very large number of memory/storage elements and Exascale computing systems. Immediate examples are scientific data produced at rates of hundreds of gigabits per second that must be stored, filtered and analysed; millions of images per day that must be mined (analysed) in parallel; and a billion social media posts queried in real time against an in-memory database. Traditional disks and commercial storage cannot handle the extreme scale of such application data today.

Responding to the need to improve current concepts and technologies, ASPIDE’s activities focus on data-intensive applications running on systems composed of up to millions of computing elements (Exascale systems). Practical results will include the methodology and software prototypes designed and used to implement Exascale applications.

The ASPIDE project will contribute to the definition of new programming paradigms, APIs, runtime tools and methodologies for expressing data-intensive tasks on Exascale systems. These can pave the way for exploiting massive parallelism over a simplified model of the system architecture, promoting high performance and efficiency, and offering powerful operations and mechanisms for processing extreme data sources at high speed and/or in real time.

Coordinating Organisation

Universidad Carlos III de Madrid, Spain

Other Partners

Institute e-Austria Timisoara, Romania
University of Calabria, Italy
Universität Klagenfurt, Austria
Institute of Bioorganic Chemistry of Polish Academy of Sciences, Poland
Servicio Madrileño de Salud, Spain
INTEGRIS SA, Italy
Bull SAS (Atos Group), France


DEEP-EST

Dynamical Exascale Entry Platform

www.deep-projects.eu

Twitter: @DEEPprojects

The DEEP projects (DEEP, DEEP-ER and DEEP-EST) present an innovative solution for next-generation supercomputers, aiming to organise heterogeneous resources as efficiently as possible. This is achieved by addressing the main Exascale challenges – including scalability, programmability, end-to-end performance, resiliency, and energy efficiency – through a stringent co-design approach.

The DEEP projects developed the Cluster-Booster architecture – which combines a standard HPC Cluster with the Booster, a unique cluster of high-throughput many-core processors – and extended it by including a multi-level hierarchy based on innovative memory technologies. Additionally, a full software stack has been created by extending MPI – the de facto standard programming model in HPC – and complementing it with task-based I/O and resiliency functionalities. The next step in the DEEP projects’ roadmap is the generalisation of the Cluster-Booster concept towards the so-called “Modular Supercomputing Architecture”, in which the Cluster and the Booster are complemented by further computing modules with characteristics tailored to the needs of new workloads, such as those present in high-performance data analytics (HPDA).
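On the Cluster-Booster architecture, an application keeps its less scalable parts on Cluster nodes and offloads highly scalable kernels to the Booster. Purely as an illustration of this offload pattern (not the DEEP software stack itself, which builds on ParaStation MPI and related extensions), a Cluster process could launch a kernel on Booster nodes with standard MPI dynamic process management; the executable name and host hint below are hypothetical:

    /* Minimal sketch: a Cluster process launches a scalable kernel on Booster
       nodes via MPI_Comm_spawn. "booster_kernel" and the "host" value are
       illustrative assumptions, not part of the DEEP software stack. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm booster;        /* inter-communicator to the spawned kernel */
        MPI_Info info;
        int errcodes[64];

        MPI_Init(&argc, &argv);

        MPI_Info_create(&info);
        MPI_Info_set(info, "host", "booster-nodes");   /* placement hint */

        /* Start 64 processes of the (hypothetical) highly scalable kernel. */
        MPI_Comm_spawn("booster_kernel", MPI_ARGV_NULL, 64, info, 0,
                       MPI_COMM_WORLD, &booster, errcodes);

        /* Exchange data with the kernel over the inter-communicator,
           e.g. with point-to-point or collective calls on "booster". */

        MPI_Info_free(&info);
        MPI_Comm_disconnect(&booster);
        MPI_Finalize();
        return 0;
    }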

The developments cut across the complete HPC stack and amount to a fully integrated system prototype combining hardware with system software, programming environments and highly tuned applications. The latter are a total of 16 ambitious and highly relevant applications from HPC and HPDA domains, which drive co-design and serve to evaluate the projects’ ideas and demonstrate their benefits.

Coordinating Organisation

Forschungszentrum Jülich (Jülich Supercomputing Centre), Germany

Other Partners

Current partners in DEEP-EST:

Forschungszentrum Jülich
Intel
Leibniz Supercomputing Centre
Barcelona Supercomputing Center
Megware Computer Vertrieb und Service GmbH
Heidelberg University
EXTOLL
The University of Edinburgh
Fraunhofer ITWM
Astron
KU Leuven
National Center For Supercomputing Applications (Bulgaria)
Háskóli Íslands (University of Iceland)
European Organisation for Nuclear Research (CERN)
ParTec
Norwegian University of Life Sciences (NMBU)


ExaQUte

EXAscale Quantification of Uncertainties for Technology and Science Simulation

www.exaqute.eu

Twitter: @ExaQUteEU

The ExaQUte project aims at constructing a framework to enable Uncertainty Quantification (UQ) and Optimization Under Uncertainties (OUU) in complex engineering problems using computational simulations on Exascale systems.

The stochastic problem of quantifying uncertainties will be tackled using a Multilevel Monte Carlo (MLMC) approach that allows a large number of stochastic variables. New theoretical developments will be carried out to enable its combination with adaptive mesh refinement, considering both octree-based and anisotropic mesh adaptation.
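For reference, the standard MLMC estimator (generic notation, not the project’s) combines a few expensive fine-level samples with many cheap coarse-level samples through a telescoping sum:

    E[Q_L] = E[Q_0] + \sum_{\ell=1}^{L} E[Q_\ell - Q_{\ell-1}]
           \approx \sum_{\ell=0}^{L} \frac{1}{N_\ell} \sum_{i=1}^{N_\ell} \left( Q_\ell^{(i)} - Q_{\ell-1}^{(i)} \right), \qquad Q_{-1} := 0,

where Q_\ell is the quantity of interest computed on mesh level \ell and N_\ell is the number of samples on that level, chosen so that most samples fall on the coarse, inexpensive levels.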

Gradient-based optimization techniques will be extended to consider uncertainties by developing methods to compute stochastic sensitivities. This requires new theoretical and computational developments. With a proper definition of risk measures and constraints, these methods enable high-performance robust designs that also maximize solution reliability.
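As an illustration only (one common choice, not necessarily the measure adopted by the project), a robust design objective can balance the mean of a quantity of interest against its spread:

    \min_x \; J(x) = E[ f(x, \omega) ] + \beta \, \sigma[ f(x, \omega) ],

where \omega collects the uncertain parameters (e.g. wind loading), \beta \ge 0 sets the degree of risk aversion, and the stochastic sensitivity \nabla_x J is what the gradient-based optimizer requires.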

The description of complex geometries will be possible by employing embedded methods, which guarantee high robustness in the mesh generation and adaptation steps while preserving the exact geometry representation.

The efficient exploitation of Exascale systems will be addressed by combining state-of-the-art dynamic task-scheduling technologies with space-time accelerated solution methods, where parallelism is harvested in both space and time.

The methods and tools developed in ExaQUte will be applicable to many fields of science and technology. The chosen application focuses on wind engineering, a field of notable industrial interest for which no reliable solution currently exists. This will include the quantification of uncertainties in the response of civil engineering structures to wind action, and shape optimization taking into account uncertainties related to wind loading, structural shape and material behaviour.

All developments in ExaQUte will be open-source and will follow a modular approach, thus maximizing future impact.

Coordinating Organisation

CIMNE – Centre Internacional de Metodes Numerics en Enginyeria, Spain

Other Partners

Barcelona Supercomputing Center, Spain
Technische Universität München, Germany
INRIA, Institut National de Recherche en Informatique et Automatique, France
Vysoka Skola Banska – Technicka Univerzita Ostrava, Czech Republic
Ecole Polytechnique Fédérale de Lausanne, Switzerland
Universitat Politecnica de Catalunya, Spain
str.ucture GmbH, Germany

ICEI/Fenix

Interactive Computing E-Infrastructure

fenix-ri.eu/

Twitter: @Fenix_RI_eu

The European supercomputing centres BSC (Spain), CEA (France), CINECA (Italy), ETHZ-CSCS (Switzerland) and JUELICH-JSC (Germany) have joined forces to design and build a federated data and computing infrastructure. The realization of the Fenix Research Infrastructure, which in the beginning will be used primarily by the Human Brain Project (HBP), started officially in January 2018 with the launch of the “Interactive Computing E-Infrastructure” (ICEI) project. ICEI/Fenix is co-funded by the European Commission through a Specific Grant Agreement (SGA) under the umbrella of the HBP Framework Partnership Agreement (FPA).

The five supercomputing centres – all of which are Hosting Members of PRACE – are creating a set of e-infrastructure services which serve the HBP and other communities as a basis for the development and operation of community-specific platform tools and services. To this end, the design and the implementation of the ICEI/Fenix infrastructure are driven by the needs of the HBP as well as other scientific communities with similar requirements (e.g., materials science). The key services provided by ICEI/Fenix encompass Interactive Computing, Scalable Computing and Virtual Machine services, as well as Active and Archival Data Repositories. The distinguishing characteristic of this e-infrastructure is that data repositories and scalable supercomputing systems are in close proximity and well integrated. The first ICEI/Fenix infrastructure services are already available and in use at ETHZ-CSCS. The deployment of ICEI/Fenix infrastructure components at the majority of the participating centres and the first demonstration of the key services are expected to start towards the end of 2019. More infrastructure services are planned to be added, with all infrastructure services expected to be operational in early 2021.

The objectives of ICEI/Fenix can be summarized as follows:

  • Perform a coordinated procurement of equipment and related maintenance services, licenses for software components, and R&D services for realizing elements of the ICEI/Fenix e-infrastructure;
  • Design a generic e-infrastructure for the HBP, driven by its scientific use cases and usable by other scientific communities;
  • Build the e-infrastructure with the following key characteristics:
    • Interactive Computing Services,
    • Elastic access to scalable compute resources,
    • Federated data infrastructure;
  • Establish a suitable e-infrastructure governance;
  • Develop a resource allocation mechanism to provide resources to HBP users and European researchers at large; and
  • Assist in the expansion of the e-infrastructure to other communities that provide additional resources.

Coordinating Organisations

Technical coordination: Forschungszentrum Jülich GmbH (JUELICH), Germany
Administrative coordination: Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland

Other Partners

Barcelona Supercomputing Center (BSC), Spain
Commissariat à l’Energie Atomique et aux énergies alternatives (CEA), France
Cineca Consorzio Interuniversitario (CINECA), Italy
Eidgenössische Technische Hochschule Zürich (ETH Zürich), Switzerland

MAESTRO

Middleware for memory and data-awareness in workflows

www.maestro-data.eu/

Twitter: @maestrodata

Maestro will build a data-aware and memory-aware middleware framework that addresses ubiquitous problems of data movement in complex memory hierarchies and at many levels of the HPC software stack.

Though HPC and HPDA applications pose a broad variety of efficiency challenges, it would be fair to say that the performance of both has become dominated by data movement through the memory and storage systems, as opposed to floating point computational capability. Despite this shift, current software technologies remain severely limited in their ability to optimise data movement. The Maestro project addresses what it sees as the two major impediments of modern HPC software:

  1. Moving data through memory was not always the bottleneck. The software stack that HPC relies upon was built through decades of a different situation – when the cost of performing floating point operations (FLOPS) was paramount. Several decades of technical evolution built a software stack and programming models highly fit for optimising floating point operations but lacking in basic data handling functionality. We characterise this set of technical issues as missing data-awareness.
  2. Software rightfully insulates users from hardware details, especially as we move higher up the software stack. But HPC applications, programming environments and systems software cannot make key data movement decisions without some understanding of the hardware, especially the increasingly complex memory hierarchy. With the exception of runtimes, which treat memory in a domain-specific manner, software typically must make hardware-neutral decisions which can often leave performance on the table. We characterise this issue as missing memory-awareness.

Maestro proposes a middleware framework that enables memory- and data-awareness.
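To make memory-awareness concrete: today a programmer can already pin data to a specific memory tier by hand, for instance with the memkind library (shown purely as an illustration of the low-level mechanisms such middleware would generalise; memkind is not part of Maestro):

    /* Explicit memory-tier placement with memkind. Middleware such as Maestro
       aims to take such decisions automatically rather than by hand-coding. */
    #include <memkind.h>
    #include <stdio.h>

    int main(void)
    {
        size_t n = 1 << 20;

        /* Hot, bandwidth-bound array: prefer high-bandwidth memory if available. */
        double *hot = memkind_malloc(MEMKIND_HBW_PREFERRED, n * sizeof(double));

        /* Rarely touched array: ordinary DDR is sufficient. */
        double *cold = memkind_malloc(MEMKIND_DEFAULT, n * sizeof(double));

        if (!hot || !cold) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }

        /* ... computation on hot and cold data ... */

        memkind_free(MEMKIND_HBW_PREFERRED, hot);
        memkind_free(MEMKIND_DEFAULT, cold);
        return 0;
    }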

Coordinating Organisation

Forschungszentrum Jülich GmbH, Germany

Other Partners

CEA – Commissariat à l’Energie Atomique et aux énergies alternatives, France
Appentra Solutions SL, Spain
ETHZ – Eidgenössische Technische Hochschule Zürich, Switzerland
ECMWF – European Centre for Medium-range Weather Forecasts, United Kingdom
Seagate Systems UK Ltd, United Kingdom
Cray Computer GmbH, Switzerland

Mont-Blanc 2020

European scalable, modular and power-efficient HPC processor

www.montblanc-project.eu

Twitter: @MontBlanc_Eu

Following on from the three successive Mont-Blanc projects since 2011, the three core partners Arm, Barcelona Supercomputing Center and Bull (Atos Group) have united again to trigger the development of the next generation of industrial processors for Big Data and High Performance Computing. The Mont-Blanc 2020 consortium also includes CEA, Forschungszentrum Jülich, Kalray, and SemiDynamics.

The Mont-Blanc 2020 project intends to pave the way to the future low-power European processor for Exascale. To improve the economic sustainability of the processor generations that will result from the Mont-Blanc 2020 effort, the project also analyses the requirements of other markets. The project’s strategy, based on modular packaging, would make it possible to create a family of SoCs targeting different markets, such as “embedded HPC” for autonomous driving. The project’s objectives are to:

  • define a low-power System-on-Chip architecture targeting Exascale;
  • implement new critical building blocks (IPs) and provide a blueprint for its first-generation implementation;
  • deliver initial proof-of-concept demonstration of its critical components on real life applications;
  • explore the reuse of the building blocks to serve markets other than HPC, with methodologies enabling better time predictability, especially for mixed-criticality applications where guaranteed execution and response times are crucial.

The Mont-Blanc 2020 project is at the heart of the European exascale supercomputer effort, since most of the IP developed within the project will be reused and productized in the European Processor Initiative (EPI).

Coordinating Organisation

Bull (Atos group), France

Other Partners

Arm (United Kingdom)
BSC (Spain)
CEA (France)
Forschungszentrum Jülich (Germany)
Kalray (France)
SemiDynamics (Spain)

Sage2

Percipient Storage for Exascale Data Centric Computing 2

www.sagestorage.eu

Twitter: @SageStorage

The landscape for High Performance Computing is changing with the proliferation of enormous volumes of data created by scientific instruments and sensors, in addition to data from simulations. This data needs to be stored, processed and analysed, and existing storage system technologies in the realm of extreme computing need to be adapted to deliver higher scientific throughput with reasonable efficiency. We started on the journey to address this problem with the SAGE project. The HPC use cases and the technology ecosystem are now evolving further, bringing new requirements and innovations to the forefront. It is critical to address them today without “reinventing the wheel”, leveraging existing initiatives and know-how to build the pieces of the Exascale puzzle as quickly and efficiently as we can.

The SAGE paradigm already provides a basic framework to address the extreme-scale data aspects of High Performance Computing on the path to Exascale. Sage2 (Percipient StorAGe for Exascale Data Centric Computing 2) intends to validate a next-generation storage system, built on top of the existing SAGE platform, to address new use-case requirements in the areas of extreme-scale scientific workflows and AI/deep learning, leveraging the latest developments in storage infrastructure software and the storage technology ecosystem.

Sage2 aims to provide significantly enhanced scientific throughput, improved scalability, and reduced time and energy to solution for the use cases at scale. Sage2 will also dramatically increase the productivity of developers and users of these systems.

Sage2 will provide a highly performant, resilient, QoS-capable, multi-tiered storage system, with data layouts across the tiers managed by the Mero Object Store. Mero can handle in-transit/in-situ processing of data within the storage system and is accessible through the Clovis API.

Coordinating Organisation

Seagate Systems UK Ltd, United Kingdom

Other Partners

Bull SAS (Atos Group), France
CEA – Commissariat à l’Energie Atomique et aux énergies alternatives, France
United Kingdom Atomic Energy Authority, United Kingdom
Kungliga Tekniska Högskolan (KTH), Sweden
Forschungszentrum Jülich GmbH, Germany
The University of Edinburgh, United Kingdom
Kitware SAS, France
Arm Ltd, United Kingdom