Visit us

Meet representatives of our projects at booth 2215 to learn more about our work, and perhaps find new partners for your own projects…

Stay tuned! We will have a prize draw at the booth; more information to come!

ASPIDE

exAScale ProgramIng models for extreme Data processing

www.aspide-project.eu

Twitter: @ASPIDE_PROJECT

Extreme Data is an incarnation of the Big Data concept, distinguished by the massive amounts of data that must be queried, communicated and analysed in (near) real time using very large numbers of memory/storage elements and Exascale computing systems. Immediate examples are the scientific data produced at rates of hundreds of gigabits per second that must be stored, filtered and analysed; the millions of images per day that must be mined (analysed) in parallel; and the billion social-media posts queried in real time against an in-memory database. Traditional disks and commercial storage systems cannot handle the extreme scale of such application data today.

To address the need to improve current concepts and technologies, ASPIDE’s activities focus on data-intensive applications running on systems composed of up to millions of computing elements (Exascale systems). Practical results will include the methodology and software prototypes designed and used to implement Exascale applications.

The ASPIDE project will contribute the definition of new programming paradigms, APIs, runtime tools and methodologies for expressing data-intensive tasks on Exascale systems. These can pave the way for exploiting massive parallelism over a simplified model of the system architecture, promoting high performance and efficiency, and offering powerful operations and mechanisms for processing extreme data sources at high speed and/or in real time.
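As a purely hypothetical illustration of the kind of simplified data-parallel abstraction such a paradigm could expose (the names below are invented for this sketch and are not ASPIDE’s actual API), consider a chunked parallel map over a dataset:

    # Hypothetical sketch: a chunked data-parallel map, not ASPIDE's API.
    from concurrent.futures import ProcessPoolExecutor

    def process_chunk(chunk: list[int]) -> int:
        """Stand-in for a data-intensive kernel (e.g., filtering one data block)."""
        return sum(x * x for x in chunk)

    def parallel_map(data: list[int], chunk_size: int) -> int:
        """Split a large dataset into chunks and fan them out over the available
        compute elements, hiding the machine topology from the user."""
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        with ProcessPoolExecutor() as pool:
            return sum(pool.map(process_chunk, chunks))

    if __name__ == "__main__":
        print(parallel_map(list(range(1_000_000)), chunk_size=100_000))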

Coordinating Organisation

Universidad Carlos III de Madrid, Spain

Other partners

Institute e-Austria Timisoara, Romania
University of Calabria, Italy
Universität Klagenfurt, Austria
Institute of Bioorganic Chemistry of Polish Academy of Sciences, Poland
Servicio Madrileño de Salud, Spain
INTEGRIS SA, Italy
Bull SAS (Atos Group), France


EVOLVE

Integrating the HPC, Cloud and Big Data worlds in a unique large-scale testbed, applied in 7 pilot domains spanning mobility, agriculture and urban planning

www.evolve-h2020.eu

Twitter: @evolve_h2020

EVOLVE is a pan-European Innovation Action with 19 key partners from 11 European countries. It introduces important elements of High-Performance Computing (HPC) and Cloud into Big Data platforms, taking advantage of recent technological advancements to enable cost-effective applications in 7 different pilots and to keep up with the unprecedented data growth we are experiencing.

EVOLVE aims to build a large-scale testbed by integrating technology from:

  • The HPC world: An advanced computing platform with HPC features and systems software.
  • The Big Data world: A versatile big-data processing stack for end-to-end workflows.
  • The Cloud world: Ease of deployment, access, and use in a shared manner, while addressing data protection.

One key differentiator of EVOLVE is therefore its ambition to tackle heterogeneity: in the infrastructure itself, from data processing to data movement, for workflow optimization. Heterogeneity is also addressed at the level of user needs, with support for cloud and big-data tools on an HPC platform.

EVOLVE aims to take concrete and decisive steps in bringing together the Big Data, HPC, and Cloud worlds, increasing the ability to extract value from massive and demanding datasets. The project aims to deliver the following benefits for processing such datasets:

  • Performance: Reduced turn-around time for domain experts, industry (large companies and SMEs), and end-users.
  • Experts: Increased productivity when designing new products and services, by processing large datasets.
  • Businesses: Reduced capital and operational costs for acquiring and maintaining computing infrastructure.
  • Society: Accelerated innovation via faster design and deployment of innovative services that unleash creativity.

EVOLVE intends to build and demonstrate the proposed testbed with real-life, massive datasets from demanding application areas. To realize this vision, EVOLVE brings together technology and pilot partners from EU industry with demonstrated experience, established markets, and a vested interest. Furthermore, EVOLVE will conduct a set of 10-15 Proof-of-Concepts with stakeholders from the Big Data value chain to build up digital ecosystems and achieve broader market penetration.


Coordinating organisation

DataDirect Networks, France

Other partners

AVL LIST GmbH, Austria
BMW – Bayerische Motoren Werke AG, Germany
Bull SAS (Atos group), France
Cybeletech, France
Globaz S.A., Portugal
IBM Ireland Ltd, Ireland
Idryma Technologias Kai Erevnas, Greece
ICCS (Institute of Communication and Computer Systems), Greece
KOOLA d.o.o., Bosnia and Herzegovina
Kompetenzzentrum – Das Virtuelle Fahrzeug, Forschungsgesellschaft mbH, Austria
MEMEX SRL, Italy
MEMOSCALE AS, Norway
NEUROCOM Luxembourg SA, Luxembourg
ONAPP Ltd, Gibraltar
Space Hellas SA, Greece
Thales Alenia Space France SAS, France
TIEMME SPA, Italy
WEBLYZARD Technology GmbH, Austria


ExaQUte

EXAscale Quantification of Uncertainties for Technology and Science Simulation

www.exaqute.eu

Twitter: @ExaQUteEU

The ExaQUte project aims at constructing a framework to enable Uncertainty Quantification (UQ) and Optimization Under Uncertainties (OUU) in complex engineering problems using computational simulations on Exascale systems.

The stochastic problem of quantifying uncertainties will be tackled using a Multilevel Monte Carlo (MLMC) approach that allows a large number of stochastic variables. New theoretical developments will be carried out to enable its combination with adaptive mesh refinement, considering both octree-based and anisotropic mesh adaptation.
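As a rough illustration of the MLMC idea, the sketch below estimates a scalar quantity of interest by summing level-wise corrections, drawing most samples on cheap coarse levels; the coupled `sample_pair` model is a hypothetical stand-in for coarse/fine PDE solves on adaptively refined meshes.

    # Illustrative MLMC estimator; the "solver" is a hypothetical toy model.
    import numpy as np

    rng = np.random.default_rng(42)

    def sample_pair(level: int) -> tuple[float, float]:
        """Coupled fine/coarse evaluations driven by the same random input;
        the discretisation bias decays geometrically with the level."""
        z = rng.standard_normal()
        q = lambda l: 1.0 + 2.0 ** (-l) * z
        fine = q(level)
        coarse = q(level - 1) if level > 0 else 0.0
        return fine, coarse

    def mlmc_estimate(samples_per_level: list[int]) -> float:
        """E[Q_L] estimated as E[Q_0] + sum over l of E[Q_l - Q_(l-1)]."""
        total = 0.0
        for level, n in enumerate(samples_per_level):
            corrections = [f - c for f, c in (sample_pair(level) for _ in range(n))]
            total += float(np.mean(corrections))
        return total

    print(mlmc_estimate([4000, 1000, 250, 60]))  # close to the true mean 1.0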

Gradient-based optimization techniques will be extended to consider uncertainties by developing methods to compute stochastic sensitivities. This requires new theoretical and computational developments. With a proper definition of risk measures and constraints, these methods enable high-performing robust designs while also maximizing solution reliability.
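A minimal sketch of the idea, assuming a toy quadratic response, Gaussian uncertainty and a mean-plus-standard-deviation risk measure (all hypothetical stand-ins for a real wind-engineering objective), where sampled sensitivities drive a gradient descent:

    # Illustrative gradient-based OUU with a mean + beta*std risk measure.
    import numpy as np

    rng = np.random.default_rng(0)
    beta, step, x = 1.5, 0.05, 3.0   # risk aversion, step size, initial design

    for it in range(500):
        xi = rng.normal(1.0, 0.3, size=256)   # samples of the uncertain input
        f = (x - xi) ** 2                     # per-sample response
        df = 2.0 * (x - xi)                   # per-sample sensitivity df/dx
        mean, std = f.mean(), f.std()
        # d/dx [mean(f) + beta*std(f)], estimated from the samples
        grad = df.mean() + beta * ((f - mean) * df).mean() / max(std, 1e-12)
        x -= step * grad

    print(f"risk-optimal design: x = {x:.3f}")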

The description of complex geometries will be possible by employing embedded methods, which guarantee high robustness in the mesh generation and adaptation steps while preserving the exact geometry representation.

The efficient exploitation of Exascale systems will be addressed by combining state-of-the-art dynamic task-scheduling technologies with space-time accelerated solution methods, where parallelism is harvested in both space and time.
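Time parallelism of this kind can be sketched with a parareal-style iteration (shown here on a scalar test problem, not ExaQUte’s actual solvers); in practice the fine propagator on each time slice would itself be an expensive space-parallel solve, and all slices would run concurrently:

    # Parareal-style sketch on y' = -y: cheap coarse serial sweeps correct
    # fine propagations that could run in parallel across time slices.
    import numpy as np

    lam, T, N = -1.0, 2.0, 8           # decay rate, horizon, time slices
    dt = T / N

    def coarse(y, h):                  # cheap propagator: one Euler step
        return y * (1 + lam * h)

    def fine(y, h, substeps=100):      # accurate propagator: many substeps
        for _ in range(substeps):
            y = y * (1 + lam * h / substeps)
        return y

    y = np.zeros(N + 1)
    y[0] = 1.0
    for n in range(N):                 # initial guess: serial coarse sweep
        y[n + 1] = coarse(y[n], dt)

    for k in range(4):                 # parareal iterations
        F = [fine(y[n], dt) for n in range(N)]   # parallel in a real setting
        y_new = y.copy()
        for n in range(N):             # serial correction sweep
            y_new[n + 1] = coarse(y_new[n], dt) + F[n] - coarse(y[n], dt)
        y = y_new

    print(y[-1], np.exp(lam * T))      # parareal result vs exact solution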

The methods and tools developed in ExaQUte will be applicable to many fields of science and technology. The chosen application focuses on wind engineering, a field of notable industrial interest for which currently no reliable solution exists. This will include the quantification of uncertainties in the response of civil engineering structures to the wind action, and the shape optimization taking into account uncertainties related to wind loading, structural shape and material behaviour.

All developments in ExaQUte will be open-source and will follow a modular approach, thus maximizing future impact.

Coordinating Organisation

CIMNE- Centre Internacional de Metodes Numerics en Enginyeria, Spain

Other Partners

Barcelona Supercomputing Center, Spain
Technische Universität München, Germany
INRIA, Institut National de Recherche en Informatique et Automatique, France
Vysoka Skola Banska – Technicka Univerzita Ostrava, Czech Republic
Ecole Polytechnique Fédérale de Lausanne, Switzerland
Universitat Politecnica de Catalunya, Spain
str.ucture GmbH, Germany

ICEI/Fenix

Interactive Computing E-Infrastructure

fenix-ri.eu/

Twitter: @Fenix_RI_eu

The European supercomputing centres BSC (Spain), CEA (France), CINECA (Italy), ETHZ-CSCS (Switzerland) and JUELICH-JSC (Germany) have joined forces to design and build a federated data and computing infrastructure. The realization of the Fenix Research Infrastructure, which in the beginning will be used primarily by the Human Brain Project (HBP), started officially in January 2018 with the launch of the “Interactive Computing E-Infrastructure” (ICEI) project. ICEI/Fenix is co-funded by the European Commission through a Specific Grant Agreement (SGA) under the umbrella of the HBP Framework Partnership Agreement (FPA).

The five supercomputing centres—all of which are Hosting Members of PRACE—are creating a set of e-infrastructure services that serve the HBP and other communities as a basis for the development and operation of community-specific platform tools and services. To this end, the design and implementation of the ICEI/Fenix infrastructure are driven by the needs of the HBP as well as of other scientific communities with similar requirements (e.g., materials science). The key services provided by ICEI/Fenix encompass Interactive Computing, Scalable Computing and Virtual Machine services, as well as Active and Archival Data Repositories. The distinguishing characteristic of this e-infrastructure is that data repositories and scalable supercomputing systems are in close proximity and well integrated. The first ICEI/Fenix infrastructure services are already available and in use at ETHZ-CSCS. The deployment of ICEI/Fenix infrastructure components at the majority of the participating centres and the first demonstration of the key services are expected to start in early 2020. More infrastructure services are planned, with all of them expected to be operational by early 2021.

The objectives of ICEI/Fenix can be summarized as follows:

  • Perform a coordinated procurement of equipment and related maintenance services, licenses for software components, and R&D services for realizing elements of the ICEI/Fenix e-infrastructure;
  • Design a generic e-infrastructure for the HBP, driven by its scientific use-cases and usable by other scientific communities;
  • Build the e-infrastructure with the following key characteristics:
    • Interactive Computing services,
    • elastic access to scalable compute resources,
    • a federated data infrastructure;
  • Establish a suitable e-infrastructure governance;
  • Develop a resource allocation mechanism to provide resources to HBP users and European researchers at large; and
  • Assist in the expansion of the e-infrastructure to other communities that provide additional resources.

Coordinating Organisations

Technical coordination: Forschungszentrum Jülich GmbH (JUELICH), Germany
Administrative coordination: Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland

Other Partners

Barcelona Supercomputing Center (BSC), Spain
Commissariat à l’Energie Atomique et aux énergies alternatives (CEA), France
Cineca Consorzio Interuniversitario (CINECA), Italy
Eidgenössische Technische Hochschule Zürich (ETH Zürich), Switzerland

LEXIS

Large-scale EXecution for Industry & Society

lexis-project.eu

Twitter: @LexisProject

The increasing quantities of data generated by modern industrial and business processes pose enormous challenges for organizations seeking to glean knowledge and understanding from the data. Combinations of HPC, Cloud and Big Data technologies are key to meeting the increasingly diverse needs of large and small organizations alike. Critically, access to powerful compute platforms for SMEs – which has been difficult for both technical and financial reasons – may now become possible.

The LEXIS (Large-scale EXecution for Industry & Society) project will build an advanced engineering platform at the confluence of HPC, Cloud and Big Data, leveraging large-scale geographically distributed resources from existing HPC infrastructure, employing Big Data analytics solutions and augmenting them with Cloud services. Driven by the requirements of the pilots, the LEXIS platform will build on best-of-breed data management solutions (EUDAT) and advanced, distributed orchestration solutions (TOSCA), augmenting them with new, efficient hardware capabilities in the form of Data Nodes and with federation, usage monitoring and accounting/billing support to realize an innovative solution.

The consortium will develop a demonstrator with a significant Open Source dimension, including validation, testing and documentation. It will be validated in the pilots – in the industrial and scientific sectors (Aeronautics, Earthquake and Tsunami, Weather and Climate) – where significant improvements in KPIs, including job execution time and solution accuracy, are anticipated.

LEXIS will promote the solution to the HPC, Cloud and Big Data sectors, maximizing impact through targeted and qualified communications. The project brings together a consortium with the skills and experience to deliver a complex, multi-faceted project spanning a range of complex technologies across seven European countries, including large industry, flagship HPC centres, industrial and scientific compute pilot users, technology providers and SMEs.

Coordinating Organisation

Vysoka Skola Banska – Technicka Univerzita Ostrava, Czechia

Other Partners

Alfred-Wegener-Institut Helmholtz-Zentrum für Polar- und Meeresforschung, Germany
Associazione ITHACA – Information Technology for Humanitarian Assistance Cooperation and Action, Italy
Bayerische Akademie der Wissenschaften, Germany
Bayncore Labs Ltd, Ireland
Bull SAS (Atos group), France
CEA – Commissariat à l’Energie Atomique et aux énergies alternatives, France
CYCLOPS Labs GmbH, Switzerland
Centro Internazionale in Monitoraggio Ambientale – Fondazione CIMA, Italy
ECMWF (European Centre for Medium-Range Weather Forecasts), United Kingdom
Fondazione LINKS – Leading Innovation & Knowledge for Society, Italy
GE AVIO SRL, Italy
Helmholtz Zentrum Potsdam Deutsches Geoforschungszentrum GFZ, Germany
NUMTECH, France
Outpost 24 France, France
TESEO SPA – Tecnologie e Sistemi Elettronici ed Ottici, Italy

MAESTRO

Middleware for memory and data-awareness in workflows

www.maestro-data.eu/

Twitter: @maestrodata

Maestro will build a data-aware and memory-aware middleware framework that addresses ubiquitous problems of data movement in complex memory hierarchies and at many levels of the HPC software stack.

Though HPC and HPDA applications pose a broad variety of efficiency challenges, it is fair to say that the performance of both has become dominated by data movement through the memory and storage systems, rather than by floating-point computational capability. Despite this shift, current software technologies remain severely limited in their ability to optimise data movement. The Maestro project addresses what it sees as the two major impediments of modern HPC software:

  1. Moving data through memory was not always the bottleneck. The software stack that HPC relies upon was built over decades in a different situation, when the cost of performing floating-point operations (FLOPS) was paramount. Several decades of technical evolution produced a software stack and programming models highly tuned for optimising floating-point operations but lacking basic data-handling functionality. We characterise this set of technical issues as missing data-awareness.
  2. Software rightfully insulates users from hardware details, especially higher up the software stack. But HPC applications, programming environments and systems software cannot make key data-movement decisions without some understanding of the hardware, especially the increasingly complex memory hierarchy. With the exception of runtimes, which treat memory in a domain-specific manner, software typically must make hardware-neutral decisions, which can leave performance on the table. We characterise this issue as missing memory-awareness.

Maestro proposes a middleware framework that enables memory- and data-awareness.
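To make the notion of memory-awareness concrete, here is a deliberately simplified placement sketch; the tier table, the `hot` hint and the `place()` policy are hypothetical illustrations of the kind of decision such middleware could make, not Maestro’s actual interfaces:

    # Hypothetical memory-aware placement policy; not Maestro's real API.
    from dataclasses import dataclass

    @dataclass
    class Tier:
        name: str
        capacity_gb: float
        bandwidth_gbs: float   # higher is faster

    # A toy memory hierarchy: fast-but-small first, slow-but-large last.
    TIERS = [
        Tier("HBM", 16, 900.0),
        Tier("DRAM", 192, 200.0),
        Tier("NVM", 1536, 40.0),
    ]

    @dataclass
    class DataObject:
        name: str
        size_gb: float
        hot: bool   # declared access intensity (the "data-awareness" hint)

    def place(obj: DataObject, free_gb: dict[str, float]) -> str:
        """Put hot objects in the fastest tier with room; cold ones in the
        largest tier, freeing fast memory for data that benefits from it."""
        candidates = TIERS if obj.hot else reversed(TIERS)
        for tier in candidates:
            if free_gb[tier.name] >= obj.size_gb:
                free_gb[tier.name] -= obj.size_gb
                return tier.name
        raise MemoryError(f"no tier can hold {obj.name}")

    free = {t.name: t.capacity_gb for t in TIERS}
    print(place(DataObject("stencil-field", 8, hot=True), free))   # -> HBM
    print(place(DataObject("checkpoint", 500, hot=False), free))   # -> NVM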

Coordinating Organisation

Forschungszentrum Jülich GmbH, Germany

Other Partners

CEA – Commissariat à l’Energie Atomique et aux énergies alternatives, France
Appentra Solutions SL, Spain
ETHZ – Eidgenössische Technische Hochschule Zürich, Switzerland
ECMWF – European Centre for Medium-range Weather Forecasts, United Kingdom
Seagate Systems UK Ltd, United Kingdom
Cray Computer GmbH, Switzerland

Mont-Blanc 2020

European scalable, modular and power-efficient HPC processor

www.montblanc-project.eu

Twitter: @MontBlanc_Eu

Following on from the three successive Mont-Blanc projects since 2011, the three core partners Arm, Barcelona Supercomputing Center and Bull (Atos Group) have united again to trigger the development of the next generation of industrial processors for Big Data and High-Performance Computing. The Mont-Blanc 2020 consortium also includes CEA, Forschungszentrum Jülich, Kalray, and SemiDynamics.

The Mont-Blanc 2020 project intends to pave the way to the future low-power European processor for Exascale. To improve the economic sustainability of the processor generations that will result from the Mont-Blanc 2020 effort, the project includes an analysis of the requirements of other markets. The project’s strategy, based on modular packaging, would make it possible to create a family of SoCs targeting different markets, such as “embedded HPC” for autonomous driving. The project’s objectives are to:

  • define a low-power System-on-Chip architecture targeting Exascale;
  • implement new critical building blocks (IPs) and provide a blueprint for its first-generation implementation;
  • deliver initial proof-of-concept demonstration of its critical components on real life applications;
  • explore the reuse of the building blocks to serve markets other than HPC, with methodologies enabling better time predictability, especially for mixed-criticality applications where guaranteed execution and response times are crucial.

The Mont-Blanc 2020 project is at the heart of the European exascale supercomputer effort, since most of the IP developed within the project will be reused and productized in the European Processor Initiative (EPI).

Coordinating organisation

Bull (Atos group), France

Other partners

Arm (United Kingdom)
BSC (Spain)
CEA (France)
Jülich Forschungszentrum (Germany)
Kalray (France)
SemiDynamics (Spain)