
Research and development at LRZ

The LRZ supports science and research with IT and know-how; it also manages its own research and development projects and participates in international projects. The research focus is on developing and optimising hardware and software for supercomputing, on network, communication and storage technologies, and on workloads for High-Performance Computing (HPC). Special attention is paid to sustainability: in co-design with technology companies, the LRZ has developed and continues to optimise the direct hot-water cooling of supercomputers, and we are constantly exploring how to operate computing and supercomputing centres efficiently and how computers can reduce the amount of energy they consume.

Research programmes at LRZ

Environmental Computing

Whether it is data from modelling and simulation, satellite images or measured values from sensors in the field: the environmental sciences produce a wide variety of digital data and often need technical support to process it. The LRZ explores suitable methods for transmitting and processing such data and for harmonising data from different sources, for example for Artificial Intelligence applications or for developing disaster-mitigation systems. When it comes to environmental computing, the V2C focuses on visualising simulation results.

Future computing and energy efficiency

The hardware for supercomputing is becoming ever more heterogeneous in order to achieve better energy efficiency, and software needs to be adapted to these systems so that they compute efficiently. For research into innovative IT, the LRZ has installed the Bavarian Energy, Architecture and Software Testbed (BEAST): this is where the future of computing is being prepared, for example tools for controlling and monitoring supercomputers and strategies for reducing the energy requirements of computing.

Big Data and AI

Artificial intelligence accelerates calculations and expands the set of methods in all scientific disciplines. But how reliable is AI? What data can be used to train systems? And how can the energy requirements of AI systems be reduced? The Big Data & Artificial Intelligence (BDAI) team explores AI applications, data strategies and innovative AI technologies to provide researchers with specialist support.

Visualisation

A picture is worth a thousand words: immersive virtual reality applications also promote understanding in the sciences, and fascinating images ultimately support communication strategies. Specialists at the LRZ Centre for Virtual Reality and Visualisation (V2C) investigate how research can best be presented from a technical perspective. In collaboration with researchers and providers, they develop tools and workloads for recording and processing image data.

Managing research data

Research data may contain more information than originally intended; it should therefore be reliably archived, as easy to search as possible, and ultimately reusable and accessible. The professional management of research data is also becoming increasingly important for reasons of efficiency. The LRZ handles digital information in line with the FAIR data principles and develops tools for managing data in high-performance computing as well as platforms for publishing research outcomes, such as the LRZ FAIR Data Portal.
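
What FAIR means in practice is easiest to see in a metadata record. The following Python sketch is purely illustrative: the field names, identifiers and values are hypothetical and do not reflect the actual schema of the LRZ FAIR Data Portal.

    # Hypothetical sketch of a FAIR-style metadata record; field names and
    # values are invented, not the LRZ FAIR Data Portal's actual schema.
    import json

    dataset_record = {
        "identifier": "doi:10.0000/example-dataset",            # Findable: persistent identifier
        "title": "Simulated discharge of an Alpine catchment",
        "creators": ["Example Researcher"],
        "access_url": "https://data.example.org/datasets/42",   # Accessible: standard protocol
        "format": "NetCDF",                                      # Interoperable: open format
        "license": "CC-BY-4.0",                                  # Reusable: clear usage rights
        "keywords": ["hydrology", "simulation", "HPC"],
    }

    # Records like this can be exported as JSON and harvested by data portals.
    print(json.dumps(dataset_record, indent=2))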



Protecting sensitive data

In addition to tape and hard disk archives, the LRZ Cloud also stores research data. There, however, they require special protection against hacking and manipulation. In collaboration with research groups, the LRZ explores and develops tools and technology for securely storing research data or for creating a Trusted Research Environment, which includes, for example, technically implementing regulations for access rights to sensitive data or managing stored data.

IT management and security

How can openly accessible science be implemented from a technical standpoint? How can we guarantee the security of information in decentralised systems and infrastructures? And how do computing centres ensure their service quality? Research questions and projects of general importance have arisen from the LRZ's certifications. Together with the Digitalverbund Bayern and universities, the LRZ is researching strategies and measures to secure digital information, to increase the resilience of computing centres, and to maintain IT services even when crises strike (business continuity).

Research & Information Management

Efficient science and information management is becoming increasingly important for our research and innovation projects. The Research and Information Management Team develops and tests new tools for use within and beyond the LRZ. The team focuses on customising science management methodologies, a comprehensive Research Information System, and work on an open web search within the OpenWebSearch.EU project and Open Search Initiative.

Highlights from LRZ research

  • Conducting research using large language models

    The US start-up Cerebras Systems supplies special processors for working with Large Language Models (LLM). The LRZ researchers Michael Hoffmann and Jophin John have tested this system, specifically with regard to its ability to deal with hate speech, and compared it with other AI technologies. The result: The innovative processors work four times faster than other AI clusters.


  • Modeling turbulence

    When stars implode, the energy they release sets matter and gas clouds in space in motion. This creates turbulence, an environment in which new stars can form. LRZ researchers have developed tools and workloads to visualise the largest turbulence simulations to date and have consequently attracted a great deal of attention in astrophysics.


  • Presenting movement

    Movement captured in sensor data can be presented spatially and over time in point clouds, which in turn helps to determine the position and depth of moving bodies. This is the result of a study conducted by Simone Müller as part of her doctoral thesis at the LRZ, which was published in Nature Science Reports.


  • An energy-efficient exascale

    In the European projects REGALE and DEEP-SEA, the LRZ, together with researchers from other partner institutions, investigated how innovative exascale-class supercomputers can compute efficiently and how their energy consumption can be controlled. This resulted in programmes that are now used at HPC centres.


  • Interviewing contemporary witnesses

    Interviewing contemporary witnesses requires a great deal of empathy and the right recording technique. LRZ researcher Daniel Kolb supervised the LediZ project and explored which techniques scientists use and how they can interact with people and their stories.


Research at the LRZ: Key figures

The LRZ participates in research projects across the world and manages its own R&D projects. The computing centre is currently working on 56 projects in eight research programmes, which have already led to a string of publications. The LRZ focuses on exploring and developing the necessary technologies and also supports research teams in the areas of research coordination and research and information management.

R&D projects at LRZ

DEEP-SEA and REGALE Help Bolster Dynamic Software for Exascale

Future exascale high-performance computers require a new kind of software that allows dynamic workloads to run with maximum energy efficiency on the best-suited hardware available in the system. The Technical University of Munich (TUM) and the Leibniz Supercomputing Centre (LRZ) are working together to create a production-ready software stack to enable low-energy, high-performance exascale systems.

Exascale supercomputers are knocking at the door. They might be a game-changer for the way we design and use high-performance computing (HPC) systems. As exascale performance drives more HPC systems toward heterogeneous architectures that mix traditional CPUs with accelerators like GPUs and FPGAs, computational scientists will have to design more dynamic applications and workloads in order to achieve massive performance increases for their applications.

The question is: How will applications leverage these different technologies efficiently and effectively? Power management and dynamic resource allocation will become the most important aspects of this new era of HPC. Stated more simply: How do HPC centres ensure that users are getting the most science per Joule?
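
To make "science per Joule" tangible, a back-of-the-envelope calculation helps: energy to solution is average power draw multiplied by runtime, and efficiency can then be expressed as useful operations per Joule. The following Python sketch uses invented numbers, not measurements from any LRZ system.

    # Back-of-the-envelope "science per Joule"; all numbers are hypothetical.
    avg_power_w = 2.5e6        # assumed average system power draw: 2.5 MW
    runtime_s = 8 * 3600       # assumed time to solution: 8 hours
    useful_flop = 1.2e21       # assumed useful floating-point operations

    energy_j = avg_power_w * runtime_s       # energy to solution in Joules
    flop_per_joule = useful_flop / energy_j  # "science per Joule"
    gflops_per_watt = flop_per_joule / 1e9   # FLOP per Joule equals FLOPS per Watt

    print(f"energy to solution: {energy_j / 3.6e6:.0f} kWh")
    print(f"{flop_per_joule:.2e} FLOP/J  ~  {gflops_per_watt:.1f} GFLOPS/W")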

Optimization is challenging

Optimizing application performance on heterogeneous systems under power and energy constraints poses several challenges. Some are quite sophisticated, like the dynamic phase behaviour of applications. Others are basic hardware issues, like the variability of processors: due to manufacturing limitations, low-power operation of CPUs can lead to widely varying frequencies across the cores. Adding to these is the ever-growing complexity and heterogeneity at node level.
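
The core-to-core frequency variability mentioned above can be made visible with standard operating-system interfaces. The following Python sketch reads the Linux cpufreq files; it is a minimal illustration rather than a tool from DEEP-SEA or REGALE, and whether the files exist and what they report depends on the kernel, the cpufreq driver and the governor in use.

    # Minimal sketch: read the current frequency of each core via Linux sysfs.
    # Availability and semantics depend on kernel, driver and governor.
    from pathlib import Path

    freqs = {}
    for cpu_dir in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
        freq_file = cpu_dir / "cpufreq" / "scaling_cur_freq"
        if freq_file.exists():
            freqs[cpu_dir.name] = int(freq_file.read_text()) / 1000.0  # kHz -> MHz

    if freqs:
        lo, hi = min(freqs.values()), max(freqs.values())
        print(f"{len(freqs)} cores, {lo:.0f}-{hi:.0f} MHz, spread {hi - lo:.0f} MHz")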

A software stack for such heterogeneous exascale systems will have to meet specific demands: it has to be dynamic, work with highly heterogeneous, integrated systems, and adapt to the existing hardware. TUM and the LRZ are working closely together to build a software stack based on existing and proven solutions. Among others, MPI and its various implementations, SLURM, PMIx and DCDB are well-known parts of this Munich Software Stack.

“The basic stack is already running on the SuperMUC-NG supercomputer at the LRZ”, says Martin Schulz, Chair for Computer Architecture and Parallel Systems at the Technical University of Munich and Director at the Leibniz Supercomputing Centre. “Right now, we are engaged in two European research projects for further development of this stack on more heterogeneous, deeper integrated and dynamic systems, as they will become commonplace in the exascale era: REGALE and DEEP-SEA.” One of the foundations for the next generation of this software stack is the HPC PowerStack[1], an initiative co-founded by TUM for better standardization and homogenization of approaches to power- and energy-optimized systems.

EU projects address the questions

REGALE aims to define an open architecture, build a prototype system, and incorporate in this system appropriate sophistication in order to equip supercomputing systems with the mechanisms and policies for effective resource utilization and execution of complex applications. DEEP-SEA will deliver the programming environment for future European exascale systems, capable of adapting at all levels of the software stack. While the basic technologies will be implemented and used in DEEP-SEA, the control chain will play a major role in REGALE.

Both projects focus on making existing codes more dynamic so they can leverage existing accelerators: many codes today are static and might only be partially ready for more dynamic execution. This will require some refactoring and, in some cases, complete rewrites of certain parts of the codes. But it will also require novel and elaborate scheduling methods that must be developed by the HPC centres themselves. Part of the upcoming research in DEEP-SEA and REGALE will be to find ways to determine where targeted efforts on top of an existing software stack can yield the greatest results. To this end, agile development approaches will play a role: Continuous Integration with elaborate testing and automation is being established on BEAST (Bavarian Energy-, Architecture- and Software-Testbed) at the LRZ, the testbed for the Munich Software Stack.
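
What "making a code more dynamic" can mean on a small scale is illustrated by the following sketch: instead of a fixed, static distribution of work, a coordinator rank hands out tasks to worker ranks as they become free. It is a generic master-worker pattern written with mpi4py and invented task names; it is not code from DEEP-SEA, REGALE or the Munich Software Stack.

    # Generic master-worker sketch with mpi4py: rank 0 distributes tasks
    # dynamically to whichever worker rank becomes free next.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    WORK, STOP = 1, 2  # message tags

    def compute(item):
        # Placeholder for a real kernel, e.g. one simulation step.
        return item * item

    if rank == 0:
        tasks = list(range(32))  # hypothetical work items
        results = []
        pending = 0
        status = MPI.Status()
        # Seed every worker with one task (or tell it to stop right away).
        for worker in range(1, size):
            if tasks:
                comm.send(tasks.pop(), dest=worker, tag=WORK)
                pending += 1
            else:
                comm.send(None, dest=worker, tag=STOP)
        # Collect results and immediately hand out the next task.
        while pending:
            results.append(comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status))
            pending -= 1
            worker = status.Get_source()
            if tasks:
                comm.send(tasks.pop(), dest=worker, tag=WORK)
                pending += 1
            else:
                comm.send(None, dest=worker, tag=STOP)
        print(f"collected {len(results)} results")
    else:
        while True:
            status = MPI.Status()
            item = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
            if status.Get_tag() == STOP:
                break
            comm.send(compute(item), dest=0, tag=WORK)

Run, for example, with "mpirun -n 4 python dynamic_tasks.py"; the file name is arbitrary and the sketch needs at least two MPI ranks.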

Need for holistic approach

“Most research in the field of power and energy management today is done site-specific,” Schulz said. “We see little integration of the components; we have a lack of standardized interfaces that work on all layers of the software stack. In the end, this leads to suboptimal performance of the applications and increases the power needed by the system. With the Munich Software Stack, TUM and LRZ are working on an open, holistic, and scalable approach to an integrated power and energy management in order to get the most out of supercomputers to come.”

Research reports and publications

From supercomputing to citizen science, from visualisation to reliable data management and the optimisation of programming languages: here you will find scientific publications involving LRZ researchers. Explore the wealth of possible research topics at the LRZ.

Publications

En route to LRZ?

Do you want to help develop the IT technology of the future? Find purpose working at LRZ and join our team. We are regularly looking for new colleagues.

R&D projects with the LRZ

The LRZ participates in research projects – as a technical partner and a service provider. We explore which computing resources are needed to collect, process and store data, support the management of research data, and help develop tools, interfaces, software and platforms.

As far as possible, the LRZ also supports research groups in the application process for national or international research projects and during the execution phase.

Doing research with the LRZ

Do you have an idea for a research project but aren’t sure which institutes are eligible for funding or how to develop a plan? 

We would also be happy to advise you on management questions and help you with making applications and coordinating projects. Draw on our experience.

Dr. Megi Sharikadze

Research Project Manager

Dr. Stephan Hachinger

Head of the Research Team