Leibniz Supercomputing Centre Accelerates AI Innovation in Bavaria with Next-Generation AI System from Cerebras Systems and Hewlett Packard Enterprise
Rhea-AI Impact: Neutral
Rhea-AI Sentiment: Very Positive
Rhea-AI Summary
Hewlett Packard Enterprise (HPE) announced a collaboration with the Leibniz Supercomputing Centre (LRZ) and Cerebras Systems to deliver a groundbreaking AI system, enhancing scientific research in Bavaria. The system integrates the HPE Superdome Flex server and Cerebras CS-2 system, providing advanced capabilities for processing large datasets, crucial for AI-driven applications. Funded by Bavaria's Hightech Agenda, it aims to establish the region as a leading AI hotspot. Expected to be operational by summer, it supports diverse research initiatives including natural language processing and medical image analysis.
Positive
Collaboration with LRZ and Cerebras enables cutting-edge AI capabilities.
Supports diverse applications like natural language processing and medical imaging.
Funded by Bavaria's Hightech Agenda, enhancing the tech ecosystem.
First installation of the Cerebras CS-2 system in Europe.
Negative
None.
Leibniz Supercomputing Centre’s new advanced AI system will enable researchers to accelerate initiatives around machine learning, deep learning and neural networks and to process large amounts of data more quickly for advanced scientific research using the combined power of the Cerebras CS-2 system and the HPE Superdome Flex server
The new system is funded by the Free State of Bavaria through the Hightech Agenda, a program dedicated to strengthening the tech ecosystem in Bavaria to fuel the region’s mission to become an international AI hotspot. The new system is also an additional resource for Germany’s national supercomputing center, and part of LRZ’s Future Computing Program, which represents a portfolio of heterogeneous computing architectures across CPUs, GPUs, FPGAs and ASICs.
Empowering Bavaria’s scientific community to speed discovery and make breakthroughs
The new system is expected to be delivered this summer and will be hosted at LRZ, an institute of the Bavarian Academy of Sciences and Humanities (BAdW).
The system will be used by local scientific and engineering communities to support a variety of research use cases. Identified applications include natural language processing (NLP); medical image processing, which applies innovative algorithms and computer-aided capabilities to analyze medical images and accelerate diagnosis and prognosis; and computational fluid dynamics (CFD) to advance understanding in areas such as aerospace engineering and manufacturing.
Delivering next-generation AI with scalable and accelerated compute features
The new system is purpose-built to process large datasets to tackle complex scientific research. It comprises the HPE Superdome Flex server and the Cerebras CS-2 system, making it the first solution in Europe to leverage the Cerebras CS-2 system. The HPE Superdome Flex server delivers a modular, scale-out solution to meet computing demands and features specialized capabilities for the large, in-memory processing required to handle vast volumes of data. Additionally, the HPE Superdome Flex server’s pre- and post-processing capabilities for AI model training and inference are ideal for supporting the Cerebras CS-2 system, which delivers the deep learning performance of hundreds of graphics processing units (GPUs) with the programming ease of a single node. Powered by the largest processor ever built – the Cerebras Wafer-Scale Engine 2 (WSE-2), which is 56 times larger than the nearest competitor – the CS-2 delivers more AI-optimized compute cores, faster memory, and more fabric bandwidth than any other deep learning processor in existence.
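To put the "56 times larger" figure in context, a rough back-of-the-envelope check follows; the implied competitor die size is our own inference, not a figure from the press release.

    # Illustrative sanity check of the "56 times larger" claim (not from the release).
    wse2_area_mm2 = 46_225            # WSE-2 die area, stated later in this article
    size_ratio = 56                   # "56 times larger than the nearest competitor"
    implied_competitor_mm2 = wse2_area_mm2 / size_ratio
    print(f"Implied competitor die area: ~{implied_competitor_mm2:.0f} mm^2")
    # Roughly 825 mm^2, consistent with the largest monolithic GPU dies of the period.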
"Currently, we observe that AI compute demand is doubling every three to four months with our users. With the high integration of processors, memory and on-board networks on a single chip, Cerebras enables high performance and speed. This promises significantly more efficiency in data processing and thus faster breakthrough of scientific findings," says Prof. Dr. Dieter Kranzlmüller, Director of the LRZ. "As an academic computing and national supercomputing centre, we provide researchers with advanced and reliable IT services for their science. To ensure optimal use of the system, we will work closely with our users and our partners Cerebras and HPE to identify ideal use cases in the community and to help achieve groundbreaking results."
Cerebras CS-2 delivers the largest AI chip with 850,000 computing cores
AI and machine learning methods demand substantial computing power. Currently, the complexity of the neural networks used to analyze large volumes of data is doubling in a matter of months. To date, however, such applications have been run primarily on general-purpose processors and graphics processors (CPUs and GPUs).
"We founded Cerebras to revolutionize compute," said Andrew Feldman, CEO and Co-Founder of Cerebras Systems. "We’re proud to partner with LRZ and HPE to give Bavaria’s researchers access to blazing fast AI, enabling them to try new hypotheses, train large language models and ultimately advance scientific discovery."
The Cerebras WSE-2 is 46,225 square millimeters of silicon, housing 2.6 trillion transistors and 850,000 AI-optimized computational cores, as well as evenly distributed memory holding up to 40 gigabytes of data and fast interconnects that move that data across the wafer at 220 petabytes per second. This allows the WSE-2 to keep all the parameters of multi-layered neural networks on one chip during execution, which in turn reduces computation time and data movement. To date, the CS-2 system is in use at a number of U.S. research facilities and enterprises and is proving particularly effective in image and pattern recognition and NLP. Additional efficiency is provided by water cooling, which reduces power consumption.
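For a rough sense of what keeping "all the parameters of multi-layered neural networks on one chip" can mean in practice, the sketch below estimates how many model weights 40 gigabytes could hold; the 2-byte (FP16) weight size is an assumption, and activations, optimizer state and other runtime overheads are ignored.

    # Rough capacity estimate; assumes 2-byte FP16 weights, ignores all other overheads.
    onchip_memory_bytes = 40 * 10**9    # 40 GB of on-wafer memory, per the article
    bytes_per_param = 2                 # FP16 weight size (assumption)
    max_params = onchip_memory_bytes // bytes_per_param
    print(f"Upper bound on resident parameters: ~{max_params / 1e9:.0f} billion")
    # About 20 billion parameters could fit as weights alone under these assumptions.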
Offering a powerful system and software for AI development
To support the Cerebras CS-2 system, the HPE Superdome Flex server provides large-memory capabilities and unprecedented compute scalability to process the massive, data-intensive machine learning projects that the Cerebras CS-2 system targets. The HPE Superdome Flex server also manages and schedules jobs according to AI application needs, enables cloud access, and stages larger research datasets. In addition, the HPE Superdome Flex server includes a software stack with programs to build AI procedures and models.
“We are excited to extend our collaboration with Leibniz Supercomputing Centre (LRZ) by supplying next generation computing technology to its scientific community,” said Justin Hotard, executive vice president and general manager, HPC & AI, at HPE. “Through our work with LRZ and Cerebras, we are pleased to support the next wave of scientific and engineering innovation in Germany. As AI and machine learning become more prevalent and we move into the age of insight, highly optimized systems such as LRZ’s new system will accelerate scientific breakthroughs for the good of humanity.”
In addition to AI workloads, the combined technologies from HPE and Cerebras will also be considered for more traditional HPC workloads in support of larger, memory-intensive modeling and simulation needs.
"The future of computing is becoming more complex, with systems becoming more heterogeneous and tuned to specific applications. We should stop thinking in terms of HPC or AI systems," says Laura Schulz, Head of Strategy at LRZ. "AI methods work on CPU-based systems like SuperMUC-NG, and conversely, high-performance computing algorithms can achieve performance gains on systems like Cerebras. We’re working towards a future where the underlying compute is complex, but doesn’t impact the user; that the technology–whether HPC, AI or quantum–is available and approachable for our researchers in pursuit of their scientific discovery.”
The Leibniz Supercomputing Centre (LRZ) proudly stands at the forefront of its field as a world-class IT service and computing user facility serving Munich’s top universities as well as research institutions in Bavaria, Germany and Europe. As an institute of the Bavarian Academy of Sciences and Humanities, LRZ has provided a robust, holistic IT infrastructure for its users throughout the scientific community for nearly sixty years. It offers the complete range of resources, services, consulting and support – from email, web servers and Internet access to virtual machines, cloud solutions, data storage and the Munich Scientific Network (MWN). Home to SuperMUC-NG, LRZ is part of Germany’s Gauss Centre for Supercomputing (GCS) and serves as part of the nation’s backbone for the advanced research and discovery possible through high-performance computing (HPC). In addition to current systems, LRZ’s Future Computing Group focuses on the evaluation of emerging Exascale-class architectures and technologies, development of highly scalable machine learning and artificial intelligence applications, and system integration of quantum acceleration with supercomputing systems.
About Cerebras Systems
Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to build a new class of computer system, designed for the singular purpose of accelerating AI and changing the future of AI work forever. Our flagship product, the CS-2 system, is powered by the world’s largest processor – the 850,000-core Cerebras WSE-2 – and enables customers to accelerate their deep learning work by orders of magnitude over graphics processing units.
About Hewlett Packard Enterprise
Hewlett Packard Enterprise (NYSE: HPE) is the global edge-to-cloud company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. Built on decades of reimagining the future and innovating to advance the way people live and work, HPE delivers unique, open and intelligent technology solutions as a service. With offerings spanning Cloud Services, Compute, High Performance Computing & AI, Intelligent Edge, Software, and Storage, HPE provides a consistent experience across all clouds and edges, helping customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: www.hpe.com