Leibniz Supercomputing Center Accelerates AI Innovation in Bavaria with Next-Generation AI System from Cerebras Systems and Hewlett Packard Enterprise

GARCHING, Germany–(BUSINESS WIRE)–The Leibniz Supercomputing Center (LRZ), Cerebras Systems and Hewlett Packard Enterprise (HPE) today announced the joint development and delivery of a new system featuring next-generation AI technologies to dramatically accelerate research science and AI innovation in Bavaria.

The new system is funded by the Free State of Bavaria as part of the Hightech Agenda, a program dedicated to strengthening the tech ecosystem in Bavaria to fuel the region’s mission to become an international AI hotspot. The new system is also an additional resource for Germany’s National Computing Center and is part of LRZ’s Future Computing program, which represents a portfolio of heterogeneous computing architectures across CPUs, GPUs, FPGAs and ASICs.

Empowering the Bavarian scientific community to accelerate discoveries and achieve breakthroughs

The new system is expected to be delivered this summer and will be hosted at LRZ, an institute of the Bavarian Academy of Sciences and Humanities (BAdW).

The system will be used by the local scientific and engineering communities to support various research use cases. Identified applications include natural language processing (NLP); medical image processing, involving innovative algorithms to analyze medical images and computer-aided capabilities to accelerate diagnoses and prognoses; and computational fluid dynamics (CFD) to advance understanding in areas such as aerospace engineering and manufacturing.

Delivering next-generation AI with scalable and accelerated compute capabilities

The new system is specifically designed to process large data sets to tackle complex scientific research. It comprises the HPE Superdome Flex server and the Cerebras CS-2 system, making it the first solution in Europe to leverage the Cerebras CS-2 system. The HPE Superdome Flex server offers a modular, scalable architecture and the large in-memory processing capacity required to handle large volumes of data. Additionally, its data pre-processing and post-processing capabilities for AI model training and inference are ideal for supporting the Cerebras CS-2 system, which delivers the deep learning performance of hundreds of graphics processing units (GPUs) with the ease of programming a single node. Powered by the largest processor ever built – the Cerebras Wafer-Scale Engine 2 (WSE-2), which is 56 times larger than its nearest competitor – the CS-2 offers more AI-optimized compute cores, faster memory and more fabric bandwidth than any other deep learning processor in existence.

“Currently, we are seeing demand for AI computing double every three to four months among our users. This promises much greater efficiency in data processing and thus faster scientific breakthroughs,” says Prof. Dr. Dieter Kranzlmüller, Director of LRZ. “As a university computing center and a national center for supercomputing, we provide researchers with advanced and reliable computing services for their science. To ensure optimal use of the system, we will work closely with our users and our partners Cerebras and HPE to identify ideal use cases in the community and help drive breakthrough results.”

Cerebras CS-2 provides the largest AI chip with 850,000 compute cores

AI methods and machine learning demand significant computing power. Currently, the complexity of the neural networks used to analyze large volumes of data doubles within a few months. To date, however, these applications have mostly run on general-purpose or graphics processors (CPUs and GPUs).

“We founded Cerebras to revolutionize computing,” said Andrew Feldman, CEO and co-founder of Cerebras Systems. “We are proud to partner with LRZ and HPE to give researchers in Bavaria access to lightning-fast AI, enabling them to try new hypotheses, train large language models, and ultimately advance scientific discovery.”

The Cerebras WSE-2 packs 2.6 trillion transistors and 850,000 AI-optimized compute cores onto 46,225 square millimeters of silicon, along with 40 gigabytes of evenly distributed on-chip memory and fast interconnects that move data at 220 petabytes per second. This allows the WSE-2 to keep all the parameters of multilayer neural networks on a single chip during execution, which reduces computation time and data movement. To date, the CS-2 system is in use at a number of US research facilities and companies and is proving particularly effective for image and pattern recognition and NLP. Water cooling provides additional efficiency by reducing energy consumption.

Offering a powerful system and software for AI development

To support the Cerebras CS-2 system, the HPE Superdome Flex server offers massive memory capacity and unprecedented compute scalability to handle the data-intensive machine learning projects the CS-2 targets. The server also manages and schedules jobs based on the needs of AI applications, enables cloud access, and organizes larger research datasets. Additionally, it includes a software stack with programs for creating AI procedures and models.

“We are excited to expand our collaboration with Leibniz Supercomputing Center (LRZ) by delivering next-generation computing technology to its scientific community,” said Justin Hotard, executive vice president and general manager, HPC & AI, HPE. “Through our work with LRZ and Cerebras, we are happy to support the next wave of scientific and technical innovation in Germany. As AI and machine learning become more prevalent and we enter the age of insight, highly optimized systems such as LRZ’s new system will accelerate scientific breakthroughs for the good of humanity.”

In addition to AI workloads, the combined technologies of HPE and Cerebras will also be considered for more traditional HPC workloads to support larger, memory-intensive modeling and simulation needs.

“The future of computing is becoming more complex, with systems increasingly heterogeneous and tailored to specific applications. We should stop thinking in terms of HPC or AI systems,” says Laura Schulz, Head of Strategy at LRZ. “AI methods work on CPU-based systems like SuperMUC-NG, and conversely, high-performance computing algorithms can achieve performance gains on systems like the Cerebras CS-2. We are working towards a future in which the underlying computation is complex but does not impact the user, and in which the technology – whether HPC, AI or quantum – is available and accessible to our researchers in pursuit of scientific discovery.”

To learn more about future updates, please visit: https://www.lrz.de.

About the Leibniz Supercomputing Center

The Leibniz Supercomputing Center (LRZ) stands at the forefront of its field as a world-class computing service and user facility serving top universities in Munich as well as research institutes in Bavaria, Germany, and Europe. As an institute of the Bavarian Academy of Sciences and Humanities, LRZ has been providing a robust and holistic IT infrastructure to its users in the scientific community for nearly sixty years. It offers a full range of resources, services, consulting and support, ranging from email, web servers and Internet access to virtual machines, cloud solutions, data storage and the Munich Scientific Network (MWN). Home to SuperMUC-NG, LRZ is part of Germany’s Gauss Centre for Supercomputing (GCS) and forms part of the country’s backbone for advanced research and discovery through high-performance computing (HPC). In addition to current systems, LRZ’s Future Computing Group focuses on evaluating emerging exascale-class architectures and technologies, developing highly scalable machine learning and artificial intelligence applications, and integrating quantum acceleration with supercomputing systems.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to build a new class of computing systems, designed with the sole purpose of accelerating AI and changing the future of AI work forever. Our flagship product, the CS-2 system, is powered by the world’s largest processor – the 850,000-core Cerebras WSE-2, enabling customers to accelerate their deep learning work by orders of magnitude over graphics processing units.

About Hewlett Packard Enterprise

Hewlett Packard Enterprise (NYSE: HPE) is the global edge-to-cloud company that helps organizations accelerate outcomes by unlocking the value of all their data, everywhere. Building on decades of reimagining the future and innovating to improve the way people live and work, HPE delivers unique, open, and intelligent technology solutions as a service. With offerings spanning cloud services, compute, high-performance computing and AI, intelligent edge, software, and storage, HPE delivers a consistent experience across all clouds and edges, helping customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: www.hpe.com.