Supermicro Expands AI Solutions with the Upcoming NVIDIA HGX H200 and MGX Grace Hopper Platforms Featuring HBM3e Memory
- Supermicro's expansion into AI with support for NVIDIA's latest GPUs positions the company at the forefront of AI technology, enabling faster deployment of generative AI and HPC applications.
- The introduction of liquid-cooled 8-GPU systems demonstrates Supermicro's commitment to reducing energy costs and environmental impact, aligning with the growing demand for green computing in data centers.
Supermicro Extends 8-GPU, 4-GPU, and MGX Product Lines with Support for the NVIDIA HGX H200 and Grace Hopper Superchip for LLM Applications with Faster and Larger HBM3e Memory
New Innovative Supermicro Liquid-Cooled 4U Server with NVIDIA HGX 8-GPUs Doubles the Computing Density Per Rack, Supports Up to 80kW/Rack, and Reduces TCO
Supermicro is also introducing the industry's highest-density server, an NVIDIA HGX H100 8-GPU system in a liquid-cooled 4U chassis that utilizes the latest Supermicro liquid-cooling solution. The industry's most compact high-performance GPU server enables data center operators to reduce footprints and energy costs while offering the highest AI training capacity available in a single rack. With these highest-density GPU systems, organizations can reduce their TCO by leveraging cutting-edge liquid cooling.
"Supermicro partners with NVIDIA to design the most advanced systems for AI training and HPC applications," said Charles Liang, president and CEO of Supermicro. "Our building block architecture enables us to be first to market with the latest technology, allowing customers to deploy generative AI faster than ever before. We can deliver these new systems to customers faster with our worldwide manufacturing facilities. The new systems, using the NVIDIA H200 GPU with NVIDIA® NVLink™ and NVSwitch™ high-speed GPU-GPU interconnects at 900GB/s, now provide up to 1.1TB of high-bandwidth HBM3e memory per node in our rack scale AI solutions to deliver the highest performance of model parallelism for today's LLMs and generative AI. We are also excited to offer the world's most compact NVIDIA HGX 8-GPU liquid cooled server, which doubles the density of our rack scale AI solutions and reduces energy costs to achieve green computing for today's accelerated data center."
Learn more about the Supermicro servers with NVIDIA GPUs
Supermicro designs and manufactures a broad portfolio of AI servers with different form factors. The popular 8U and 4U Universal GPU systems featuring four-way and eight-way NVIDIA HGX H100 GPUs are now drop-in ready for the new H200 GPUs to train even larger language models in less time. Each NVIDIA H200 GPU contains 141GB of memory with a bandwidth of 4.8TB/s.
"Supermicro's upcoming server designs using NVIDIA HGX H200 will help accelerate generative AI and HPC workloads, so that enterprises and organizations can get the most out of their AI infrastructure," said Dion Harris, director of data center product solutions for HPC, AI, and quantum computing at NVIDIA. "The NVIDIA H200 GPU with high-speed HBM3e memory will be able to handle massive amounts of data for a variety of workloads."
Additionally, the recently launched Supermicro MGX servers with the NVIDIA GH200 Grace Hopper Superchips are engineered to incorporate the NVIDIA H200 GPU with HBM3e memory.
The new NVIDIA GPUs accelerate today's and tomorrow's large language models (LLMs) with hundreds of billions of parameters. They allow these models to fit in more compact and efficient clusters, so generative AI can be trained in less time, and they let multiple larger models fit in a single system for real-time LLM inference serving generative AI to millions of users.
At SC23, Supermicro is showcasing its latest offering: a 4U Universal GPU System featuring the eight-way NVIDIA HGX H100 with its newest liquid-cooling innovations, which further improve density and efficiency to drive the evolution of AI. With Supermicro's industry-leading GPU and CPU cold plates, CDU (cooling distribution unit), and CDM (cooling distribution manifold) designed for green computing, the new liquid-cooled 4U Universal GPU System is also ready for the eight-way NVIDIA HGX H200. It will dramatically reduce data center footprints, power costs, and deployment hurdles through Supermicro's fully integrated liquid-cooling rack solutions and its L10, L11, and L12 validation testing.
For more information, visit the Supermicro booth at SC23
About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California.
Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.
All other brands, names, and trademarks are the property of their respective owners.
View original content to download multimedia: https://www.prnewswire.com/news-releases/supermicro-expands-ai-solutions-with-the-upcoming-nvidia-hgx-h200-and-mgx-grace-hopper-platforms-featuring-hbm3e-memory-301985332.html
SOURCE Super Micro Computer, Inc.