Supermicro Introduces Rack Scale Plug-and-Play Liquid-Cooled AI SuperClusters for NVIDIA Blackwell and NVIDIA HGX H100/H200 - Radical Innovations in the AI Era to Make Liquid-Cooling Free with a Bonus
Supermicro has announced the launch of plug-and-play liquid-cooled AI SuperClusters optimized for NVIDIA's Blackwell and HGX H100/H200 GPUs.
These SuperClusters promise significant performance improvements, offering up to 20 PetaFLOPS on a single GPU, 4X faster AI training, and 30X better inference performance compared to the previous GPU generation. The new systems, designed for rapid deployment in AI data centers, are integrated with NVIDIA AI Enterprise and NIM microservices.
Supermicro's solutions aim to reduce data center power usage by up to 40%, making liquid cooling virtually free. At COMPUTEX 2024, new systems featuring NVIDIA Blackwell GPUs, including 10U air-cooled and 4U liquid-cooled options, will be showcased.
Supermicro’s AI SuperClusters leverage NVIDIA AI Enterprise for seamless, scalable AI deployment, supporting various generative AI models. These systems are tailored for LLM training, deep learning, and high-volume inference, providing enterprises with efficient, easy-to-deploy AI infrastructure.
Positive
- Up to 20 PetaFLOPS on a single GPU
- 4X improvement in AI training performance
- 30X better inference performance
- Up to 40% reduction in data center power usage
- Integration with NVIDIA AI Enterprise and NIM microservices
- Rapid deployment and scalable AI infrastructure
- Support for a wide range of generative AI models
Negative
- Potentially high initial investment for new infrastructure
- Dependence on NVIDIA's ecosystem for optimal performance
- Possible technical challenges in transitioning to new systems
Insights
The introduction of liquid-cooled AI SuperClusters by Supermicro represents a significant advancement in data center technology tailored for AI applications. Liquid cooling is more efficient than traditional air cooling, reducing the energy needed to remove heat and cutting overall data center power usage, which Supermicro estimates at up to 40%.
Supermicro's alignment with NVIDIA's AI Enterprise platform ensures compatibility with industry-leading AI tools, which makes the setup process more straightforward for enterprises. The integration with NVIDIA Blackwell GPUs, known for their high performance and efficiency, particularly in AI training and inference, means end-users can expect enhanced performance metrics without significantly increasing costs.
The emphasis on scalability and ease of deployment is another key highlight. By offering plug-and-play solutions, Supermicro reduces the complexity and time required to set up AI infrastructure, making it accessible even to smaller firms looking to scale their AI capabilities rapidly.
From a financial perspective, Supermicro's introduction of the liquid-cooled AI SuperClusters is likely to enhance the company's market position. Liquid-cooling as a selling point can attract clients looking to optimize operational costs through energy efficiency. This aligns with the broader industry trend towards sustainable tech solutions.
With the rising demand for AI capabilities, especially in generative AI models, the deployment of these advanced data center solutions positions Supermicro to capitalize on this growing market. This could translate into increased revenue streams as more enterprises adopt AI-driven processes.
The collaboration with NVIDIA further strengthens Supermicro's product offerings. Given NVIDIA's strong reputation and technological advancements, this partnership could lead to higher adoption rates and a competitive edge in the AI and data center markets.
Analyzing the market impact, Supermicro's new offerings cater to the growing need for AI infrastructure. As enterprises across industries, from healthcare to finance, increasingly seek AI solutions, the need for efficient and scalable infrastructure is critical. Supermicro's liquid-cooled AI SuperClusters meet this need, potentially driving significant market demand.
The focus on rapid deployment and scalability addresses a key pain point for companies: the time and resources needed to implement AI infrastructure. By providing turnkey solutions, Supermicro lowers the barrier to entry, enabling faster adoption and experimentation with AI technologies.
Additionally, the emphasis on compatibility with open-source AI models and platforms like Meta's Llama-3 and Mistral's Mixtral reflects a strategic move to appeal to a broader customer base, including those invested in open-source AI development.
Generative AI SuperClusters, Integrated with NVIDIA AI Enterprise and NIM Microservices, Offer Instant ROI Gains and More AI Work per Dollar Through a Massively Scalable Compute Unit, Simplifying AI for Rapid Deployment
"Supermicro continues to lead the industry in creating and deploying AI solutions with rack-scale liquid-cooling," said Charles Liang, president and CEO of Supermicro. "Data centers with liquid-cooling can be virtually free and provide a bonus value for customers, with the ongoing reduction in electricity usage. Our solutions are optimized with NVIDIA AI Enterprise software for customers across industries, and we deliver global manufacturing capacity with world-class efficiency. The result is that we can reduce the time to delivery of our liquid-cooled or air-cooled turnkey clusters with NVIDIA HGX H100 and H200, as well as the upcoming B100, B200, and GB200 solutions. From cold plates to CDUs to cooling towers, our rack-scale total liquid cooling solutions can reduce ongoing data center power usage by up to 40%."
Visit www.supermicro.com/ai for more information.
At COMPUTEX 2024, Supermicro is revealing its upcoming systems optimized for the NVIDIA Blackwell GPU, including a 10U air-cooled and a 4U liquid-cooled NVIDIA HGX B200-based system. In addition, Supermicro will be offering an 8U air-cooled NVIDIA HGX B100 system and Supermicro's NVIDIA GB200 NVL72 rack containing 72 interconnected GPUs with NVIDIA NVLink Switches, as well as the new NVIDIA MGX™ systems supporting NVIDIA H200 NVL PCIe GPUs and the newly announced NVIDIA GB200 NVL2 architecture.
"Generative AI is driving a reset of the entire computing stack — new data centers will be GPU-accelerated and optimized for AI," said Jensen Huang, founder and CEO of NVIDIA. "Supermicro has designed cutting-edge NVIDIA accelerated computing and networking solutions, enabling the trillion-dollar global data centers to be optimized for the era of AI."
The rapid development of large language models and the continual introduction of new open-source models such as Meta's Llama-3 and Mistral's Mixtral 8x22B make today's state-of-the-art AI models more accessible for enterprises. Simplifying AI infrastructure and providing access in the most cost-efficient way is paramount to supporting the current breakneck speed of the AI revolution. The Supermicro cloud-native AI SuperCluster bridges the gap between the instant access of the cloud and on-premises portability by leveraging NVIDIA AI Enterprise, allowing AI projects to move from pilot to production seamlessly at any scale. This provides the flexibility to run anywhere with securely managed data, including self-hosted systems or on-premises large data centers.
With enterprises across industries rapidly experimenting with generative AI use cases, Supermicro collaborates closely with NVIDIA to ensure a seamless and flexible transition from experimentation and piloting AI applications to production deployment and large-scale data center AI. This result is achieved through rack and cluster-level optimization with the NVIDIA AI Enterprise software platform, enabling a smooth journey from initial exploration to scalable AI implementation.
Managed services can constrain infrastructure choices, data sharing, and control over generative AI strategy. NVIDIA NIM microservices, part of NVIDIA AI Enterprise, offer the benefits of both managed generative AI and open-source deployment without those drawbacks. The versatile inference runtime with microservices accelerates generative AI deployment across a wide range of models, from open-source models to NVIDIA's foundation models. In addition, NVIDIA NeMo™ enables custom model development with data curation, advanced customization, and retrieval-augmented generation (RAG) for enterprise-ready solutions. Combined with Supermicro's NVIDIA AI Enterprise ready SuperClusters, NVIDIA NIM provides the fastest path to scalable, accelerated generative AI production deployments.
Supermicro's current generative AI SuperCluster offerings include:
- Liquid-cooled Supermicro NVIDIA HGX H100/H200 SuperCluster with 256 H100/H200 GPUs as a scalable unit of compute in 5 racks (including 1 dedicated networking rack)
- Air-cooled Supermicro NVIDIA HGX H100/H200 SuperCluster with 256 HGX H100/H200 GPUs as a scalable unit of compute in 9 racks (including 1 dedicated networking rack)
- Supermicro NVIDIA MGX GH200 SuperCluster with 256 GH200 Grace™ Hopper Superchips as a scalable unit of compute in 9 racks (including 1 dedicated networking rack)
Supermicro SuperClusters are NVIDIA AI Enterprise ready with NVIDIA NIM microservices and the NVIDIA NeMo platform for end-to-end generative AI customization. They are optimized for NVIDIA Quantum-2 InfiniBand as well as the new NVIDIA Spectrum-X Ethernet platform, with 400Gb/s of networking speed per GPU for scaling out to large clusters with tens of thousands of GPUs.
Supermicro's upcoming SuperCluster offerings include:
- Supermicro NVIDIA HGX B200 SuperCluster, liquid-cooled
- Supermicro NVIDIA HGX B100/B200 SuperCluster, air-cooled
- Supermicro NVIDIA GB200 NVL72 or NVL36 SuperCluster, liquid-cooled
Supermicro's SuperCluster solutions are optimized for LLM training, deep learning, and high-volume, large-batch-size inference. Supermicro's L11 and L12 validation testing and on-site deployment service provide customers with a seamless experience. Customers receive plug-and-play scalable units for easy deployment in a data center and faster time to results.
About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions, founded and operating in San Jose, California, USA.
Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.
All other brands, names, and trademarks are the property of their respective owners.
SMCI-F
View original content to download multimedia:https://www.prnewswire.com/news-releases/supermicro-introduces-rack-scale-plug-and-play-liquid-cooled-ai-superclusters-for-nvidia-blackwell-and-nvidia-hgx-h100h200--radical-innovations-in-the-ai-era-to-make-liquid-cooling-free-with-a-bonus-302163611.html
SOURCE Super Micro Computer, Inc.