STOCK TITAN

Supermicro Grows AI Optimized Product Portfolio with a New Generation of Systems and Rack Architectures Featuring New NVIDIA Blackwell Architecture Solutions

Rhea-AI Impact: Low
Rhea-AI Sentiment: Very Positive
Tags: AI
Rhea-AI Summary
Supermicro, Inc. announces new AI systems for large-scale generative AI featuring NVIDIA's next-generation data center products. The company is enhancing its current systems to support the latest NVIDIA GPUs, reducing time to delivery. Supermicro's focus on building block architecture and rack-scale IT for AI enables the design of next-gen systems optimized for NVIDIA Blackwell GPUs. The company anticipates being first-to-market in deploying full rack clusters with these GPUs, offering groundbreaking performance for AI training and inference.

Insights

The announcement by Supermicro regarding the integration of NVIDIA's next-generation GPUs and CPUs into their AI systems represents a significant advancement in the AI and high-performance computing (HPC) market. Supermicro's emphasis on liquid-cooled systems and rack-scale solutions for AI is indicative of the industry's push towards more efficient and powerful computational capabilities, particularly for generative AI and large-scale AI training.

From a market perspective, the readiness to deploy NVIDIA's Blackwell GPUs at scale could position Supermicro as a front-runner in the AI infrastructure space. This move is likely to attract interest from organizations involved in AI research, cloud computing services and those requiring substantial computational power, potentially increasing Supermicro's market share in these sectors.

However, the impact on Supermicro's financial performance will depend on the adoption rate of these new systems. The high costs associated with cutting-edge technology may limit the initial customer base to larger enterprises and research institutions with sufficient budgets. Over time, as the technology becomes more mainstream and costs decrease, a broader market could emerge.

Supermicro's adoption of NVIDIA's GB200 Grace Blackwell Superchip and B200 Tensor Core GPUs marks a technological leap in data center capabilities. The integration of advanced liquid-cooling technologies to manage increased thermal design power (TDP) and the emphasis on high-bandwidth memory are critical for meeting the demands of next-generation AI workloads. These developments suggest a shift towards more energy-efficient and thermally optimized data centers.

The doubling of NVLink interconnect speeds and the 3X faster training results for large language models (LLMs) represent a substantial improvement over the previous generation. Such advancements are likely to accelerate the development of AI models, potentially leading to breakthroughs in natural language processing and other AI-driven fields.

For businesses relying on AI, these systems could provide a competitive edge by enabling faster and more cost-effective model training and inference. However, the adoption of new technologies often requires significant capital investment and expertise, which could be a barrier for some organizations.

The strategic partnership between Supermicro and NVIDIA, as highlighted in this announcement, has the potential to enhance shareholder value for both companies. Supermicro's proactive move to be first-to-market with systems featuring NVIDIA's latest GPUs could lead to an early adopter advantage in a rapidly growing AI market, which is a positive signal for investors.

Investors should monitor the market response to Supermicro's new offerings, as successful deployment and customer satisfaction can result in increased sales, potentially improving the company's financial metrics such as revenue and earnings per share (EPS). Additionally, the development of rack-scale solutions signifies Supermicro's commitment to scalability, an important factor for clients looking to expand their AI capabilities.

While the announcement is promising, it is also important to consider the capital expenditure required for research and development (R&D) and the marketing of these new systems. The return on investment (ROI) for Supermicro will be contingent on the balance between these expenses and the revenue generated from the new product line.

Powerful and Energy-Efficient Solutions for Large-Scale CSPs and NSPs Incorporate a Full Stack of Next-Gen NVIDIA GPUs and CPUs with the NVIDIA Quantum-X800 Platform and NVIDIA AI Enterprise 5.0

SAN JOSE, Calif., March 18, 2024 /PRNewswire/ -- NVIDIA GTC 2024 -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing new AI systems for large-scale generative AI featuring NVIDIA's next generation of data center products, including the latest NVIDIA GB200 Grace™ Blackwell Superchip and the NVIDIA B200 and B100 Tensor Core GPUs. Supermicro is enhancing its current NVIDIA HGX™ H100/H200 8-GPU systems to be drop-in ready for the NVIDIA HGX™ B100 8-GPU and to support the B200, reducing time to delivery. Additionally, Supermicro will further strengthen its broad NVIDIA MGX™ systems lineup with new offerings featuring the NVIDIA GB200, including the NVIDIA GB200 NVL72, a complete rack-level solution with 72 NVIDIA Blackwell GPUs. Supermicro is also adding new systems to its lineup, including the 4U NVIDIA HGX B200 8-GPU liquid-cooled system.

"Our focus on building block architecture and rack-scale Total IT for AI has enabled us to design next-generation systems for the enhanced requirements of NVIDIA Blackwell architecture-based GPUs, such as our new 4U liquid-cooled NVIDIA HGX B200 8-GPU based system, as well as our fully integrated direct-to-chip liquid cooled racks with NVIDIA GB200 NVL72," said Charles Liang, president and CEO of Supermicro. "These new products are built upon Supermicro and NVIDIA's proven HGX and MGX system architecture, optimizing for the new capabilities of NVIDIA Blackwell GPUs. Supermicro has the expertise to incorporate 1kW GPUs into a wide range of air-cooled and liquid-cooled systems, as well as the rack scale production capacity of 5,000 racks/month and anticipates being first-to-market in deploying full rack clusters featuring NVIDIA Blackwell GPUs."

Supermicro's direct-to-chip liquid cooling technology will accommodate the increased thermal design power (TDP) of the latest GPUs and deliver the full potential of the NVIDIA Blackwell GPUs. Supermicro's HGX and MGX systems with NVIDIA Blackwell are the building blocks for the future of AI infrastructure and will deliver groundbreaking performance for multi-trillion-parameter AI training and real-time AI inference.

A wide range of GPU-optimized Supermicro systems will be ready for the NVIDIA Blackwell B200 and B100 Tensor Core GPU and validated for the latest NVIDIA AI Enterprise software, which adds support for NVIDIA NIM inference microservices. The Supermicro systems include:

  • NVIDIA HGX B100 8-GPU and HGX B200 8-GPU systems
  • 5U/4U PCIe GPU system with up to 10 GPUs
  • SuperBlade® with up to 20 B100 GPUs in 8U enclosures and up to 10 B100 GPUs in 6U enclosures
  • 2U Hyper with up to 3 B100 GPUs
  • Supermicro 2U x86 MGX systems with up to 4 B100 GPUs

For training massive foundational AI models, Supermicro is prepared to be the first-to-market to release NVIDIA HGX B200 8-GPU and HGX B100 8-GPU systems. These systems feature 8 NVIDIA Blackwell GPUs connected via a high-speed fifth-generation NVIDIA® NVLink® interconnect at 1.8TB/s, doubling the previous generation performance, with 1.5TB total high-bandwidth memory and will deliver 3X faster training results for LLMs, such as the GPT-MoE-1.8T model, compared to the NVIDIA Hopper architecture generation. These systems feature advanced networking to scale to clusters, supporting both NVIDIA Quantum-2 InfiniBand and NVIDIA Spectrum-X Ethernet options with a 1:1 GPU-to-NIC ratio.
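The interconnect figures above can be cross-checked with simple arithmetic. A minimal sketch follows; the 0.9 TB/s per-GPU figure for the previous Hopper-generation NVLink is an assumption based on publicly stated NVIDIA specifications, not on this release:

```python
# Sanity-check the interconnect figures quoted above.
# Assumption: Hopper-generation (4th-gen) NVLink runs at 0.9 TB/s per GPU;
# that number comes from public NVIDIA specs, not from this release.
hopper_nvlink_tb_s = 0.9
blackwell_nvlink_tb_s = 1.8  # fifth-generation NVLink, per the release

# "doubling the previous generation performance"
assert blackwell_nvlink_tb_s == 2 * hopper_nvlink_tb_s

# A 1:1 GPU-to-NIC ratio on an 8-GPU system implies 8 NICs per node.
gpus_per_system = 8
nics_per_system = gpus_per_system  # 1:1 ratio
assert nics_per_system == 8
```

The 1:1 GPU-to-NIC ratio matters at cluster scale: each GPU gets a dedicated network path, so scale-out bandwidth grows linearly with GPU count.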

"Supermicro continues to bring to market an amazing range of accelerated computing platform servers that are tuned for AI training and inference that can address any need in the market today, said Kaustubh Sanghani, vice president of GPU product management at NVIDIA. "We work closely with Supermicro to bring the most optimized solutions to customers."

For the most demanding LLM inference workloads, Supermicro is releasing several new MGX systems built with the NVIDIA GB200 Grace Blackwell Superchip, which combines an NVIDIA Grace CPU with two NVIDIA Blackwell GPUs. Supermicro's NVIDIA MGX with GB200 systems will deliver a vast leap in performance for AI inference, with up to 30x speed-ups compared to the NVIDIA HGX H100. Supermicro and NVIDIA have developed a rack-scale solution with the NVIDIA GB200 NVL72, connecting 36 Grace CPUs and 72 Blackwell GPUs in a single rack. All 72 GPUs are interconnected with fifth-generation NVIDIA NVLink for GPU-to-GPU communication at 1.8TB/s. In addition, for inference workloads, Supermicro is announcing the ARS-221GL-NHIR, a 2U server based on the GH200 line of products, which will have two GH200 servers connected via a 900GB/s high-speed interconnect. Come to the Supermicro booth at GTC to learn more.
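The NVL72 rack topology follows directly from the superchip composition. A minimal sketch of the rack math, using only counts stated in the release:

```python
# Rack math for the GB200 NVL72, using only counts stated in the release:
# each GB200 Grace Blackwell Superchip pairs one Grace CPU with two Blackwell GPUs.
cpus_per_superchip = 1
gpus_per_superchip = 2

grace_cpus_per_rack = 36  # "connecting 36 Grace CPUs and 72 Blackwell GPUs"
superchips_per_rack = grace_cpus_per_rack // cpus_per_superchip

total_gpus = superchips_per_rack * gpus_per_superchip
assert total_gpus == 72  # matches the "NVL72" designation
```

The full-rack NVLink domain is the key design choice here: all 72 GPUs share one 1.8TB/s GPU-to-GPU fabric rather than being split into smaller 8-GPU islands.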

Supermicro systems will also support the upcoming NVIDIA Quantum-X800 InfiniBand platform, consisting of the NVIDIA Quantum-X800 QM3400 switch and the SuperNIC800, and the NVIDIA Spectrum-X800 Ethernet platform, consisting of the NVIDIA Spectrum-X800 SN5600 switch and the SuperNIC800. Optimized for the NVIDIA Blackwell architecture, the NVIDIA Quantum-X800 and Spectrum-X800 will deliver the highest level of networking performance for AI infrastructures.

For more information on Supermicro NVIDIA solutions, visit https://www.supermicro.com/en/accelerators/nvidia 

Supermicro's upcoming systems lineup featuring NVIDIA B200 and GB200 consists of:

  • Supermicro's NVIDIA HGX B200 8-GPU air-cooled and liquid-cooled systems deliver the highest generative AI training performance. These systems feature 8 NVIDIA Blackwell GPUs connected via fifth-generation NVLink with a pool of 1.5TB high-bandwidth memory (up to 60TB/s) to speed up AI training workloads.
  • Supermicro's best-selling AI Training System, the 4U/8U system with NVIDIA HGX H100/H200 8-GPU, will support NVIDIA's upcoming HGX B100 8-GPU.
  • A Supermicro Rack-Level Solution featuring GB200 Superchip systems as server nodes with 2 Grace CPUs and 4 NVIDIA Blackwell GPUs per node. Supermicro's direct-to-chip liquid-cooling maximizes density with 72 GB200 192GB GPUs (1200W TDP per GPU), all in a single 44U ORV3 rack.
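The per-rack figures in the last bullet imply a substantial thermal budget, which is why direct-to-chip liquid cooling is emphasized. A rough sketch counting only GPU TDP (CPU, networking, and other component power are not stated in the release and are omitted here):

```python
# Rough GPU-only thermal budget for the 44U ORV3 rack described above.
# CPUs, NICs, switches, and fans would add to this total (not counted here).
gpu_count = 72
gpu_tdp_w = 1200  # per-GPU TDP from the release

gpu_heat_kw = gpu_count * gpu_tdp_w / 1000
assert gpu_heat_kw == 86.4  # ~86 kW of GPU heat alone in a single rack
```

At that power density, air cooling alone is impractical, which is the engineering rationale behind the direct-to-chip liquid-cooled design.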

Supermicro at GTC 2024

Supermicro will demonstrate a complete portfolio of GPU systems for AI at NVIDIA's GTC 2024 event from March 18-21 at the San Jose Convention Center. Visit Supermicro at booth #1016 to see solutions built for a wide range of AI applications, including training generative AI models, AI inference, and edge AI. Supermicro will also showcase two rack-level solutions, including a concept rack with systems featuring the upcoming NVIDIA GB200 with 72 liquid-cooled GPUs interconnected with fifth-generation NVLink.

Supermicro solutions that will be on display at GTC 2024 include:

  • Supermicro liquid-cooled AI training rack featuring 8 4U 8-GPU systems with NVIDIA HGX H200 8-GPUs
  • Supermicro concept ORV3 rack with liquid-cooled MGX system nodes, hosting a total of 36 NVIDIA GB200 Superchips (72 Blackwell GPUs) connected via fifth-generation NVLink
  • Supermicro MGX systems, including the 1U Liquid-Cooled NVIDIA GH200 Grace Hopper Superchip system
  • Supermicro short-depth Hyper-E system for delivering GPU computing at the edge
  • Supermicro Petascale all-flash storage system for high-performance AI data pipelines

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions manufacturer with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, driving next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling, or liquid cooling).

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.

All other brands, names, and trademarks are the property of their respective owners.

SMCI-F

View original content to download multimedia: https://www.prnewswire.com/news-releases/supermicro-grows-ai-optimized-product-portfolio-with-a-new-generation-of-systems-and-rack-architectures-featuring-new-nvidia-blackwell-architecture-solutions-302092095.html

SOURCE Super Micro Computer, Inc.

FAQ

What new AI systems is Supermicro announcing for large-scale generative AI?

Supermicro is announcing new AI systems for large-scale generative AI featuring NVIDIA's next-generation data center products.

What is Supermicro enhancing its current systems to support?

Supermicro is enhancing its current systems to support the latest NVIDIA HGX B100 8-GPU and B200 GPUs, reducing time to delivery.

What is Supermicro's focus in designing next-gen systems for enhanced requirements of NVIDIA Blackwell architecture-based GPUs?

Supermicro's focus is on building block architecture and rack-scale Total IT for AI, optimizing for new capabilities of NVIDIA Blackwell GPUs.

What performance benefits do Supermicro's systems with NVIDIA Blackwell GPUs offer for AI training and inference?

Supermicro's systems with NVIDIA Blackwell GPUs deliver groundbreaking performance for multi-trillion parameter AI training and real-time AI inference.

What is the expected impact of Supermicro's systems with NVIDIA Blackwell GPUs on AI infrastructure?

Supermicro's systems with NVIDIA Blackwell GPUs are the building blocks for the future of AI infrastructure, offering groundbreaking performance.
