
Supermicro Adds Portfolio for Next Wave of AI with NVIDIA Blackwell Ultra Solutions, Featuring NVIDIA HGX™ B300 NVL16 and GB300 NVL72

Rhea-AI Impact: Neutral
Rhea-AI Sentiment: Positive
Tags: AI

Supermicro (NASDAQ: SMCI) announces new AI systems powered by NVIDIA's Blackwell Ultra platform, built around the HGX B300 NVL16 and GB300 NVL72. The new solutions focus on AI reasoning, agentic AI, and video inference applications.

Key features include:

  • HGX B300 NVL16 system: 8U platform with 2.3TB of HBM3e per system, 800 Gb/s node-to-node speeds
  • GB300 NVL72: Integrates 72 NVIDIA Blackwell Ultra GPUs and 36 Grace CPUs, offering over 20TB of HBM3e memory
  • Advanced liquid-cooling solution reducing power consumption by up to 40%

The systems feature enhanced AI FLOPs, increased HBM3e capacity, and up to 800 Gb/s direct-to-GPU networking performance. Supermicro's liquid-cooling solution operates with 40℃ warm water in an 8-node rack configuration or 35℃ warm water in a double-density 16-node rack configuration.

Positive
  • Introduction of advanced AI systems with significantly increased memory capacity (2.3TB HBM3e per system)
  • Liquid cooling solution reducing power consumption by up to 40%
  • Enhanced networking performance with 800 Gb/s speeds
  • Expanded product portfolio strengthening market position in AI infrastructure
Negative
  • High power consumption requirements necessitating specialized cooling solutions
  • Complex deployment requirements potentially increasing implementation costs

Insights

Supermicro's announcement of new AI systems featuring NVIDIA's Blackwell Ultra platform represents a significant product advancement that strengthens the company's competitive position in the high-growth AI infrastructure market.

The technical specifications are commercially meaningful: with the HGX B300 NVL16 offering 2.3TB of HBM3e memory per system and the GB300 NVL72 providing over 20TB in a single rack, Supermicro is addressing the memory-capacity bottlenecks that currently limit large-scale AI deployments. The 800 Gb/s networking performance further strengthens the systems for cluster-scale AI applications.

Supermicro's liquid cooling technology, which can reduce power consumption by up to 40%, creates a meaningful differentiation point against competitors. This feature addresses two critical customer pain points: energy costs and environmental impact. In high-density AI deployments where power consumption represents a substantial operational expense, this efficiency improvement translates to significant TCO advantages.
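
To illustrate how that efficiency claim can translate into operating cost, the back-of-the-envelope sketch below applies the quoted up-to-40% reduction to an assumed rack power draw and electricity rate; those two inputs are illustrative assumptions, not figures from Supermicro.

    # Hypothetical TCO sketch: only the up-to-40% reduction comes from the
    # announcement; rack power and electricity price are assumed values.
    rack_power_kw = 100.0           # assumed average per-rack draw (illustrative)
    electricity_usd_per_kwh = 0.10  # assumed utility rate (illustrative)
    hours_per_year = 24 * 365

    baseline_cost = rack_power_kw * hours_per_year * electricity_usd_per_kwh
    reduced_cost = baseline_cost * (1 - 0.40)   # apply the quoted "up to 40%"

    print(f"Baseline annual energy cost per rack: ${baseline_cost:,.0f}")
    print(f"With an up-to-40% reduction:          ${reduced_cost:,.0f}")
    print(f"Annual savings per rack:              ${baseline_cost - reduced_cost:,.0f}")

At these assumed figures the savings work out to roughly $35,000 per rack per year; actual results depend on rack density, local energy prices, and which loads the 40% figure covers.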

The company's end-to-end deployment capabilities, including planning, design, and on-site installation services, position Supermicro as a complete solutions provider rather than merely a hardware vendor. This approach typically enables higher-margin engagements compared to standalone hardware sales.

Strategically, Supermicro's continued partnership with NVIDIA and their ability to rapidly integrate cutting-edge technologies reinforces their relevance in a fast-evolving market where technical leadership is a key competitive factor.

Supermicro's new systems based on NVIDIA's Blackwell Ultra platform represent a substantial technological leap that addresses fundamental constraints in AI infrastructure performance.

The HGX B300 NVL16 system's architecture with its 1.8TB/s 16-GPU NVLink domain creates an unprecedented unified memory pool, eliminating many of the data transfer bottlenecks that plague current AI applications. For context, this interconnect speed is critical for multi-GPU training workloads where model parallelism requires constant communication between processing units.
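
A rough transfer-time comparison makes that point concrete. The sketch below contrasts moving the same hypothetical 64GB model shard within the 1.8TB/s NVLink domain versus over a single 800 Gb/s network link; the shard size is an illustrative assumption, and real workloads overlap communication with compute, so this is intuition rather than a benchmark.

    # Simplified transfer-time comparison: NVLink domain vs. node-to-node network.
    # The 64 GB shard size is an illustrative assumption, not a measured workload.
    shard_gb = 64.0
    nvlink_gbyte_per_s = 1800.0       # 1.8 TB/s NVLink domain bandwidth
    network_gbyte_per_s = 800 / 8.0   # 800 Gb/s link, roughly 100 GB/s

    t_nvlink_ms = shard_gb / nvlink_gbyte_per_s * 1000
    t_network_ms = shard_gb / network_gbyte_per_s * 1000

    print(f"Within the NVLink domain: ~{t_nvlink_ms:.0f} ms")
    print(f"Over one 800 Gb/s link:   ~{t_network_ms:.0f} ms")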

Even more impressive is the GB300 NVL72 configuration, which integrates 72 Blackwell Ultra GPUs and 36 Grace CPUs in a single rack. This exascale-capable system with over 20TB of HBM3e memory can handle model sizes that would require multiple racks with previous-generation hardware.

Supermicro's thermal engineering capabilities are particularly noteworthy. Their liquid cooling solution operating with 40℃ warm water in 8-node configurations (or 35℃ in 16-node setups) is technically sophisticated. Most liquid cooling systems require much colder input water, making Supermicro's approach more compatible with existing data center infrastructure and more energy-efficient.

The direct integration of NVIDIA ConnectX-8 NICs into the baseboard architecture eliminates PCIe bottlenecks and supports the full 800 Gb/s node-to-node speeds. This tight integration is important for distributed training at scale, where network performance often becomes the limiting factor in multi-node deployments.
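
For intuition on why that integration matters, the rough comparison below sets the 800 Gb/s line rate against the approximate throughput of a PCIe 5.0 x16 link; the PCIe figure is a generic approximation for context, and the actual host-interface generation used in these systems is not specified in this release.

    # Rough context: an 800 Gb/s NIC versus a PCIe 5.0 x16 link (~64 GB/s).
    # The PCIe figure is a generic approximation, not a spec from this release.
    nic_gbyte_per_s = 800 / 8.0    # 800 Gb/s, roughly 100 GB/s
    pcie5_x16_gbyte_per_s = 64.0   # approximate PCIe 5.0 x16 throughput

    print(f"800 Gb/s NIC line rate: ~{nic_gbyte_per_s:.0f} GB/s")
    print(f"PCIe 5.0 x16 (approx.): ~{pcie5_x16_gbyte_per_s:.0f} GB/s")

A single 800 Gb/s port can therefore outrun an older x16 host link, which is the kind of bottleneck tighter baseboard integration is meant to avoid.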

Air- and Liquid-Cooled Optimized Solutions with Enhanced AI FLOPs and HBM3e Capacity, with up to 800 Gb/s Direct-to-GPU Networking Performance

SAN JOSE, Calif., March 18, 2025 /PRNewswire/ -- GTC 2025 Conference -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing new systems and rack solutions powered by the NVIDIA Blackwell Ultra platform, featuring the NVIDIA HGX B300 NVL16 and NVIDIA GB300 NVL72 platforms. Supermicro and NVIDIA's new AI solutions strengthen leadership in AI by delivering breakthrough performance for the most compute-intensive AI workloads, including AI reasoning, agentic AI, and video inference applications.

"At Supermicro, we are excited to continue our long-standing partnership with NVIDIA to bring the latest AI technology to market with the NVIDIA Blackwell Ultra Platforms," said Charles Liang, president and CEO, Supermicro. "Our Data Center Building Block Solutions® approach has streamlined the development of new air and liquid-cooled systems, optimized to the thermals and internal topology of the NVIDIA HGX B300 NVL16 and GB300 NVL72. Our advanced liquid-cooling solution delivers exceptional thermal efficiency, operating with 40℃ warm water in our 8-node rack configuration, or 35℃ warm water in double-density 16-node rack configuration, leveraging our latest CDUs. This innovative solution reduces power consumption by up to 40% while conserving water resources, providing both environmental and operational cost benefits for enterprise data centers."

For more information, please visit https://www.supermicro.com/en/accelerators/nvidia 

NVIDIA's Blackwell Ultra platform is built to conquer the most demanding cluster-scale AI applications by overcoming performance bottlenecks caused by limited GPU memory capacity and network bandwidth. NVIDIA Blackwell Ultra delivers an unprecedented 288GB of HBM3e memory per GPU, along with drastic improvements in AI FLOPS for training and inference on the largest AI models. Platform integration with NVIDIA Quantum-X800 InfiniBand and Spectrum-X™ Ethernet doubles the compute fabric bandwidth, to up to 800 Gb/s.
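
The per-GPU and system-level memory figures quoted here are consistent with each other; the quick check below multiplies them out (the reading that the HGX B300 NVL16 total corresponds to 8 × 288GB, with the "NVL16" name counting GPU dies rather than packages, is our interpretation rather than a statement from the release).

    # Consistency check of the quoted memory figures (288 GB per Blackwell Ultra GPU).
    # The 8x multiplier for the HGX B300 NVL16 system is our interpretation.
    hbm_per_gpu_gb = 288

    hgx_b300_system_gb = 8 * hbm_per_gpu_gb     # 2304 GB, i.e. the quoted ~2.3 TB
    gb300_nvl72_rack_gb = 72 * hbm_per_gpu_gb   # 20736 GB, i.e. "over 20 TB"

    print(f"HGX B300 NVL16 system: ~{hgx_b300_system_gb / 1000:.1f} TB HBM3e")
    print(f"GB300 NVL72 rack:      ~{gb300_nvl72_rack_gb / 1000:.1f} TB HBM3e")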

Supermicro integrates NVIDIA Blackwell Ultra with two types of solutions: Supermicro NVIDIA HGX B300 NVL16 systems, designed for every data center, and the NVIDIA GB300 NVL72, equipped with NVIDIA's next-generation Grace Blackwell architecture.

Supermicro NVIDIA HGX B300 NVL16 system

Supermicro NVIDIA HGX systems are the industry-standard building blocks for AI training clusters, with an 8-GPU NVIDIA NVLink™ domain and a 1:1 GPU-to-NIC ratio for high-performance clusters. Supermicro's new NVIDIA HGX B300 NVL16 system builds upon this proven architecture with thermal design advancements in both liquid-cooled and air-cooled versions.

For the B300 NVL16, Supermicro introduces a brand-new 8U platform to maximize the output of the NVIDIA HGX B300 NVL16 board. Each GPU is connected in a 1.8TB/s 16-GPU NVLink domain, providing a massive 2.3TB of HBM3e per system. The Supermicro NVIDIA HGX B300 NVL16 also improves networking performance by integrating 8 NVIDIA ConnectX®-8 NICs directly into the baseboard to support 800 Gb/s node-to-node speeds via NVIDIA Quantum-X800 InfiniBand or Spectrum-X™ Ethernet.
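
As a simple way to read those numbers together, the sketch below totals the per-node fabric bandwidth implied by eight ConnectX-8 NICs at 800 Gb/s each; it is a straight aggregate that ignores fabric topology and any oversubscription, which depend on the cluster design.

    # Aggregate per-node fabric bandwidth implied by 8 x 800 Gb/s ConnectX-8 NICs.
    # Straight sum; ignores topology and oversubscription choices at cluster level.
    nics_per_node = 8
    gbit_per_nic = 800

    node_gbit = nics_per_node * gbit_per_nic   # 6400 Gb/s
    node_gbyte = node_gbit / 8                 # ~800 GB/s

    print(f"Per-node fabric bandwidth: {node_gbit} Gb/s (~{node_gbyte:.0f} GB/s)")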

Supermicro NVIDIA GB300 NVL72

The NVIDIA GB300 NVL72 integrates 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace™ CPUs in a single rack with exascale computing capacity, featuring upgraded HBM3e memory capacity for over 20TB of HBM3e memory interconnected in a 1.8TB/s 72-GPU NVLink domain. The NVIDIA ConnectX®-8 SuperNIC provides 800 Gb/s speeds for both GPU-to-NIC and NIC-to-network communication, drastically improving cluster-level performance of the AI compute fabric.

Liquid-Cooled AI Data Center Building Block Solutions

Expertise in liquid cooling and data center deployment, combined with its building-block approach, positions Supermicro to deliver NVIDIA Blackwell Ultra with industry-leading time-to-deployment. Supermicro offers a complete liquid-cooling portfolio, including newly developed direct-to-chip cold plates, a 250kW in-rack CDU, and cooling towers.
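
To give a sense of how these building blocks are sized, the sketch below compares the 250kW in-rack CDU rating against a hypothetical rack heat load; the rack load and the share of heat captured by the liquid loop are illustrative assumptions, not Supermicro or NVIDIA specifications.

    # Hypothetical CDU sizing sketch. Only the 250 kW CDU rating comes from the
    # announcement; rack heat load and liquid-capture share are assumed values.
    cdu_capacity_kw = 250.0
    rack_heat_load_kw = 120.0       # assumed dense AI rack load (illustrative)
    liquid_capture_share = 0.9      # assumed fraction removed by the liquid loop

    liquid_heat_kw = rack_heat_load_kw * liquid_capture_share
    headroom_kw = cdu_capacity_kw - liquid_heat_kw

    print(f"Liquid-cooled heat per rack (assumed): {liquid_heat_kw:.0f} kW")
    print(f"CDU headroom at the 250 kW rating:     {headroom_kw:.0f} kW")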

Supermicro's on-site rack deployment helps enterprises build data centers from the ground up, including the planning, design, power-up, validation, testing, installation, and configuration of racks, servers, switches, and other networking equipment to meet the organization's specific needs.

  • 8U Supermicro NVIDIA HGX B300 NVL16 system – Designed for every data center with a streamlined, thermally optimized chassis and 2.3TB of HBM3e memory per system.
  • NVIDIA GB300 NVL72 – Exascale AI supercomputer in a single rack with essentially double the HBM3e memory capacity and networking speeds of its predecessor.

Supermicro at GTC 2025

GTC visitors can find Supermicro in San Jose, CA, from March 17-21, 2025. Visit us at booth #1115 to see the X14/H14 B200, B300, and GB300 systems on display along with our rack-scale liquid-cooled solutions.

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions provider with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, driving next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free-air cooling, or liquid cooling).

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.

All other brands, names, and trademarks are the property of their respective owners.

View original content to download multimedia: https://www.prnewswire.com/news-releases/supermicro-adds-portfolio-for-next-wave-of-ai-with-nvidia-blackwell-ultra-solutions-featuring-nvidia-hgx-b300-nvl16-and-gb300-nvl72-302405122.html

SOURCE Super Micro Computer, Inc.

FAQ

What are the key features of Supermicro's new NVIDIA Blackwell Ultra systems?

The systems feature the HGX B300 NVL16 with 2.3TB of HBM3e per system, the GB300 NVL72 with over 20TB of HBM3e memory, 800 Gb/s networking speeds, and liquid cooling that reduces power consumption by up to 40%.

How much memory capacity does the SMCI GB300 NVL72 system offer?

The GB300 NVL72 features over 20TB of HBM3e memory interconnected in a 1.8TB/s 72-GPU NVLink domain.

What are the cooling options available for Supermicro's new AI systems?

Supermicro offers both air-cooled and liquid-cooled options, with liquid cooling operating with 40℃ warm water in an 8-node rack configuration or 35℃ warm water in a double-density 16-node rack configuration.

What networking performance do the new SMCI Blackwell Ultra systems achieve?

The systems achieve up to 800 Gb/s direct-to-GPU networking performance via NVIDIA Quantum-X800 InfiniBand or Spectrum-X Ethernet.