NVIDIA Blackwell Ultra DGX SuperPOD Delivers Out-of-the-Box AI Supercomputer for Enterprises to Build AI Factories
NVIDIA has unveiled its most advanced enterprise AI infrastructure - the DGX SuperPOD powered by Blackwell Ultra GPUs. The announcement includes two new systems: the DGX GB300 and DGX B300, designed for AI factory supercomputing.
The DGX GB300 features 36 NVIDIA Grace CPUs and 72 Blackwell Ultra GPUs, delivering up to 70x more AI performance than Hopper systems, with 38TB of memory. The air-cooled DGX B300 provides 11x faster AI inference and 4x faster training compared to Hopper, with 2.3TB of HBM3e memory.
NVIDIA also introduced Instant AI Factory, a managed service featuring the Blackwell Ultra-powered DGX SuperPOD. Equinix will be the first to offer these systems in 45 markets globally. The new infrastructure includes NVIDIA Mission Control software for data center operations. Both systems are expected to be available from partners later in 2025.
- Launch of next-generation AI infrastructure with significant performance improvements
- 70x AI performance increase in DGX GB300 vs previous Hopper systems
- Global availability through Equinix in 45 markets
- New managed service offering (Instant AI Factory) for easier deployment
- Enhanced memory capabilities with 38TB in GB300 and 2.3TB in B300
- Systems not immediately available - delayed until later in 2025
- Requires specialized liquid-cooling infrastructure for GB300 model
- High power requirements and cooling infrastructure may increase operational costs
Insights
NVIDIA's announcement of its Blackwell Ultra DGX SuperPOD represents a quantum leap in AI computing capabilities that significantly extends its technological advantage. The 70x performance improvement over Hopper-based systems for the DGX GB300 and the 11x faster inference of the DGX B300 are not incremental improvements but transformative advancements that redefine what's possible in enterprise AI.
The architecture innovations are particularly noteworthy. The integration of 72 Blackwell Ultra GPUs into a unified memory space through NVLink Switch technology addresses one of the fundamental bottlenecks in large model training and inference. The 38TB of fast memory enables running substantially larger models with complex multi-step reasoning capabilities that were previously impractical.
NVIDIA's networking upgrades with ConnectX-8 SuperNICs delivering 800Gb/s - double the previous generation - alongside BlueField-3 DPUs address the critical data movement challenges in scaled AI systems. This holistic approach to system design demonstrates why competitors struggle to match NVIDIA's full-stack optimization.
The dual liquid-cooled (GB300) and air-cooled (B300) offerings create flexibility for different data center environments, expanding NVIDIA's addressable market. The Equinix partnership for NVIDIA Instant AI Factory service represents a strategic expansion of their business model, moving from hardware provider to infrastructure-as-a-service enabler.
NVIDIA's Blackwell Ultra announcement represents a critical strategic evolution in how enterprises can deploy advanced AI capabilities. The introduction of out-of-the-box AI supercomputing fundamentally changes the accessibility equation for organizations seeking to implement reasoning and agentic AI workloads.
The business impact extends beyond raw performance gains. By focusing on "AI factories" that handle the full spectrum of AI workloads (pretraining, post-training, and inference-time scaling), NVIDIA is positioning these systems as essential infrastructure for next-generation AI deployment. The emphasis on agentic AI and reasoning capabilities aligns perfectly with where enterprise AI applications are heading - toward autonomous systems that can reason through complex problems.
The Equinix partnership to deliver NVIDIA Instant AI Factory across 45 global markets significantly reduces deployment friction. Enterprises can now access preconfigured Blackwell-powered infrastructure without the months of planning typically required, dramatically accelerating time-to-value for AI investments.
NVIDIA's software strategy with Mission Control and the AI Enterprise platform enhances its competitive moat. By delivering the full stack from hardware to application deployment tools, NVIDIA creates substantial switching costs for customers who adopt its ecosystem.
This announcement solidifies NVIDIA's position as the default provider for organizations building advanced AI capabilities, addressing both technical performance needs and operational deployment challenges.
- NVIDIA Blackwell Ultra-Powered DGX Systems Supercharge AI Reasoning for Real-Time AI Agent Responses
- Equinix First to Offer NVIDIA Instant AI Factory Service, With Preconfigured Space in Blackwell-Ready Facilities for DGX GB300 and DGX B300 Systems to Meet Global Demand for AI Infrastructure
SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) -- GTC—NVIDIA today announced the world’s most advanced enterprise AI infrastructure — NVIDIA DGX SuperPOD™ built with NVIDIA Blackwell Ultra GPUs — which provides enterprises across industries with AI factory supercomputing for state-of-the-art agentic AI reasoning.
Enterprises can use new NVIDIA DGX™ GB300 and NVIDIA DGX B300 systems, integrated with NVIDIA networking, to deliver out-of-the-box DGX SuperPOD AI supercomputers that offer FP4 precision and faster AI reasoning to supercharge token generation for AI applications.
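FP4 here refers to a 4-bit floating-point format, commonly e2m1 (one sign bit, two exponent bits, one mantissa bit), which cuts weight storage to a quarter of FP16's in exchange for a very coarse value grid. The sketch below illustrates the general idea only; the release does not describe NVIDIA's actual NVFP4 encoding or block-scaling scheme, so the value grid and per-tensor scaling used here are assumptions.

```python
# Illustrative 4-bit float (e2m1) quantization -- a sketch of the general
# technique, NOT NVIDIA's actual NVFP4 implementation.
# Representable non-negative magnitudes for e2m1 (bias 1):
FP4_MAGNITUDES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
FP4_GRID = sorted({s * m for m in FP4_MAGNITUDES for s in (1.0, -1.0)})

def quantize_fp4(x: float, scale: float) -> float:
    """Snap x/scale to the nearest FP4 grid point, then rescale."""
    return min(FP4_GRID, key=lambda v: abs(v - x / scale)) * scale

# Per-tensor scaling: map the largest-magnitude weight onto FP4's max (6.0).
weights = [0.013, -0.042, 0.088, 0.007]
scale = max(abs(w) for w in weights) / 6.0
quantized = [quantize_fp4(w, scale) for w in weights]
# Each weight now needs 4 bits instead of FP16's 16: a 4x memory reduction,
# which is one reason low-precision formats speed up token generation.
```

The real format adds fine-grained block scaling so each small group of weights gets its own scale factor, keeping quantization error manageable despite the 15-value grid.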
AI factories provide purpose-built infrastructure for agentic, generative and physical AI workloads, which can require significant computing resources for AI pretraining, post-training and test-time scaling for applications running in production.
“AI is advancing at light speed, and companies are racing to build AI factories that can scale to meet the processing demands of reasoning AI and inference time scaling,” said Jensen Huang, founder and CEO of NVIDIA. “The NVIDIA Blackwell Ultra DGX SuperPOD provides out-of-the-box AI supercomputing for the age of agentic and physical AI.”
DGX GB300 systems feature NVIDIA Grace Blackwell Ultra Superchips — which include 36 NVIDIA Grace™ CPUs and 72 NVIDIA Blackwell Ultra GPUs — and a rack-scale, liquid-cooled architecture designed for real-time agent responses on advanced reasoning models.
Air-cooled NVIDIA DGX B300 systems harness the NVIDIA B300 NVL16 architecture to help data centers everywhere meet the computational demands of generative and agentic AI applications.
To meet growing demand for advanced accelerated infrastructure, NVIDIA also unveiled NVIDIA Instant AI Factory, a managed service featuring the Blackwell Ultra-powered NVIDIA DGX SuperPOD. Equinix will be first to offer the new DGX GB300 and DGX B300 systems in its preconfigured liquid- or air-cooled AI-ready data centers located in 45 markets around the world.
NVIDIA DGX SuperPOD With DGX GB300 Powers Age of AI Reasoning
DGX SuperPOD with DGX GB300 systems can scale up to tens of thousands of NVIDIA Grace Blackwell Ultra Superchips — connected via NVIDIA NVLink™, NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum-X™ Ethernet networking — to supercharge training and inference for the most compute-intensive workloads.
DGX GB300 systems deliver up to 70x more AI performance than AI factories built with NVIDIA Hopper™ systems and 38TB of fast memory to offer unmatched performance at scale for multistep reasoning on agentic AI and reasoning applications.
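Dividing the quoted 38TB across the 72 GPUs in a system gives a rough per-GPU share of fast memory. Note this is just arithmetic on the release's totals: the announcement does not break down how the 38TB splits between GPU HBM and Grace CPU LPDDR memory.

```python
# Rough per-GPU share of the quoted 38TB of fast memory in a DGX GB300
# system (decimal TB assumed). The release does not specify the split
# between GPU HBM and Grace CPU LPDDR, so this is a back-of-envelope figure.
total_memory_tb = 38
gpus_per_system = 72

per_gpu_gb = total_memory_tb * 1_000 / gpus_per_system
print(f"~{per_gpu_gb:.0f} GB of fast memory per GPU")
```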
The 72 Grace Blackwell Ultra GPUs in each DGX GB300 system are connected by fifth-generation NVLink technology to become one massive, shared memory space through the NVLink Switch system.
Each DGX GB300 system features 72 NVIDIA ConnectX®-8 SuperNICs, delivering accelerated networking speeds of up to 800Gb/s — double the performance of the previous generation. Eighteen NVIDIA BlueField®-3 DPUs pair with NVIDIA Quantum-X800 InfiniBand or NVIDIA Spectrum-X Ethernet to accelerate performance, efficiency and security in massive-scale AI data centers.
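A quick back-of-envelope check of the networking figures above (72 ConnectX-8 SuperNICs at 800Gb/s each) gives the aggregate per-system NIC bandwidth. This is simple arithmetic on the release's quoted numbers, not an NVIDIA-published aggregate specification.

```python
# Aggregate SuperNIC bandwidth per DGX GB300 system, from the figures above.
supernics_per_system = 72
gbps_per_supernic = 800   # "double the performance of the previous generation"

aggregate_gbps = supernics_per_system * gbps_per_supernic
aggregate_tbps = aggregate_gbps / 1_000

print(f"{aggregate_gbps} Gb/s total ({aggregate_tbps} Tb/s)")
```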
DGX B300 Systems Accelerate AI for Every Data Center
The NVIDIA DGX B300 system is an AI infrastructure platform designed to bring energy-efficient generative AI and AI reasoning to every data center.
Accelerated by NVIDIA Blackwell Ultra GPUs, DGX B300 systems deliver 11x faster AI performance for inference and a 4x speedup for training compared with the Hopper generation.
Each system provides 2.3TB of HBM3e memory and includes advanced networking with eight NVIDIA ConnectX-8 SuperNICs and two BlueField-3 DPUs.
NVIDIA Software Accelerates AI Development and Deployment
To enable enterprises to automate the management and operations of their infrastructure, NVIDIA also announced NVIDIA Mission Control™ — AI data center operation and orchestration software for Blackwell-based DGX systems.
NVIDIA DGX systems support the NVIDIA AI Enterprise software platform for building and deploying enterprise-grade AI agents. This includes NVIDIA NIM™ microservices, such as the new NVIDIA Llama Nemotron open reasoning model family announced today, and NVIDIA AI Blueprints, frameworks, libraries and tools used to orchestrate and optimize performance of AI agents.
NVIDIA Instant AI Factory to Meet Infrastructure Demand
NVIDIA Instant AI Factory offers enterprises an Equinix managed service featuring the Blackwell Ultra-powered NVIDIA DGX SuperPOD with NVIDIA Mission Control software.
With dedicated Equinix facilities around the globe, the service will provide businesses with fully provisioned, intelligence-generating AI factories optimized for state-of-the-art model training and real-time reasoning workloads — eliminating months of pre-deployment infrastructure planning.
Availability
NVIDIA DGX SuperPOD with DGX GB300 or DGX B300 systems are expected to be available from partners later this year.
NVIDIA Instant AI Factory is planned to be available starting later this year.
Learn more by watching the NVIDIA GTC keynote and register to attend sessions from NVIDIA and industry leaders at the show, which runs through March 21.
About NVIDIA
NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.
For further information, contact:
Allie Courtney
NVIDIA Corporation
+1-408-706-8995
acourtney@nvidia.com
Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting NVIDIA’s products and technologies and the benefits and impact thereof; and AI advancing at light speed, and companies racing to build AI factories that can scale to meet the processing demands of reasoning AI and inference time scaling are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.
Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.
© 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, BlueField, ConnectX, DGX, NVIDIA DGX SuperPOD, NVIDIA Grace, NVIDIA Hopper, NVIDIA Mission Control, NVIDIA NIM, NVIDIA Spectrum-X and NVLink are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.
A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/4f5747e8-5b3d-4764-9d34-6d63cbfb18c2
