NVIDIA Unveils Next-Generation GH200 Grace Hopper Superchip Platform for Era of Accelerated Computing and Generative AI
NVIDIA has unveiled the next-generation GH200 Grace Hopper platform, featuring the world's first HBM3e processor, designed for accelerated computing and generative AI. The platform offers:
- Up to 3.5x more memory capacity and 3x more bandwidth than the current generation
- 144 Arm Neoverse cores
- 8 petaflops of AI performance
- 282GB of HBM3e memory
The GH200 Grace Hopper Superchip can connect with additional Superchips via NVIDIA NVLink, allowing for deployment of giant AI models. In dual configuration, it provides 1.2TB of fast memory and 10TB/sec of combined bandwidth. The platform is compatible with the NVIDIA MGX server specification, enabling quick adoption by system manufacturers.
Leading manufacturers are expected to deliver systems based on the platform in Q2 2024.
- Introduction of next-generation GH200 Grace Hopper platform with HBM3e processor
- 3.5x more memory capacity and 3x more bandwidth than current generation
- 8 petaflops of AI performance
- Ability to connect multiple GPUs for enhanced performance
- Compatible with NVIDIA MGX server specification for quick adoption
- Expected availability from leading manufacturers in Q2 2024
World’s First HBM3e Processor Offers Groundbreaking Memory, Bandwidth; Ability to Connect Multiple GPUs for Exceptional Performance; Easily Scalable Server Design
LOS ANGELES, Aug. 08, 2023 (GLOBE NEWSWIRE) -- SIGGRAPH -- NVIDIA today announced the next-generation NVIDIA GH200 Grace Hopper™ platform — based on a new Grace Hopper Superchip with the world’s first HBM3e processor — built for the era of accelerated computing and generative AI.
Created to handle the world’s most complex generative AI workloads, spanning large language models, recommender systems and vector databases, the new platform will be available in a wide range of configurations.
The dual configuration — which delivers up to 3.5x more memory capacity and 3x more bandwidth than the current generation offering — comprises a single server with 144 Arm Neoverse cores, eight petaflops of AI performance and 282GB of the latest HBM3e memory technology.
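For readers tallying the numbers, these dual-configuration totals are consistent with doubling commonly cited per-Superchip figures. The per-chip values below (72 Grace CPU cores, 141GB of HBM3e, roughly four petaflops of FP8 AI performance) are not stated in this release and are included only as an illustrative breakdown:

```latex
\text{CPU cores: } 2 \times 72 = 144, \qquad
\text{HBM3e: } 2 \times 141\,\text{GB} = 282\,\text{GB}, \qquad
\text{AI (FP8): } 2 \times \sim\!4\,\text{PFLOPS} \approx 8\,\text{PFLOPS}
```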
“To meet surging demand for generative AI, data centers require accelerated computing platforms with specialized needs,” said Jensen Huang, founder and CEO of NVIDIA. “The new GH200 Grace Hopper Superchip platform delivers this with exceptional memory technology and bandwidth to improve throughput, the ability to connect GPUs to aggregate performance without compromise, and a server design that can be easily deployed across the entire data center.”
The new platform uses the Grace Hopper Superchip, which can be connected with additional Superchips by NVIDIA NVLink™, allowing them to work together to deploy the giant models used for generative AI. This high-speed, coherent technology gives the GPU full access to the CPU memory, providing a combined 1.2TB of fast memory when in dual configuration.
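The "full access to the CPU memory" above refers to hardware-coherent CPU and GPU addressing over NVLink. As a hedged illustration (not code from this release), the sketch below shows what that programming model can look like in CUDA: a kernel operates directly on a plain malloc'd host buffer, with no explicit cudaMemcpy staging. It assumes a coherent Grace Hopper-class system; on a conventional PCIe system the same pattern would instead require managed or device memory.

```cuda
// Minimal sketch, assuming a hardware-coherent CPU+GPU platform such as a
// Grace Hopper system, where a CUDA kernel can dereference an ordinary host
// allocation directly instead of copying it to device memory first.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Scales every element of a CPU-allocated buffer in place.
__global__ void scale(float* data, float factor, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) {
        data[i] *= factor;  // coherent access to CPU-resident memory
    }
}

int main() {
    const size_t n = 1 << 20;

    // Plain malloc: no cudaMalloc or cudaMemcpy needed on a coherent platform.
    float* data = static_cast<float*>(std::malloc(n * sizeof(float)));
    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;

    scale<<<(unsigned)((n + 255) / 256), 256>>>(data, 2.0f, n);
    cudaDeviceSynchronize();

    std::printf("data[0] = %.1f\n", data[0]);  // expected: 2.0
    std::free(data);
    return 0;
}
```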
HBM3e memory, which is 50% faster than current HBM3, delivers a total of 10TB/sec of combined bandwidth, allowing the new platform to run models 3.5x larger than the previous version while improving performance with 3x faster memory bandwidth.
Growing Demand for Grace Hopper
Leading manufacturers are already offering systems based on the previously announced Grace Hopper Superchip. To drive broad adoption of the technology, the next-generation Grace Hopper Superchip platform with HBM3e is fully compatible with the NVIDIA MGX™ server specification unveiled at COMPUTEX earlier this year. With MGX, any system manufacturer can quickly and cost-effectively add Grace Hopper into over 100 server variations.
Availability
Leading system manufacturers are expected to deliver systems based on the platform in Q2 of calendar year 2024.
Watch Huang’s SIGGRAPH keynote address on demand to learn more about Grace Hopper.
About NVIDIA
Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling industrial digitalization across markets. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.
For further information, contact:
Kristin Uchiyama
NVIDIA Corporation
+1-408-313-0448
kuchiyama@nvidia.com
Certain statements in this press release, including, but not limited to, statements as to: the benefits, impact, performance, features and availability of our products, services and technologies, including the NVIDIA GH200 Grace Hopper platform, Grace Hopper Superchip, NVIDIA NVLink and NVIDIA MGX; surging demand for generative AI; data centers requiring accelerated computing platforms with specialized needs; and leading system manufacturers delivering systems based on GH200 Grace Hopper Superchip platform in Q2 of calendar year 2024 are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.
© 2023 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA Grace Hopper, NVIDIA MGX and NVLink are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.
A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/3fdf158b-a8ac-4fdb-8fd2-1e3f864b2da9
FAQ
What are the key features of NVIDIA's new GH200 Grace Hopper platform?
The platform is built on a new Grace Hopper Superchip with the world's first HBM3e processor. In dual configuration it delivers 144 Arm Neoverse cores, eight petaflops of AI performance, 282GB of HBM3e memory, and up to 3.5x more memory capacity and 3x more bandwidth than the current generation.
When will systems based on the NVIDIA GH200 Grace Hopper platform be available?
Leading system manufacturers are expected to deliver systems based on the platform in Q2 of calendar year 2024.
How does the NVIDIA GH200 Grace Hopper platform improve AI model deployment?
Superchips can be connected to additional Superchips via NVIDIA NVLink, giving the GPU full access to CPU memory and a combined 1.2TB of fast memory in dual configuration, while compatibility with the NVIDIA MGX server specification lets manufacturers add Grace Hopper into more than 100 server variations.