Marvell Announces Breakthrough Custom HBM Compute Architecture to Optimize Cloud AI Accelerators
Marvell Technology (NASDAQ: MRVL) has unveiled a groundbreaking custom HBM compute architecture for cloud AI accelerators. The new technology enables XPUs to achieve up to 25% more compute capacity and 33% greater memory while improving power efficiency. The architecture features advanced die-to-die interfaces and delivers up to 70% lower interface power compared to standard HBM interfaces.
Marvell is collaborating with leading HBM manufacturers - Micron, Samsung Electronics, and SK hynix - to develop custom HBM solutions for next-generation XPUs. The optimized interfaces reduce required silicon real estate in each die, allowing HBM support logic integration onto the base die, resulting in improved performance and lower TCO for cloud operators.
- New architecture enables 25% more compute capacity
- Achieves 33% greater memory capacity
- Delivers 70% lower interface power consumption
- Strategic partnerships with major HBM manufacturers
- Reduced silicon real estate requirements improving cost efficiency
Insights
- New Marvell AI accelerator (XPU) architecture enables up to 25% more compute and 33% greater memory while improving power efficiency.
- Marvell is collaborating with Micron, Samsung and SK hynix on custom high-bandwidth memory (HBM) solutions to deliver custom XPUs.
- Architecture comprises advanced die-to-die interfaces, HBM base dies, controller logic and advanced packaging for new XPU designs.
HBM is a critical component integrated within the XPU using advanced 2.5D packaging technology and high-speed industry-standard interfaces. However, the scaling of XPUs is limited by the current standard interface-based architecture. The new Marvell custom HBM compute architecture introduces tailored interfaces to optimize performance, power, die size, and cost for specific XPU designs. This approach considers the compute silicon, HBM stacks, and packaging. By customizing the HBM memory subsystem, including the stack itself, Marvell is advancing customization in cloud data center infrastructure. Marvell is collaborating with major HBM makers to implement this new architecture and meet cloud data center operators' needs.
The Marvell custom HBM compute architecture enhances XPUs by serializing and speeding up the I/O interfaces between the internal AI compute accelerator silicon dies and the HBM base dies. This results in greater performance and up to 70% lower interface power compared to standard HBM interfaces.
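The headline figures can be put in concrete terms with a back-of-the-envelope calculation. The percentages below are the ones quoted in this release; the baseline values (compute units, memory capacity, interface power) are hypothetical placeholders chosen purely for illustration, not Marvell specifications.

```python
# Illustrative arithmetic only: the percentage improvements are the
# headline claims from the press release; the baselines are invented.

baseline_compute = 100.0   # arbitrary units of XPU compute capacity
baseline_memory = 192.0    # GB of HBM per XPU (hypothetical)
baseline_if_power = 30.0   # W spent on standard HBM interfaces (hypothetical)

custom_compute = baseline_compute * 1.25          # up to 25% more compute
custom_memory = baseline_memory * 1.33            # up to 33% greater memory
custom_if_power = baseline_if_power * (1 - 0.70)  # up to 70% lower interface power

print(f"compute:         {baseline_compute:.0f} -> {custom_compute:.0f} units")
print(f"memory:          {baseline_memory:.0f} -> {custom_memory:.2f} GB")
print(f"interface power: {baseline_if_power:.0f} -> {custom_if_power:.0f} W")
```

Under these assumed baselines, the same package would deliver 125 compute units, roughly 255 GB of HBM, and spend 9 W instead of 30 W on memory interfaces — the reclaimed power and die area being what the release credits for the extra compute and memory headroom.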
"The leading cloud data center operators have scaled with custom infrastructure. Enhancing XPUs by tailoring HBM for specific performance, power, and total cost of ownership is the latest step in a new paradigm in the way AI accelerators are designed and delivered," said Will Chu, Senior Vice President and General Manager of the Custom, Compute and Storage Group at Marvell. "We're very grateful to work with leading memory designers to accelerate this revolution and, help cloud data center operators continue to scale their XPUs and infrastructure for the AI era."
"Increased memory capacity and bandwidth will help cloud operators efficiently scale their infrastructure for the AI era," said Raj Narasimhan, senior vice president and general manager of Micron's Compute and Networking Business Unit. "Strategic collaborations focused on power efficiency, such as the one we have with Marvell, will build on Micron's industry-leading HBM power specs, and provide hyperscalers with a robust platform to deliver the capabilities and optimal performance required to scale AI."
"Optimizing HBM for specific XPUs and software environments will greatly improve the performance of cloud operators' infrastructure and ensure efficient power use," said Harry Yoon, corporate executive vice president of Samsung Electronics and head of
"By collaborating with Marvell, we can help our customers produce a more optimized solution for their workloads and infrastructure," said Sunny Kang, VP of DRAM Technology, SK hynix America. "As one of the leading pioneers of HBM, we look forward to shaping this next evolutionary stage for the technology."
"Custom XPUs deliver superior performance and performance per watt compared to merchant, general-purpose solutions for specific, cloud-unique workloads," said Patrick Moorhead, CEO and Founder of Moor Insights & Strategy. "Marvell, already a player in custom compute silicon, is already delivering tailored solutions to leading cloud companies. Their latest custom compute HBM architecture platform provides an additional lever to enhance the TCO for custom silicon. Through strategic collaboration with leading memory makers, Marvell is poised to empower cloud operators in scaling their XPUs and accelerated infrastructure, thereby paving the way for them to enable the future of AI."
Marvell and the M logo are trademarks of Marvell or its affiliates. Please visit www.marvell.com for a complete list of Marvell trademarks. Other names and brands may be claimed as the property of others.
This press release contains forward-looking statements within the meaning of the federal securities laws that involve risks and uncertainties. Forward-looking statements include, without limitation, any statement that may predict, forecast, indicate or imply future events, results or achievements. Actual events, results or achievements may differ materially from those contemplated in this press release. Forward-looking statements are only predictions and are subject to risks, uncertainties and assumptions that are difficult to predict, including those described in the "Risk Factors" section of our Annual Reports on Form 10-K, Quarterly Reports on Form 10-Q and other documents filed by us from time to time with the SEC. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and no person assumes any obligation to update or revise any such forward-looking statements, whether as a result of new information, future events or otherwise.
For further information, contact:
Kim Markle
pr@marvell.com
View original content to download multimedia: https://www.prnewswire.com/news-releases/marvell-announces-breakthrough-custom-hbm-compute-architecture-to-optimize-cloud-ai-accelerators-302328144.html
SOURCE Marvell
FAQ
What are the key benefits of Marvell's (MRVL) new HBM compute architecture?
Which companies is Marvell (MRVL) partnering with for HBM solutions?