Supermicro Accelerates the Era of AI and the Metaverse with Top-of-the-Line Servers for AI Training, Deep Learning, HPC, and Generative AI, Featuring NVIDIA HGX and PCIe-Based H100 8-GPU Systems
Supermicro (NASDAQ: SMCI) has announced the shipment of its new GPU servers featuring NVIDIA's HGX H100 8-GPU system, aimed at optimizing AI, ML, and HPC workloads. The servers promise a 9x performance increase for AI training compared to previous models, while innovative designs reduce power consumption and noise. The new systems support liquid cooling and various configurations utilizing NVIDIA’s L4 GPU for enhanced efficiency and performance. Supermicro's advancements position it as a leader in providing comprehensive IT solutions, further supporting the growing demand for AI and metaverse applications.
- Launch of high-performance GPU servers featuring NVIDIA HGX H100 8-GPU systems.
- 9x performance increase for AI training applications.
- Innovative designs reducing power consumption and noise levels.
- Liquid cooling support for enhanced operational efficiency.
- Support for a broad range of AI applications with the new NVIDIA L4 GPU.
Most Comprehensive Portfolio of Systems from the Cloud to the Edge Supporting NVIDIA HGX H100 Systems, L40, and L4 GPUs, and OVX 3.0 Systems
"Supermicro offers the most comprehensive portfolio of GPU systems in the industry, including servers in 8U, 6U, 5U, 4U, 2U, and 1U form factors, as well as workstations and SuperBlade systems that support the full range of new NVIDIA H100 GPUs," said
Supermicro's most powerful new 8U GPU server is now shipping in volume. Optimized for AI, DL, ML, and HPC workloads, this new Supermicro 8U server is powered by the NVIDIA HGX H100 8-GPU platform, which delivers the highest GPU-to-GPU bandwidth through NVIDIA NVLink® 4.0 and NVSwitch™ interconnects, paired with NVIDIA Quantum-2 InfiniBand and Spectrum-4 Ethernet networking to break through the barriers of AI at scale. In addition, Supermicro offers several performance-optimized GPU server configurations, including direct-connect, single-root, and dual-root CPU-to-GPU topologies, with front or rear I/O models and AC or DC power in standard and OCP DC rack configurations. The Supermicro X13 SuperBlade® accommodates 20 NVIDIA H100 Tensor Core PCIe GPUs or 40 NVIDIA L40 GPUs in an 8U enclosure, and up to 10 NVIDIA H100 PCIe GPUs or 20 NVIDIA L4 Tensor Core GPUs in a 6U enclosure. These new systems deliver the optimized acceleration ideal for running NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform.
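For illustration only (not part of the announcement): a minimal PyTorch sketch of a single-node training job that would use all eight GPUs of an HGX H100 server, assuming PyTorch with the NCCL backend, which carries gradient traffic over NVLink/NVSwitch where available. The model, batch size, and launch command are placeholder assumptions.

```python
# Hypothetical single-node, 8-GPU training sketch (placeholder model and data).
# Launch with: torchrun --nproc_per_node=8 train_sketch.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # NCCL routes GPU-to-GPU collective traffic over NVLink/NVSwitch when available.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(64, 4096, device=local_rank)  # placeholder batch
        loss = model(x).square().mean()               # dummy loss
        loss.backward()                               # gradients all-reduced across the 8 GPUs
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```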
Liquid cooling is also supported on many of these GPU servers. In addition, Supermicro is announcing a liquid-cooled AI development system, available as a tower or rack-mounted configuration, containing two CPUs and four NVIDIA A100 Tensor Core GPUs; it is ideal for office and home office environments and can be deployed in departmental and corporate clusters.
Supermicro systems support the new NVIDIA L4 GPU, which delivers a multi-fold gain in acceleration and energy efficiency over previous generations for AI inferencing, video streaming, virtual workstations, and graphics applications in the enterprise, in the cloud, and at the edge. With NVIDIA's AI platform and full-stack approach, the L4 is optimized for inference at scale across a broad range of AI applications, including recommendations, voice-based AI avatar assistants, chatbots, visual search, and contact center automation, to deliver the best personalized experiences. As the most efficient NVIDIA accelerator for mainstream servers, the L4 offers up to 4x higher AI performance, increased energy efficiency, and over 3x more video streaming capacity and efficiency, with support for AV1 encoding/decoding. The L4 GPU's versatility for inference and visualization, together with its small, energy-efficient, single-slot, low-profile, 72W form factor, makes it ideal for global deployments, including at edge locations.
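For illustration only (not part of the announcement): a minimal, hypothetical sketch of the kind of batched FP16 inference workload described above, written in PyTorch; the ResNet-50 model and batch size are placeholder assumptions, and the snippet runs on any CUDA-capable GPU, such as a single low-profile L4.

```python
# Hypothetical batched FP16 inference sketch (placeholder model and batch size).
import torch
import torchvision.models as models

device = torch.device("cuda")  # e.g., a single low-profile accelerator in a mainstream server
model = models.resnet50(weights=None).half().to(device).eval()  # placeholder model

batch = torch.randn(32, 3, 224, 224, dtype=torch.float16, device=device)  # placeholder batch
with torch.inference_mode():
    logits = model(batch)  # one batched inference pass on the GPU

print(logits.shape)  # torch.Size([32, 1000])
```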
"Equipping Supermicro servers with the unmatched power of the new NVIDIA L4 Tensor Core GPU is enabling customers to accelerate their workloads efficiently and sustainably," said
Supermicro's new PCIe-accelerated solutions empower the creation of 3D worlds, digital twins, 3D simulation models, and the industrial metaverse. In addition to supporting the previous generations of NVIDIA OVX™ systems, Supermicro offers an OVX 3.0 configuration featuring four NVIDIA L40 GPUs, two NVIDIA ConnectX®-7 SmartNICs, an NVIDIA BlueField®-3 DPU, and the latest NVIDIA Omniverse Enterprise™ software.
To learn more about all of Supermicro's advanced new GPU systems, please visit: https://www.supermicro.com/en/accelerators/nvidia
See more about Supermicro at NVIDIA GTC 2023 - https://register.nvidia.com/events/widget/nvidia/gtcspring2023/sponsorcatalog/exhibitor/1564778120132001ghs2/?ncid=ref-spo-128510
About Supermicro
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for enterprise, cloud, AI, and 5G telco/edge IT infrastructure.
Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.
All other brands, names, and trademarks are the property of their respective owners.
View original content to download multimedia:https://www.prnewswire.com/news-releases/supermicro-accelerates-the-era-of-ai-and-the-metaverse-with-top-of-the-line-servers-for-ai-training-deep-learning-hpc-and-generative-ai-featuring-nvidia-hgx-and-pcie-based-h100-8-gpu-systems-301776769.html
SOURCE Super Micro Computer, Inc.
FAQ
What new products has Supermicro launched related to NVIDIA GPUs?
Supermicro is shipping GPU servers built around the NVIDIA HGX H100 8-GPU system, along with systems supporting NVIDIA H100 PCIe, L40, and L4 GPUs, a liquid-cooled AI development system with four NVIDIA A100 GPUs, and an OVX 3.0 configuration.
What performance improvements can be expected from the new Supermicro servers?
The company cites a 9x performance increase for AI training compared to the previous generation, and up to 4x higher AI performance from the NVIDIA L4 GPU for mainstream servers.
When was the announcement made about Supermicro's new GPU servers?
The announcement was made in conjunction with NVIDIA GTC 2023.
How does the new NVIDIA L4 GPU enhance Supermicro's offerings?
The L4 delivers higher AI inference performance and energy efficiency in a single-slot, low-profile, 72W form factor, with over 3x more video streaming capacity and AV1 encode/decode support, making it well suited for enterprise, cloud, and edge deployments.