
AMD Showcases Growing Momentum for AMD Powered AI Solutions from the Data Center to PCs

Rhea-AI Summary
AMD (NASDAQ: AMD) launches the new AMD Instinct MI300X and MI300A data center AI accelerators for training and inference. The company also introduces the ROCm 6 software stack, with significant performance optimizations and new features for Large Language Models, as well as Ryzen 8040 Series notebook processors for AI PCs. AMD is collaborating with industry leaders including Microsoft, Dell Technologies, HPE, Lenovo, Meta, Oracle, Supermicro and others to deliver advanced AI solutions across cloud, enterprise and PCs.

— Microsoft, Dell Technologies, HPE, Lenovo, Meta, Oracle, Supermicro and others adopt new AMD Instinct MI300X and MI300A data center AI accelerators for training and inference solutions —

— AMD also launches the ROCm 6 software stack, with significant performance optimizations and new features for Large Language Models, as well as Ryzen 8040 Series notebook processors for AI PCs —

SAN JOSE, Calif., Dec. 06, 2023 (GLOBE NEWSWIRE) -- Today at the “Advancing AI” event, AMD (NASDAQ: AMD) was joined by industry leaders including Microsoft, Meta, Oracle, Dell Technologies, HPE, Lenovo, Supermicro, Arista, Broadcom and Cisco to showcase how these companies are working with AMD to deliver advanced AI solutions spanning from cloud to enterprise and PCs. AMD launched multiple new products at the event, including the AMD Instinct MI300 Series data center AI accelerators, ROCm™ 6 open software stack with significant optimizations and new features supporting Large Language Models (LLMs) and Ryzen™ 8040 Series processors with Ryzen AI.

“AI is the future of computing and AMD is uniquely positioned to power the end-to-end infrastructure that will define this AI era, from massive cloud installations to enterprise clusters and AI-enabled intelligent embedded devices and PCs,” said AMD Chair and CEO Dr. Lisa Su. “We are seeing very strong demand for our new Instinct MI300 GPUs, which are the highest-performance accelerators in the world for generative AI. We are also building significant momentum for our data center AI solutions with the largest cloud companies, the industry’s top server providers, and the most innovative AI startups, who we are working closely with to rapidly bring Instinct MI300 solutions to market that will dramatically accelerate the pace of innovation across the entire AI ecosystem1.”

Advancing Data Center AI from the Cloud to Enterprise Data Centers and Supercomputers
AMD was joined by multiple partners during the event to highlight the strong adoption and growing momentum for the AMD Instinct data center AI accelerators.

  • Microsoft detailed how it is deploying AMD Instinct MI300X accelerators to power the new Azure ND MI300x v5 Virtual Machine (VM) series optimized for AI workloads.
  • Meta shared that the company is adding AMD Instinct MI300X accelerators to its data centers in combination with ROCm 6 to power AI inferencing workloads and recognized the ROCm 6 optimizations AMD has done on the Llama 2 family of models.
  • Oracle unveiled plans to offer OCI bare metal compute solutions featuring AMD Instinct MI300X accelerators as well as plans to include AMD Instinct MI300X accelerators in their upcoming generative AI service.
  • The largest data center infrastructure providers announced plans to integrate AMD Instinct MI300 accelerators across their product portfolios. Dell announced the integration of AMD Instinct MI300X accelerators with their PowerEdge XE9680 server solution to deliver groundbreaking performance for generative AI workloads in a modular and scalable format for customers. HPE announced plans to bring AMD Instinct MI300 accelerators to its enterprise and HPC offerings. Lenovo shared plans to bring AMD Instinct MI300X accelerators to the Lenovo ThinkSystem platform to deliver AI solutions across industries including retail, manufacturing, financial services and healthcare. Supermicro announced plans to offer AMD Instinct MI300 GPUs across their AI solutions portfolio. Asus, Gigabyte, Ingrasys, Inventec, QCT, Wistron and Wiwynn also all plan to offer solutions powered by AMD Instinct MI300 accelerators.
  • Specialized AI cloud providers including Aligned, Arkon Energy, Cirrascale, Crusoe, Denvr Dataworks and Tensorwaves all plan to provide offerings that will expand access to AMD Instinct MI300X GPUs for developers and AI startups.

Bringing an Open, Proven and Ready AI Software Platform to Market
AMD highlighted significant progress expanding the software ecosystem supporting AMD Instinct data center accelerators.

  • AMD unveiled the latest version of the open-source software stack for AMD Instinct GPUs, ROCm 6, which has been optimized for generative AI, particularly large language models. ROCm 6 adds support for new data types, advanced graph and kernel optimizations, optimized libraries and state-of-the-art attention algorithms, which together with the MI300X deliver an approximately 8x improvement in overall text-generation latency on Llama 2 compared to ROCm 5 running on the MI250.2
  • Databricks, Essential AI and Lamini, three AI startups building emerging models and AI solutions, joined AMD on stage to discuss how they’re leveraging AMD Instinct MI300X accelerators and the open ROCm 6 software stack to deliver differentiated AI solutions for enterprise customers.
  • OpenAI is adding support for AMD Instinct accelerators to Triton 3.0, providing out-of-the-box support for AMD accelerators that will allow developers to work at a higher level of abstraction on AMD hardware.
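To illustrate what that higher level of abstraction looks like in practice, here is a minimal, generic Triton kernel. It is a sketch for illustration only, not AMD or OpenAI code, and it assumes a Triton build with ROCm support plus a ROCm-enabled PyTorch install (which exposes AMD GPUs through the usual torch device interface):

```python
# Minimal Triton kernel: element-wise vector addition.
# Illustrative sketch only; assumes Triton built with ROCm/HIP support and a
# ROCm-enabled PyTorch install so the AMD GPU is visible as a torch device.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                       # index of this program instance
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                       # guard the final partial block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)                    # one program per 1024 elements
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

The point of the announcement is that kernels written at this level need no vendor-specific code; with AMD support in Triton, the compiler handles the hardware-specific code generation for Instinct accelerators.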

Read here for more information about AMD Instinct MI300 Series accelerators, ROCm 6 and the growing ecosystem of AMD-powered AI solutions.

Continued Leadership in Advancing AI PCs
With millions of AI PCs shipped to date, AMD announced its latest leadership mobile processors, the AMD Ryzen 8040 Series, which deliver robust AI compute capability. AMD also launched Ryzen AI 1.0 Software, a software stack that enables developers to easily deploy apps that use pretrained models to add AI capabilities to Windows applications. AMD also disclosed that the upcoming next-gen “Strix Point” CPUs, planned to launch in 2024, will include the AMD XDNA™ 2 architecture, designed to deliver more than a 3x increase in AI compute performance compared to the prior generation3 and enable new generative AI experiences. Microsoft also joined the event to discuss how it is working closely with AMD on future AI experiences for Windows PCs.
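As a rough illustration of that deployment flow, the sketch below runs a pretrained ONNX model through ONNX Runtime. It is not AMD's documented sample: it assumes the Ryzen AI software stack provides ONNX Runtime with the Vitis AI execution provider, and "model.onnx" is a placeholder for an actual pretrained model.

```python
# Minimal sketch: running a pretrained ONNX model on a Ryzen AI PC.
# Assumptions: the Ryzen AI software stack is installed and provides
# onnxruntime with the Vitis AI execution provider; "model.onnx" is a
# placeholder for a real pretrained model with a 1x3x224x224 float input.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],  # CPU fallback
)

input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example image batch
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```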

Supporting Resources

  • Watch the full AMD Advancing AI Keynote
  • Read more about the announcements made during Advancing AI here
  • Learn more about the AMD 30x25 energy efficiency initiative here
  • Follow AMD on X
  • Connect with AMD on LinkedIn

About AMD
For more than 50 years AMD has driven innovation in high-performance computing, graphics and visualization technologies. Billions of people, leading Fortune 500 businesses and cutting-edge scientific research institutions around the world rely on AMD technology daily to improve how they live, work and play. AMD employees are focused on building leadership high-performance and adaptive products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, LinkedIn and X pages.

Cautionary Statement
This press release contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the features, functionality, performance, availability, timing, expected benefits of, and expected demand for AMD products and technology including the AMD Instinct™ MI300 Series data center AI accelerators, Ryzen™ 8040 Series processors, and ROCm™ 6 open software stack, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as "would," "may," "expects," "believes," "plans," "intends," "projects" and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this press release are based on current beliefs, assumptions and expectations, speak only as of the date of this press release and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. Material factors that could cause actual results to differ materially from current expectations include, without limitation, the following: Intel Corporation’s dominance of the microprocessor market and its aggressive business practices; economic uncertainty; cyclical nature of the semiconductor industry; market conditions of the industries in which AMD products are sold; loss of a significant customer; impact of the COVID-19 pandemic on AMD’s business, financial condition and results of operations; competitive markets in which AMD’s products are sold; quarterly and seasonal sales patterns; AMD's ability to adequately protect its technology or other intellectual property; unfavorable currency exchange rate fluctuations; ability of third party manufacturers to manufacture AMD's products on a timely basis in sufficient quantities and using competitive technologies; availability of essential equipment, materials, substrates or manufacturing processes; ability to achieve expected manufacturing yields for AMD’s products; AMD's ability to introduce products on a timely basis with expected features and performance levels; AMD's ability to generate revenue from its semi-custom SoC products; potential security vulnerabilities; potential security incidents including IT outages, data loss, data breaches and cyber-attacks; potential difficulties in operating AMD’s newly upgraded enterprise resource planning system; uncertainties involving the ordering and shipment of AMD’s products; AMD’s reliance on third-party intellectual property to design and introduce new products in a timely manner; AMD's reliance on third-party companies for design, manufacture and supply of motherboards, software, memory and other computer platform components; AMD's reliance on Microsoft and other software vendors' support to design and develop software to run on AMD’s products; AMD’s reliance on third-party distributors and add-in-board partners; impact of modification or interruption of AMD’s internal business processes and information systems; compatibility of AMD’s products with some or all industry-standard software and hardware; costs related to defective products; efficiency of AMD's supply chain; AMD's ability to rely on third party 
supply-chain logistics functions; AMD’s ability to effectively control sales of its products on the gray market; impact of government actions and regulations such as export regulations, tariffs and trade protection measures; AMD’s ability to realize its deferred tax assets; potential tax liabilities; current and future claims and litigation; impact of environmental laws, conflict minerals-related provisions and other laws or regulations; impact of acquisitions, joint ventures and/or investments on AMD’s business and AMD’s ability to integrate acquired businesses;  impact of any impairment of the combined company’s assets; restrictions imposed by agreements governing AMD’s notes, the guarantees of Xilinx’s notes and the revolving credit facility; AMD's indebtedness; AMD's ability to generate sufficient cash to meet its working capital requirements or generate sufficient revenue and operating cash flow to make all of its planned R&D or strategic investments; political, legal and economic risks and natural disasters; future impairments of technology license purchases; AMD’s ability to attract and retain qualified personnel; and AMD’s stock price volatility. Investors are urged to review in detail the risks and uncertainties in AMD’s Securities and Exchange Commission filings, including but not limited to AMD’s most recent reports on Forms 10-K and 10-Q.

AMD, the AMD Arrow logo, AMD Instinct, Ryzen and combinations thereof, are trademarks of Advanced Micro Devices, Inc. Other names are for informational purposes only and may be trademarks of their respective owners.

____________________________

1 Measurements conducted by AMD Performance Labs as of November 11th, 2023 on the AMD Instinct™ MI300X (750W) GPU designed with AMD CDNA™ 3 5nm | 6nm FinFET process technology at 2,100 MHz peak boost engine clock resulted in 163.4 TFLOPs peak theoretical double precision Matrix (FP64 Matrix), 81.7 TFLOPs peak theoretical double precision (FP64), 163.4 TFLOPs peak theoretical single precision Matrix (FP32 Matrix), 163.4 TFLOPs peak theoretical single precision (FP32), 653.7 TFLOPs peak theoretical TensorFloat-32 (TF32), 1,307.4 TFLOPs peak theoretical half precision (FP16), 1,307.4 TFLOPs peak theoretical Bfloat16 format precision (BF16), 2,614.9 TFLOPs peak theoretical 8-bit precision (FP8), and 2,614.9 TOPs peak theoretical INT8 performance.
Published results on the Nvidia H100 SXM (80GB) GPU resulted in 66.9 TFLOPs peak theoretical double precision tensor (FP64 Tensor), 33.5 TFLOPs peak theoretical double precision (FP64), 66.9 TFLOPs peak theoretical single precision (FP32), 494.7 TFLOPs peak TensorFloat-32 (TF32)*, 989.4 TFLOPs peak theoretical half precision tensor (FP16 Tensor), 133.8 TFLOPs peak theoretical half precision (FP16), 989.4 TFLOPs peak theoretical Bfloat16 tensor format precision (BF16 Tensor), 133.8 TFLOPs peak theoretical Bfloat16 format precision (BF16), 1,978.9 TFLOPs peak theoretical 8-bit precision (FP8), and 1,978.9 TOPs peak theoretical INT8 performance.
Nvidia H100 source:
https://resources.nvidia.com/en-us-tensor-core/

* Nvidia H100 GPUs don’t support FP32 Tensor.
MI300-18
2 Text generated with Llama2-70b chat using an input sequence length of 4096 and 32 output tokens, compared using a custom docker container for each system, based on AMD internal testing as of 11/17/2023. Configurations: 2P Intel Xeon Platinum CPU server using 4x AMD Instinct™ MI300X (192GB, 750W) GPUs, ROCm® 6.0 pre-release, PyTorch 2.2.0, vLLM for ROCm, Ubuntu® 22.04.2, vs. 2P AMD EPYC 7763 CPU server using 4x AMD Instinct™ MI250 (128 GB HBM2e, 560W) GPUs, ROCm® 5.4.3, PyTorch 2.0.0, HuggingFace Transformers 4.35.0, Ubuntu 22.04.6.
Four GPUs on each system were used in this test. Server manufacturers may vary configurations, yielding different results. Performance may vary based on use of latest drivers and optimizations. MI300-33
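For reference, a minimal vLLM invocation approximating the shape of this comparison might look like the sketch below. This is not AMD's benchmark harness; it assumes a vLLM build for ROCm and access to the meta-llama/Llama-2-70b-chat-hf weights.

```python
# Minimal sketch of Llama 2 70B chat text generation with vLLM.
# Not AMD's benchmark harness; assumes vLLM built for ROCm and access
# to the meta-llama/Llama-2-70b-chat-hf weights.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-70b-chat-hf", tensor_parallel_size=4)  # 4 GPUs
params = SamplingParams(max_tokens=32)  # mirrors the 32-output-token comparison
outputs = llm.generate(["Explain what an AI accelerator does."], params)
print(outputs[0].outputs[0].text)
```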

3 An AMD Ryzen “Strix Point” processor is projected to offer 3x faster NPU performance for AI workloads when compared to an AMD Ryzen 7040 Series processor. Performance projection by AMD engineering staff. Engineering projections are not a guarantee of final performance. Specific projections are based on reference design platforms and are subject to change when final products are released in market. STX-01.

Contact:
Brandi Martina
AMD Communications
(512) 705-1720
Brandi.Martina@AMD.com

Suresh Bhaskaran
AMD Investor Relations
(408) 749-2845
Suresh.Bhaskaran@amd.com


FAQ

What new products did AMD launch at the 'Advancing AI' event?

AMD launched the AMD Instinct MI300 Series data center AI accelerators, ROCm 6 open software stack, and Ryzen 8040 Series processors with Ryzen AI.

Which companies are collaborating with AMD for advanced AI solutions?

Industry leaders like Microsoft, Dell Technologies, HPE, Lenovo, Meta, Oracle, Supermicro, Arista, Broadcom, and Cisco are working with AMD to deliver advanced AI solutions.

What are the plans of Microsoft, Meta, and Oracle regarding the deployment of AMD Instinct MI300X accelerators?

Microsoft is deploying AMD Instinct MI300X accelerators for the new Azure ND MI300x v5 Virtual Machine (VM) series, Meta is adding these accelerators to its data centers for AI inferencing workloads, and Oracle plans to offer OCI bare metal compute solutions featuring AMD Instinct MI300X accelerators.

What are the plans of Dell, HPE, and Lenovo regarding the integration of AMD Instinct MI300X accelerators?

Dell, HPE, and Lenovo plan to integrate AMD Instinct MI300X accelerators into their server solutions and platforms to deliver groundbreaking performance for AI workloads.

What are the main features of the ROCm 6 software stack?

ROCm 6 has been optimized for generative AI, particularly large language models, and boasts support for new data types, advanced graph and kernel optimizations, optimized libraries, and state-of-the-art attention algorithms.

Which AI startups are leveraging AMD Instinct MI300X accelerators and the ROCm 6 software stack?

Databricks, Essential AI, and Lamini are leveraging these technologies to deliver differentiated AI solutions for enterprise customers.
