
NVIDIA AI Foundry Builds Custom Llama 3.1 Generative AI Models for the World’s Enterprises

Rhea-AI Impact: Low
Rhea-AI Sentiment: Neutral
Tags: AI

NVIDIA has unveiled its AI Foundry service and NIM inference microservices to enhance generative AI for enterprises using the newly introduced Llama 3.1 models. This service allows companies to create custom 'supermodels' for specific industry use cases, leveraging proprietary and synthetic data. NVIDIA AI Foundry is powered by the DGX Cloud AI platform, offering scalable compute resources.

The announcement includes NIM microservices for Llama 3.1 models, providing up to 2.5x higher throughput for inference. Accenture is the first to adopt NVIDIA AI Foundry to build custom Llama 3.1 models. The service combines NVIDIA's software, infrastructure, and expertise with open community models and ecosystem support.

Positive
  • Introduction of AI Foundry service and NIM inference microservices for Llama 3.1 models
  • NIM microservices offer up to 2.5x higher throughput for inference
  • Partnership with Accenture as the first adopter of AI Foundry
  • Collaboration with Meta for Llama 3.1 model distillation recipe
  • Adoption of NIM microservices by industry leaders like Aramco, AT&T, and Uber
Negative
  • None.

Insights

NVIDIA's introduction of AI Foundry, leveraging the Llama 3.1 models, is a milestone for the technology sector. The ability to create custom 'supermodels' on the DGX Cloud AI platform gives enterprises substantial computational power that can be scaled as AI demands change. This is particularly significant for companies looking to integrate domain-specific AI applications into their operations. Synthetic data generated from the Llama 3.1 405B and NVIDIA Nemotron models enables highly customized solutions, helping boost model accuracy for specific business needs. Early adoption by giants such as Accenture, Aramco, AT&T and Uber signals broad industry trust and the potential for rapid deployment across sectors. Such integrations are likely to enhance operational efficiency and innovation, driving competitiveness in their respective markets.

From a financial perspective, NVIDIA's new AI offerings could drive significant revenue growth. The AI Foundry's capability to build custom generative AI models tailored to specific industry needs positions NVIDIA as a key player in the enterprise AI market. The strategic partnerships and early adoption by major corporations suggest a strong demand and potential for large-scale contracts. This will likely reflect positively on NVIDIA's financial performance, increasing investor confidence. The emphasis on scalability and the use of NVIDIA's DGX Cloud AI platform ensures that these solutions can cater to a broad range of enterprises, potentially leading to a diversified and stable revenue stream. Moreover, as more sectors like healthcare, energy and telecommunications begin integrating these technologies, NVIDIA could see expanded market penetration and long-term growth prospects.

The introduction of NVIDIA's AI Foundry and Llama 3.1 models marks a significant shift in the AI landscape. This development positions NVIDIA not just as a hardware provider but as a comprehensive AI solutions provider. By offering tools for creating custom, domain-specific AI models, NVIDIA taps into the growing demand for specialized AI applications. The adoption by entities like Accenture and others highlights the market's readiness and the anticipated rapid adoption across industries. This move can potentially disrupt current market dynamics, as enterprises might prefer NVIDIA's integrated solutions over piecemeal approaches from multiple vendors. For retail investors, this signifies a strategic diversification and strengthening of NVIDIA's market position, which could translate into sustained stock performance and potential appreciation in share value.

  • Enterprises and Nations Can Now Build ‘Supermodels’ With NVIDIA AI Foundry Using Their Own Data Paired With Llama 3.1 405B and NVIDIA Nemotron Models
  • NVIDIA AI Foundry Offers Comprehensive Generative AI Model Service Spanning Curation, Synthetic Data Generation, Fine-Tuning, Retrieval, Guardrails and Evaluation to Deploy Custom Llama 3.1 NVIDIA NIM Microservices With New NVIDIA NeMo Retriever Microservices for Accurate Responses
  • Accenture First to Use New Service to Build Custom Llama 3.1 Models for Clients; Aramco, AT&T, Uber and Other Industry Leaders Among First to Access New Llama NVIDIA NIM Microservices

SANTA CLARA, Calif., July 23, 2024 (GLOBE NEWSWIRE) -- NVIDIA today announced a new NVIDIA AI Foundry service and NVIDIA NIM™ inference microservices to supercharge generative AI for the world’s enterprises with the Llama 3.1 collection of openly available models, also introduced today.

With NVIDIA AI Foundry, enterprises and nations can now create custom “supermodels” for their domain-specific industry use cases using Llama 3.1 and NVIDIA software, computing and expertise. Enterprises can train these supermodels with proprietary data as well as synthetic data generated from Llama 3.1 405B and the NVIDIA Nemotron™ Reward model.

NVIDIA AI Foundry is powered by the NVIDIA DGX™ Cloud AI platform, which is co-engineered with the world’s leading public clouds, to give enterprises significant compute resources that easily scale as AI demands change.

The new offerings come at a time when enterprises, as well as nations developing sovereign AI strategies, want to build custom large language models with domain-specific knowledge for generative AI applications that reflect their unique business or culture.

“Meta’s openly available Llama 3.1 models mark a pivotal moment for the adoption of generative AI within the world’s enterprises,” said Jensen Huang, founder and CEO of NVIDIA. “Llama 3.1 opens the floodgates for every enterprise and industry to build state-of-the-art generative AI applications. NVIDIA AI Foundry has integrated Llama 3.1 throughout and is ready to help enterprises build and deploy custom Llama supermodels.”

“The new Llama 3.1 models are a super-important step for open source AI,” said Mark Zuckerberg, founder and CEO of Meta. “With NVIDIA AI Foundry, companies can easily create and customize the state-of-the-art AI services people want and deploy them with NVIDIA NIM. I’m excited to get this in people’s hands.”

To supercharge enterprise deployments of Llama 3.1 models for production AI, NVIDIA NIM inference microservices for Llama 3.1 models are now available for download from ai.nvidia.com. NIM microservices are the fastest way to deploy Llama 3.1 models in production and power up to 2.5x higher throughput than running inference without NIM.
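
For illustration, the sketch below shows what calling a locally deployed Llama 3.1 NIM microservice could look like, assuming the running container exposes an OpenAI-compatible API on port 8000; the port number and the model identifier are placeholders for this example, not documented defaults.

```python
# Minimal sketch: query a locally running Llama 3.1 NIM microservice through
# an assumed OpenAI-compatible endpoint. Port and model id are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # placeholder model id served by the container
    messages=[{"role": "user", "content": "Summarize our Q2 support tickets in three bullet points."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```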

Enterprises can pair Llama 3.1 NIM microservices with new NVIDIA NeMo Retriever NIM microservices to create state-of-the-art retrieval pipelines for AI copilots, assistants and digital human avatars.

Accenture Pioneers Custom Llama Supermodels for Enterprises With AI Foundry
Global professional services firm Accenture is first to adopt NVIDIA AI Foundry to build custom Llama 3.1 models using the Accenture AI Refinery™ framework, both for its own use as well as for clients seeking to deploy generative AI applications that reflect their culture, languages and industries.

“The world’s leading enterprises see how generative AI is transforming every industry and are eager to deploy applications powered by custom models,” said Julie Sweet, chair and CEO of Accenture. “Accenture has been working with NVIDIA NIM inference microservices for our internal AI applications, and now, using NVIDIA AI Foundry, we can help clients quickly create and deploy custom Llama 3.1 models to power transformative AI applications for their own business priorities.”

NVIDIA AI Foundry provides an end-to-end service for quickly building custom supermodels. It combines NVIDIA software, infrastructure and expertise with open community models, technology and support from the NVIDIA AI ecosystem.

With NVIDIA AI Foundry, enterprises can create custom models using Llama 3.1 models and the NVIDIA NeMo platform — including the NVIDIA Nemotron-4 340B Reward model, ranked first on the Hugging Face RewardBench.

Once custom models are created, enterprises can package them as NVIDIA NIM inference microservices and run them in production using their preferred MLOps and AIOps platforms, on their preferred cloud platforms and on NVIDIA-Certified Systems™ from global server manufacturers.

NVIDIA AI Enterprise experts and global system integrator partners work with AI Foundry customers to accelerate the entire process, from development to deployment.

NVIDIA Nemotron Powers Advanced Model Customization
Enterprises that need additional training data for creating a domain-specific model can use Llama 3.1 405B and Nemotron-4 340B together to generate synthetic data to boost model accuracy when creating custom Llama supermodels.
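
As an illustration of that workflow, the sketch below drafts responses with a large generator model and keeps only those a reward model scores highly; the endpoint URLs, model identifiers and the reward-scoring interface are assumptions made for this example, not NVIDIA's documented APIs.

```python
# Hedged sketch of synthetic-data generation with reward-model filtering:
# draft with a large generator, score with a reward model, keep high scorers.
import json
from openai import OpenAI

generator = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")  # assumed Llama 3.1 405B endpoint
reward = OpenAI(base_url="http://localhost:8002/v1", api_key="not-used")     # assumed Nemotron-4 340B Reward endpoint

prompts = ["Explain our refund policy to a customer.",
           "Draft a safety notice for field technicians."]
KEEP_THRESHOLD = 0.7  # arbitrary quality cutoff for this sketch

dataset = []
for prompt in prompts:
    draft = generator.chat.completions.create(
        model="meta/llama-3.1-405b-instruct",  # placeholder model id
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    # Ask the reward model to score the prompt/response pair. Real reward-model
    # APIs differ; this call shape and numeric-text reply are assumptions.
    score_text = reward.chat.completions.create(
        model="nvidia/nemotron-4-340b-reward",  # placeholder model id
        messages=[{"role": "user", "content": prompt},
                  {"role": "assistant", "content": draft}],
    ).choices[0].message.content
    try:
        score = float(score_text.strip().split()[0])
    except (ValueError, IndexError):
        score = 0.0

    if score >= KEEP_THRESHOLD:
        dataset.append({"prompt": prompt, "response": draft, "score": score})

with open("synthetic_finetune_data.jsonl", "w") as f:
    f.writelines(json.dumps(row) + "\n" for row in dataset)
```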

Customers that have their own training data can customize Llama 3.1 models with NVIDIA NeMo for domain-adaptive pretraining, or DAPT, to further increase model accuracy.
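
A minimal sketch of the DAPT idea follows: continuing the causal-LM objective on a proprietary domain corpus. It uses Hugging Face Transformers purely as a stand-in for illustration; NeMo's actual DAPT workflow and APIs differ, and the checkpoint and file names here are placeholders.

```python
# Sketch of domain-adaptive pretraining: continue next-token training on
# in-domain text. Illustrative only; not the NeMo DAPT recipe.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "meta-llama/Llama-3.1-8B"  # placeholder base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# "domain_corpus.txt" stands in for an enterprise's own documents.
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
tokenized = corpus.map(lambda b: tokenizer(b["text"], truncation=True, max_length=2048),
                       batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama31-dapt", per_device_train_batch_size=1,
                           gradient_accumulation_steps=16, num_train_epochs=1,
                           learning_rate=1e-5, bf16=True),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # continued pretraining on the domain corpus
```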

NVIDIA and Meta have also teamed to provide a distillation recipe for Llama 3.1 that developers can use to build smaller custom Llama 3.1 models for generative AI applications. This enables enterprises to run Llama-powered AI applications on a broader range of accelerated infrastructure, such as AI workstations and laptops.
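
The fragment below sketches the general idea behind such distillation, blending a soft-target KL term against a teacher's logits with the usual next-token cross-entropy; it is an illustrative PyTorch loss, not the specific NVIDIA and Meta recipe.

```python
# Generic knowledge-distillation loss: a smaller student is trained to match a
# larger teacher's output distribution while still fitting the ground-truth tokens.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend soft-target KL divergence with standard next-token cross-entropy."""
    # Soften both distributions; the T^2 factor keeps gradient scale comparable.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kl = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (temperature ** 2)

    ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)), labels.view(-1))
    return alpha * kl + (1.0 - alpha) * ce

# Example with random tensors standing in for one training batch.
vocab, batch, seq = 128, 2, 8
student = torch.randn(batch, seq, vocab, requires_grad=True)
teacher = torch.randn(batch, seq, vocab)
labels = torch.randint(0, vocab, (batch, seq))
loss = distillation_loss(student, teacher, labels)
loss.backward()
```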

Industry-Leading Enterprises Supercharge AI With NVIDIA and Llama
Companies across healthcare, energy, financial services, retail, transportation and telecommunications are already working with NVIDIA NIM microservices for Llama. Among the first to access the new NIM microservices for Llama 3.1 are Aramco, AT&T and Uber.

Trained on over 16,000 NVIDIA H100 Tensor Core GPUs and optimized for NVIDIA accelerated computing and software — in the data center, in the cloud and locally on workstations with NVIDIA RTX™ GPUs or PCs with GeForce RTX GPUs — the Llama 3.1 collection of multilingual LLMs comprises generative AI models in 8B-, 70B- and 405B-parameter sizes.

New NeMo Retriever RAG Microservices Boost Accuracy and Performance
Using new NVIDIA NeMo Retriever NIM inference microservices for retrieval-augmented generation (RAG), organizations can enhance response accuracy when deploying customized Llama supermodels and Llama NIM microservices in production.

Combined with NVIDIA NIM inference microservices for Llama 3.1 405B, NeMo Retriever NIM microservices deliver the highest open and commercial text Q&A retrieval accuracy for RAG pipelines.
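
As a rough illustration of such a pipeline, the sketch below embeds a handful of documents with a retriever endpoint, selects the best match for a query, and passes it as context to a Llama 3.1 NIM microservice; the endpoint URLs, model identifiers and OpenAI-compatible API shape are assumptions made for this example.

```python
# Minimal RAG sketch: retriever embeddings for lookup, Llama 3.1 NIM for generation.
import numpy as np
from openai import OpenAI

embed_client = OpenAI(base_url="http://localhost:8001/v1", api_key="not-used")  # assumed retriever endpoint
llm_client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")    # assumed Llama 3.1 NIM endpoint

documents = [
    "The warranty covers parts and labor for 24 months.",
    "Support tickets are answered within one business day.",
]

def embed(texts):
    resp = embed_client.embeddings.create(model="nvidia/nv-embedqa-e5-v5", input=texts)  # placeholder model id
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def answer(question: str) -> str:
    q_vec = embed([question])[0]
    # Cosine similarity against the tiny in-memory store; a vector database would replace this in production.
    scores = doc_vectors @ q_vec / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec))
    context = documents[int(scores.argmax())]
    completion = llm_client.chat.completions.create(
        model="meta/llama-3.1-8b-instruct",  # placeholder model id
        messages=[
            {"role": "system", "content": f"Answer using only this context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content

print(answer("How long is the warranty?"))
```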

Enterprise Ecosystem Ready to Power Llama 3.1 and NeMo Retriever NIM Deployments
Hundreds of NVIDIA NIM partners providing enterprise, data and infrastructure platforms can now integrate the new microservices in their AI solutions to supercharge generative AI for the NVIDIA community of more than 5 million developers and 19,000 startups.

Production support for Llama 3.1 NIM and NeMo Retriever NIM microservices is available through NVIDIA AI Enterprise. Members of the NVIDIA Developer Program will soon be able to access NIM microservices for free for research, development and testing on their preferred infrastructure.

About NVIDIA
NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

For further information, contact:
Natalie Hereth
NVIDIA Corporation
+1-360-581-1088
nhereth@nvidia.com

Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, performance, features, and availability of NVIDIA’s products and technologies, including NVIDIA AI Foundry, NVIDIA Nemotron models, NVIDIA Nemotron-4 models, NVIDIA DGX Cloud, NVIDIA NeMo Retriever NIM microservices, NVIDIA NeMo platform, NVIDIA-Certified Systems, NVIDIA Tensor Core GPUs, NVIDIA RTX GPUs and GeForce RTX GPUs; third parties’ use or adoption of NVIDIA products, technologies and platforms, and the benefits and impacts thereof; our collaboration with third parties and the benefits and impacts thereof; Llama 3.1 opening the floodgates for every enterprise and industry to build state-of-the-art generative AI applications; and NVIDIA AI Foundry being ready to help enterprises build and deploy custom Llama supermodels are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements hereto are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

© 2024 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, DGX, NVIDIA-Certified Systems, NVIDIA Nemotron, NVIDIA NIM and NVIDIA RTX are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/f5da35f5-cf1b-4848-8df8-0972343438af


FAQ

What is NVIDIA's new AI Foundry service for Llama 3.1 models?

NVIDIA's AI Foundry is a comprehensive service that allows enterprises to create custom 'supermodels' using Llama 3.1 models. It combines NVIDIA's software, infrastructure, and expertise to help businesses build domain-specific AI models for their unique use cases.

How do NVIDIA NIM microservices enhance Llama 3.1 model performance?

NVIDIA NIM microservices for Llama 3.1 models offer up to 2.5x higher throughput for inference compared to running inference without NIM. This allows for faster and more efficient deployment of Llama 3.1 models in production environments.

Which company is the first to adopt NVIDIA AI Foundry for Llama 3.1 models?

Accenture is the first company to adopt NVIDIA AI Foundry to build custom Llama 3.1 models, both for its own use and for clients seeking to deploy generative AI applications tailored to their specific needs.

What are the key features of NVIDIA's new NeMo Retriever NIM microservices?

NVIDIA's NeMo Retriever NIM microservices enhance response accuracy when deploying customized Llama supermodels. When combined with NIM inference microservices for Llama 3.1 405B, they deliver the highest open and commercial text Q&A retrieval accuracy for RAG pipelines.
