Nutanix Extends AI Platform to Public Cloud
Nutanix (NTNX) has expanded its AI infrastructure platform with Nutanix Enterprise AI (NAI), a cloud-native solution deployable on any Kubernetes platform across edge, data centers, and major public cloud services. The offering enables organizations to run generative AI workloads with NVIDIA NIM optimization, supporting large language model deployment in minutes.
NAI provides a consistent multicloud operating model with transparent, resource-based pricing, avoiding token-based costs. The platform includes security features, role-based access controls, and supports both on-premises and public cloud deployments through AWS EKS, Azure AKS, and Google GKE.
- Launch of new cloud-native AI infrastructure platform with flexible deployment options
- Partnership with NVIDIA for optimized AI performance
- Transparent resource-based pricing model versus token-based pricing
- Enhanced security features with role-based access controls
- Quick deployment capability of LLM inference endpoints
Insights
This cloud-native AI infrastructure expansion marks a significant strategic move for Nutanix. The new Enterprise AI platform addresses three critical market needs: hybrid cloud flexibility, simplified AI deployment and predictable cost structures.
The partnership with NVIDIA and the integration of NIM microservices position Nutanix competitively in the enterprise AI infrastructure market. Key differentiators include multi-cloud deployment capabilities, support for air-gapped environments, and transparent resource-based pricing versus token-based models.
The platform's ability to run on major cloud providers (AWS, Azure, Google) while maintaining operational consistency with on-premises deployments could significantly reduce complexity for enterprises adopting AI. The integration with Hugging Face's open models also provides valuable flexibility for customers exploring different AI implementation strategies.
This product launch strengthens Nutanix's position in the rapidly growing AI infrastructure market. The timing is strategic, as enterprises are actively seeking solutions that bridge the gap between cloud and on-premises AI deployments while maintaining security and cost control.
The emphasis on simplified management and predictable pricing addresses major pain points for enterprises adopting AI technology. The platform's ability to support both edge and cloud deployments, combined with NVIDIA's enterprise-grade technology, could drive significant revenue growth as organizations scale their AI initiatives.
Nutanix Enterprise AI provides an easy-to-use, unified generative AI experience on-premises, at the edge, and now in public clouds
SALT LAKE CITY, Nov. 12, 2024 (GLOBE NEWSWIRE) -- KubeCon – Nutanix (NASDAQ: NTNX), a leader in hybrid multicloud computing, today announced that it has extended its AI infrastructure platform with a new cloud native offering, Nutanix Enterprise AI (NAI), that can be deployed on any Kubernetes platform, at the edge, in core data centers, and on public cloud services like AWS EKS, Azure AKS, and Google GKE. The NAI offering delivers a consistent hybrid multicloud operating model for accelerated AI workloads, enabling organizations to leverage their models and data in a secure location of their choice while improving return on investment (ROI). Leveraging NVIDIA NIM for optimized performance of foundation models, Nutanix Enterprise AI helps organizations securely deploy, run, and scale inference endpoints for large language models (LLMs) to support the deployment of generative AI (GenAI) applications in minutes, not days or weeks.
Generative AI is an inherently hybrid workload, with new applications often built in the public cloud, fine-tuning of models using private data occurring on-premises, and inferencing deployed closest to the business logic, which could be at the edge, on-premises or in the public cloud. This distributed hybrid GenAI workflow can present challenges for organizations concerned about complexity, data privacy, security, and cost.
Nutanix Enterprise AI provides a consistent multicloud operating model and a simple way to securely deploy, scale, and run LLMs with NVIDIA NIM optimized inference microservices as well as open foundation models from Hugging Face. This enables customers to stand up enterprise GenAI infrastructure with the resiliency, day 2 operations, and security they require for business-critical applications, on-premises or on AWS Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE).
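To make the inference-endpoint claim concrete, the following is a minimal sketch of what calling such an endpoint might look like once it has been deployed. The URL, model name, and API key below are placeholders rather than values from this release; the only assumption is that the endpoint exposes an OpenAI-compatible chat completions API, which NVIDIA NIM microservices generally provide.

```python
# Minimal sketch: querying a deployed LLM inference endpoint.
# Assumptions (not from the press release): endpoint URL, model name, and API key
# are hypothetical; the endpoint is assumed to speak the OpenAI-compatible
# /v1/chat/completions protocol that NIM microservices typically expose.
import requests

ENDPOINT = "https://nai-endpoint.example.internal/v1/chat/completions"  # hypothetical
API_KEY = "REPLACE_WITH_ENDPOINT_KEY"  # hypothetical credential issued by the platform

payload = {
    "model": "meta/llama-3.1-8b-instruct",  # example model identifier
    "messages": [
        {"role": "system", "content": "You summarize customer feedback."},
        {"role": "user", "content": "Summarize: 'Setup was fast, but the docs were thin.'"},
    ],
    "max_tokens": 128,
}

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```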
Additionally, Nutanix Enterprise AI delivers a transparent and predictable pricing model based on infrastructure resources, which is important for customers looking to maximize ROI from their GenAI investments. This is in contrast to hard-to-predict usage or token-based pricing.
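As a purely illustrative comparison (all figures below are hypothetical and are not Nutanix pricing), the arithmetic behind this distinction is simple: a resource-based bill is fixed by the infrastructure reserved, while a token-based bill moves with usage and is therefore harder to forecast.

```python
# Illustrative-only cost arithmetic; every number here is hypothetical.
GPU_HOURS_PER_MONTH = 2 * 24 * 30          # two reserved GPUs, always on
RESOURCE_RATE_PER_GPU_HOUR = 3.00          # hypothetical flat infrastructure rate

TOKEN_RATE_PER_1K = 0.002                  # hypothetical per-1K-token rate
TOKENS_PER_REQUEST = 1_500
monthly_request_scenarios = [5_000_000, 20_000_000, 60_000_000]  # usage can swing widely

resource_cost = GPU_HOURS_PER_MONTH * RESOURCE_RATE_PER_GPU_HOUR
print(f"Resource-based (fixed): ${resource_cost:,.0f}/month")

for requests_per_month in monthly_request_scenarios:
    token_cost = requests_per_month * TOKENS_PER_REQUEST / 1_000 * TOKEN_RATE_PER_1K
    print(f"Token-based at {requests_per_month:,} requests: ${token_cost:,.0f}/month")
```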
Nutanix Enterprise AI is a component of Nutanix GPT-in-a-Box 2.0. GPT-in-a-Box also includes Nutanix Cloud Infrastructure, Nutanix Kubernetes Platform, and Nutanix Unified Storage along with services to support customer configuration and sizing needs for on-premises training and inferencing. For customers looking to deploy in public cloud, Nutanix Enterprise AI can be deployed in any Kubernetes environment but is operationally consistent with on-premises deployments.
“With Nutanix Enterprise AI, we're helping our customers simply and securely run GenAI applications on-premises or in public clouds. Nutanix Enterprise AI can run on any Kubernetes platform and allows their AI applications to run in their secure location, with a predictable cost model,” said Thomas Cornely, SVP, Product Management, Nutanix.
Nutanix Enterprise AI can be deployed with the NVIDIA full-stack AI platform and is validated with the NVIDIA AI Enterprise software platform, including NVIDIA NIM, a set of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing. Nutanix GPT-in-a-Box is also an NVIDIA-Certified System, ensuring reliable performance.
"Generative AI workloads are inherently hybrid, with training, customization, and inference occurring across public clouds, on-premises systems, and edge locations," said Justin Boitano, vice president of enterprise AI at NVIDIA. "Integrating NVIDIA NIM into Nutanix Enterprise AI provides a consistent multicloud model with secure APIs, enabling customers to deploy AI across diverse environments with the high performance and security needed for business-critical applications."
Nutanix Enterprise AI can help customers:
- Address AI skill shortages. Simplicity, choice, and built-in features mean IT admins can become AI admins, while data scientists and developers accelerate AI development by quickly adopting the latest models and NVIDIA accelerated computing.
- Remove barriers to building an AI-ready platform. Many organizations looking to adopt GenAI struggle to build the right platform to support AI workloads, including maintaining consistency across their on-premises infrastructure and multiple public clouds. Nutanix Enterprise AI addresses this with a simple UI-driven workflow that can help customers deploy and test LLM inference endpoints in minutes, and offers choice through support for NVIDIA NIM microservices, which run anywhere and ensure optimized model performance across cloud and on-premises environments. Hugging Face and other model standards are also supported. Additionally, native integration with the Nutanix Kubernetes Platform lets customers leverage the entire Nutanix Cloud Platform or run on any Kubernetes runtime, including AWS EKS, Azure AKS, or Google Cloud GKE with NVIDIA accelerated computing.
- Mitigate data privacy and security concerns. Privacy and security risk mitigation is built into Nutanix Enterprise AI, which enables customers to run models and data on compute resources they control. Additionally, Nutanix Enterprise AI delivers an intuitive dashboard for troubleshooting, observability, and resource utilization for LLMs, as well as quick and secure role-based access controls (RBAC) that keep LLM access controlled and understood (see the sketch after this list). Organizations requiring hardened security will also be able to deploy in air-gapped or dark-site environments.
- Bring enterprise infrastructure to GenAI workloads. Customers running Nutanix Cloud Platform for business-critical applications can now bring the same resiliency, Day 2 operations, and security to GenAI workloads for an enterprise infrastructure experience.
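The release describes NAI's RBAC as managed through its own dashboard; the sketch below is not that interface. It is a generic analogue using standard Kubernetes RBAC through the official Python client, shown only to illustrate the kind of scoped, role-based access an administrator might grant around an inference endpoint's resources. The namespace, role, and group names are hypothetical.

```python
# Generic Kubernetes RBAC sketch (hypothetical names; not the NAI dashboard workflow).
# Grants a "data-scientists" group read-only visibility into the namespace hosting
# an LLM inference endpoint, without permission to modify or delete it.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
rbac_api = client.RbacAuthorizationV1Api()

NAMESPACE = "genai-endpoints"  # hypothetical namespace for deployed endpoints

role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "llm-endpoint-viewer", "namespace": NAMESPACE},
    "rules": [
        {"apiGroups": [""], "resources": ["pods", "services"], "verbs": ["get", "list", "watch"]},
        {"apiGroups": ["apps"], "resources": ["deployments"], "verbs": ["get", "list", "watch"]},
    ],
}

binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "llm-endpoint-viewer-binding", "namespace": NAMESPACE},
    "subjects": [
        {"kind": "Group", "name": "data-scientists", "apiGroup": "rbac.authorization.k8s.io"}
    ],
    "roleRef": {
        "kind": "Role",
        "name": "llm-endpoint-viewer",
        "apiGroup": "rbac.authorization.k8s.io",
    },
}

rbac_api.create_namespaced_role(namespace=NAMESPACE, body=role)
rbac_api.create_namespaced_role_binding(namespace=NAMESPACE, body=binding)
```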
Key use cases for customers leveraging Nutanix Enterprise AI include:
- Enhancing customer experience with GenAI through analysis of customer feedback and documents
- Accelerating code and content creation by leveraging co-pilots and intelligent document processing
- Fine-tuning models on domain-specific data to accelerate code and content generation
- Strengthening security, including leveraging AI models for fraud detection, threat detection, alert enrichment, and automatic policy creation
- Improving analytics by leveraging fine-tuned models on private data
Nutanix Enterprise AI, running on-premises, at the edge or in public cloud, and Nutanix GPT-in-a-Box 2.0 are currently available to customers. For more information, please visit Nutanix.com/enterprise-ai.
Supporting Quotes:
- "Thanks to the deep collaboration between the Nutanix and Hugging Face teams, customers of Nutanix Enterprise AI are able to seamlessly deploy the most popular open models in an easy to use, fully tested stack – now also on public clouds," said Jeff Boudier, Head of Product at Hugging Face.
- "By providing a consistent experience from the enterprise to public cloud, Nutanix Enterprise AI aims to provide a user-friendly infrastructure platform to support organizations at every step of their AI journey, from public cloud to the edge," said Dave Pearson, Infrastructure Research VP at IDC.
About Nutanix
Nutanix is a global leader in cloud software, offering organizations a single platform for running applications and managing data, anywhere. With Nutanix, companies can reduce complexity and simplify operations, freeing them to focus on their business outcomes. Building on its legacy as the pioneer of hyperconverged infrastructure, Nutanix is trusted by companies worldwide to power hybrid multicloud environments consistently, simply, and cost-effectively. Learn more at www.nutanix.com or follow us on social media @nutanix.
© 2024 Nutanix, Inc. All rights reserved. Nutanix, the Nutanix logo, and all Nutanix product and service names mentioned herein are registered trademarks or unregistered trademarks of Nutanix, Inc. (“Nutanix”) in the United States and other countries. Other brand names or marks mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s). This press release is for informational purposes only and nothing herein constitutes a warranty or other binding commitment by Nutanix. This release contains express and implied forward-looking statements, which are not historical facts and are instead based on Nutanix’s current expectations, estimates and beliefs, including statements about the benefits and capabilities of our new Nutanix Enterprise AI offering and our other products, services, and technology. The accuracy of such statements involves risks and uncertainties and depends upon future events, including those that may be beyond Nutanix’s control, and actual results may differ materially and adversely from those anticipated or implied by such statements. Any forward-looking statements included herein speak only as of the date hereof and, except as required by law, Nutanix assumes no obligation to update or otherwise revise any of such forward-looking statements to reflect subsequent events or circumstances.
FAQ
What is Nutanix Enterprise AI's new cloud deployment capability for NTNX?
Nutanix Enterprise AI can now be deployed on any Kubernetes platform in the public cloud, including AWS EKS, Azure AKS, and Google GKE, in addition to edge and on-premises data center deployments.
How does Nutanix (NTNX) price its Enterprise AI platform?
Pricing is transparent and based on infrastructure resources, rather than hard-to-predict usage or token-based models.
What security features does Nutanix Enterprise AI (NTNX) include?
The platform provides role-based access controls (RBAC), keeps models and data on compute resources customers control, and supports air-gapped or dark-site deployments for hardened security.