STOCK TITAN

JFrog Unveils Secure AI Model Delivery Accelerated by NVIDIA NIM Microservices

Rhea-AI Impact
(Neutral)
Rhea-AI Sentiment
(Neutral)
Tags
AI

JFrog (NASDAQ: FROG) announces general availability of its integration with NVIDIA NIM microservices, offering the industry's only unified DevSecOps and MLOps solution with native NVIDIA NIM integration. This collaboration enables secure deployment of GPU-optimized machine learning models and large language models (LLMs) including Meta's Llama 3 and Mistral AI.

The integration addresses key challenges in AI implementation by providing:

  • Unified ML & DevOps workflows for versioning, securing, and deploying models
  • End-to-end security scanning and threat detection for AI models
  • Optimized performance using NVIDIA accelerated computing infrastructure
  • Seamless access to NVIDIA NGC for GPU-optimized models

According to IDC, by 2028, 65% of organizations will use DevOps tools combining MLOps, LLMOps, DataOps, CloudOps, and DevOps capabilities to optimize AI value in software delivery processes.


Positive
  • First-to-market unified DevSecOps and MLOps solution with NVIDIA NIM integration
  • Expands product offering into high-growth AI/ML market
  • Strategic partnership with industry leader NVIDIA
  • Addresses growing enterprise demand for secure AI implementations
Negative
  • None.

Insights

JFrog's integration with NVIDIA NIM microservices represents a strategic positioning in the rapidly evolving AI deployment market. This partnership addresses critical enterprise pain points by streamlining the deployment of GPU-optimized ML models like Meta's Llama 3 and Mistral AI with enhanced security controls.

The timing is excellent as enterprises are increasingly concerned about AI security while simultaneously seeking to accelerate deployment. By enabling data scientists and ML engineers to use familiar DevSecOps workflows for AI deployment, JFrog is reducing adoption friction while maintaining governance—an important selling point for enterprise customers.

The IDC projection that 65% of organizations will adopt combined MLOps/DevOps tools by 2028 validates JFrog's strategic direction. By offering unified ML and DevOps workflows, comprehensive security scanning across containers and AI models, and optimized performance via NVIDIA's accelerated computing infrastructure, JFrog is building a compelling end-to-end solution.

For JFrog, this partnership enhances its value proposition in the increasingly competitive software supply chain management space, potentially expanding its addressable market beyond traditional DevOps to include ML/AI workflows—a high-growth segment. The integration leverages JFrog's existing Artifactory platform while adding AI-specific capabilities, creating a natural extension that should resonate with their current enterprise customer base.

This JFrog-NVIDIA partnership tackles the primary obstacle in enterprise AI adoption: the disconnect between experimental AI models and production-ready deployment. By enabling ML models to flow through established DevSecOps pipelines, JFrog eliminates a major organizational bottleneck that has been hampering AI implementation.

The integration addresses three critical enterprise challenges: security governance, workflow efficiency, and deployment consistency. From a commercial perspective, this positions JFrog to capture value from both traditional software development and the rapidly expanding AI application market, effectively doubling their potential touchpoints within customer organizations.

The integration's core value lies in JFrog Artifactory becoming the centralized repository for both traditional software artifacts and AI models. This creates significant lock-in potential as organizations standardize on JFrog for both conventional and AI-driven application delivery. For enterprises already using JFrog, this reduces the need for separate ML tooling purchases, while for NVIDIA, it expands the accessibility of their GPU-optimized models to JFrog's enterprise customer base.

Most significantly, this integration aligns with the enterprise trend toward unified software delivery platforms rather than specialized point solutions. As organizations seek to standardize their AI governance amid increasing regulatory scrutiny, JFrog's ability to provide continuous security scanning and audit trails for AI models addresses compliance concerns that often delay production deployments.

New integration accelerates secure GenAI and LLM model deployment including Meta’s Llama 3 and Mistral AI LLMs packaged as NVIDIA NIM with increased transparency, traceability and trust

SUNNYVALE, Calif. & NEW YORK--(BUSINESS WIRE)-- JFrog Ltd (Nasdaq: FROG), the Liquid Software company and creators of the JFrog Software Supply Chain Platform, is announcing general availability of its integration with NVIDIA NIM microservices, part of the NVIDIA AI Enterprise software platform. The JFrog Platform is the only unified, end-to-end, and secure DevSecOps and MLOps solution with native NVIDIA NIM integration. This enables rapid deployment of GPU-optimized, pre-approved machine learning (ML) models, and large language models (LLMs) to production with enterprise-grade security, increased visibility, and governance controls. This unified infrastructure enables developers to create and deliver AI-powered applications with greater efficiency and peace of mind.

"The demand for secure and efficient AI implementations continues to rise, with many businesses aiming to expand their AI strategies in 2025. However, AI deployments often struggle to reach production due to significant security challenges," said Gal Marder, Chief Strategy Officer at JFrog. "AI-powered applications are inherently complex to secure, deploy, and manage, and concerns around the security of open-source AI models and platforms continue to grow. We’re excited to collaborate with NVIDIA to deliver an easy-to-deploy, end-to-end solution that enables companies to accelerate the delivery of their AI/ML models with enterprise-grade security, compliance, and provenance."

With the rise and accelerated demand for AI in software applications, data scientists and ML engineers face significant challenges when attempting to scale their enterprise ML model deployments. The complexities of integrating AI workflows with existing software development processes—coupled with fragmented asset management, security vulnerabilities, and compliance issues—can lead to lengthy, costly deployment cycles and, often, failed AI initiatives. According to IDC, by 2028, 65% of organizations will use DevOps tools that combine MLOps, LLMOps, DataOps, CloudOps, and DevOps capabilities to optimize the route to AI value in software delivery processes.

“The rise of open source MLOps platforms has made AI more accessible to developers of all skill levels to quickly build amazing AI applications, but this process needs to be done securely and in compliance with today’s quickly evolving government regulations,” said Jim Mercer, IDC’s Program Vice President, Software Development, DevOps & DevSecOps. “As enterprises scale their generative AI deployments, having a central repository of pre-approved, fully compliant, performance-optimized models developers can choose from and quickly deploy while maintaining high levels of visibility, traceability, and control through the use of existing DevSecOps workflows is compelling.”

The JFrog integration with NVIDIA NIM enables enterprises to seamlessly deploy and manage the latest foundational LLMs – including Meta's Llama 3 and Mistral AI – while maintaining enterprise-grade security and governance controls throughout their software supply chain. JFrog Artifactory – the heart of the JFrog Platform – provides a single solution for hosting and seamlessly managing all software artifacts, binaries, packages, ML Models, LLMs, container images, and components throughout the software development lifecycle. By integrating NVIDIA NIM into the JFrog Platform developers can easily access NVIDIA NGC – a hub for GPU-optimized deep learning, ML, and HPC models. This provides customers with a single source of truth for software models and tools, while leveraging enterprise DevSecOps best practices to gain visibility, governance, and control across their software supply chain.
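As an illustration of the access pattern described above (the host and repository names below are hypothetical, not taken from this release), a team might proxy NVIDIA NGC through an Artifactory remote Docker repository and pull a NIM container image through it, so every image flows through the same governed supply chain:

```shell
# Authenticate Docker against the Artifactory instance (hypothetical host name)
docker login mycompany.jfrog.io

# Pull a NIM container image through a remote repository that proxies
# NVIDIA NGC (nvcr.io); "nim-remote" is an assumed repository name.
docker pull mycompany.jfrog.io/nim-remote/nim/meta/llama3-8b-instruct:latest
```

Routing the pull through Artifactory rather than hitting the upstream registry directly is what gives the platform a single point for scanning, curation, and audit trails.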

The JFrog Platform update provides AI developers and DevSecOps teams with multiple benefits, including:

  • Unified ML & DevOps Workflows: Data Scientists and ML Engineers can now version, secure, and deploy models using the same JFrog DevSecOps software development workflows they already know and trust. This eliminates the need for teams to use separate ML tools while ensuring automated compliance checks, audit trails, and governance of ML Models using JFrog Curation.
  • End-to-End Security & Integrity: Implement continuous security scanning across containers, AI models, and dependencies – delivering contextual insights across NIM microservices – to identify vulnerabilities, supplemented by smart threat detection that focuses on real risks and proactive protection against compromised AI models and packages.
  • Exceptional Model Performance and Scalability: Optimized AI application performance using NVIDIA accelerated computing infrastructure, offering low latency and high throughput for scalable deployment of LLMs to large-scale production environments. Easily bundle ML models with dependencies to reduce external requirements and utilize existing workflows for seamless AI deployment. Additionally, the JFrog Platform offers flexible deployment options for increased scalability, including self-hosted, multi-cloud, and air-gap deployments.
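To ground the integrity theme in the bullets above, here is a minimal, hypothetical sketch (not JFrog's actual implementation) of the kind of provenance check such a pipeline automates: verifying a downloaded model artifact against the checksum recorded when it was published.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large model weights never sit fully in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Reject the artifact if its digest differs from the recorded one."""
    return sha256_of(path) == expected_sha256

if __name__ == "__main__":
    # Write a stand-in "model" file and verify it round-trips.
    model = Path("model.bin")
    model.write_bytes(b"demo-weights")
    recorded = sha256_of(model)  # digest captured at publish time
    print(verify_artifact(model, recorded))   # True
    print(verify_artifact(model, "0" * 64))   # False: tampered or mismatched
```

In practice a repository manager stores such digests alongside each artifact and performs the comparison automatically on every download, which is what makes artifacts "traceable and tamper-proof" in the sense the release describes.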

"Performance and security are crucial for successful enterprise AI deployments,” said Pat Lee, vice president, Enterprise Strategic Partnerships, NVIDIA. “With NVIDIA NIM integrated directly into the JFrog Platform, developers can accelerate AI adoption with a unified, end-to-end solution for building, deploying, and managing production AI agents at scale."

For a deeper look at the integration of NVIDIA NIM into the JFrog Platform, read this blog or visit https://jfrog.com/nvidia-and-jfrog. You can also see the JFrog and NVIDIA NIM integration showcased at NVIDIA GTC, the premier AI conference, taking place from March 17-21 in San Jose, California. Get started, register, and book a meeting or hands-on demo here.

Like this story? Post this on X (Twitter): .@jfrog + @nvidia delivers the industry's first #softwaresupplychain solution, offering a secure, streamlined path for rapidly building world-class #GenAI solutions. Learn more: https://jfrog.co/4igNess #MLOps #DevSecOps #GPUs #MachineLearning #AI

About JFrog

JFrog Ltd. (Nasdaq: FROG) is on a mission to create a world of software delivered without friction from developer to device. Driven by a “Liquid Software” vision, the JFrog Software Supply Chain Platform is a single system of record that powers organizations to build, manage, and distribute software quickly and securely, ensuring it is available, traceable, and tamper-proof. The integrated security features also help identify, protect, and remediate against threats and vulnerabilities. JFrog’s hybrid, universal, multi-cloud platform is available as both self-hosted and SaaS services across major cloud service providers. Millions of users and 7K+ customers worldwide, including a majority of the Fortune 100, depend on JFrog solutions to securely embrace digital transformation. Once you leap forward, you won’t go back! Learn more at jfrog.com and follow us on X: @jfrog

Cautionary Note About Forward-Looking Statements

This press release contains “forward-looking” statements, as that term is defined under the U.S. federal securities laws, including, but not limited to, statements regarding our expectations related to an anticipated increase in efficiencies in the development and accelerated delivery of AI-powered applications, and anticipated benefits to AI developers and DevSecOps teams, including an expected increase in security, integrity, and scalability.

These forward-looking statements are based on our current assumptions, expectations, and beliefs and are subject to substantial risks, uncertainties, assumptions and changes in circumstances that may cause JFrog’s actual results, performance or achievements to differ materially from those expressed or implied in any forward-looking statement. There are a significant number of factors that could cause actual results, performance or achievements to differ materially from statements made in this press release, including but not limited to risks detailed in our filings with the Securities and Exchange Commission, including in our annual report on Form 10-K for the year ended December 31, 2024, our quarterly reports on Form 10-Q, and other filings and reports that we may file from time to time with the Securities and Exchange Commission. Forward-looking statements represent our beliefs and assumptions only as of the date of this press release. We disclaim any obligation to update forward-looking statements except as required by law.

Media Contact:

Siobhan Lyons, Sr. Mngr. Global Communications, siobhanL@jfrog.com

Investor Contact:

Jeff Schreiner, VP of Investor Relations, jeffS@jfrog.com

Source: JFrog Ltd.

FAQ

What is the significance of JFrog's integration with NVIDIA NIM for AI development?

The integration provides secure deployment of GPU-optimized ML models and LLMs with enterprise-grade security, enabling unified DevSecOps and MLOps workflows for faster AI implementation.

How does the JFrog Platform (FROG) handle AI model security and governance?

It implements continuous security scanning across containers and AI models, provides threat detection, and ensures compliance checks and audit trails through JFrog Curation.

What LLM models are supported in JFrog's NVIDIA NIM integration?

The integration supports deployment of latest foundational LLMs including Meta's Llama 3 and Mistral AI models.

What are the key benefits of JFrog's (FROG) new AI platform integration?

Benefits include unified ML & DevOps workflows, end-to-end security, exceptional model performance, and flexible deployment options including self-hosted, multi-cloud, and air-gap deployments.

JFrog Ltd

NASDAQ:FROG
