Couchbase Capella to Accelerate Agentic AI Application Development with NVIDIA AI

Rhea-AI Impact: Neutral
Rhea-AI Sentiment: Neutral
Tags: AI

Couchbase (NASDAQ: BASE) announced the integration of NVIDIA NIM microservices into its Capella AI Model Services, enhancing the development of AI-powered applications. The integration, part of the NVIDIA AI Enterprise software platform, enables enterprises to privately run generative AI models with improved security and performance.

The collaboration focuses on streamlining retrieval-augmented generation (RAG) and agentic AI capabilities, allowing organizations to operate AI workloads with reduced latency by bringing AI closer to data. Capella AI Model Services provides managed endpoints for LLMs and embedding models, addressing enterprise requirements for privacy, performance, and scalability.

The solution includes features like semantic caching, guardrail creation, and agent monitoring with RAG workflows. It utilizes NVIDIA NeMo Guardrails to help prevent AI hallucinations and enforce policies, while offering pre-tested LLMs optimized for reliability and business-specific needs.

Positive
  • Integration with NVIDIA NIM microservices enhances AI application development capabilities
  • Solution offers improved security and performance for enterprise AI workloads
  • Features include semantic caching and guardrail creation for better AI response accuracy
Negative
  • None.

Insights

This strategic integration with NVIDIA marks a pivotal moment for Couchbase in the enterprise AI infrastructure market. The partnership addresses three critical challenges that have been holding back enterprise AI adoption: deployment complexity, data privacy concerns, and performance optimization.

The integration's significance lies in its potential to accelerate Couchbase's market penetration in the rapidly growing enterprise AI sector. By offering pre-tested LLMs and NVIDIA NeMo Guardrails, Couchbase is positioning itself as a one-stop solution for enterprises looking to deploy AI applications without the traditional overhead of managing multiple specialized databases.

From a competitive standpoint, this integration creates several key advantages:

  • Reduced time-to-market for AI applications through streamlined deployment processes
  • Enhanced data security and governance through colocated models and data
  • Improved performance optimization through NVIDIA's GPU acceleration

The partnership's timing is particularly strategic as enterprises are increasingly seeking solutions that can handle both traditional database workloads and AI operations within a unified platform. This positions Couchbase to capture a larger share of enterprise IT budgets as organizations consolidate their data infrastructure.

While the immediate revenue impact may be modest as the service is in private preview, the long-term potential is substantial as enterprises increasingly adopt AI technologies. The integration with NVIDIA's enterprise-grade AI solutions significantly enhances Couchbase's value proposition in the enterprise market.

Capella AI Services Help Organizations Build, Deploy and Evolve AI-powered Applications with NVIDIA NIM

SANTA CLARA, Calif., Feb. 24, 2025 /PRNewswire/ -- Couchbase, Inc. (NASDAQ: BASE), the developer data platform for critical applications in our AI world, today announced that its Capella AI Model Services have integrated NVIDIA NIM microservices, part of the NVIDIA AI Enterprise software platform, to streamline deployment of AI-powered applications, providing enterprises a powerful solution for privately running generative AI (GenAI) models.

Capella AI Model Services, which were recently introduced as part of a comprehensive Capella AI Services offering for streamlining the development of agentic applications, provide managed endpoints for LLMs and embedding models so enterprises can meet privacy, performance, scalability and latency requirements within their organizational boundary. Capella AI Model Services, powered by NVIDIA AI Enterprise, minimize latency by bringing AI closer to the data, combining GPU-accelerated performance and enterprise-grade security to empower organizations to seamlessly operate their AI workloads. The collaboration enhances Capella's agentic AI and retrieval-augmented generation (RAG) capabilities, allowing customers to efficiently power high-throughput AI-powered applications while maintaining model flexibility.
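
The announcement does not include developer-facing code, but a typical retrieval-augmented generation round trip against managed LLM and embedding endpoints looks roughly like the sketch below. The base URL, API key and model names are placeholders, and the assumption of an OpenAI-compatible interface (which NVIDIA NIM microservices generally expose) is illustrative rather than drawn from the release; in production the vector lookup would hit the database's index rather than an in-memory list.

```python
# Illustrative RAG round trip against OpenAI-compatible model endpoints.
# The base URL, API key and model names below are placeholders, not
# values published by Couchbase or NVIDIA.
import numpy as np
from openai import OpenAI

client = OpenAI(base_url="https://example-model-endpoint/v1", api_key="YOUR_KEY")

documents = [
    "Capella AI Model Services expose managed endpoints for LLMs and embedding models.",
    "Semantic caching reuses a stored answer when a new question resembles a prior one.",
]

def embed(texts):
    # One embedding vector per input string.
    resp = client.embeddings.create(model="example-embedding-model", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def answer(question: str) -> str:
    # Retrieve: rank the documents by cosine similarity to the question.
    q = embed([question])[0]
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = documents[int(scores.argmax())]

    # Augment and generate: pass the retrieved context to the LLM endpoint.
    resp = client.chat.completions.create(
        model="example-llm",
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("What do Capella AI Model Services provide?"))
```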

"Enterprises require a unified and highly performant data platform to underpin their AI efforts and support the full application lifecycle – from development through deployment and optimization," said Matt McDonough, SVP of product and partners at Couchbase. "By integrating NVIDIA NIM microservices into Capella AI Model Services, we're giving customers the flexibility to run their preferred AI models in a secure and governed way, while providing better performance for AI workloads and seamless integration of AI with transactional and analytical data. Capella AI Services allow customers to accelerate their RAG and agentic applications with confidence, knowing they can scale and optimize their applications as business needs evolve."

Capella Delivers Fully Integrated User Experience with NVIDIA AI Enterprise, Enabling Flexible, Scalable AI Model Deployment

Enterprises building and deploying high-throughput AI applications can face challenges in ensuring agent reliability and compliance, as unreliable AI responses can damage brand reputation. PII data leaks can violate privacy regulations, and managing multiple specialized databases can create unsustainable operational overhead. Couchbase is helping address these challenges with Capella AI Model Services, which streamline agent application development and operations by keeping models and data colocated in a unified platform, facilitating agentic operations as they happen. For example, agent conversation transcripts must be captured and compared in real time to elevate model response accuracy. Capella also delivers built-in capabilities like semantic caching, guardrail creation and agent monitoring with RAG workflows.
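
The release names semantic caching as a built-in capability without describing its mechanics. The general idea, sketched below under assumed details (an injected embedding function, an in-memory store and an arbitrary similarity threshold), is to reuse a stored response when a new question is semantically close to one already answered, so repeated questions skip the model call; this illustrates the concept and is not Couchbase's implementation.

```python
# Illustrative semantic cache: reuse a prior answer when a new question's
# embedding is close enough to one already answered.
import numpy as np

class SemanticCache:
    def __init__(self, embed_fn, generate_fn, threshold=0.92):
        self.embed_fn = embed_fn        # callable: text -> 1-D numpy vector
        self.generate_fn = generate_fn  # callable: text -> model response
        self.threshold = threshold      # assumed cosine-similarity cutoff
        self.entries = []               # list of (embedding, response) pairs

    def ask(self, question: str) -> str:
        q = self.embed_fn(question)
        for vec, cached in self.entries:
            sim = float(q @ vec / (np.linalg.norm(q) * np.linalg.norm(vec)))
            if sim >= self.threshold:
                return cached           # cache hit: skip the model call
        response = self.generate_fn(question)
        self.entries.append((q, response))
        return response
```

In a deployment like the one described, the cached questions, embeddings and responses would live alongside the operational data rather than in process memory, which is the kind of colocation the unified platform is meant to make straightforward.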

Capella AI Model Services with NVIDIA NIM provide Couchbase customers with a cost-effective solution that accelerates agent delivery by simplifying model deployment while maximizing resource utilization and performance. The solution leverages pre-tested LLMs and tools including NVIDIA NeMo Guardrails to help organizations accelerate AI development while enforcing policies and safeguards against AI hallucinations. NVIDIA's rigorously tested, production-ready NIM microservices are optimized for reliability and fine-tuned for specific business needs.
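
NeMo Guardrails is NVIDIA's open-source toolkit for constraining model behavior. The release does not show how Capella wires it in, but standalone use of the library typically looks like the sketch below, where the ./guardrails_config directory is a placeholder for the YAML/Colang files defining which topics, tools and model settings are allowed.

```python
# Minimal standalone use of NVIDIA NeMo Guardrails (pip install nemoguardrails).
# The ./guardrails_config directory is a placeholder holding the rails
# definition (config.yml plus optional Colang flows) for the application.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./guardrails_config")
rails = LLMRails(config)

# The rails intercept both the request and the model's reply, blocking or
# rewriting outputs that violate the configured policies.
reply = rails.generate(messages=[
    {"role": "user", "content": "Summarize our refund policy."}
])
print(reply["content"])
```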

"Integrating NVIDIA AI software into Couchbase's Capella AI Model Services enables developers to quickly deploy, scale and optimize applications," said Anne Hecht, senior director of enterprise software at NVIDIA. "Access to NVIDIA NIM microservices further accelerates AI deployment with optimized models, delivering low-latency performance and security for real-time intelligent applications."

Couchbase is a silver sponsor at NVIDIA GTC, taking place in San Jose, CA. To learn more about how Couchbase's work with NVIDIA accelerates agentic AI application development, stop by booth 2004.

Learn more about Capella AI Services and sign up for the private preview.

About Couchbase
As industries race to embrace AI, traditional database solutions fall short of rising demands for versatility, performance and affordability. Couchbase is seizing the opportunity to lead with Capella, the developer data platform architected for critical applications in our AI world. By uniting transactional, analytical, mobile and AI workloads into a seamless, fully managed solution, Couchbase empowers developers and enterprises to build and scale applications and AI agents with complete flexibility – delivering exceptional performance, scalability and cost-efficiency from cloud to edge and everything in between. Couchbase enables organizations to unlock innovation, accelerate AI transformation and redefine customer experiences wherever they happen. Discover why Couchbase is the foundation of critical everyday applications by visiting www.couchbase.com and following us on LinkedIn and X.

Couchbase®, the Couchbase logo and the names and marks associated with Couchbase's products are trademarks of Couchbase, Inc. All other trademarks are the property of their respective owners.

View original content to download multimedia: https://www.prnewswire.com/news-releases/couchbase-capella-to-accelerate-agentic-ai-application-development-with-nvidia-ai-302382749.html

SOURCE Couchbase, Inc.

FAQ

What benefits does the NVIDIA NIM integration bring to Couchbase (BASE) shareholders?

The integration enhances Couchbase's AI capabilities, potentially strengthening its market position in AI-powered applications and providing new revenue opportunities through improved enterprise solutions.

How do Couchbase's (BASE) Capella AI Model Services improve AI application security?

It provides enterprise-grade security features and allows organizations to run AI models within their organizational boundary, ensuring data privacy and compliance requirements are met.

What are the key features of Couchbase's (BASE) Capella AI Model Services?

Key features include managed endpoints for LLMs, embedding models, semantic caching, guardrail creation, agent monitoring, and RAG workflows, all designed to improve AI application development and performance.

How does Couchbase (BASE) address AI reliability challenges with Capella AI Services?

It uses NVIDIA NeMo Guardrails to enforce policies and prevent AI hallucinations, while providing real-time monitoring and comparison of agent conversations to improve response accuracy.
