New Confluent Cloud for Apache Flink® Capabilities Simplify Real-Time AI Development
Flink Native Inference seamlessly runs AI models directly in Confluent Cloud for streamlined development
Flink search delivers a unified interface for querying vector databases, simplifying the data enrichment process
Built-in ML functions open the full potential of AI-driven analytics to teams without data science specialists
BENGALURU,
"Building real-time AI applications has been too complex for too long, requiring a maze of tools and deep expertise just to get started," said Shaun Clowes, Chief Product Officer at Confluent. "With the latest advancements in Confluent Cloud for Apache Flink, we’re breaking down those barriers—bringing AI-powered streaming intelligence within reach of any team. What once required a patchwork of technologies can now be done seamlessly within our platform, with enterprise-level security and cost efficiencies baked in."
The AI boom is here. According to McKinsey,
Simplify the Path to AI Success
“Confluent helps us accelerate copilot adoption for our customers, giving teams access to valuable real-time, organizational knowledge,” said Steffen Hoellinger, Co-founder and CEO at Airy. “Confluent’s data streaming platform with Flink AI Model Inference simplified our tech stack by enabling us to work directly with large language models (LLMs) and vector databases for retrieval-augmented generation (RAG) and schema intelligence, providing real-time context for smarter AI agents. As a result, our customers have achieved greater productivity and improved workflows across their enterprise operations.”
As the only serverless stream processing solution on the market that unifies real-time and batch processing, Confluent Cloud for Apache Flink empowers teams to handle both continuous streams of data and batch workloads within a single platform, eliminating the complexity and operational overhead of managing separate processing solutions. The newly released AI, ML, and analytics features let businesses streamline more workflows and unlock greater efficiency. These features are available through an early access program, which is open for signup to Confluent Cloud customers.
- Flink Native Inference: Run open source AI models in Confluent Cloud without added infrastructure management.
When working with ML models and data pipelines, developers often juggle separate tools and languages, leading to fragmented workflows and stale data. Flink Native Inference simplifies this by letting teams run open source or fine-tuned AI models directly in Confluent Cloud, offering greater flexibility and cost savings. And because data never leaves the platform for inference, security is strengthened. (An illustrative sketch of model inference follows this list.)
- Flink search: Use just one interface to access data from multiple vector databases.
Vector searches provide LLMs with the context they need to avoid hallucinations and produce trustworthy results. Flink search simplifies access to real-time data in vector databases such as MongoDB, Elasticsearch, and Pinecone, eliminating the need for complex ETL processes or manual data consolidation while keeping contextual data up to date. (A sketch of vector search from Flink SQL follows this list.)
- Built-in ML functions: Make data science skills accessible to more teams.
Many data science solutions require highly specialized expertise, creating bottlenecks in development cycles. Built-in ML functions bring tasks such as forecasting, anomaly detection, and real-time visualization directly into Flink SQL, making real-time AI accessible to more developers and enabling teams to act on insights faster and make smarter decisions with greater agility. (A sketch of these functions follows this list.)
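To make the model inference flow concrete, the following is a minimal, hypothetical sketch in PyFlink. The CREATE MODEL and ML_PREDICT statements, the table and model names, and the assumption that a PyFlink TableEnvironment can submit them are illustrative choices for this sketch, not syntax confirmed by this announcement; consult the Confluent Cloud for Apache Flink documentation for the actual interface.

```python
# Hypothetical sketch of Flink Native Inference. The SQL statements below
# (CREATE MODEL, ML_PREDICT) and every object name are assumptions for this
# example; they require an environment that actually supports such statements.
from pyflink.table import EnvironmentSettings, TableEnvironment

# A streaming Table API environment stands in for a Confluent Cloud Flink workspace.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Register an open source or fine-tuned model for inference (assumed syntax).
t_env.execute_sql("""
    CREATE MODEL review_sentiment
    INPUT (review STRING)
    OUTPUT (label STRING)
    WITH ('task' = 'text_classification')
""")

# Enrich a stream of reviews with predictions; the data stays on the platform.
t_env.execute_sql("""
    SELECT r.id, r.review, p.label
    FROM reviews AS r,
    LATERAL TABLE(ML_PREDICT('review_sentiment', r.review)) AS p(label)
""").print()
```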
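In the same spirit, a hypothetical sketch of enriching a stream with context fetched from a vector index. The VECTOR_SEARCH call, its argument shape, and all table names are assumptions made for illustration, not confirmed syntax.

```python
# Hypothetical sketch of Flink search: VECTOR_SEARCH and all names are
# assumptions, not confirmed syntax, and need an environment that supports them.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# For each incoming question, fetch the three most similar document chunks,
# e.g. to build the context window for a retrieval-augmented generation prompt.
t_env.execute_sql("""
    SELECT q.question, s.chunk_text, s.score
    FROM questions AS q,
    LATERAL TABLE(
        VECTOR_SEARCH(TABLE docs_index, 3, DESCRIPTOR(embedding), q.question_embedding)
    ) AS s
""").print()
```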
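Finally, a hypothetical sketch of the built-in ML functions. ML_FORECAST and ML_DETECT_ANOMALIES, their signatures, and the table names are assumptions for this illustration.

```python
# Hypothetical sketch of built-in ML functions: ML_FORECAST and
# ML_DETECT_ANOMALIES and their signatures are assumptions, not confirmed syntax.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Forecast per-minute order volume and flag anomalous minutes from the same
# SQL surface used for the rest of the pipeline.
t_env.execute_sql("""
    SELECT window_time,
           ML_FORECAST(order_count)         OVER (ORDER BY window_time) AS forecast,
           ML_DETECT_ANOMALIES(order_count) OVER (ORDER BY window_time) AS is_anomaly
    FROM orders_per_minute
""").print()
```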
“The ability to integrate real-time, contextualized, and trustworthy data into AI and ML models will give companies a competitive edge with AI,” said Stewart Bond, Vice President, Data Intelligence and Integration Software at IDC. “Organizations need to unify data processing and AI workflows for accurate predictions and LLM responses. Flink provides a single interface to orchestrate inference and vector search for RAG, and having it available in a cloud-native and fully managed implementation will make real-time analytics and AI more accessible and applicable to the future of generative AI and agentic AI.”
Additional Confluent Cloud Features
Confluent also announced further advancements in Confluent Cloud, making it easier for teams to connect and access their real-time data, including Tableflow, Freight Clusters, Confluent for Visual Studio (VS) Code, and the Oracle XStream CDC Source Connector. Learn more about these new features in this blog post.
About Confluent
Confluent is the data streaming platform that is pioneering a fundamentally new category of data infrastructure that sets data in motion. Confluent’s cloud-native offering is the foundational platform for data in motion—designed to be the intelligent connective tissue enabling real-time data from multiple sources to constantly stream across an organization. With Confluent, organizations can meet the new business imperative of delivering rich, digital front-end customer experiences and transitioning to sophisticated, real-time, software-driven back-end operations. To learn more, please visit www.confluent.io.
As our road map may change in the future, the features referred to here may change, may not be delivered on time, or may not be delivered at all. This information is not a commitment to deliver any functionality, and customers should make their purchasing decisions based on features that are currently available.
Confluent® and associated marks are trademarks or registered trademarks of Confluent, Inc.
Apache®, Apache Kafka®, Kafka®, Apache Flink®, and Flink® are registered trademarks of the Apache Software Foundation in the United States and/or other countries.
View source version on businesswire.com: https://www.businesswire.com/news/home/20250318242614/en/
Media Contact:
Natalie Mangan
pr@confluent.io
Source: Confluent, Inc.