Confluent Launches Data Access Controls and Enterprise Insights for More Secure, Reliable Data Streaming in the Cloud
Confluent, Inc. (NASDAQ: CFLT) has launched new features enhancing security, observability, and reliability for its data streaming platform. Key updates include role-based access controls (RBAC) for granular permissions, an expanded Metrics API for enterprise-wide visibility, and a new 99.99% uptime SLA for Apache Kafka. These advancements aim to optimize cloud data streaming performance and ensure data compliance. The enhancements address critical industry demands for security and operational efficiency as businesses increasingly migrate to the cloud.
- Introduction of role-based access controls (RBAC) for enhanced security at the data plane level.
- Expanded Metrics API offers better visibility and optimization for data streaming performance.
- 99.99% uptime SLA for Apache Kafka boosts reliability for sensitive cloud workloads.
New role-based access controls enable granular permissions on the data plane level to ensure data compliance and privacy at scale
Expanded Confluent Cloud Metrics API delivers enterprise-wide observability to optimize data streaming performance across the entire business
New 99.99% uptime SLA for Apache Kafka boosts reliability for sensitive workloads in the cloud
“Every company is in a race to transform their business and take advantage of the simplicity of cloud computing,” said
New role-based access controls (RBAC) enable granular permissions on the data plane level to ensure data compliance and privacy at scale
Data security is paramount in any organization, especially when migrating to public clouds. To operate efficiently and securely, organizations need to ensure the right people have access to only the right data. However, controlling access to sensitive data all the way down to individual Apache Kafka topics takes significant time and resources because of the complex scripts needed to manually set permissions.
Last year, Confluent introduced RBAC for Confluent Cloud, enabling customers to streamline this process for critical resources like production environments, sensitive clusters, and billing details, and making role-based permissions as simple as clicking a button. With today’s launch, RBAC now covers access to individual Kafka resources, including topics, consumer groups, and transactional IDs. Organizations can set clear roles and responsibilities for administrators, operators, and developers, giving each access only to the data required for their jobs, on both the data and control planes.
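As a rough illustration of what a topic-scoped role binding can look like programmatically, the sketch below assumes Confluent Cloud’s IAM role-bindings REST endpoint; the identifiers and the exact CRN pattern format are placeholders, not a verbatim reproduction of the API.

```python
# Minimal sketch: granting a developer read-only access to a single Kafka topic
# via a role binding. Endpoint path, payload fields, and the CRN pattern are
# assumptions based on Confluent Cloud's public IAM API; IDs are placeholders.
import requests

API = "https://api.confluent.cloud/iam/v2/role-bindings"
AUTH = ("<cloud-api-key>", "<cloud-api-secret>")  # Cloud API key, not a cluster key

role_binding = {
    # The principal (user or service account) receiving the role.
    "principal": "User:u-abc123",
    # A data-plane role scoped to Kafka resources.
    "role_name": "DeveloperRead",
    # CRN pattern narrowing the grant to one topic on one cluster (illustrative).
    "crn_pattern": (
        "crn://confluent.cloud/organization=11111111-2222-3333-4444-555555555555"
        "/environment=env-abc123/cloud-cluster=lkc-xyz789"
        "/kafka=lkc-xyz789/topic=orders"
    ),
}

resp = requests.post(API, json=role_binding, auth=AUTH, timeout=30)
resp.raise_for_status()
print("Created role binding:", resp.json().get("id"))
```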
“Here at Neon we have many teams with different roles and business contexts that use Confluent Cloud,” said
Expanded Confluent Cloud Metrics API delivers enterprise-wide observability to optimize data streaming performance across the entire business
Businesses need a strong understanding of their IT stack to effectively deliver high-quality services their customers demand while efficiently managing operating costs. The Confluent Cloud Metrics API already provides the easiest and fastest way for customers to understand their usage and performance across the platform. Today, Confluent is introducing two new insights for even greater visibility into data streaming deployments, alongside an expansion to our third-party monitoring integrations to ensure these critical metrics are available wherever they are needed:
- Customers can now easily understand organizational usage of data streams across their business and sub-divisions to see where and how resources are used. This capability is particularly important to enterprises that are expanding their use of data streaming and need to manage internal chargebacks by business unit. Additionally, it helps teams identify where resources are over- or underutilized, down to the level of an individual user, to optimize resource allocation and improve cost savings.
- New capabilities for consumer lag monitoring help organizations ensure their mission-critical services always meet customer expectations. With real-time insights, customers can pinpoint hotspots in their data pipelines and identify where resources need to be scaled before an incident occurs (see the query sketch following this list). Additionally, with records exposed as a time series, teams can draw on deep historical context when setting or adjusting SLOs.
- A new, first-class integration with Grafana Cloud gives customers deep visibility into Confluent Cloud from within the monitoring tool they already use. Along with recently announced integrations, this update allows businesses to monitor their data streams directly alongside the rest of their technology stack through their service of choice.
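To make the consumer lag item above concrete, here is a minimal sketch of a Metrics API query. The request shape follows Confluent Cloud’s Metrics API v2, but the metric name, filter fields, and IDs shown should be treated as assumptions and placeholders rather than exact values.

```python
# Minimal sketch: querying consumer lag from the Confluent Cloud Metrics API.
# The endpoint, metric name, and filter fields are assumptions based on the
# public Metrics API v2; cluster and consumer-group IDs are placeholders.
import requests

QUERY_URL = "https://api.telemetry.confluent.cloud/v2/metrics/cloud/query"
AUTH = ("<cloud-api-key>", "<cloud-api-secret>")

query = {
    "aggregations": [{"metric": "io.confluent.kafka.server/consumer_lag_offsets"}],
    "filter": {
        "op": "AND",
        "filters": [
            {"field": "resource.kafka.id", "op": "EQ", "value": "lkc-xyz789"},
            {"field": "metric.consumer_group_id", "op": "EQ", "value": "orders-service"},
        ],
    },
    "granularity": "PT1M",                       # one-minute buckets
    "intervals": ["2022-04-19T00:00:00Z/PT1H"],  # one hour of history
    "group_by": ["metric.topic"],
}

resp = requests.post(QUERY_URL, json=query, auth=AUTH, timeout=30)
resp.raise_for_status()
for point in resp.json().get("data", []):
    print(point)  # lag per topic, per minute
```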
To enable easy and cost-effective integration of more data from high-value systems, Confluent’s Premium Connector for Oracle® Change Data Capture (CDC) Source is now available for Confluent Cloud. The fully managed connector enables users to capture valuable change events from an Oracle database and see them in real time within Confluent’s leading cloud-native Kafka service without any operational overhead.
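For a sense of what provisioning the fully managed connector might look like, the snippet below assumes Confluent Cloud’s managed-connector REST endpoint and uses illustrative configuration keys; the actual property names and values are documented with the connector and may differ from this sketch.

```python
# Minimal sketch: creating a fully managed Oracle CDC Source connector.
# Endpoint path and configuration keys are assumptions/placeholders, not a
# verbatim reproduction of the managed connector's property names.
import requests

ENV, CLUSTER = "env-abc123", "lkc-xyz789"
URL = (
    "https://api.confluent.cloud/connect/v1/"
    f"environments/{ENV}/clusters/{CLUSTER}/connectors"
)
AUTH = ("<cloud-api-key>", "<cloud-api-secret>")

connector = {
    "name": "oracle-cdc-orders",
    "config": {
        "connector.class": "OracleCdcSource",       # managed connector class (assumed)
        "oracle.server": "db.example.internal",     # placeholder connection details
        "oracle.port": "1521",
        "oracle.sid": "ORCL",
        "oracle.username": "<db-user>",
        "oracle.password": "<db-password>",
        "table.inclusion.regex": "SALES\\.ORDERS",  # capture changes from one table
        "kafka.api.key": "<cluster-api-key>",
        "kafka.api.secret": "<cluster-api-secret>",
        "output.data.format": "AVRO",
        "tasks.max": "1",
    },
}

resp = requests.post(URL, json=connector, auth=AUTH, timeout=30)
resp.raise_for_status()
print("Connector created:", resp.json().get("name"))
```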
New 99.99% uptime SLA for Apache Kafka boosts reliability for sensitive workloads in the cloud
One of the biggest concerns with relying on open source systems for business-critical workloads is reliability. Downtime is unacceptable for businesses operating in a digital-first world: it not only causes negative financial and business impact, but often leads to long-term damage to a brand’s reputation. Confluent now offers a 99.99% uptime SLA for Apache Kafka, boosting reliability for even the most sensitive workloads in the cloud.
Beyond reliability, engineering teams and developers face the challenge of understanding the new programming paradigm of stream processing and the different use cases it enables. To help jumpstart stream processing use cases, Confluent is introducing Stream Processing Use Case Recipes. Sourced from customers and validated by experts, this set of over 25 of the most popular real-world use cases can be launched in Confluent Cloud with the click of a button, enabling developers to quickly start unlocking the value of stream processing.
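To give a flavor of the kind of stream processing a recipe encapsulates, here is an illustrative sketch that submits a ksqlDB statement over its REST interface. The stream and column names are hypothetical, and recipes in Confluent Cloud are launched from the UI rather than written by hand; this is only a sketch of the underlying idea.

```python
# Minimal sketch: submitting a ksqlDB statement of the kind a recipe might
# generate. Stream/topic names and columns are hypothetical; /ksql is ksqlDB's
# standard statement endpoint.
import requests

KSQLDB_ENDPOINT = "https://<ksqldb-endpoint>/ksql"
AUTH = ("<ksqldb-api-key>", "<ksqldb-api-secret>")

statement = """
CREATE STREAM large_orders AS
  SELECT order_id, customer_id, amount
  FROM orders
  WHERE amount > 1000
  EMIT CHANGES;
"""

resp = requests.post(
    KSQLDB_ENDPOINT,
    json={"ksql": statement, "streamsProperties": {}},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # status of the CREATE STREAM statement
```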
“ksqlDB made it super easy to get started with stream processing thanks to its simple, intuitive SQL syntax,” said
Learn more about the Q2 ‘22 Launch
Join our upcoming webinar or check out the blog to dive deeper into the Confluent Q2 ‘22 Launch.
Additional Resources
- For a deep dive into the Confluent Q2 ‘22 Launch, check out this blog post: https://cnfl.io/announcing-the-confluent-Q2-22-launch
- To get started fast with these new features, register for the upcoming Confluent Q2 ‘22 Launch webinar: https://cnfl.io/confluent-Q2-22-launch-webinar
- See how Confluent is helping its customers transform their businesses: https://www.confluent.io/customers/
- Join Confluent and apply for one of its open positions: https://www.confluent.io/careers/
About Confluent
Confluent is the data streaming platform that is pioneering a fundamentally new category of data infrastructure that sets data in motion. Confluent’s cloud-native offering is the foundational platform for data in motion—designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organization. With Confluent, organizations can meet the new business imperative of delivering rich, digital front-end customer experiences and transitioning to sophisticated, real-time, software-driven backend operations. To learn more, please visit www.confluent.io.
The preceding outlines our general product direction and is not a commitment to deliver any material, code, or functionality. The development, release, timing, and pricing of any features or functionality described may change. Customers should make their purchase decisions based upon services, features, and functions that are currently available.
Confluent and associated marks are trademarks or registered trademarks of Confluent, Inc.
Apache® and Apache Kafka® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
View source version on businesswire.com: https://www.businesswire.com/news/home/20220419005289/en/
pr@confluent.io
Source: Confluent, Inc.