Current London 2025
Session Archive
Check out our session archive to catch up on anything you missed, or rewatch your favorites to hear all of the industry-changing insights from the best minds in data streaming.


Keynote: Unifying Batch and Streaming in the Age of AI
Join us to explore how Confluent is unifying streaming and batch to power real-time, AI-ready applications. Learn about our latest product innovations, hear directly from customers, and see what’s next in the future of data.
Jay Kreps, Shaun Clowes, Ahmed Zamzam, Dora Simroth, Robin Sutara


Turning the Lights On: How We Unlocked Real-Time Customer Data
For years, PPC, Greece's leading electric utility, focused on power generation, distribution, and supply, while our digital channels lagged behind. As part of a major digital transformation, we needed to refactor the core backend engine powering these channels due to system decommissioning. This raised a critical challenge: how could we bring customer data closer to our digital channels, reliably, at scale, and without escalating operational costs? Our solution: Confluent.

We faced skepticism: tight project timelines, a steep learning curve, resistance to moving beyond Microsoft's Event Hubs, and the ever-present temptation to rely on legacy API calls. Instead of a big-bang approach, we started small, streaming CRM data via Confluent's CDC connector into PostgreSQL on Azure. This eliminated API bottlenecks, mitigated quota limitations, improved resilience, and optimized operational costs. Challenges arose, as simultaneous CRM migrations overloaded connectors and required fine-tuned data handling. But we pushed through.

Today, our digital channels operate with a real-time, unified customer view and improved response times, with Confluent serving as the foundation of our data strategy. Now that we've learned to walk, it's time to run. What's next? Real-time energy insights from PV systems, heat pumps, and smart meters, plus proactive customer operations. Join us to explore how Confluent transformed our data strategy, and what's ahead.
Aikaterini Baousi
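The pattern at the heart of this talk, replacing direct API calls with a change stream that keeps a local read store fresh, can be sketched in miniature. This is a toy illustration, not PPC's actual pipeline: the event shape, keys, and `apply_change` helper are all invented for the example.

```python
# Illustrative CDC pattern (not PPC's actual pipeline): change events
# captured from the source CRM are applied to a local read store, so
# digital channels query the store instead of exhausting the CRM's
# API quotas with direct calls.

store = {}  # local customer view, kept fresh by the change stream

def apply_change(event):
    """Apply one CDC event (insert/update/delete) to the read store."""
    op, key, row = event["op"], event["key"], event.get("row")
    if op in ("insert", "update"):
        store[key] = row            # latest change wins
    elif op == "delete":
        store.pop(key, None)

for event in [
    {"op": "insert", "key": "C-001", "row": {"name": "A. Papadopoulou"}},
    {"op": "update", "key": "C-001",
     "row": {"name": "A. Papadopoulou", "meter": "M-77"}},
]:
    apply_change(event)

print(store["C-001"]["meter"])  # → M-77
```

Because reads hit the local store, the source system only pays the cost of emitting changes once, rather than serving every downstream request.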


Streamline Access: Just-in-Time User Provisioning & Group Mappings for Confluent Cloud
Managing user access in Confluent Cloud within modern, dynamic environments can be challenging, especially as teams scale. In this talk, we'll explore how Just-in-Time (JIT) user provisioning combined with group mappings can redefine access control for your Kafka deployments. Learn how this automated approach streamlines user onboarding and ensures that access permissions align dynamically with your organization's evolving structure. I'll share practical examples and best practices for integrating these features with your identity provider, reducing administrative overhead, and tightening security without slowing down your operations.

Key Takeaways:
- Automation in Action: Understand how JIT provisioning automates user creation at the point of authentication, reducing manual overhead.
- Streamlined Group Management: Insight into how dynamic group mappings simplify permission management, aligning user roles with organizational policies.
- Security & Scalability: Learn how automated access control strengthens security, reduces manual errors, and scales with your organization's needs.
Flavius Fernandes
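The JIT flow the talk describes can be sketched as follows. This is an illustrative model only, not the Confluent Cloud API: the `GROUP_ROLE_MAP` policy, the `authenticate` function, and the role names are hypothetical stand-ins for an identity provider's SSO assertion plus Confluent Cloud group-mapping rules.

```python
# Illustrative sketch of Just-in-Time provisioning with group mappings.
# All names here are hypothetical; a real deployment maps groups
# asserted by the IdP to Confluent Cloud role bindings.

# Assumed policy: identity-provider groups -> role bindings.
GROUP_ROLE_MAP = {
    "data-platform-admins": ["ClusterAdmin"],
    "analytics-engineers": ["DeveloperRead", "DeveloperWrite"],
}

users = {}  # user store: entries created lazily, at first authentication

def authenticate(email, idp_groups):
    """JIT provisioning: create the user on first login, then derive
    permissions dynamically from the groups asserted by the IdP."""
    if email not in users:
        users[email] = {"email": email}   # no manual onboarding step
    roles = sorted({r for g in idp_groups for r in GROUP_ROLE_MAP.get(g, [])})
    users[email]["roles"] = roles         # roles track group changes
    return users[email]

user = authenticate("ada@example.com", ["analytics-engineers"])
print(user["roles"])  # → ['DeveloperRead', 'DeveloperWrite']
```

Note that permissions are recomputed at each authentication, which is why access stays aligned with the organization's current group structure instead of drifting.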


Taming the Kafka Chaos: How OpenAI Simplifies Kafka Consumption
At OpenAI, Kafka fuels real-time data streaming at massive scale, but traditional consumers struggle under the burden of partition management, offset tracking, error handling, retries, dead letter queues (DLQs), and dynamic scaling, all while racing to maintain ultra-high throughput. As deployments scale, complexity multiplies.

Enter Kafka Forwarder, a game-changing Kafka consumer proxy that flips the script on traditional Kafka consumption. By offloading client-side complexity and pushing messages to consumers, it ensures at-least-once delivery, automated retries, and seamless DLQ management via Databricks. The result? Scalable, reliable, and effortless Kafka consumption that lets teams focus on what truly matters.

Want to see how OpenAI cracked the code for frictionless, high-scale Kafka streaming? Join us as we dive into the motivation, architecture, and hidden challenges behind Kafka Forwarder, and discover how OpenAI orchestrates Kafka consumption across multiple clusters and regions with unparalleled efficiency.
Jigar Bhati
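The consumer-proxy pattern the abstract describes, pushing messages to handlers with bounded retries and a dead-letter queue, can be modeled in a few lines. This is a minimal in-memory sketch of the general pattern, not Kafka Forwarder's implementation: `forward`, `MAX_RETRIES`, and the poison-message handler are invented for illustration.

```python
# Minimal, in-memory sketch of a consumer-proxy "forwarder" pattern:
# the proxy consumes on behalf of clients, pushes each message to a
# handler, retries on failure, and routes exhausted messages to a
# dead-letter queue (DLQ). Names and limits are illustrative only.

MAX_RETRIES = 3

def forward(messages, handler):
    """At-least-once push delivery with bounded retries and a DLQ."""
    dlq = []
    for msg in messages:
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                handler(msg)         # push to the downstream consumer
                break                # delivered; offset commit goes here
            except Exception:
                if attempt == MAX_RETRIES:
                    dlq.append(msg)  # exhausted: park for inspection
    return dlq

# A handler that rejects one "poison" message.
def handler(msg):
    if msg == "poison":
        raise ValueError("cannot process")

print(forward(["a", "poison", "b"], handler))  # → ['poison']
```

The point of centralizing this loop in a proxy is that application teams implement only `handler`; partition assignment, offsets, retries, and DLQ routing live in one shared service.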


Next-Gen RAG Architectures for Streaming Vector Data
Real-time retrieval-augmented generation (RAG) is poised to revolutionize how businesses leverage streaming vector data, but many current RAG architectures fall short of meeting the demands of real-time use cases. These architectures, originally designed for batch-based workflows, struggle with latency issues that prevent applications like real-time personalization, financial analysis, and fleet optimization from achieving their full potential. In this session, we'll introduce an emerging real-time RAG reference architecture, originally developed at Uber, built specifically to handle the complexities of streaming vector data. We'll explore how this architecture overcomes the limitations of traditional RAG systems by enabling real-time analysis on freshly created vector embeddings. Attendees will leave this session with actionable insights into building and deploying real-time RAG systems, unlocking new possibilities for applications that demand both speed and accuracy in vector-driven analysis.
Chad Meley
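The core idea, retrieval over embeddings that are upserted as events arrive rather than re-indexed in batches, can be shown with a toy vector store. This sketch is not the Uber-derived reference architecture; the two-dimensional vectors, the fleet events, and the `upsert`/`retrieve` helpers are invented stand-ins for a real embedding model and vector database.

```python
import math

# Toy sketch of a streaming vector store for real-time RAG: embeddings
# are upserted per event, so retrieval always sees the freshest
# vectors instead of waiting on a batch re-index. Vectors here are
# hand-made stand-ins for real embedding-model output.

index = {}  # doc_id -> (embedding, text), updated as events arrive

def upsert(doc_id, embedding, text):
    index[doc_id] = (embedding, text)   # overwrite: latest event wins

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve(query_embedding, k=1):
    """Return the k most similar texts for augmenting the prompt."""
    ranked = sorted(index.values(),
                    key=lambda v: cosine(query_embedding, v[0]),
                    reverse=True)
    return [text for _, text in ranked[:k]]

upsert("fleet-17", [0.9, 0.1], "truck 17 on schedule")
upsert("fleet-22", [0.2, 0.8], "truck 22 loading")
upsert("fleet-22", [0.1, 0.9], "truck 22 delayed")  # fresher event wins
print(retrieve([0.0, 1.0]))  # → ['truck 22 delayed']
```

The freshness property is the whole game for use cases like fleet optimization: the second `fleet-22` event immediately replaces the stale embedding, so the next retrieval reflects it with no re-indexing step in between.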


Kafka 4.0: Preparing Platform Teams for KIP-896
While Apache Kafka has typically ensured backward and forward compatibility, Kafka 4.0 will introduce breaking changes by dropping support for some older API versions (KIP-896). This session will detail these changes, explain the reasoning behind them, and equip platform teams to adapt. We'll explore the real-world impact, provide essential warnings for app developers, review the added metrics for identifying unsupported APIs, and develop an action plan to ready your clients for a smooth upgrade to Kafka 4.0.
Rohit Shrivastava
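The audit step this session recommends, comparing the request versions your clients actually use against what a 4.0 broker will still accept, can be sketched abstractly. The version floors below are placeholders, not the official KIP-896 table, and `audit` is a hypothetical helper; in practice the observed versions would come from the broker's per-version request metrics.

```python
# Hypothetical pre-upgrade audit for KIP-896: flag (api, version)
# pairs in use that a Kafka 4.0 broker would reject. The minimum
# versions below are placeholders, NOT the official KIP-896 table.

MIN_SUPPORTED = {"Produce": 3, "Fetch": 4}   # assumed post-4.0 floors

def audit(observed):
    """Return (api, version) pairs that fall below the broker's floor.

    `observed` maps each API name to the set of request versions seen
    across the client fleet (e.g. scraped from request metrics)."""
    return [
        (api, v)
        for api, versions in observed.items()
        for v in sorted(versions)
        if v < MIN_SUPPORTED.get(api, 0)
    ]

# Versions observed per API across the fleet (illustrative numbers).
observed = {"Produce": {2, 7}, "Fetch": {11}}
print(audit(observed))  # → [('Produce', 2)]
```

Any flagged pair means a client somewhere in the fleet must be upgraded before the brokers move to 4.0, which is exactly the action plan the session builds toward.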