Current London 2025

Session Archive

Check out our session archive to catch up on anything you missed or rewatch your favorites to make sure you hear all of the industry-changing insights from the best minds in data streaming.


The art of structuring real-time data streams into actionable insights

Detecting problems as they happen is essential in today’s fast-moving world. This talk shows how to build a simple, powerful system for real-time anomaly detection. We’ll use Apache Kafka for streaming data, Apache Flink for processing it, and AI to find unusual patterns. Whether it’s spotting fraud, monitoring systems, or tracking IoT devices, this solution is flexible and reliable. First, we’ll explain how Kafka helps collect and manage fast-moving data. Then, we’ll show how Flink processes this data in real time to detect events as they happen. We’ll also explore how to add AI to the pipeline, using pre-trained models to find anomalies with high accuracy. Finally, we’ll look at how Apache Iceberg can store past data for analysis and model improvements. Combining real-time detection with historical data makes the system smarter and more effective over time. This talk includes clear examples and practical steps to help you build your own pipeline. It’s perfect for anyone who wants to learn how to use open-source tools to spot problems in real-time data streams.
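
To give a feel for the skeleton of such a pipeline, here is a minimal Flink DataStream sketch in Java. The broker address, topic name, and the threshold-based `score` function standing in for a real pre-trained model are all placeholders rather than anything prescribed by the talk:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AnomalyDetectionJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Consume fast-moving events from a Kafka topic.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")   // placeholder address
                .setTopics("events")                     // placeholder topic
                .setGroupId("anomaly-detector")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "events")
                // Keep only the events the model considers anomalous.
                .filter(event -> score(event) > 0.9)
                // A real pipeline would sink anomalies to Kafka or Iceberg instead.
                .print();

        env.execute("streaming-anomaly-detection");
    }

    // Stand-in for real model inference; returns an anomaly score in [0, 1].
    private static double score(String event) {
        return event.length() > 1_000 ? 1.0 : 0.0;
    }
}
```

A production job would swap `score` for actual model inference and write the flagged events to another Kafka topic or an Iceberg table instead of printing them.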

Presenters

Olena Kutsenko

Breakout Session
May 20

Changing engines mid-flight: Kafka migrations at OpenAI

Ever wondered how OpenAI keeps Kafka running smoothly while scaling, upgrading, or replacing clusters? Join us for an inside look at the strategies and tools we use for seamless Kafka migrations at massive scale — without ever missing a message. We'll also explore best practices for Kafka consumers, patterns for high availability and disaster recovery, and lessons learned from real-world incidents and edge cases. Attendees will learn a new set of tools and tactics for making infrastructure changes safely and transparently. We'll cover applications to specific technologies including Apache Kafka, Apache Flink for stateful stream processing, Apache Spark (Structured Streaming) for streaming ELT, and Uber uForwarder as a platform for managed Kafka consumers.
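
The abstract doesn't reveal OpenAI's internal tooling, but one widely used consumer-side building block for safe hand-offs during migrations and rebalances is to disable auto-commit and flush offsets for completed work when partitions are revoked. A generic sketch, with all broker, group, and topic names as placeholders:

```java
import java.time.Duration;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class SafeConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "orders-processor");        // placeholder
        props.put("enable.auto.commit", "false");         // commit only completed work
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Map<TopicPartition, OffsetAndMetadata> pending = new HashMap<>();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    // Flush offsets for finished work before losing ownership, so
                    // another consumer (or cluster) can resume without duplicates.
                    consumer.commitSync(pending);
                    pending.clear();
                }

                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {}
            });

            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    process(record); // application logic
                    pending.put(new TopicPartition(record.topic(), record.partition()),
                            new OffsetAndMetadata(record.offset() + 1));
                }
                consumer.commitSync(pending);
                pending.clear();
            }
        }
    }

    static void process(ConsumerRecord<String, String> record) { /* ... */ }
}
```

Because offsets are committed only after records are actually processed, a migration or rebalance never acknowledges work that hasn't been done.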

Presenters

Adam Richardson, Nat Wilson

Breakout Session
May 20

Autoscaling Apache Kafka brokers on Kubernetes with Strimzi

Autoscaling is an important part of modern cloud-native architecture. It allows applications to handle heavy load at peak times while optimizing costs and making deployments greener and more sustainable. Apache Kafka is well known for its scalability: it can grow with your project from a small cluster up to hundreds of brokers. But for a long time it was not very elastic, and dynamic autoscaling was very hard to achieve. This talk will guide attendees through the main challenges of autoscaling Apache Kafka on Kubernetes. It will show how these challenges can be solved with features added recently to the Strimzi and Apache Kafka projects, such as auto-rebalancing, node pools, and tiered storage. And it will help users get started with autoscaling Apache Kafka.
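
For a flavor of the node-pool building block mentioned above, here is a minimal Strimzi `KafkaNodePool` resource; the cluster name, pool name, and sizes are placeholders, and the exact fields depend on your Strimzi version:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: brokers
  labels:
    strimzi.io/cluster: my-cluster   # ties the pool to a Kafka custom resource
spec:
  replicas: 3          # an autoscaler can raise or lower this count
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
```

Scaling up or down then amounts to adjusting `spec.replicas`, with Strimzi's auto-rebalancing support (backed by Cruise Control) moving partitions onto or off the affected brokers.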

Presenters

Jakub Scholz

Breakout Session
May 20

Secure Streaming RAG: AI in Regulated FinTech

How can you leverage AI and LLMs in a regulated environment without overwhelming development teams with security overhead? At Alpian, a fast-moving Swiss digital bank, Kafka and event-driven architecture form the backbone of our cloud-native platform. This event-first approach has enabled us to scale tenfold with a lean, expert team, paving the way for a new generation of internal and client-facing LLM applications.

We’ve found that RAG is essential for enhancing accuracy and extending prompt context in generative AI. Continuous integration of real-time data is key to delivering the most recent and relevant information, as demonstrated by our budget assistant, a conversational tool that advises clients on financial transactions. However, as a bank we must adhere to strict regulations on data management, encryption, locality, and sensitive data access. Robust guarantees on what data is shared, where it is stored, and how it’s managed are critical, even if these requirements seem at odds with using foundation models. How do we push innovation while remaining compliant?

In this talk, you’ll learn:

- System Design & Architecture: How the Alpian platform leverages Kafka events for service communication and as the foundation for AI and machine learning models with built-in security and privacy.
- Data Regulation Compliance: How Alpian meets data regulations using Schema Registry and field-level encryption via Confluent CSFLE, and how we integrated schema management and tagging rules directly into our CI/CD pipeline.
- Streaming RAG: How streaming is used to generate embeddings for the budget assistant, demonstrating that a central, secure event model can support LLM-based analytics and real-time AI without compromising data privacy or developer productivity.

This “secure by design” approach shows how addressing data sensitivity at the event level protects your entire architecture, from analytics to microservices and AI-driven platforms, while maintaining innovation and compliance.
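
As a hedged sketch of the streaming-RAG step only, the service below consumes events and turns them into embeddings for retrieval. It assumes decryption and schema validation happen upstream, and `embed` and `indexForRetrieval` are hypothetical stand-ins for an embedding model and a vector store client:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class EmbeddingIngestor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "embedding-ingestor");      // placeholder
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Events are assumed to be decrypted and schema-validated upstream.
            consumer.subscribe(List.of("transactions"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // Turn each event into an embedding and index it so the
                    // assistant can retrieve fresh context at query time.
                    float[] vector = embed(record.value());
                    indexForRetrieval(record.key(), vector, record.value());
                }
            }
        }
    }

    // Hypothetical helper: calls an embedding model.
    static float[] embed(String text) { return new float[0]; }

    // Hypothetical helper: upserts into a vector store.
    static void indexForRetrieval(String id, float[] vector, String payload) {}
}
```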

Presenters

Luca Magnoni

Breakout Session
May 20

The Latency-Cost Equation for Disaggregated Architectures

There’s a shift towards disaggregated architectures using object storage and open table formats. Cost efficiency, avoidance of vendor lock-in, standardization, and proper governance with a single source of truth are benefits of this new paradigm. However, there are also challenges. Most of our systems were designed to work with physical disks, along with the optimization and debugging methods that come with them. Object storage behaves very differently from physical disks and requires a new set of techniques to minimize latency and keep cloud costs down. In this talk, Anton will share the lessons learned from moving data and systems from block storage to object storage. Using Apache Flink, a popular stream processing engine often used for data lake ingestion, as a case study, we’ll start with an overview of Iceberg and the FileIO pluggable module for reading, writing, and deleting files. We’ll continue with the journey of cost optimization with the Flink File Connector. Then, we'll delve into the creation of a custom Flink connector for object storage, addressing the limitations of the built-in File Connector. This custom connector uses techniques like metadata synchronization and optimized partitioning to reduce the number of requests without introducing additional latency. This talk is ideal for data engineers and architects who are building data lakes on object storage and using Apache Flink for data processing. You'll learn practical strategies and best practices for optimizing performance and cost in disaggregated architectures, including how to build custom Flink connectors tailored to object storage.
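
To make the FileIO pluggability concrete, here is an illustrative delegating `FileIO` that counts calls; measuring request volume is a natural first step before reducing it. `CountingFileIO` is a sketch for this page, not part of Iceberg or the connector discussed in the talk:

```java
import java.util.concurrent.atomic.AtomicLong;

import org.apache.iceberg.io.FileIO;
import org.apache.iceberg.io.InputFile;
import org.apache.iceberg.io.OutputFile;

// Wraps whatever FileIO the catalog would normally use (e.g. S3FileIO)
// and counts the calls made through it.
public class CountingFileIO implements FileIO {
    private final FileIO delegate;
    private final AtomicLong reads = new AtomicLong();
    private final AtomicLong writes = new AtomicLong();
    private final AtomicLong deletes = new AtomicLong();

    public CountingFileIO(FileIO delegate) {
        this.delegate = delegate;
    }

    @Override
    public InputFile newInputFile(String path) {
        reads.incrementAndGet();
        return delegate.newInputFile(path);
    }

    @Override
    public OutputFile newOutputFile(String path) {
        writes.incrementAndGet();
        return delegate.newOutputFile(path);
    }

    @Override
    public void deleteFile(String path) {
        deletes.incrementAndGet();
        delegate.deleteFile(path);
    }

    @Override
    public void close() {
        delegate.close();
    }

    public String stats() {
        return "reads=" + reads + " writes=" + writes + " deletes=" + deletes;
    }
}
```

Note that this counts `FileIO` invocations, which is only a proxy for actual object-store requests; a single input file may translate into several GETs depending on read patterns.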

Presenters

Antón Rodríguez

Breakout Session
May 20

Queues for Kafka

Event streaming is great, but sometimes it’s easier to use a queue, especially when parallel consumption is more important than ordering. Wouldn't it be great if you had the option of consuming your data in Apache Kafka just like a message queue? For workloads where each message is an independent work item, you’d really like to be able to run as many consumers as you need, cooperating to handle the load, and to acknowledge messages one at a time as the work is completed. You might even want to be able to retry specific messages. This is much easier to achieve using a queue rather than a topic with a consumer group. KIP-932 brings queuing semantics to Apache Kafka. It introduces the concept of share groups. Share groups let your applications consume data off regular Kafka topics with per-message acknowledgement and without worrying about balancing the number of partitions and consumers. With this KIP, you can bring your queuing workloads to Apache Kafka. Come and hear about this innovative new feature, starting with Early Access in Apache Kafka 4.0 and then Preview in Apache Kafka 4.1.
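
As a taste of the consumer-side API proposed by KIP-932 (Early Access in Apache Kafka 4.0, so names and details may still shift), a share-group worker might look roughly like this; the broker address and topic name are placeholders:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.AcknowledgeType;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaShareConsumer;

public class ShareGroupWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "work-items");              // a share group, not a consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaShareConsumer<String, String> consumer = new KafkaShareConsumer<>(props)) {
            consumer.subscribe(List.of("work-items"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    try {
                        process(record); // each record is an independent work item
                        consumer.acknowledge(record, AcknowledgeType.ACCEPT);  // done
                    } catch (Exception e) {
                        consumer.acknowledge(record, AcknowledgeType.RELEASE); // redeliver
                    }
                }
                consumer.commitSync(); // send the accumulated acknowledgements
            }
        }
    }

    static void process(ConsumerRecord<String, String> record) { /* application logic */ }
}
```

`ACCEPT` marks a record as done, while `RELEASE` makes it available for redelivery to another consumer, which is how per-message retry works without disturbing the rest of the batch.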

Presenters

Andrew Schofield

Breakout Session
May 20