Current London 2025

Session Archive

Check out our session archive to catch up on anything you missed or rewatch your favorites to make sure you hear all of the industry-changing insights from the best minds in data streaming.


The race against time: Real-time Content Insights

The day starts with one problem: how do we get content from a CMS to propagate into multiple systems, especially our global search (knauf.com), "right away"? That's easy if you can wait (welcome back to the 1920s). Today, milliseconds can mean the difference between a happy customer (in our case, an editor) and one lost to frustration. Why is that impressive? Roughly 200 editors at work and around 3.3 million connections a day! This is where streaming helps us with (near) real-time data processing. Our system integrates Contentful CMS, Confluent Kafka, and Apache Flink into a real-time data pipeline that captures, processes, and analyzes content updates with speed and precision.
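The first step of such a pipeline is turning a CMS change notification into a Kafka record keyed by content entry. As a minimal sketch (the payload shape follows Contentful's webhook conventions, but field names should be checked against the actual webhook; the event schema here is an assumption, not the talk's actual design):

```python
import json

def cms_update_to_event(webhook_payload: dict) -> tuple[str, str]:
    """Normalize a Contentful-style webhook payload into a Kafka
    record (key, value). Keying by entry ID keeps updates to the same
    entry in order on one partition for downstream Flink processing."""
    entry_id = webhook_payload["sys"]["id"]
    event = {
        "entry_id": entry_id,
        "content_type": webhook_payload["sys"]["contentType"]["sys"]["id"],
        "updated_at": webhook_payload["sys"]["updatedAt"],
        "fields": webhook_payload.get("fields", {}),  # localized field values
    }
    return entry_id, json.dumps(event)

# Example payload shaped like a Contentful Entry (structure assumed):
payload = {
    "sys": {
        "id": "article-42",
        "contentType": {"sys": {"id": "article"}},
        "updatedAt": "2025-05-21T09:00:00Z",
    },
    "fields": {"title": {"en-US": "New insulation guide"}},
}
key, value = cms_update_to_event(payload)
```

A real producer would then publish `(key, value)` to a topic that the Flink job and the search indexer both consume.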

Presenters

Eliel Lima Oliviera

Breakout Session
May 21

Stream Processing Smackdown: Kafka Streams vs. Apache Flink

Attention, data streaming engineers! In a world where speed is everything, choosing the proper stream processing framework is crucial. Want to supercharge your apps with real-time data processing? Should you opt for the streamlined Kafka Streams, a lightweight library for building streaming applications, or the feature-rich Apache Flink, a powerful and flexible stream processing framework? Viktor Gamov, a principal developer advocate at Confluent with extensive experience in stream processing, will walk you through the nuts and bolts of these two leading technologies. Through live coding and practical examples, we'll cover:

• Mastering State Management: Discover how each framework handles stateful computations and pick up optimization tips.
• Fault Tolerance in Practice: See how Kafka Streams and Flink keep your applications running smoothly, even when things go wrong.
• Scalability Showdown: Find out which tool scales better under heavy loads and complex tasks.
• Integration Insights: Learn how to seamlessly fit these frameworks into your existing setup to boost productivity.

We'll explore scenarios showcasing each option's strengths and weaknesses, giving you the tools to choose the best fit for your next project. Whether you're into microservices, event-driven systems, or big data streaming, this talk is packed with practical knowledge that you can immediately apply to your projects, improving performance and efficiency.
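The first two bullets share one underlying idea: keyed state that survives failures. A toy, framework-free illustration of that concept (this is neither the Kafka Streams nor the Flink API — Kafka Streams persists state via changelog topics, Flink via checkpoints):

```python
from collections import defaultdict

class KeyedCounter:
    """Conceptual sketch of keyed state with snapshot-based fault
    tolerance, the mechanism both frameworks implement for real."""

    def __init__(self):
        self.state = defaultdict(int)

    def process(self, key: str) -> int:
        self.state[key] += 1          # stateful computation per key
        return self.state[key]

    def snapshot(self) -> dict:
        return dict(self.state)       # durable copy of the state

    def restore(self, snap: dict) -> None:
        self.state = defaultdict(int, snap)  # recover after a failure

counter = KeyedCounter()
for event_key in ["clicks", "views", "clicks"]:
    counter.process(event_key)

snap = counter.snapshot()   # e.g. taken just before a crash
counter.restore(snap)       # the count survives the restart
```

The frameworks differ mainly in where that snapshot lives and how it is coordinated, which is exactly what the session compares.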

Presenters

Viktor Gamov

Breakout Session
May 21

The Future of AI is Event-Driven

Autonomous agents are reshaping enterprise operations, but scaling them isn't just about smarter AI; it's about better infrastructure. Agents need real-time data, seamless tool integration, and shared outputs across systems. Rigid request/response models create bottlenecks, while event-driven architecture (EDA) unlocks the flexibility and scalability agents require. This session will show how EDA enables autonomous agents to thrive. Key takeaways include:

- How EDA enables real-time, adaptive agent workflows and multi-agent problem solving.
- Key design patterns like Orchestrator-Worker, Multi-Agent Collaboration, and Market-Based Competition.
- Strategies for leveraging Kafka to handle scalability, fault tolerance, and low latency.
- Lessons from the microservices evolution for solving interoperability and context-sharing challenges.

This talk is for engineers and architects building scalable AI systems. You'll leave with actionable insights to design resilient, event-driven agents and future-proof your infrastructure for enterprise-scale AI.
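The Orchestrator-Worker pattern from the takeaways can be sketched with an in-process event bus standing in for Kafka topics (topic names, agent roles, and the bus itself are illustrative assumptions, not the speakers' design):

```python
from collections import defaultdict, deque

class EventBus:
    """Minimal in-process stand-in for Kafka topics: publishers emit
    events, subscribers react to them asynchronously via a queue."""
    def __init__(self):
        self.handlers = defaultdict(list)
        self.queue = deque()

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        self.queue.append((topic, event))

    def drain(self):
        while self.queue:
            topic, event = self.queue.popleft()
            for handler in self.handlers[topic]:
                handler(event)

results = []
bus = EventBus()

# Orchestrator agent: splits a request into task events instead of
# calling each worker agent through request/response.
bus.subscribe("requests", lambda req: [
    bus.publish("tasks", {"req": req, "part": p}) for p in ("research", "draft")
])
# Worker agents: consume task events and share outputs on another topic.
bus.subscribe("tasks", lambda task: bus.publish("results", f"{task['part']} done"))
bus.subscribe("results", results.append)

bus.publish("requests", "write report")
bus.drain()
```

Because workers only see events, adding a third worker (or a whole new consumer of `results`) requires no change to the orchestrator — the decoupling the session argues for.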

Presenters

Sean Falconer, Andrew Sellers

Breakout Session
May 21

Observability Made Easy: Unlocking Kafka Client Insights with KIP-714

Kafka is the backbone of modern data streaming architectures, but understanding what’s happening inside your clients has long been a challenge. KIP-714 changes the game by introducing a standardized and extensible way to expose client metrics, making observability accessible to everyone—not just Kafka experts. In this talk, we’ll explore why KIP-714 is a must-have for non-trivial systems, how it seamlessly integrates with popular observability stacks like OpenTelemetry, and what it means for debugging, performance tuning, and SLA monitoring. With real-world examples and a live demo, you’ll see how easy it is to connect Kafka clients to your telemetry and logging pipelines, unlocking deep insights with minimal effort. Whether you’re an engineer, SRE, or architect, you’ll walk away with practical knowledge on leveraging KIP-714 to make your Kafka-powered systems more transparent, resilient, and debuggable. No prior Kafka internals knowledge required—just a desire to see your data streams with clarity!
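On recent Java clients (KIP-714 first shipped in Apache Kafka 3.7), the client side of this really is minimal — metrics push is governed by a single client property; defaults and names should be checked against your client version:

```properties
# Kafka 3.7+ Java clients: allow the client to push its metrics to the
# brokers over the KIP-714 telemetry protocol (this is the default).
enable.metrics.push=true
```

On the broker side, a metrics subscription must exist before anything is collected; recent Kafka distributions ship a `kafka-client-metrics.sh` tool for this (e.g. `--alter --name demo --metrics org.apache.kafka.consumer. --interval 30000`), though exact flags may vary by version.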

Presenters

Florent Ramiere

Breakout Session
May 21

Before and After: Transforming Wix’s Online Feature Store with Apache Flink

At Wix, our Feature Store processes billions of events every day to power data-driven experiences, from real-time personalization to machine learning model inference. Our initial Apache Storm-based design struggled under massive event volumes, resulting in significant data loss and complex maintenance challenges that limited our ability to scale. In this session, we'll share how we re-architected our online feature store with Apache Flink. You'll learn about the limitations of our previous design, the challenges we faced, and the principles that guided our shift to a high-performance online feature store. We'll illustrate how we combined Apache Spark, Apache Kafka, Aerospike, and Apache Flink to achieve high-throughput, low-latency feature computation and seamless real-time updates to over 2,500 features, without data loss. Expect a direct, architecture-focused session where we'll compare our old and new designs and share the lessons learned along the way, without the philosophical debates.
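The serving side of an online feature store boils down to two operations: the stream job upserts fresh feature values, and inference reads them in one low-latency lookup. A toy sketch with a dict standing in for Aerospike (entity and feature names are made up; this is the general pattern, not Wix's implementation):

```python
class OnlineFeatureStore:
    """Minimal serving-side sketch: a streaming job (Flink, simulated
    here by direct calls) upserts computed features into a key-value
    store (Aerospike, simulated here by a dict)."""
    def __init__(self):
        self.kv = {}  # stand-in for the low-latency store

    def upsert(self, entity_id: str, feature: str, value) -> None:
        # Called by the streaming job for every computed feature update.
        self.kv.setdefault(entity_id, {})[feature] = value

    def read(self, entity_id: str, features: list) -> dict:
        # Called at inference time; must be a single fast lookup.
        row = self.kv.get(entity_id, {})
        return {f: row.get(f) for f in features}

store = OnlineFeatureStore()
store.upsert("user-1", "sessions_7d", 12)
store.upsert("user-1", "last_seen_country", "DE")
vector = store.read("user-1", ["sessions_7d", "last_seen_country"])
```

The hard parts the talk covers — exactly-once updates, backfill with Spark, and surviving failures without data loss — live in the streaming layer feeding `upsert`, not in the lookup itself.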

Presenters

Tal Sheldon, Guy Levinhr

Breakout Session
May 21

From days to seconds: adidas' journey to scalable Kafka self-service

This is the story of a team on the verge of becoming a victim of its own success: massive adoption of a technology, and the challenge of maintaining decent service quality while keeping the infrastructure stable and reliable. Implementing multi-tenancy in Kafka is not too complex when the number of use cases sharing the cluster is low. A central team can operate the infrastructure, taking care of the heavy lifting and creating the required assets on demand. This holds until adoption starts growing and the solution becomes a problem: you are a bottleneck, and every service request piles up until an agent can resolve it, increasing resolution times and frustration at the same pace. The number of mistakes made when everything is done by hand is also very high, causing toil, unexpected side effects, and operational complexity. In this talk, we'll explain how we reversed the trend by implementing a non-opinionated, vendor-agnostic self-service solution, completely delegating responsibility for maintaining assets (topics, permissions, schemas, connectors) to our stakeholders and cutting resolution times for these activities by several orders of magnitude, from days to seconds, all while keeping the balance between governance and autonomy. We'll also explain how we implemented a standards-based documentation model using AsyncAPI specs, enabling data discovery and reusability and reducing duplication. The main takeaways of the talk will be:

* Technical architecture, architectural decisions, and tradeoffs
* Operational model of the solution
* DSL specification
* Rollout strategy to reach the Globally Available state
* SLAs and adoption KPIs
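A declarative DSL of the kind the takeaways mention typically lets a stakeholder describe topics, schemas, and access in one reviewable file, from which the platform generates the actual assets. A hypothetical sketch of such a spec (field names and layout invented for illustration — this is not adidas' actual DSL):

```yaml
# Hypothetical self-service spec: one file per project, applied by the
# platform to create topics, register schemas, and derive ACLs.
context: emea
project: order-events
topics:
  - name: orders.created
    partitions: 12
    config:
      retention.ms: 604800000          # 7 days
    schema: schemas/orders-created.avsc  # registered on apply
consumers:
  - principal: svc-analytics           # ACLs generated from this entry
    topics: [orders.created]
```

Because the spec is plain data, it can double as the source for AsyncAPI documentation, which is how a single declaration can serve both provisioning and discovery.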

Presenters

Guillermo Lagunas, Jose Manuel Cristobal

Breakout Session
May 21