Breakout Session

Unbundle Your Cake and Eat It Too: Redefining the Limits of Kafka Streams

Kafka Streams powers mission-critical applications across a variety of use cases at some of the biggest companies in the world. What are the properties that make Kafka Streams so popular amongst developers looking to build event-driven applications on top of Kafka? What are the operational risks of using Kafka Streams in your mission-critical applications? Most importantly, what can you do about those risks?

After touching on what makes Kafka Streams so popular amongst developers writing mission-critical, event-driven apps, this talk will explore the fundamental scaling challenges inherent in Kafka Streams’ architecture. From a systems design perspective, compartmentalization is what enables systems to scale. In Kafka Streams, however, application logic is bundled with thread scheduling, distributed state management, and load distribution. We’ll share the symptoms that suggest you’re running up against the limits of this bundled approach.
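To make that bundling concrete, here is a minimal Kafka Streams sketch (topic names and configuration values are illustrative, not from the talk): the topology that carries the application logic, the number of processing threads, and the local state directory are all configured in the same embedded client, and that same client participates in the consumer group rebalance protocol that distributes load.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class BundledCountingApp {
    public static void main(String[] args) {
        // Application logic: count events per key, materialized in a local state store.
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("events")
               .groupByKey()
               .count(Materialized.as("event-counts"))          // local RocksDB store + changelog topic
               .toStream()
               .to("event-counts-output", Produced.with(Serdes.String(), Serdes.Long()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "counting-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);            // thread scheduling: same process
        props.put(StreamsConfig.STATE_DIR_CONFIG, "/tmp/kafka-streams");  // state management: same process

        // Load distribution happens via the consumer group rebalance protocol,
        // which is also embedded in this same client.
        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}
```

Everything above runs inside one JVM: scaling the application means scaling compute, state, and load distribution together, which is exactly the coupling the talk examines.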

 Having understood that, we provide a blueprint for unbundling Kafka Streams in such a way that one can retain its power, flexibility, and clean operational model while also making it dramatically easier to operate at scale. In particular, we identify state management and load distribution (ie. the rebalance protocol) as the systems which free up Kafka Streams to scale beyond its limits when compartmentalized, and what we have learned by unbundling them.    In summary, if you are a developer looking to build event driven applications on Kafka, you will learn about when and why to pick Kafka Streams, what operational hurdles you need to be aware of, and your options for scaling those hurdles if you hit them.
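As one illustration of the state-management seam, Kafka Streams already lets you swap the default RocksDB store for your own implementation by passing a KeyValueBytesStoreSupplier to Materialized. The sketch below is illustrative only: RemoteKeyValueStoreSupplier is a hypothetical class, not the approach presented in the talk, and a real remote-backed store would have to implement the full KeyValueStore contract.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.state.KeyValueBytesStoreSupplier;
import org.apache.kafka.streams.state.KeyValueStore;

// Hypothetical supplier: a real version would return a KeyValueStore backed by a
// remote database instead of a local RocksDB instance.
class RemoteKeyValueStoreSupplier implements KeyValueBytesStoreSupplier {
    private final String name;

    RemoteKeyValueStoreSupplier(final String name) {
        this.name = name;
    }

    @Override
    public String name() {
        return name;
    }

    @Override
    public KeyValueStore<Bytes, byte[]> get() {
        // A real implementation would return a store whose reads and writes go to a
        // remote backend; it is omitted here to keep the sketch short.
        throw new UnsupportedOperationException("remote-backed store not implemented in this sketch");
    }

    @Override
    public String metricsScope() {
        return "remote-kv";
    }
}

public class UnbundledCountingApp {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("events")
               .groupByKey()
               // Same topology as before, but state now sits behind a pluggable supplier.
               .count(Materialized.<String, Long>as(new RemoteKeyValueStoreSupplier("event-counts"))
                                  .withKeySerde(Serdes.String())
                                  .withValueSerde(Serdes.Long()))
               .toStream()
               .to("event-counts-output", Produced.with(Serdes.String(), Serdes.Long()));
    }
}
```

Behind a narrow seam like this, state can be provisioned, scaled, and recovered independently of the application process, which is the kind of compartmentalization the abstract argues for.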

Apurva Mehta

Responsive