Current Bengaluru 2025

Session Archive

Check out our session archive to catch up on anything you missed or rewatch your favorites to make sure you hear all of the industry-changing insights from the best minds in data streaming.


Keynote

The future starts here! Confluent CEO Jay Kreps and some of the top minds in data took to the keynote stage at Current Bengaluru to demonstrate how Data Streaming Platforms are transforming organizations and powering next-generation AI with unified and reliable real-time data. It’s a game-changer for every industry and every data practitioner. Welcome to what’s next.

Presenters

Jay Kreps, Rajesh Kandhasamy, Shaun Clowes, Addison Huddy, Marc Selwan

Keynote

Transitioning to KRaft in Apache Kafka: Avoiding Common Mistakes

With Apache Kafka 4.0 around the corner, Kafka users will have no choice but to migrate ZooKeeper-based clusters to KRaft. In this talk, I will cover how to prepare existing ZooKeeper-based Kafka clusters for the migration to KRaft, including the considerations before the migration, common mistakes, and how to avoid them.

Session overview:
- Introduction to KRaft
- Migration prep: minimizing the impact of potential downtime
- KRaft-specific configs such as process.roles, node.id, and controller.quorum.voters
- Common mistakes and how to avoid them
- Demo

Through this session, attendees will gain the knowledge and tools necessary to navigate this transition effectively, ensuring their Kafka deployments are poised for future growth and innovation.
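
To ground the configs the session names, here is a minimal, illustrative server.properties sketch for a new KRaft controller being provisioned for a ZooKeeper-to-KRaft migration (KIP-866 bridge mode). The node IDs, hostnames, and ZooKeeper connection string are placeholders, not values from the session:

```properties
# Dedicated controller node prepared for ZooKeeper-to-KRaft migration.
process.roles=controller
# Must not collide with any existing broker.id in the ZooKeeper cluster.
node.id=3000
# The full controller quorum; hostnames here are placeholders.
controller.quorum.voters=3000@kraft-ctrl-0:9093,3001@kraft-ctrl-1:9093,3002@kraft-ctrl-2:9093
controller.listener.names=CONTROLLER
listeners=CONTROLLER://:9093
listener.security.protocol.map=CONTROLLER:PLAINTEXT

# Bridge mode: lets the KRaft quorum drive the existing ZooKeeper-based
# cluster during migration (available since Kafka 3.4).
zookeeper.metadata.migration.enable=true
zookeeper.connect=zk-0:2181,zk-1:2181,zk-2:2181
```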

Presenters

Ravi Bhardwaj

Lightning Talk

Powering Real-Time Analytics, Predictions, and Recommendations at Swiggy with Confluent Kafka

Swiggy, India’s leading food delivery platform, processes millions of messages every second to power real-time recommendations, predictions, order tracking, and personalized user experiences. In this session, we’ll explore the challenges Swiggy faced while managing open-source Kafka and how we successfully migrated to Confluent’s managed Kafka cluster, streamlining operations and significantly improving performance.

We’ll also dive into the critical role Confluent Kafka plays in our microservices architecture, with a special focus on the complexities of Kafka consumer canary testing. We’ll discuss why this process is complex and how we uniquely solved these challenges to ensure reliable, efficient service delivery.

Finally, we’ll demonstrate how Confluent Kafka enables Swiggy to handle millions of messages per second, empowering real-time analytics, predictive models like SLA predictions, and personalized user experiences at scale. This session will provide valuable insights into Kafka’s central role in modern microservices architectures and how Confluent Kafka supports high-performance, scalable, and real-time data pipelines for large-scale applications.
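
As a rough illustration of one canary pattern the abstract hints at (not Swiggy’s actual implementation), the sketch below pins a canary consumer to a single partition under its own consumer group, with offset commits disabled so it only observes live traffic. The topic, broker address, and group name are assumptions:

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class CanaryConsumer {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
    // A separate group id keeps the canary's offsets isolated from production.
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-events-canary");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    // No offset commits: the canary only observes, it never owns progress.
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      // Manual assignment to one partition pins the canary to a small,
      // deterministic slice of traffic instead of joining the group rebalance.
      consumer.assign(List.of(new TopicPartition("order-events", 0)));
      while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        for (ConsumerRecord<String, String> record : records) {
          // Run the new consumer logic in dry-run mode here and compare its
          // output against the stable version; side effects stay disabled.
          System.out.printf("canary observed offset=%d key=%s%n", record.offset(), record.key());
        }
      }
    }
  }
}
```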

Presenters

Akash Agarwal

Lightning Talk

Breaking the Kubernetes Barrier: Deploying Kafka Across Clusters

The growing adoption of Kubernetes and Kafka for distributed systems presents exciting opportunities alongside unique challenges for enhancing the availability and resilience of Kafka deployments. While Kubernetes offers powerful orchestration capabilities, deploying a Kafka cluster within a single Kubernetes cluster can expose organizations to limitations: a Kubernetes cluster outage may render the entire Kafka system unavailable, disrupting applications and clients. To overcome this, many organizations, including ours, are working to achieve scalable, distributed, multi-zone Kafka clusters in which the Kafka nodes span multiple Kubernetes clusters in nearby availability zones.

This multi-cluster approach provides several key benefits. It ensures high availability by preventing single-cluster outages, supports migration efforts by allowing Kafka nodes to be deployed across clusters with minimal disruption, and optimizes resource usage by leveraging the combined capacity of multiple Kubernetes environments. However, implementing such deployments introduces significant challenges, including managing increased network complexity and costs, ensuring low-latency connectivity for performance, and maintaining data consistency in latency-sensitive environments.

This session explores practical methodologies and principles for deploying Kafka across Kubernetes clusters, focusing on broker and controller distribution, fault tolerance, scalability, cross-cluster communication, and resource synchronization. Attendees will gain insights into the challenges of distributing Kafka across Kubernetes clusters and explore potential solutions within the Operator framework. Tailored for developers and operators, this talk provides actionable takeaways for enhancing Kafka’s resilience, scalability, and flexibility on Kubernetes, including best practices for resource integration, configuration management, and performance tuning.
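
To make the cross-cluster communication challenge concrete, here is a hypothetical broker configuration for a node running in one Kubernetes cluster whose peers and controllers live in others. Every DNS name below is a placeholder; the essential point is that each node must advertise an address that is resolvable and routable from all participating clusters (for example via LoadBalancer services or a shared service mesh):

```properties
# Broker running in Kubernetes cluster A of a multi-cluster KRaft deployment.
process.roles=broker
node.id=0

# Bind locally, but advertise an address reachable from the other clusters.
listeners=INTERNAL://0.0.0.0:9092
advertised.listeners=INTERNAL://kafka-0.cluster-a.example.internal:9092
inter.broker.listener.name=INTERNAL

# The controller quorum is itself spread across clusters; these addresses
# must resolve and route across Kubernetes cluster boundaries.
controller.quorum.voters=100@ctrl-0.cluster-a.example.internal:9093,101@ctrl-1.cluster-b.example.internal:9093,102@ctrl-2.cluster-c.example.internal:9093
controller.listener.names=CONTROLLER
listener.security.protocol.map=INTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT
```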

Presenters

Aswin A

Lightning Talk

Building an intelligent Kafka streaming pipeline for near real-time business insights

In this session, Team Yubi demonstrates how an intelligent streaming data pipeline leveraging Apache Kafka creates a unified analytical platform to deliver near real-time insights from a centralized Redshift data warehouse. Business operations teams face challenges approving large-ticket trades due to fragmented data across multiple systems managed by different teams. Fetching and reconciling this data often involves writing complex queries—expertise many operations teams lack—leading to delays in due diligence and decision-making.

To solve this, we built a robust streaming data pipeline that centralizes disparate data sources into Redshift. The pipeline uses Apache Kafka for streaming, Kubernetes for scalability, dbt for data transformations, and Redshift WLM with data sharing for optimized query execution. Our custom Kafka sink connectors process data efficiently in two modes—snapshot (replicating the source RDS) and CDC (capturing incremental changes)—within a single flush cycle. This approach keeps the warehouse up-to-date, reduces ETL loads, lowers infrastructure costs, and enables quick data refresh cycles.

The unified platform also lays the foundation for AI-based Text-to-SQL (TTS) capabilities, allowing teams to generate SQL queries using natural language for ad-hoc requests and reports. By enabling real-time streaming, Team Yubi empowers operations teams to process high-value transactions—disbursing amounts worth hundreds of crores—quickly and efficiently. The ability to reinitiate actions seamlessly in case of failures minimizes operational bottlenecks and ensures smooth transaction workflows, reducing revenue impact. Join us to learn how real-time data streaming transforms operational efficiency and decision-making.
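
The abstract describes custom sink connectors handling snapshot and CDC records within a single flush cycle. As a hedged sketch of that shape (not Yubi’s code), a Kafka Connect SinkTask could buffer records by mode and apply both buffers in one flush; the "sync.mode" header is an invented routing convention, and the warehouse I/O is omitted:

```java
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.header.Header;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Map;

public class DualModeSinkTask extends SinkTask {
  private final List<SinkRecord> snapshotBuffer = new ArrayList<>();
  private final List<SinkRecord> cdcBuffer = new ArrayList<>();

  @Override
  public void start(Map<String, String> config) {
    // Open the warehouse connection here (omitted in this sketch).
  }

  @Override
  public void put(Collection<SinkRecord> records) {
    for (SinkRecord record : records) {
      // "sync.mode" is a hypothetical header set upstream; default to CDC.
      Header mode = record.headers().lastWithName("sync.mode");
      if (mode != null && "snapshot".equals(mode.value())) {
        snapshotBuffer.add(record); // full-table replication of the source RDS
      } else {
        cdcBuffer.add(record);      // incremental change capture
      }
    }
  }

  @Override
  public void flush(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
    // A single flush cycle applies the snapshot load first, then the CDC
    // deltas, e.g. COPY into staging tables followed by MERGE (omitted).
    snapshotBuffer.clear();
    cdcBuffer.clear();
  }

  @Override
  public void stop() { }

  @Override
  public String version() {
    return "0.1-sketch";
  }
}
```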

Presenters

Saravanan Ravichandran

Lightning Talk

From Events to Insights: Kafka’s Role in Myntra’s Real-Time Data Revolution

In today’s fast-paced world, where actionable business insights drive competitive advantage, tapping into dynamic real-time streams marks the next evolution of data-driven decision-making and is revolutionizing business intelligence. Traditional batch-based data pipelines slowed down decision-making, causing delays in business insights and limiting our ability to respond in real time. Join this session to learn how, at Myntra, we revamped our data infrastructure by transforming batch-based pipelines into a robust, real-time streaming architecture, reducing latency from hours to mere minutes.

This session will also delve into how we leveraged Kafka, Spark Structured Streaming, and Delta Lake to create a scalable, low-latency ingestion pipeline. By implementing exactly-once semantics and optimizing data flows, we achieved the reliability and scalability needed to power mission-critical use cases. We’ll also explore how this transformation addressed the inherent limitations of traditional batch systems, enabling data freshness, operational agility, and the delivery of actionable near real-time business insights. These advancements have redefined how Myntra supports its dynamic ecosystem, driving unprecedented agility. The audience will gain actionable strategies for building real-time streaming pipelines, overcoming data freshness challenges, and unlocking the potential of near real-time insights to fuel innovation and growth at scale.

Key highlights:
1. Kafka-Centric Streaming Architecture: Delve into the architectural design where Kafka powers seamless integration between streaming and batch workflows, efficiently handling millions of events per minute.
2. Data Freshness & Completeness Challenges: Understand how Myntra ensures data freshness and completeness using write-ahead logs and micro-batch freshness propagation.
3. Operational Innovations with Delta and Spark: Explore how Apache Spark enabled efficient real-time ingestion, exactly-once semantics, and fault tolerance in high-throughput environments.
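
As a minimal sketch of the kind of pipeline described (using standard Spark Structured Streaming and Delta Lake APIs, not Myntra’s actual code), the job below reads from Kafka and appends to a Delta table; the checkpoint location is what provides the exactly-once guarantees the session mentions. Topic, broker, and paths are placeholders:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.StreamingQueryException;

import java.util.concurrent.TimeoutException;

public class KafkaToDelta {
  public static void main(String[] args) throws TimeoutException, StreamingQueryException {
    SparkSession spark = SparkSession.builder().appName("kafka-to-delta").getOrCreate();

    // Stream events from Kafka; topic and bootstrap servers are placeholders.
    Dataset<Row> events = spark.readStream()
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "clickstream-events")
        .option("startingOffsets", "latest")
        .load();

    // Append to a Delta table. The checkpoint directory tracks processed
    // offsets, giving end-to-end exactly-once delivery across restarts.
    StreamingQuery query = events
        .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")
        .writeStream()
        .format("delta")
        .option("checkpointLocation", "/checkpoints/clickstream")
        .outputMode("append")
        .start("/delta/clickstream");

    query.awaitTermination();
  }
}
```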

Presenters

Shrvan Warke

Lightning Talk