Breakout Session
More than 12,000 hotels globally rely on Revinate’s Guest Data Platform and guest communication solutions to drive direct revenue and deliver delightful guest experiences.
Powering these capabilities is a highly available, event-driven data pipeline that ingests, transforms, validates, deduplicates, enriches, and persists hotel data. The pipeline comprises several Kafka Streams microservices, running in Kubernetes, that consume from and produce to a Kafka cluster in Confluent Cloud.
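To make the shape of these services concrete, here is a minimal sketch of one such Kafka Streams microservice. The topic names, application ID, and bootstrap address are illustrative placeholders, not our production configuration.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class GuestEventValidator {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "guest-event-validator");
        // In practice this would point at the Confluent Cloud cluster and include credentials.
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker.example.com:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Consume raw guest events, drop records that fail a simple validation check,
        // and produce the survivors to a downstream topic for the next microservice.
        KStream<String, String> raw = builder.stream("guest-events-raw");
        raw.filter((key, value) -> value != null && !value.isBlank())
           .to("guest-events-validated");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        streams.start();
    }
}
```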
We’ve built this pipeline over several years and have learned what works and what doesn’t. But the complexities we’ve uncovered are not unique to Revinate. This session digs into our architecture and showcases our successes (and failures) so others can improve their own data pipelines.
You will leave this session knowing how to: asynchronously connect Java microservices via Kafka topics; use Protobuf as a SerDes for flexible schema evolution; support messages larger than 1 MB; create an expedited topic to enable message prioritization; perform scalability testing; and monitor and alert on data issues.
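As a taste of the Protobuf discussion, wiring a Protobuf SerDe into a Kafka Streams topology can look roughly like the sketch below. GuestProfile stands in for a hypothetical protoc-generated class, and the topic names and Schema Registry URL are placeholders.

```java
import java.util.Map;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import io.confluent.kafka.streams.serdes.protobuf.KafkaProtobufSerde;

public class ProtobufTopology {

    // GuestProfile is a hypothetical class generated by protoc from a .proto schema;
    // adding optional fields to that schema lets producers and consumers evolve independently.
    public static Topology build(String schemaRegistryUrl) {
        KafkaProtobufSerde<GuestProfile> guestSerde = new KafkaProtobufSerde<>(GuestProfile.class);
        guestSerde.configure(Map.of("schema.registry.url", schemaRegistryUrl), /* isKey = */ false);

        StreamsBuilder builder = new StreamsBuilder();
        // Read Protobuf-encoded guest profiles and write them back out with the same SerDe.
        builder.stream("guest-profiles-raw", Consumed.with(Serdes.String(), guestSerde))
               .to("guest-profiles-deduped", Produced.with(Serdes.String(), guestSerde));
        return builder.build();
    }
}
```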