Breakout Session
Powered by the popularity of ChatGPT and other large autoregressive language models, we've seen a huge surge in interest in vector search in 2023. In applications that leverage [RAG](https://zilliz.com/use-cases/llm-retrieval-augmented-generation), the Milvus vector database is commonly used to retrieve relevant document chunks and other short-form text. The applicability of text-based RAG has led to widespread adoption, from single-person tech startups to Fortune 500 fintech companies.
In this talk, we'll discuss how Kafka powers Milvus, both from the inside (within the vector database itself) and from the outside (as both a sink and a source for text, images, and other forms of unstructured data). In particular, Kafka serves as the backbone for data ingestion in Milvus, enabling asynchronous and distributed ingestion and querying of vector data. This capability is essential for applications that leverage generative AI, where the ability to rapidly access and process vast amounts of vectorized information can significantly impact the quality and relevance of the generated content.
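To make the "outside" half concrete, here is a minimal sketch (not taken from the talk) of Kafka acting as a source for Milvus: a consumer reads raw text from a topic, embeds it, and inserts the vectors with pymilvus. The topic name, collection name, and embedding model are assumptions chosen for illustration, not part of the session content.

```python
# Sketch: Kafka topic -> embeddings -> Milvus collection.
# Assumes a running Kafka broker, a Milvus instance, and a pre-created
# collection "docs" with a "vector" field and a "text" scalar field.
from confluent_kafka import Consumer
from pymilvus import MilvusClient
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
milvus = MilvusClient(uri="http://localhost:19530")

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "milvus-ingest",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["documents"])                 # hypothetical topic name

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        text = msg.value().decode("utf-8")
        vector = model.encode(text).tolist()
        # Insert the embedded chunk into the hypothetical "docs" collection.
        milvus.insert(collection_name="docs",
                      data=[{"vector": vector, "text": text}])
finally:
    consumer.close()
```

The "inside" half is a deployment concern rather than application code: recent Milvus releases can be configured to use Kafka as their internal message queue (the `mq.type` setting in the Milvus configuration), which is how Kafka ends up underpinning ingestion within the database itself.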