Optimizing high-throughput ingestion pipelines

TechTalks powered by CrowdStrike

Processing hundreds of thousands of events per second while optimizing for both availability and cost is no easy task.

This talk will showcase how we used a data-driven approach, leveraging Kafka to build a generic batching framework that solved a real-world use case in one of our products.


3 reasons why you should attend this talk:

- Using Kafka as a transaction log in the context of distributed systems.

- Optimizing database writes using batching.

- Designing reusable components in a microservice ecosystem.
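To give a flavor of the second point, here is a minimal sketch of batching database writes. The talk's actual framework is not public; this is an illustrative, self-contained example where `sink`, `max_batch`, and `max_wait_s` are assumed names, and `sink` stands in for a bulk-write call to a database.

```python
import time


class WriteBatcher:
    """Accumulates events and flushes them to a sink in batches,
    amortizing per-write overhead (round trips, transactions) across
    many events. Illustrative sketch only, not the talk's framework."""

    def __init__(self, sink, max_batch=100, max_wait_s=0.5):
        self.sink = sink              # callable invoked with a list of events
        self.max_batch = max_batch    # flush once this many events are buffered
        self.max_wait_s = max_wait_s  # ...or once the oldest event is this old
        self._buf = []
        self._first_ts = None

    def add(self, event):
        if self._first_ts is None:
            self._first_ts = time.monotonic()
        self._buf.append(event)
        if (len(self._buf) >= self.max_batch
                or time.monotonic() - self._first_ts >= self.max_wait_s):
            self.flush()

    def flush(self):
        if self._buf:
            self.sink(self._buf)      # one bulk write instead of many singles
            self._buf = []
            self._first_ts = None
```

The size and age thresholds trade latency for throughput: larger batches mean fewer, cheaper writes, while the timeout bounds how long a lone event can wait in the buffer.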
