Fundamentals of Apache Kafka
.MP4, AVC, 1920x1080, 30 fps | English, AAC, 2 Ch | 2h 33m | 552 MB
Instructor: Ivan Mushketyk
Master Apache Kafka fundamentals from the ground up and learn how to build robust, real-time data pipelines. This course covers Kafka architecture, producers and consumers, stream processing reliability, delivery semantics, and ecosystem tools such as Kafka Connect and the Schema Registry.
What you'll learn
- Understand Kafka’s architecture and design principles
- Build Kafka producers and consumers from scratch
- Implement reliable stream processing applications
- Master delivery semantics including exactly-once processing
- Enforce schema compatibility with the Confluent Schema Registry
- Integrate Kafka Connect for seamless data integration
- Work with Schema Registry for data consistency
- Apply Kafka to real-world streaming use cases
Apache Kafka powers the real-time data systems behind many of today’s most innovative apps. If you’ve ever wondered how companies process massive streams of data in real time, this is your starting point.
This course gives you a practical, hands-on introduction to Kafka’s architecture and why it matters. You'll learn how to build reliable producers and consumers, tackle real-world stream processing challenges, and explore the trade-offs of different delivery semantics.
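To give a flavour of one idea the course covers: Kafka producers route each record to a partition by hashing its key, so all records with the same key land on the same partition and keep their relative order. The sketch below illustrates that routing rule only; it uses CRC-32 in place of Kafka's actual murmur2 hash and a made-up three-partition topic, so it runs without a broker or client library.

```python
# Illustrative sketch of key-based partitioning, NOT a real Kafka client.
# Kafka's default partitioner hashes the record key (murmur2) to pick a
# partition; we substitute CRC-32 so this runs standalone.
import zlib

NUM_PARTITIONS = 3  # hypothetical topic with three partitions


def partition_for(key: bytes, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a record key to a partition index, mimicking Kafka's approach."""
    return zlib.crc32(key) % num_partitions


# Every event keyed by the same user ID maps to one partition, so a
# consumer reading that partition sees the user's events in order.
assert partition_for(b"user-42") == partition_for(b"user-42")
```

Because ordering is only guaranteed within a partition, choosing a good key (such as a user or entity ID) is a core design decision the course returns to repeatedly.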
By the end, you’ll not only know how Kafka works; you’ll know how to use it to build data systems that scale and handle real-world complexity.