Apache Beam | A Hands-On Course to Build Big Data Pipelines
MP4 | Video: AVC 1280x720 | Audio: AAC 44KHz 2ch | 5 hours 23 minutes | 62 lectures | 1.78 GB
Genre: eLearning | Language: English
Build Big data pipelines with Apache Beam in any language and run them on Apache Spark, Apache Flink, or Google Cloud Dataflow (GCP).
Apache Beam is a unified and portable programming model for both Batch and Streaming data use cases.
Previously, Spark, Flink, and Cloud Dataflow jobs could only run on their respective clusters. Apache Beam changes that with a portable programming model: you build language-agnostic Big data pipelines once and run them on any supported Big data engine, such as Apache Spark, Apache Flink, Google Cloud Dataflow on Google Cloud Platform, and many more.
Apache Beam is the future of building Big data processing pipelines, and its portability is driving broad adoption. Many big companies have already started deploying Beam pipelines on their production servers.
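To make the portability claim concrete, here is a minimal sketch using the Beam Python SDK (the input file name, output prefix, and runner choice are illustrative assumptions, not course material): the transforms never mention an execution engine, and the runner is selected purely through pipeline options.

```python
# Minimal sketch (assumptions: Beam Python SDK installed, a local file
# named "input.txt" exists). The pipeline itself is engine-agnostic;
# the runner is chosen via PipelineOptions.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Swap "DirectRunner" for "SparkRunner", "FlinkRunner" or "DataflowRunner"
# (plus that engine's own options) without touching the transforms below.
options = PipelineOptions(runner="DirectRunner")

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("input.txt")         # read lines
        | "Split" >> beam.FlatMap(lambda line: line.split())  # split into words
        | "PairWithOne" >> beam.Map(lambda word: (word, 1))
        | "Count" >> beam.CombinePerKey(sum)                   # per-word counts
        | "Write" >> beam.io.WriteToText("word_counts")        # output prefix
    )
```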
What's included in the course?
Complete Apache Beam concepts explained from scratch to real-time implementation.
Every Apache Beam concept is explained with proper HANDS-ON examples.
Covers even those concepts that are not clearly explained anywhere else online.
Type Hints, Encoding & Decoding, Watermarks, Windows, Triggers, and many more (a short windowing and triggering sketch follows this list).
Build 2 real-time Big data case studies using the Apache Beam programming model.
Load processed data into Google Cloud BigQuery tables from an Apache Beam pipeline via Dataflow (see the BigQuery sketch after this list).
Code and datasets used in the lectures are attached to the course for your convenience.
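As referenced above, here is a minimal sketch of the windowing and triggering concepts listed in the course contents (the 60-second window size, the 30-second early firing, and the helper function name are illustrative assumptions):

```python
import apache_beam as beam
from apache_beam.transforms.trigger import (
    AccumulationMode,
    AfterProcessingTime,
    AfterWatermark,
)

# Hypothetical fragment: group a streaming PCollection of (key, value) pairs
# into 1-minute fixed windows, emit early results every 30 seconds of
# processing time, and a final result once the watermark passes the window end.
def window_and_sum(events):
    return (
        events
        | "Window" >> beam.WindowInto(
            beam.window.FixedWindows(60),
            trigger=AfterWatermark(early=AfterProcessingTime(30)),
            accumulation_mode=AccumulationMode.DISCARDING,
        )
        | "SumPerKey" >> beam.CombinePerKey(sum)
    )
```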
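And a minimal sketch of writing pipeline output to a BigQuery table via Dataflow (the project, region, bucket, dataset, table names, and the two-column schema are placeholder assumptions):

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder GCP settings; replace with your own project, region and bucket.
options = PipelineOptions(
    runner="DataflowRunner",
    project="my-gcp-project",
    region="us-central1",
    temp_location="gs://my-bucket/temp",
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "CreateRows" >> beam.Create([{"name": "alice", "score": 10}])
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-gcp-project:my_dataset.my_table",   # placeholder table spec
            schema="name:STRING,score:INTEGER",     # simple string schema
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```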