
A Big Data Hadoop and Spark project for absolute beginners

Posted By: lucky_aut
Last updated: 2/2023
Duration: 12h 18m | MP4 1280x720, 30 fps | AAC, 44.1 kHz, 2ch | 4.08 GB
Genre: eLearning | Language: English

Data Engineering Spark Hive Python PySpark Scala Coding Framework Testing IntelliJ Maven Glue Databricks Delta Lake

What you'll learn
Big Data, Hadoop, and Spark from scratch by solving a real-world use case using Python and Scala
Spark Scala and PySpark real-world coding frameworks
Real-world coding best practices: logging, error handling, and configuration management, using both Scala and Python
Serverless big data solution using AWS Glue, Athena and S3

Requirements
Students should have some programming background and some knowledge of SQL queries.
Description
This course will prepare you for a real-world Data Engineer role!
Data Engineering is a crucial component of data-driven organizations, as it encompasses the processing, management, and analysis of large-scale data sets, which is essential for staying competitive.
This course lets you get started with Big Data quickly, using free cloud clusters to solve a practical use case.
You will learn the fundamental concepts of Hadoop, Hive, and Spark, using both Python and Scala. The course aims to develop your Spark Scala and PySpark coding abilities to those of a professional developer by introducing you to industry-standard coding practices such as logging, error handling, and configuration management.
Additionally, you will understand the Databricks Lakehouse Platform and learn how to conduct analytics using Python and Scala with Spark, apply Spark SQL and Databricks SQL for analytics, develop a data pipeline with Apache Spark, and manage a Delta table by accessing version history, restoring data, and utilizing time travel features. You will also learn how to optimize query performance using Delta Cache, work with Delta Tables and Databricks File System, and gain insights into real-world scenarios from our experienced instructor.
What you will learn:
Big Data, Hadoop concepts
How to create a free Hadoop and Spark cluster using Google Dataproc
Hadoop hands-on - HDFS, Hive
Python basics
PySpark RDD - hands-on
PySpark SQL, DataFrame - hands-on
Project work using PySpark and Hive
Scala basics
Spark Scala DataFrame
Project work using Spark Scala
Hands-on understanding of Databricks Delta Lake and Lakehouse concepts
Managing a Delta table by accessing its version history, restoring data, and using time travel
Spark Scala real-world coding framework and development using Winutils, Maven, and IntelliJ
Python Spark Hadoop Hive coding framework and development using PyCharm
Building a data pipeline using Hive, PostgreSQL, and Spark
Logging, error handling, and unit testing of PySpark and Spark Scala applications
Spark Scala Structured Streaming
Applying Spark transformations on data stored in AWS S3 using Glue, and viewing the data using Athena
How to become a productive data engineer leveraging ChatGPT
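To give a flavor of the "real-world coding framework" topics above (logging, error handling, configuration management in Python), here is a minimal sketch of that pattern around a data job. This is not course material: the file keys, job name, and paths are illustrative assumptions, and a real job would call Spark APIs where noted.

```python
# Minimal sketch of a logging / error-handling / configuration pattern for a
# data job. Config keys and paths are illustrative assumptions, not course code.
import configparser
import logging
import sys

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s - %(message)s",
)
logger = logging.getLogger("ingest_job")


def load_config(text):
    """Parse job settings from an INI-style config instead of hard-coding them."""
    config = configparser.ConfigParser()
    config.read_string(text)
    return config


def run_job(config):
    """Read source/target paths from config; a real job would use spark.read / df.write."""
    source = config.get("paths", "source")
    target = config.get("paths", "target")
    logger.info("Reading from %s, writing to %s", source, target)
    return {"source": source, "target": target}


if __name__ == "__main__":
    sample = "[paths]\nsource = /data/in\ntarget = /data/out\n"
    try:
        result = run_job(load_config(sample))
        logger.info("Job finished: %s", result)
    except configparser.Error as exc:
        logger.error("Bad configuration: %s", exc)
        sys.exit(1)
```

Keeping paths and options in a config file, and funneling failures through logged exceptions, is the kind of practice the course applies to both PySpark and Spark Scala applications.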
Prerequisites:
This course is designed for Data Engineering beginners; no prior knowledge of Python or Scala is required. However, some familiarity with databases and SQL is necessary to succeed. Upon completion, you will have the skills and knowledge required for a real-world Data Engineer role.
Who this course is for:
Beginners who want to learn Big Data or experienced people who want to transition to a Big Data role
Big data beginners who want to learn how to code in the real world
