Complete RAG Testing Course with RAGAS, DeepEval and Python

Posted By: ELK1nG

Complete RAG Testing Course with RAGAS, DeepEval and Python
Published 7/2025
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz
Language: English | Size: 2.61 GB | Duration: 5h 10m

Learn the complete way to test RAG implementations: from functional to performance testing, and from plain Python to RAGAS and DeepEval.

What you'll learn

Understand the Basics of LLMs

Understand LLM Application types

Gain know-how on the types of AI: Weak AI and Generative AI

Understand How RAG works

Understand the types of RAG Testing

A lot of ready-to-use code that you can use from day 0

Understand ML metrics such as Accuracy, Recall and F1

Understand RAG Testing Metrics such as Context Recall and Context Precision

Understand RAG Testing Metrics such as Answer Relevancy

Understand RAG Testing Metrics such as Truthfulness

Gain hands-on experience with the RAGAS open-source testing framework

Gain hands-on experience with the DeepEval open-source testing framework

Understand how to create custom metrics

Test for coherence, fluency, tone, and other human-centric metrics

Use rapid validation tools for MVPs built on RAG systems

Gain a deep understanding of metrics (fluency, coherence, relevance, conciseness) and build customizable test frameworks

Requirements

Some basic Python programming experience

Basic understanding of LLMs and AI

An LLM API key

Basic Testing understanding

A laptop/PC with VS Code

Willingness to learn a hot new skill

Description

Master the art of evaluating Retrieval-Augmented Generation (RAG) systems with the most practical and complete course on the market, trusted by over 25,000 students and backed by 1,000+ 5-star reviews. Whether you're building LLM applications, leading AI QA efforts, or shipping reliable MVPs, this course gives you all the tools, code, and frameworks to test and validate RAG pipelines using DeepEval and RAGAS.

What You'll Learn

Understand the basics of LLMs and how they are applied across industries

Explore different LLM application types and use cases

Learn the difference between Weak AI and Generative AI

Deep-dive into how RAG works, and where testing fits into the pipeline

Discover the types of RAG testing: factuality, hallucination detection, context evaluation, etc.

Get hands-on with ready-to-use code from day 0, with minimal setup required

Master classic ML metrics (Accuracy, Recall, F1) and where they still matter

Learn RAG-specific metrics: Context Recall, Context Precision, Answer Relevancy, Truthfulness (Faithfulness), Fluency, Coherence, Tone, and Conciseness

Build custom test cases and metrics with DeepEval and RAGAS

Learn how to use the RAGAS and DeepEval open-source frameworks for production and research

Validate MVPs quickly and reliably using automated test coverage

Who Is This For?

AI & LLM developers who want to ship trustworthy RAG systems

QA engineers transitioning into AI testing roles

ML researchers aiming for reproducible benchmarks

Product managers who want to measure quality in RAG outputs

MLOps/DevOps professionals looking to automate evaluation in CI/CD

Overview

Section 1: Introduction

Lecture 1 Introduction

Lecture 2 Quick 5 Minute RAG Test

Section 2: Setup the environment - Installing dependencies

Lecture 3 Install Python

Lecture 4 Install PIP for Python

Lecture 5 Install NPM and Node.js

Lecture 6 Install VSCode

Lecture 7 Get an OpenAI API Key

Lecture 8 GitHub Repository Link

Section 3: Types of AI and Model Lifecycle - Optional but highly recommended

Lecture 9 How AI Works

Lecture 10 Types of AI

Lecture 11 How does the App Tech Stack Look with AI

Lecture 12 What is a Foundation Model and an LLM

Lecture 13 Model - Lifecycle - Pretraining Phase of a Model

Lecture 14 Model Lifecycle - Fine-Tuning Phase of a Model

Lecture 15 AI Model - Some considerations around data

Lecture 16 Types of applications that use AI / LLMs

Section 4: Introduction to RAG

Lecture 17 How RAG works - a high level overview

Lecture 18 Hallucinations of RAG

Lecture 19 Types of RAG

Lecture 20 Applications of RAG

Lecture 21 Setting up the repo and dependencies

Lecture 22 Implementing a retriever and a Faiss DB

Lecture 23 RAG - Chunks and overlaps for documents

Lecture 24 RAG - Implementing an Augmentor

Lecture 25 RAG - Implementing Retriever + Augmenter + Generator
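The Retriever + Augmenter + Generator pipeline built across Lectures 21-25 can be sketched end to end. This is a minimal illustration, not the course's code: the retriever is a naive word-overlap ranker standing in for FAISS, and the generator is a stub standing in for a real LLM call.

```python
# Toy RAG pipeline: retrieve -> augment -> generate.
# All function names here are illustrative, not the course's actual code.

DOCS = [
    "RAGAS is an open-source framework for evaluating RAG pipelines.",
    "FAISS is a library for efficient similarity search over vectors.",
    "DeepEval provides test-case style evaluation for LLM outputs.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (stand-in for FAISS)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def augment(query: str, contexts: list[str]) -> str:
    """Build the augmented prompt from the retrieved context."""
    joined = "\n".join(contexts)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

def generate(prompt: str) -> str:
    """Stub generator; a real pipeline would call an LLM here."""
    return prompt.splitlines()[1]  # echo the top-ranked context as the 'answer'

query = "What is RAGAS used for?"
contexts = retrieve(query, DOCS)
answer = generate(augment(query, contexts))
print(answer)
```

Swapping the stubs for an embedding model, a FAISS index, and an OpenAI call turns this skeleton into the pipeline the lectures implement.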

Section 5: How to Test RAG Systems

Lecture 26 Gen AI Parameters - Top-K, Top-P, and Temperature

Lecture 27 Introducing Top-K Documents

Lecture 28 Introducing Top-K Chunks

Lecture 29 Top-K Chunks from the Most Relevant Document

Lecture 30 RAG - Testing Before pipeline is implemented

Lecture 31 RAG - Testing for the Retriever - Cosine Similarity

Lecture 32 RAG - Testing for the Augmentation

Lecture 33 RAG - Testing for the Generation
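The retriever check from Lecture 31 boils down to scoring how close a retrieved chunk is to the query with cosine similarity. A minimal sketch, using bag-of-words counts in place of real embeddings:

```python
# Cosine similarity over bag-of-words vectors (real retriever tests would
# compare embedding vectors instead; the formula is the same).
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

query = "how does retrieval work"
good_chunk = "retrieval work is explained here"
bad_chunk = "completely unrelated cooking recipe"

# A retriever test asserts the relevant chunk outranks the irrelevant one.
assert cosine_similarity(query, good_chunk) > cosine_similarity(query, bad_chunk)
```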

Section 6: Types of RAG Testing

Lecture 34 Manual or Human Testing

Lecture 35 Automated Testing with API validations - Pytest Demo

Lecture 36 Using LLM as a Judge to validate the response
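The Pytest approach from Lecture 35 can be sketched as structural assertions on a RAG response; `rag_answer` below is a hypothetical stub standing in for the real API call.

```python
# Pytest-style validation: assert structural properties of a RAG response
# before judging its content. rag_answer is a stub, not a real endpoint.

def rag_answer(question: str) -> dict:
    """Stub for a RAG endpoint; a real test would hit the deployed API."""
    return {"answer": "Paris is the capital of France.", "contexts": ["France facts"]}

def test_rag_response_shape():
    resp = rag_answer("What is the capital of France?")
    assert "answer" in resp and resp["answer"], "answer must be non-empty"
    assert isinstance(resp["contexts"], list) and resp["contexts"], "contexts must be returned"
    assert "Paris" in resp["answer"]
```

Run with `pytest` as usual; the LLM-as-judge approach from Lecture 36 replaces the last assertion with a second LLM call that grades the answer.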

Section 7: RAG Single and Multi-hop Testing

Lecture 37 RAG Testing - Specific Query Synthesizer

Lecture 38 RAG Testing - Abstract Query Synthesizer

Lecture 39 RAG Testing - MultiHop Specific Query Synthesizer

Lecture 40 RAG Testing MultiHop Abstract Query Synthesizer

Lecture 41 Golden Nugget Metrics

Section 8: Important Machine Learning Metrics

Lecture 42 Ground Truth Table - source of Truth | Test Oracle

Lecture 43 Machine Learning Metrics - Accuracy

Lecture 44 Machine Learning Metrics - Precision

Lecture 45 Machine Learning Metrics - Recall

Lecture 46 Machine Learning Metrics - F1 Score
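The four metrics in this section can be computed directly from a ground-truth table of expected labels (the test oracle). A worked sketch for a binary labelling task:

```python
# Precision, recall, and F1 from true/predicted binary labels.
# tp/fp/fn = true positives, false positives, false negatives.

def precision_recall_f1(y_true: list[int], y_pred: list[int]) -> tuple[float, float, float]:
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0]  # ground truth
y_pred = [1, 1, 0, 1, 0, 0]  # model output: one miss, one false alarm
p, r, f = precision_recall_f1(y_true, y_pred)
print(p, r, f)  # precision 2/3, recall 2/3, F1 2/3
```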

Section 9: Testing with the RAGAS library

Lecture 47 RAGAs Validation Framework - Retrieval

Lecture 48 RAG Metrics - Context Precision

Lecture 49 RAGAs - Python DEMO - Context Precision

Lecture 50 RAG Metrics - Context Recall

Lecture 51 RAGAs - Python DEMO - Context Recall

Lecture 52 RAG Metrics - Context Relevance

Lecture 53 RAGAs - Python DEMO - Context Relevance

Lecture 54 RAG Metric - Truthfulness

Lecture 55 RAGAs - Python DEMO - Faithfulness

Lecture 56 RAGAs Validation Framework - Retrieval - Augmentation - Generation

Lecture 57 RAG Framework - Coherence, Fluency, and Relevance
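RAGAS computes Faithfulness by having an LLM split the answer into claims and verify each one against the retrieved context. A toy stand-in that treats sentences as claims and verbatim containment as support (a deliberate simplification of the LLM-based check):

```python
# Toy faithfulness score: fraction of answer claims supported by the context.
# "Claim" = sentence; "supported" = appears verbatim in a context chunk.

def toy_faithfulness(answer: str, contexts: list[str]) -> float:
    claims = [s.strip() for s in answer.split(".") if s.strip()]
    supported = sum(1 for c in claims if any(c in ctx for ctx in contexts))
    return supported / len(claims) if claims else 0.0

contexts = ["The Eiffel Tower is in Paris. It was completed in 1889."]
answer = "The Eiffel Tower is in Paris. It opened in 1920."
print(toy_faithfulness(answer, contexts))  # 0.5: one of the two claims is supported
```

The real metric is more forgiving (paraphrases count as support because an LLM does the matching), but the ratio of supported claims to total claims is the same idea.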

Section 10: Testing with the DeepEval Library

Lecture 58 What is the DeepEval LLM Evaluation Platform

Lecture 59 Installing and running the first test

Lecture 60 Creating a Generative Metric

Lecture 61 Implementing an HTML Report
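The custom-metric pattern this section builds with DeepEval can be illustrated in plain Python: a metric exposes a measure method that records a score and a pass/fail verdict against a threshold. The class and method names below are hypothetical, not DeepEval's actual API.

```python
# Hypothetical custom metric: scores conciseness of an answer and records
# whether it passes a threshold, mirroring the metric pattern conceptually.

class ConcisenessMetric:
    """Scores 1.0 while the answer stays under max_words, decaying after."""

    def __init__(self, max_words: int = 30, threshold: float = 0.5):
        self.max_words = max_words
        self.threshold = threshold

    def measure(self, answer: str) -> float:
        n = len(answer.split())
        self.score = min(1.0, self.max_words / n) if n else 0.0
        self.success = self.score >= self.threshold
        return self.score

metric = ConcisenessMetric(max_words=10)
metric.measure("Paris is the capital of France.")  # 6 words -> score 1.0
print(metric.success)  # True
```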

AI Engineers & LLM Developers, QA/Test Automation Engineers transitioning to AI, ML Researchers & Applied Scientists, AI Product Managers