Practical Agentic AI: RAG, Planning & Vector Search
Published 11/2025
Duration: 2h | .MP4 1280x720 30 fps(r) | AAC, 44100 Hz, 2ch | 1.11 GB
Genre: eLearning | Language: English
Agentic AI in Practice: Build Proactive LLM Agents with LangChain, RAG & Vector Search
What you'll learn
- Distinguish LLM chat apps from agentic systems across autonomy, tools, and memory
- Apply the Perceive → Reason → Act loop to multi-step, goal-directed tasks
- Orchestrate Model–Controller–Prompter (MCP) style workflows for agents
- Implement Retrieval-Augmented Generation (RAG) with grounding and citations
- Integrate web search (Tavily) and LLM ranking to fetch and summarize sources
- Persist interaction history using SQLite and migrate embeddings to ChromaDB
- Engineer topic “pillars,” weights, and recency decay to personalize results
- Perform semantic re-ranking to improve recommendation quality and diversity (see the sketch after this list)
- Mitigate agent risks including prompt injection, memory poisoning, and spoofing
- Evaluate agent performance with offline tests and scenario-based checks
- Package and ship a CLI news-curation agent with reproducible configs and prompts
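To make the re-ranking idea concrete before the detailed outline, here is a minimal sketch: candidates are re-ordered by cosine similarity between their embedding and a user preference vector. The toy 3-dimensional vectors and article titles are illustrative assumptions; in the course, embeddings come from a real embedding model.

```python
# Minimal semantic re-ranking sketch: sort candidates by cosine similarity to a
# user preference vector. The tiny 3-dimensional vectors are toy values, not
# real embeddings from the course.
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

preference_vector = [0.9, 0.1, 0.3]  # aggregated from liked articles (toy values)
candidates = [
    {"title": "GPU prices fall again",        "vec": [0.8, 0.2, 0.1]},
    {"title": "New trade tariffs announced",  "vec": [0.1, 0.9, 0.4]},
]

# Re-rank: candidates most similar to the preference vector come first.
reranked = sorted(candidates,
                  key=lambda c: cosine(c["vec"], preference_vector),
                  reverse=True)
for c in reranked:
    print(c["title"], round(cosine(c["vec"], preference_vector), 3))
```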
Requirements
- Working knowledge of Python and virtual environments; comfort with CLI & Git.
- Understanding of HTTP APIs and JSON; basic familiarity with LLM prompts.
Description
This course contains content created with the use of artificial intelligence.
Generative AI is moving from reactive “knowers” to proactive “doers” that perceive, plan, and act toward goals. This shift—Agentic AI—pairs LLM reasoning with tools, memory, and workflows so systems can execute multi-step tasks autonomously.
Enterprises now expect agents that ground answers with RAG, orchestrate APIs, and operate reliably with guardrails—raising new questions about autonomy, accountability, and oversight.
What This Course Covers
You’ll learn an end-to-end Agentic AI stack: the Perceive→Reason→Act loop; Retrieval-Augmented Generation; planning & memory; the MCP (Model–Controller–Prompter) workflow; and framework choices (LangChain, LlamaIndex, CrewAI, AutoGen). We translate concepts into an applied build: a CLI “Personalized News Curator” that uses Tavily for live search, ChatGPT/Gemini for ranking & summaries, an in-memory/SQLite → ChromaDB store, topic-pillar weighting, semantic re-ranking, and explanation generation.
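As a preview of how the Perceive → Reason → Act loop maps onto the news curator, here is a minimal sketch in plain Python. The function names (fetch_articles, rank_articles, present, update_memory) and the feedback rule are illustrative placeholders, not the course's actual code; in the real build, perception is backed by Tavily search and reasoning by an LLM.

```python
# Minimal Perceive -> Reason -> Act loop for a toy news-curation agent.
# All names and the scoring rule are placeholders chosen for illustration.

def fetch_articles(topics):
    # Perceive: gather fresh candidate articles for the user's topics
    # (in the course this step is backed by Tavily web search).
    return [{"title": f"Latest story about {t}", "topic": t} for t in topics]

def rank_articles(articles, preferences):
    # Reason: score candidates against learned topic preferences
    # (the course layers LLM ranking and semantic re-ranking on top of this).
    return sorted(articles, key=lambda a: preferences.get(a["topic"], 0.0), reverse=True)

def present(article):
    # Act: surface the top pick and collect explicit feedback.
    print(f"Recommended: {article['title']}")
    return input("Like it? (y/n) ").strip().lower() == "y"

def update_memory(preferences, article, liked):
    # Memory: nudge the topic weight up or down based on feedback.
    delta = 0.1 if liked else -0.1
    preferences[article["topic"]] = preferences.get(article["topic"], 0.0) + delta

def agent_loop(topics, steps=3):
    preferences = {t: 1.0 for t in topics}
    for _ in range(steps):
        articles = fetch_articles(topics)               # Perceive
        ranked = rank_articles(articles, preferences)   # Reason
        liked = present(ranked[0])                      # Act
        update_memory(preferences, ranked[0], liked)    # Learn

if __name__ == "__main__":
    agent_loop(["ai", "space", "climate"])
```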
What You Will Learn
Differentiate LLMs vs. Agentic AI across autonomy, memory, and tool use.
Apply the Perceive→Reason→Act loop to real tasks.
Implement MCP (Model–Controller–Prompter) orchestration for agents.
Ground responses with RAG for factuality and reliability (a minimal sketch follows this list).
Build a CLI agent that collects preferences and runs a continuous recommendation loop.
Integrate Tavily search + ChatGPT/Gemini for retrieval and ranking.
Persist interaction history (SQLite) and migrate to ChromaDB embeddings.
Engineer topic “pillars,” weighted selection, and semantic re-ranking.
Generate user-facing explanations for recommendations (XAI).
Address agent risks: memory poisoning, goal manipulation, identity spoofing.
Compare frameworks (LangChain, LlamaIndex, CrewAI, AutoGen) to match goals.
Apply prompt and project structuring best practices for agentic coding.
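For a feel of how RAG grounding works, the sketch below indexes a couple of documents in an in-memory ChromaDB collection and assembles a citation-bearing prompt from the top-k retrieved passages. The collection name, sample documents, and prompt template are assumptions for illustration; it relies on ChromaDB's default embedding function, and the course's own pipeline will differ.

```python
# Hedged RAG grounding sketch using ChromaDB's in-memory client.
import chromadb

client = chromadb.Client()                       # in-memory vector store
collection = client.create_collection("articles")

# Index a few documents; ChromaDB embeds them with its default embedding function.
collection.add(
    ids=["a1", "a2"],
    documents=[
        "NASA announced a new lunar lander contract this week.",
        "The ECB held interest rates steady amid slowing inflation.",
    ],
    metadatas=[{"source": "space-news"}, {"source": "finance-daily"}],
)

def build_grounded_prompt(question, k=2):
    # Retrieve the k most similar passages and cite their sources in the prompt,
    # so the LLM answers from retrieved context rather than parametric memory alone.
    results = collection.query(query_texts=[question], n_results=k)
    docs = results["documents"][0]
    sources = [m["source"] for m in results["metadatas"][0]]
    context = "\n".join(f"[{s}] {d}" for s, d in zip(sources, docs))
    return (
        "Answer using only the context below and cite the bracketed sources.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("What happened in space exploration?"))
```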
Real-World Application & Use Cases
We design a proactive companion that monitors interests, fetches fresh articles, updates a preference model from likes/dislikes, and iterates autonomously—illustrating agent planning, tool use, and memory in a compact workflow. You’ll see how to evolve from a simple loop to a production-style recommender: topic extraction, weighted exploration vs. exploitation, semantic vectors, source allow-listing, recency decay, and a user-readable “why this was recommended” message.
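The weighting logic mentioned above can be sketched in a few lines: each topic pillar carries a weight that decays with the time since the last interaction, and selection mixes exploitation (sampling by decayed weight) with a small amount of exploration. The pillar names, half-life, and epsilon value are assumptions chosen for the example, not the course's settings.

```python
# Illustrative topic-pillar weighting with recency decay and epsilon-greedy
# exploration. All constants and pillar names are made up for the example.
import math
import random
import time

pillars = {  # pillar -> weight and last interaction timestamp
    "ai":      {"weight": 2.0, "last_seen": time.time() - 1 * 86400},   # 1 day ago
    "space":   {"weight": 1.0, "last_seen": time.time() - 7 * 86400},   # 1 week ago
    "finance": {"weight": 0.5, "last_seen": time.time() - 30 * 86400},  # 1 month ago
}

def decayed_weight(pillar, half_life_days=7.0):
    # Exponential recency decay: effective weight halves every half_life_days.
    age_days = (time.time() - pillar["last_seen"]) / 86400
    return pillar["weight"] * math.exp(-math.log(2) * age_days / half_life_days)

def pick_pillar(epsilon=0.2):
    # Exploration: with probability epsilon, pick any pillar uniformly at random.
    if random.random() < epsilon:
        return random.choice(list(pillars))
    # Exploitation: otherwise sample proportionally to decayed weights.
    names = list(pillars)
    weights = [decayed_weight(pillars[n]) for n in names]
    return random.choices(names, weights=weights, k=1)[0]

print(pick_pillar())
```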
Course Format & Learning Experience
Structured modules combine concept briefings with hands-on labs: set up the environment and MCP scaffolding; implement RAG; wire Tavily+ChatGPT; add persistence (SQLite → ChromaDB); introduce topic pillars & semantic re-ranking; add explanation UX; then harden with tests and risk mitigations. Expect checklists, prompts, and refactors aligned to engineering best practices.
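As a rough illustration of the persistence step (SQLite → ChromaDB), the sketch below logs interactions in a SQLite table and then upserts their text into a persistent ChromaDB collection. The table, file, and collection names are made up for the example, and it assumes a recent chromadb release that provides PersistentClient and upsert.

```python
# Hedged persistence sketch: log interactions in SQLite, then migrate them into
# a ChromaDB collection so later ranking can use embeddings.
import sqlite3
import chromadb

# 1) Persist interaction history in SQLite.
db = sqlite3.connect("curator.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS interactions "
    "(id INTEGER PRIMARY KEY, title TEXT, topic TEXT, liked INTEGER)"
)
db.execute(
    "INSERT INTO interactions (title, topic, liked) VALUES (?, ?, ?)",
    ("New open-weight model released", "ai", 1),
)
db.commit()

# 2) Migrate logged articles into a persistent ChromaDB collection.
client = chromadb.PersistentClient(path="./chroma")
collection = client.get_or_create_collection("interactions")
for row_id, title, topic, liked in db.execute(
    "SELECT id, title, topic, liked FROM interactions"
):
    collection.upsert(
        ids=[str(row_id)],
        documents=[title],
        metadatas=[{"topic": topic, "liked": bool(liked)}],
    )
```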
Instructor
Taught by Shreejit Gangadharan, who brings 12 years of industry experience at companies such as Flipkart, Microsoft, and Google.
Updated with 2024–2025 practices across MCP orchestration, vector stores, topic-pillar ranking, source allow-listing, recency decay, and agent risk controls.
Primary Topics/Keywords: Agentic AI, Perceive-Reason-Act, MCP workflow, RAG, LangChain, LlamaIndex, CrewAI, AutoGen, Tavily, ChatGPT/Gemini, SQLite, ChromaDB, embeddings, semantic re-ranking, topic pillars, explainability, agent risks.
Prerequisites:
Working knowledge of Python and virtual environments; comfort with CLI & Git.
Understanding of HTTP APIs and JSON; basic familiarity with LLM prompts.
Tools & Frameworks Used: LangChain, Tavily, ChatGPT/Gemini, SQLite, ChromaDB, pytest.
Capstone Project: CLI “Personalized News Curator” with preference learning, topic pillars, and semantic re-ranking, plus user-facing explanations.
Who this course is for:
- Software & ML Engineers implementing LLM apps and internal automations.
- Data/AI Product Managers specifying agent capabilities, guardrails, and KPIs.
- Solutions Architects & Platform Teams integrating agents with APIs, search, and data stores.
- Security, Risk & Compliance Leads evaluating autonomous behaviors and controls.
- Intermediate Python practitioners moving from prompt engineering to agentic systems.