Agentic AI Engineering: Design, Build & Deploy Smart Agents
Published 10/2025
Duration: 1h 1m | .MP4 1280x720 30 fps(r) | AAC, 44100 Hz, 2ch | 437.44 MB
Genre: eLearning | Language: English
Master AI Agents with RAG, Fine-Tuning & Deployment – Build next-gen intelligent systems powered by open-source LLMs
What you'll learn
- Run LLMs locally (e.g. Ollama, LM Studio, Hugging Face) to build AI applications entirely on your own machine.
- Create RAG systems integrating embeddings, vector stores, and local LLMs for efficient knowledge retrieval.
- Build agentic systems where smart agents use tools and workflows to autonomously accomplish tasks.
- Implement prompt engineering, context management, and guardrails to control agent behavior and ensure reliability.
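The guardrails mentioned in the last point can be as simple as validating input before it ever reaches the model. A minimal sketch in Python (the patterns and function names are illustrative assumptions, not course material):

```python
import re

# Hypothetical guardrail: reject prompts containing obvious secrets
# (API-key-shaped strings, long digit runs) before they reach the LLM.
BLOCKED_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style API key
    re.compile(r"\b\d{13,16}\b"),         # possible card number
]

def apply_guardrail(prompt: str) -> str:
    """Return the prompt unchanged if it passes, else raise ValueError."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("prompt rejected by guardrail")
    return prompt
```

Real agent frameworks layer richer checks (output validation, tool-call allowlists) on the same idea: intercept, inspect, then pass through or refuse.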
Requirements
- A desktop or laptop with internet access for hands‑on projects; basic Python knowledge is a plus
Description
This course takes you from the fundamentals of AI models to building and deploying intelligent AI agents using the latest Generative AI frameworks and LLM-powered architectures. Designed for professionals, developers, and innovators, this program blends theory with hands-on practice.
Here’s what you’ll explore:
Foundations of AI & Generative AI – Learn what AI models are, how they are trained, and study architectures like CNNs, Autoencoders, BERT, Transformers, and Diffusion Models. Understand how Generative AI creates new content and the most popular LLMs powering today’s applications.
Traditional vs Agentic AI Engineering – See how GenAI engineering differs from traditional rule-based development, from tools to outputs, and why agent-based design is the future.
How LLMs Work – Dive into tokenization, embeddings, self-attention, and prediction—the building blocks of modern LLMs.
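The self-attention step named above can be sketched in a few lines of plain Python: each token embedding is scored against every other embedding, the scores are turned into weights with a softmax, and the output is a weighted mix of the embeddings. This is a toy single-head illustration of the mechanism, not a real model:

```python
import math

def softmax(scores):
    """Turn raw scores into positive weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(embeddings):
    """Scaled dot-product self-attention over a list of vectors."""
    d = len(embeddings[0])
    out = []
    for q in embeddings:
        # score the query against every embedding (dot product / sqrt(d))
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in embeddings]
        weights = softmax(scores)
        # output vector = attention-weighted sum of all embeddings
        out.append([sum(w * v[i] for w, v in zip(weights, embeddings))
                    for i in range(d)])
    return out

# three toy 2-dimensional "token embeddings"
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = self_attention(tokens)
```

Production transformers add learned query/key/value projections and multiple heads, but the score–softmax–mix loop is the same.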
RAG & Fine-Tuning – Master Retrieval-Augmented Generation (RAG), embedding models, vector databases, and learn when to fine-tune vs. when to use retrieval, with practical case studies.
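The RAG pattern can be illustrated end-to-end with a deliberately tiny sketch: "embed" documents, rank them against a query by cosine similarity, and stuff the best match into the prompt. The bag-of-words "embedding" here is a stand-in assumption; real systems use a trained embedding model and a vector database:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector
    (real RAG systems use a trained embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Vector databases store embeddings for fast similarity search.",
    "Fine-tuning updates model weights on domain data.",
]
context = retrieve("how do vector databases work", docs)[0]
# the retrieved context is injected into the prompt sent to the LLM
prompt = f"Answer using this context:\n{context}\n\nQuestion: how do vector databases work?"
```

Swapping the toy pieces for a real embedding model and a vector store gives the production version of the same loop: embed, retrieve, augment, generate.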
Local LLM Deployment – Explore how to set up and run LLMs locally for secure, customizable development workflows.
Open-Source Platforms & Hugging Face Ecosystem – Leverage powerful tools, libraries, and the Hugging Face Model Hub to accelerate AI solutions.
AI Agent Projects – Apply your knowledge by designing and building AI agents for tasks like intelligent chat, document Q&A, and enterprise workflows, bringing all concepts together in practical, end-to-end projects.
Cloud Deployment & Containerization – Learn how to package, scale, and distribute AI agents using containers and cloud platforms for seamless deployment in production environments.
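Packaging an agent as a container typically starts with a Dockerfile along these lines (the file names and serving command are illustrative assumptions, not course material):

```dockerfile
# Minimal container for a Python-based agent service (illustrative sketch)
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# assumed entry point: an HTTP API exposing the agent
EXPOSE 8000
CMD ["python", "serve_agent.py"]
```

The resulting image can then be pushed to a registry and run on any cloud container platform, which is what makes the container route attractive for scaling agents.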
Who this course is for:
- Beginners & Non‑Technical Learners: Eager to explore the world of Agentic AI, with no prior experience required.
- Software Engineers & AI Developers: Seeking to build, deploy, and scale autonomous AI agents using frameworks like LangChain, LangGraph, and Ollama.
- Data Scientists & Technical Professionals: Aiming to gain hands‑on experience with state‑of‑the‑art agentic frameworks and real‑world AI solutions.
- Product Managers & Business Professionals: Looking to understand and lead AI projects, collaborate with AI teams, and drive business value using AI agent solutions.
- Entrepreneurs & Small Business Owners: Interested in integrating AI agents into their products or automating tasks using no‑code platforms like LangFlow.