Agentic AI Security- Threats, Architectures & Mitigations
Published 6/2025
Duration: 1h 26m | .MP4 1280x720 30 fps(r) | AAC, 44100 Hz, 2ch | 824 MB
Genre: eLearning | Language: English
Published 6/2025
Duration: 1h 26m | .MP4 1280x720 30 fps(r) | AAC, 44100 Hz, 2ch | 824 MB
Genre: eLearning | Language: English
Threat Modeling for Agentic AI
What you'll learn
- Understand the core principles of Agentic AI, including autonomy, memory, planning, and tool-use in modern AI systems.
- Explore OWASP’s Agentic Security Initiative, its threat taxonomy, and relevance to real-world AI security practices.
- Distinguish between single-agent and multi-agent architectures, and how their design influences security posture.
- Learn to apply STRIDE, PASTA, and MAESTRO frameworks for threat modeling in autonomous AI environments.
- Identify and analyze critical threats such as reasoning manipulation, memory poisoning, tool misuse, and agent identity spoofing.
- Navigate OWASP’s Agentic Threat Taxonomy Navigator to classify threats based on execution, cognition, memory, and human interaction.
- Gain practical skills in mitigating agentic threats through structured playbooks for reasoning validation, identity hardening, tool control, and governance.
- Design and evaluate secure deployment architectures, including rollback mechanisms, memory isolation, and API constraints.
- Build and test vulnerable AI agents using LangChain, and simulate threats like prompt injection, consensus manipulation, and RCE.
- Map emerging agentic AI risks to industry frameworks like MITRE ATLAS and NIST AI RMF, enabling governance, compliance, and SOC integration.
Requirements
- Interest in Secure AI Practices
Description
Agentic AI Security: Threats, Architectures & Mitigationsis a comprehensive course designed to prepare developers, security engineers, AI architects, and risk officers to defend the next generation of autonomous systems. The course begins by grounding learners in the fundamentals of agentic AI, explaining how modern AI agents—unlike traditional models—perceive, reason, plan, and act with increasing autonomy. It explores the pivotal role of OWASP’s Agentic Security Initiative and introduces the architectural foundations of single-agent and multi-agent systems, showcasing the core capabilities of agents, including memory, tool use, and goal decomposition. Learners are introduced to orchestration layers, agent frameworks like LangChain and AutoGen, and real-world agentic patterns and use cases. As the course progresses, it delves into threat modeling with STRIDE, PASTA, and MAESTRO frameworks, before detailing OWASP’s reference agentic threat model and taxonomy navigator.
The midsection focuses on deep-dives into specialized threats—reasoning drift, memory poisoning, tool misuse, identity spoofing, HITL exploitation, and multi-agent coordination failures. Six mitigation playbooks provide practical countermeasures: reasoning validation, memory control, tool execution hardening, identity strengthening, HITL optimization, and inter-agent trust assurance. Learners then transition into architectural solutions including modular agent design, execution guards, rollback systems, and defense-in-depth strategies. The deployment section emphasizes containerization, policy-driven API access, and lessons from real-world agent incidents. To ensure proactive defense, the course includes guidance on designing red teams, secure simulation labs, and building vulnerable agents for training purposes using LangChain. Hands-on labs like simulating memory poisoning and consensus manipulation are also included.
The course concludes by integrating agentic threats into existing security frameworks—mapping OWASP threats to MITRE ATLAS and NIST AI RMF—thus aligning advanced agent risks with enterprise governance and compliance expectations. Learners emerge prepared to design, test, and deploy secure, interpretable, and auditable AI agents.
Who this course is for:
- AI Developers and Engineers
- Cybersecurity Professionals and SOC Analysts
- Security Architects and DevSecOps Engineers
- AI Product Managers and Technical Leads
- Governance, Risk & Compliance (GRC) Officers
- Researchers and Academics in AI Safety or Adversarial ML
- Red Teamers and Penetration Testers for AI Systems
- Cloud and Infrastructure Engineers
More Info