Integrating Open Source LLMs
.MP4, AVC, 1280x720, 30 fps | English, AAC, 2 Ch | 1h 2m | 222 MB
Instructor: Sandy Ludosky
This course will teach you how to integrate, deploy, and scale open-source LLMs with retrieval, API handling, tool configuration, and safety features.
What you'll learn
Open-source LLMs provide powerful, flexible alternatives to proprietary models, but they typically lack up-to-date or domain-specific knowledge.
First, you’ll explore how to set up your application with secure credential management and safe integration practices, ensuring reliable connections to multiple LLMs. Next, you’ll discover how to provide context and manage conversation history with techniques like vectorized search and state persistence so your LLM-powered systems can deliver coherent, context-aware interactions. Finally, you’ll learn how to optimize performance with scaling strategies, caching, and rate-limiting, while also integrating moderation and compliance filters to prevent unsafe outputs.
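As a taste of the retrieval side of this, the vectorized search mentioned above boils down to embedding documents as vectors and ranking them by similarity to an embedded query. The sketch below is purely illustrative and not taken from the course: the toy three-dimensional "embeddings" and document names are invented, and a real system would produce embeddings with a model rather than by hand.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_vec, doc_vecs):
    # Rank document ids by similarity to the query vector, highest first.
    ranked = sorted(doc_vecs.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked]

# Toy 3-dimensional "embeddings" (a real pipeline would use an embedding model).
docs = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-info": [0.1, 0.8, 0.1],
    "contact-page":  [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # hypothetical embedding of "how do I get a refund?"
print(search(query, docs)[0])  # most relevant document id
```

The top-ranked documents would then be passed to the LLM as context alongside the user's question, which is the basic shape of retrieval-augmented generation.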
When you’re finished with this course, you’ll have the skills and knowledge needed to design, build, and deploy secure, scalable, and context-aware applications powered by open-source LLMs.