Most tutorials show you how to connect a few LLM calls. This live workshop shows you how to build a multi-agent system that actually works in production — with context engineering, MCP orchestration, and the Glass-Box architecture that keeps agents reliable at scale.
By Packt Publishing · Refunds available up to 10 days before the event
Building a multi-agent LLM system that works in a demo takes an afternoon. Building one that works reliably in production requires context engineering — semantic blueprints, explicit context management, and transparent orchestration. This workshop teaches the production approach from the start.
Context engineering is the discipline of designing systems that give AI the right information, in the right format, to reason and act reliably. It goes beyond prompt engineering: rather than depending on fragile prompts, you build structured, deterministic systems that scale in production.
A multi-agent system uses multiple specialized AI agents working together — each with a defined role, context, and tools — to complete complex tasks no single agent could handle reliably. Context engineering is the key to making them work predictably.
MCP is Anthropic's open standard for connecting AI models to tools, data sources, and other agents. It provides a structured way to orchestrate multi-agent workflows with clear context boundaries — making systems transparent and debuggable.
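To make the idea concrete, here is a minimal, illustrative model of what MCP standardizes: tools are advertised with machine-readable schemas, and every call and result is a structured message. This sketch only mimics the shape of the protocol in plain Python; it is not the real MCP SDK, and the tool name and schema are invented for illustration.

```python
import json

# Hypothetical tool registry in the spirit of MCP: each tool carries a
# description and a JSON Schema for its inputs. (Illustrative only.)
TOOLS = {
    "get_weather": {
        "description": "Return the weather for a city",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
}

def list_tools() -> str:
    """Analogous to an MCP tools/list response: advertise tools with schemas."""
    return json.dumps({"tools": [{"name": n, **spec} for n, spec in TOOLS.items()]})

def call_tool(name: str, arguments: dict) -> dict:
    """Analogous to an MCP tools/call exchange: structured request, structured reply."""
    if name == "get_weather":
        return {"content": f"Sunny in {arguments['city']}", "is_error": False}
    return {"content": f"unknown tool: {name}", "is_error": True}

result = call_tool("get_weather", {"city": "Boston"})
```

Because both sides of the exchange are structured data rather than free-form prompt text, every hop between agents and tools can be logged, validated, and replayed.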
Context engineering concepts require hands-on practice to truly understand. This live workshop lets you build a working system with a world-class instructor answering your questions in real time — far more effective than reading documentation alone.
Six modules. Six hours. A production-ready context-engineered AI system by the time you finish.
Understand why prompts fail at scale and how semantic blueprints give AI systems structured, goal-driven contextual awareness that scales reliably.
Design and orchestrate multi-agent workflows using the Model Context Protocol. Build transparent, traceable agent systems that coordinate reliably.
Build retrieval-augmented generation (RAG) pipelines that deliver accurate, cited responses. Engineer memory systems that persist context reliably across agent interactions.
Architect a transparent, explainable context engine where every decision is traceable. Build AI systems that are predictable and debuggable in production.
Implement safeguards against prompt injection and data poisoning. Enforce moderation, trust boundaries, and access controls in multi-agent environments.
Deploy your context-engineered multi-agent system to production. Apply patterns for scaling, monitoring, and maintaining reliability under real-world load.
Concrete working deliverables — not just theory and slides.
A working Glass-Box Context Engine with transparent, traceable reasoning
Multi-agent workflow orchestrated with the Model Context Protocol
High-fidelity RAG pipeline with memory and citations
Safeguards against prompt injection and data poisoning
Reusable architecture patterns for production AI systems
Certificate of completion from Packt Publishing
Denis Rothman brings decades of production AI engineering experience to this live workshop — making complex context engineering concepts immediately actionable.
Denis Rothman is a bestselling AI author with over 30 years of experience in artificial intelligence, agent systems, and optimization. He has authored multiple cutting-edge AI books published by Packt and is renowned for making complex AI architecture concepts practical and immediately applicable. In this workshop he guides you step by step through building production-ready context-engineered multi-agent systems — answering your questions live throughout the 6-hour session.
This is an intermediate to advanced workshop. Solid Python and basic LLM experience required.
Common questions about the workshop, what to expect, and how to prepare.
The most reliable architecture for a multi-agent LLM system is the Glass-Box Context Engine approach taught in this workshop. Each agent has a semantic blueprint defining its role and context boundaries. Agents coordinate through MCP with typed, structured messages. A RAG pipeline provides shared knowledge with citation tracking. Safeguards protect against prompt injection at each agent boundary.
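The traceability piece of that architecture can be sketched in a few lines: every agent decision is appended to a structured trace, so a run can be audited step by step. This is a hypothetical illustration under assumed names (`Trace`, `answer_with_trace`), not the workshop's actual code.

```python
from dataclasses import dataclass, field

@dataclass
class Trace:
    """Glass-box audit trail: a structured record of every agent decision."""
    events: list[dict] = field(default_factory=list)

    def log(self, agent: str, action: str, detail: str) -> None:
        self.events.append({"agent": agent, "action": action, "detail": detail})

def answer_with_trace(question: str, trace: Trace) -> str:
    # Stand-ins for LLM-backed agents; each step is logged before it acts.
    trace.log("retriever", "search", f"query={question!r}")
    docs = ["doc-1"]  # pretend retrieval result
    trace.log("retriever", "cite", f"sources={docs}")
    trace.log("writer", "draft", "composed answer from cited sources")
    return f"answer to {question} [sources: {', '.join(docs)}]"

trace = Trace()
result = answer_with_trace("what is MCP?", trace)
# trace.events now holds a complete, inspectable record of the run
```

In production the same pattern would write to structured logging rather than an in-memory list, but the principle is identical: no agent acts without leaving a traceable record.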
The number of agents depends on the task complexity, not a fixed rule. The workshop teaches you how to decompose tasks into agent responsibilities — starting with the minimal number of specialized agents needed and knowing when to add more. Most practical multi-agent LLM systems start with three to five specialized agents: an orchestrator, domain specialists, and a synthesis agent.
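That orchestrator / specialists / synthesis split can be sketched with plain functions standing in for LLM-backed agents. The role names and wiring below are illustrative assumptions, not the workshop's actual decomposition.

```python
def researcher(task: str) -> str:
    """Domain specialist: gathers raw findings for the task."""
    return f"findings about {task}"

def analyst(findings: str) -> str:
    """Domain specialist: interprets the researcher's findings."""
    return f"analysis of [{findings}]"

def synthesizer(parts: list[str]) -> str:
    """Synthesis agent: merges specialist outputs into one answer."""
    return " | ".join(parts)

def orchestrator(task: str) -> str:
    """Routes the task through specialists, then synthesizes a single result."""
    findings = researcher(task)
    analysis = analyst(findings)
    return synthesizer([findings, analysis])

answer = orchestrator("context engineering")
```

Adding a fourth or fifth agent is then a matter of giving the orchestrator another routing step, not rearchitecting the system.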
Context management is consistently the hardest part. As agents interact, context accumulates, becomes polluted with irrelevant information, and causes performance to degrade. The workshop directly addresses this with explicit context boundaries in MCP orchestration and semantic blueprints that control what each agent sees.
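Two of those controls are simple enough to sketch directly: a context boundary that exposes only whitelisted fields to each agent, and history pruning that keeps stale turns from polluting later calls. Key names and the pruning threshold are illustrative assumptions.

```python
def scoped_context(shared: dict, allowed_keys: set[str]) -> dict:
    """Context boundary: an agent sees only the fields it is allowed to see."""
    return {k: v for k, v in shared.items() if k in allowed_keys}

def prune_history(history: list[str], max_turns: int = 4) -> list[str]:
    """Keep only the most recent turns so the window doesn't fill with noise."""
    return history[-max_turns:]

shared = {"question": "deploy plan", "scratch_notes": "...", "secrets": "KEY"}
history = [f"turn {i}" for i in range(10)]

planner_view = scoped_context(shared, {"question"})  # secrets never leak in
recent = prune_history(history)                      # last 4 turns only
```

Real systems would score turns by relevance rather than recency alone, but even this crude version prevents the two most common failures: secrets leaking across agents and context windows filling with stale chatter.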
With the patterns from this workshop, a production-ready multi-agent LLM system can be built in days rather than weeks. The core Glass-Box architecture provides reusable components. The 6-hour workshop gives you the complete system and the architectural understanding to build variations quickly.
The workshop uses Python with the MCP SDK for agent orchestration, standard LLM client libraries for model interaction, embedding libraries for the RAG pipeline, and structured logging for the Glass-Box observability layer. The instructor covers setup and library versions at the start of the session.
Yes — and this workshop does exactly that. Building with context engineering principles and MCP directly gives you a deeper understanding of the system and avoids framework-specific lock-in. The patterns you learn are more durable than any specific framework version.
6 hours. Bestselling AI author. Production context-engineered multi-agent system by the end. Seats are limited.
Register Now → Saturday, April 25 · 9am to 3pm EDT · Online · Packt Publishing · Cohort 2