The architecture of a multi-agent system determines everything: its reliability, its debuggability, its ability to scale, and its production lifespan. This live workshop teaches the Glass-Box Context Engine architecture that makes multi-agent systems production-ready from day one.
By Packt Publishing · Refunds up to 10 days before the event
Most multi-agent systems fail because architecture is an afterthought. Agents are connected informally, context is passed as unstructured text, and there is no visibility into why the system behaves as it does. The Glass-Box architecture taught in this workshop makes structure, observability, and reliability architectural first principles.
Context engineering is the discipline of designing systems that give AI the right information, in the right format, to reason and act reliably. It goes beyond prompt engineering: it builds structured, deterministic systems that scale in production.
A multi-agent system uses multiple specialised AI agents working together — each with a defined role, context, and tools — to complete complex tasks no single agent could handle reliably. Context engineering makes them predictable.
MCP (the Model Context Protocol) is Anthropic's open standard for connecting AI models to tools, data sources, and other agents. It provides structured agent orchestration with clear context boundaries, making systems transparent and debuggable.
Context engineering is best learned by doing. This live workshop lets you build a working system with a world-class instructor answering your questions in real time.
Six modules. Six hours. A production-ready context-engineered AI system by the time you finish.
Understand why prompts fail at scale and how semantic blueprints give AI structured, goal-driven contextual awareness.
Design and orchestrate multi-agent workflows using the Model Context Protocol. Build transparent, traceable agent systems.
Build RAG pipelines that deliver accurate, cited responses. Engineer memory systems that persist context reliably across agents.
Architect a transparent, explainable context engine where every decision is traceable and debuggable in production.
Implement safeguards against prompt injection and data poisoning. Enforce trust boundaries in multi-agent environments.
Deploy your context-engineered system to production. Apply patterns for scaling, monitoring, and reliability.
Concrete working deliverables — not just theory and slides.
A working Glass-Box Context Engine with transparent, traceable reasoning
Multi-agent workflow orchestrated with the Model Context Protocol
High-fidelity RAG pipeline with memory and citations
Safeguards against prompt injection and data poisoning
Reusable architecture patterns for production AI systems
Certificate of completion from Packt Publishing
Denis Rothman brings decades of production AI engineering experience to this live workshop.
Denis Rothman is a bestselling AI author with over 30 years of experience in artificial intelligence, agent systems, and optimization. He has authored multiple cutting-edge AI books published by Packt and is renowned for making complex AI architecture concepts practical and immediately applicable. He guides you step by step through building production-ready context-engineered multi-agent systems — answering your questions live throughout the 6-hour session.
Intermediate to advanced workshop. Solid Python and basic LLM experience required.
Everything you need to know before registering.
A production multi-agent system architecture has five layers: the task decomposition layer (converts user goals into agent-specific subtasks), the semantic blueprint layer (structures instructions for each agent invocation), the MCP orchestration layer (coordinates typed communication between agents), the knowledge layer (RAG pipeline and memory management), and the Glass-Box observability layer (logs every decision across all layers). This workshop designs and builds all five layers in Python during the live session.
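The five layers can be pictured as typed stages that a request flows through. The sketch below is illustrative only; the class and function names (`Subtask`, `Blueprint`, `decompose`, and so on) are assumptions, not the workshop's actual API.

```python
from dataclasses import dataclass

@dataclass
class Subtask:            # output of the task decomposition layer
    agent: str
    goal: str

@dataclass
class Blueprint:          # output of the semantic blueprint layer
    role: str
    instruction: str
    constraints: tuple

@dataclass
class TraceEvent:         # Glass-Box observability layer record
    layer: str
    detail: str

def decompose(user_goal: str) -> list[Subtask]:
    """Task decomposition layer: convert a user goal into agent subtasks."""
    return [Subtask("rag", f"retrieve evidence for: {user_goal}"),
            Subtask("writer", f"draft an answer to: {user_goal}")]

def blueprint(task: Subtask) -> Blueprint:
    """Semantic blueprint layer: structure one agent invocation."""
    return Blueprint(task.agent, task.goal, ("cite sources",))

trace: list[TraceEvent] = []
for task in decompose("what is context engineering?"):
    bp = blueprint(task)
    # The MCP orchestration layer would dispatch bp to the agent's server;
    # the knowledge layer (RAG + memory) serves the "rag" subtask.
    trace.append(TraceEvent("orchestration", f"dispatched {bp.role}"))

print([e.detail for e in trace])
```

The point of the shape is that every hand-off between layers is a typed value, so the observability layer can log it without guessing at structure.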
The right number of specialised agents is determined by the distinct capability domains your system requires, not by arbitrary decomposition. Start by identifying the fundamental capabilities needed: retrieval, synthesis, validation, moderation, and any domain-specific processing. Each distinct capability becomes one specialised agent with its own MCP server. Avoid splitting capabilities that naturally belong together and avoid combining capabilities that have different expertise requirements.
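As a toy illustration of that heuristic, the agent roster falls out of the capability inventory, one specialised agent (and one MCP server) per distinct capability domain; names here are hypothetical:

```python
# One specialised agent per distinct capability domain, no more, no less.
CAPABILITIES = ["retrieval", "synthesis", "validation", "moderation"]

def plan_agents(capabilities: list[str]) -> dict[str, str]:
    # A duplicated entry would signal an accidentally split capability.
    assert len(set(capabilities)) == len(capabilities), "split capability"
    return {cap: f"{cap}_agent" for cap in capabilities}

roster = plan_agents(CAPABILITIES)
print(len(roster))  # 4 agents, not an arbitrary number
```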
Hub-and-spoke architecture uses a central orchestrator agent that coordinates all other specialised agents: it decomposes tasks, dispatches to agents, collects results, and synthesises the final output. Peer-to-peer architecture has agents communicating directly with each other without a central coordinator. The workshop advocates hub-and-spoke for production systems because it is significantly easier to debug, monitor, and modify: all coordination decisions flow through one observable component.
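A minimal hub-and-spoke loop looks like the sketch below (illustrative, not the workshop's code): because every coordination decision passes through the one orchestrator, the whole run can be reconstructed from its log.

```python
def hub_run(goal, spokes):
    """Central orchestrator: decompose, dispatch, collect, synthesise."""
    log = []
    subtasks = [(name, f"{name}: {goal}") for name in spokes]   # decompose
    log.append(("decomposed", [s[0] for s in subtasks]))
    results = {}
    for name, task in subtasks:                                 # dispatch
        results[name] = spokes[name](task)
        log.append(("collected", name))
    final = " | ".join(results.values())                        # synthesise
    log.append(("synthesised", final))
    return final, log

# Spokes stand in as plain callables here; in production each would be
# a specialised agent behind its own MCP server.
spokes = {
    "research": lambda t: f"notes({t})",
    "review":   lambda t: f"checked({t})",
}
final, log = hub_run("summarise MCP", spokes)
print(log[0])
```

In a peer-to-peer layout the equivalent trace would be scattered across every agent pair, which is exactly why the hub is easier to debug and monitor.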
Context isolation is an architectural principle that prevents context pollution: each agent receives only the context its semantic blueprint specifies, not the accumulated state of the entire multi-agent interaction. In the MCP orchestration layer, context isolation is implemented by the context router that assembles agent-specific context packages from the available context pool. Without architectural context isolation, multi-agent systems degrade as agents accumulate irrelevant context from other agents.
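A hypothetical context router makes the principle concrete: each agent receives only the slice of the shared context pool that its blueprint names, nothing else. The field names below are assumptions for illustration.

```python
def route_context(pool: dict, blueprint: dict) -> dict:
    """Assemble an agent-specific context package (context isolation)."""
    return {key: pool[key] for key in blueprint["needs"] if key in pool}

pool = {
    "user_goal": "compare vector stores",
    "retrieved_docs": ["doc1", "doc2"],
    "moderation_flags": [],          # irrelevant to the writer agent
    "scratch_notes": "internal",     # another agent's working state
}

writer_blueprint = {"role": "writer", "needs": ["user_goal", "retrieved_docs"]}
package = route_context(pool, writer_blueprint)
print(sorted(package))
```

The moderation flags and scratch notes never reach the writer, so they cannot pollute its context window.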
The architecture taught in this workshop is extensible by design: new specialised agents are added as MCP servers with typed tool interfaces, the orchestrator discovers new capabilities through MCP's capability negotiation protocol, and the semantic blueprint generator automatically learns to route tasks to new agents based on their tool descriptions. This plug-and-play extensibility means new capabilities can be added without modifying the core orchestration architecture.
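A plain-Python analogue of that plug-and-play shape (the real system would use MCP's capability negotiation; the registry and keyword routing below are stand-ins, not MCP itself):

```python
class Registry:
    def __init__(self):
        self.tools = {}   # tool description -> handler

    def register(self, description: str, handler):
        """Adding an agent = registering its tools; no core changes."""
        self.tools[description] = handler

    def route(self, task: str):
        # Naive routing by keyword overlap with tool descriptions; a real
        # blueprint generator would learn this mapping.
        best = max(self.tools,
                   key=lambda d: len(set(d.split()) & set(task.split())))
        return self.tools[best](task)

reg = Registry()
reg.register("summarise long documents", lambda t: f"summary({t})")
reg.register("translate text to french", lambda t: f"fr({t})")

print(reg.route("please summarise these documents"))
```

New capabilities arrive as new `register` calls; the routing logic itself is never edited.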
The RAG pipeline is a specialised component of the multi-agent architecture, typically implemented as a dedicated RAG agent server exposed through MCP. The architectural relationship is: the orchestrator directs queries to the RAG agent, the RAG agent returns cited knowledge packages, and those packages flow to synthesis agents through the context routing layer. Designing the RAG agent as a first-class architectural component rather than a shared utility prevents the access conflicts and citation loss that occur when RAG is implemented as a shared function.
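The first-class-component idea can be sketched as a RAG agent whose every answer fragment carries its citation, so downstream synthesis agents never lose provenance. The interface below is hypothetical; a real pipeline would embed and rank rather than keyword-match.

```python
from dataclasses import dataclass

@dataclass
class KnowledgePackage:
    text: str
    source: str     # the citation travels with the content

class RagAgent:
    def __init__(self, corpus: dict):
        self.corpus = corpus   # source name -> document text

    def query(self, question: str) -> list[KnowledgePackage]:
        hits = []
        for source, text in self.corpus.items():
            # Toy relevance test standing in for retrieval + ranking.
            if any(w in text.lower() for w in question.lower().split()):
                hits.append(KnowledgePackage(text, source))
        return hits

rag = RagAgent({
    "spec.md": "MCP defines typed tool interfaces.",
    "notes.md": "Unrelated meeting notes.",
})
packages = rag.query("typed interfaces")
print([p.source for p in packages])
```

Because the orchestrator only ever sees `KnowledgePackage` values, a citation can be dropped only by an explicit (and loggable) decision, not by accident.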
6 hours. Bestselling AI author. Production context-engineered multi-agent system by the end. Seats are limited.
Register Now → Saturday April 25 · 9am to 3pm EDT · Online · Packt Publishing · Cohort 2