Multi-agent AI systems are notoriously hard to debug because agent interactions are opaque. This live workshop teaches the Glass-Box architecture that makes every decision, context state, and agent interaction fully observable, turning a black box into a transparent, debuggable system.
By Packt Publishing · Refunds up to 10 days before the event
Multi-agent systems are hard to debug because there is typically no record of what context each agent received, what it considered, and why it produced its output. The Glass-Box Context Engine changes this by making observability a first-class architectural concern, not an afterthought.
Context engineering is the discipline of designing systems that give AI the right information, in the right format, to reason and act reliably. It goes beyond prompt engineering — building structured, deterministic systems that scale in production.
A multi-agent system uses multiple specialised AI agents working together — each with a defined role, context, and tools — to complete complex tasks no single agent could handle reliably. Context engineering makes them predictable.
MCP is Anthropic's open standard for connecting AI models to tools, data sources, and other agents. It provides structured agent orchestration with clear context boundaries — making systems transparent and debuggable.
Context engineering is a discipline you only truly understand through hands-on practice. This live workshop lets you build a working system with a world-class instructor answering your questions in real time.
Six modules. Six hours. A production-ready context-engineered AI system by the time you finish.
Understand why prompts fail at scale and how semantic blueprints give AI structured, goal-driven contextual awareness.
Design and orchestrate multi-agent workflows using the Model Context Protocol. Build transparent, traceable agent systems.
Build RAG pipelines that deliver accurate, cited responses. Engineer memory systems that persist context reliably across agents.
Architect a transparent, explainable context engine where every decision is traceable and debuggable in production.
Implement safeguards against prompt injection and data poisoning. Enforce trust boundaries in multi-agent environments.
Deploy your context-engineered system to production. Apply patterns for scaling, monitoring, and reliability.
Concrete working deliverables — not just theory and slides.
A working Glass-Box Context Engine with transparent, traceable reasoning
Multi-agent workflow orchestrated with the Model Context Protocol
High-fidelity RAG pipeline with memory and citations
Safeguards against prompt injection and data poisoning
Reusable architecture patterns for production AI systems
Certificate of completion from Packt Publishing
Denis Rothman brings decades of production AI engineering experience to this live workshop.
Denis Rothman is a bestselling AI author with over 30 years of experience in artificial intelligence, agent systems, and optimization. He has authored multiple cutting-edge AI books published by Packt and is renowned for making complex AI architecture concepts practical and immediately applicable. He guides you step by step through building production-ready context-engineered multi-agent systems — answering your questions live throughout the 6-hour session.
This is an intermediate to advanced workshop. Solid Python and basic LLM experience required.
Everything you need to know before registering.
Multi-agent systems are hard to debug because the failure mode is often emergent: the individual agents work correctly but their interaction produces unexpected results. Without visibility into what context each agent received and what decision each agent made, diagnosing these emergent failures requires expensive trial-and-error reproduction. The Glass-Box architecture provides the observability that makes these interactions visible.
The Glass-Box architecture makes every decision in an AI system observable and traceable. Every semantic blueprint, context passing step via MCP, RAG retrieval, agent output, and safeguard trigger is logged with structured metadata. This creates a complete audit trail of every system interaction that can be replayed and analyzed to pinpoint the exact cause of any failure.
Adding Glass-Box observability to an existing system involves wrapping agent invocations with structured logging, adding trace IDs to context objects that propagate through the entire agent call chain, implementing a log aggregation component that assembles per-request traces, and building a simple dashboard to query and visualize traces. The workshop covers this implementation from scratch and as a retrofit.
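The retrofit path described above can be sketched as a decorator that wraps each agent invocation. This is a minimal illustration, not the workshop's implementation: the names `glass_box` and `TRACE_LOG` are invented for this example, and a real system would ship entries to a log aggregator rather than an in-memory list.

```python
import functools
import time
import uuid

TRACE_LOG = []  # in-memory sink; a real system would ship these to an aggregator


def glass_box(agent_name):
    """Wrap an agent function so every invocation is logged with a trace ID."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(context):
            # Propagate an existing trace ID through the call chain,
            # or start a new trace at the entry point.
            trace_id = context.setdefault("trace_id", str(uuid.uuid4()))
            start = time.monotonic()
            output = fn(context)
            TRACE_LOG.append({
                "trace_id": trace_id,
                "agent": agent_name,
                "context_keys": sorted(k for k in context if k != "trace_id"),
                "output": output,
                "duration_s": round(time.monotonic() - start, 6),
            })
            return output
        return wrapper
    return decorator


@glass_box("planner")
def plan(context):
    return f"plan for: {context['task']}"


@glass_box("executor")
def execute(context):
    # Calls another wrapped agent; the trace ID propagates automatically.
    return plan(context).upper()


result = execute({"task": "summarise report"})
print(result)          # PLAN FOR: SUMMARISE REPORT
print(len(TRACE_LOG))  # 2 entries, sharing one trace ID
```

Because the trace ID lives in the context object itself, nested agent calls inherit it without any extra plumbing, which is the essence of the retrofit approach.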
Effective Glass-Box logging captures: the semantic blueprint used for each agent invocation, complete context window contents at invocation time, retrieved RAG results with source citations, agent output and confidence scores, safeguard evaluation results, MCP communication between agents, and timing information for each step. This complete picture makes any failure reproducible and traceable.
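The fields listed above can be gathered into one structured record per agent invocation. The sketch below assumes a Python dataclass with illustrative field names; the actual schema taught in the workshop may differ.

```python
from dataclasses import asdict, dataclass, field


@dataclass
class AgentTraceRecord:
    """One log entry per agent invocation; field names are illustrative."""
    trace_id: str        # links all entries for one user request
    agent: str           # which agent ran
    blueprint: str       # semantic blueprint used for this invocation
    context_window: str  # complete context contents at invocation time
    rag_results: list = field(default_factory=list)   # retrieved chunks + citations
    output: str = ""                                  # what the agent produced
    confidence: float = 0.0                           # agent-reported confidence
    safeguards: dict = field(default_factory=dict)    # safeguard evaluation results
    mcp_messages: list = field(default_factory=list)  # inter-agent MCP traffic
    duration_ms: float = 0.0                          # timing for this step


record = AgentTraceRecord(
    trace_id="req-001",
    agent="researcher",
    blueprint="research-v2",
    context_window="[system] ... [user] ...",
    rag_results=[{"chunk": "...", "source": "doc.pdf#p3"}],
    output="Summary with [1] citation.",
    confidence=0.91,
    safeguards={"prompt_injection": False},
    duration_ms=842.0,
)
print(asdict(record)["agent"])  # researcher
```

Serializing each record (e.g. via `asdict`) keeps the audit trail machine-queryable, which is what makes replay and analysis possible.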
Trace IDs create a common identifier that connects all log entries for a single user request as it flows through multiple agents. When a failure occurs, the trace ID lets you retrieve the complete interaction log for that specific request, showing exactly which agent received what context, what each agent produced, and where in the chain the failure originated.
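Retrieving a complete interaction log by trace ID can be sketched as a simple filter-and-sort over aggregated entries. The entry shape and field names below are assumptions for illustration only.

```python
# Aggregated log entries from several concurrent requests (shape is illustrative).
entries = [
    {"trace_id": "req-7", "step": 1, "agent": "router", "output": "-> researcher"},
    {"trace_id": "req-9", "step": 1, "agent": "router", "output": "-> writer"},
    {"trace_id": "req-7", "step": 2, "agent": "researcher",
     "output": "ERROR: empty retrieval"},
]


def trace(entries, trace_id):
    """Reassemble the per-request interaction log from interleaved entries."""
    return sorted((e for e in entries if e["trace_id"] == trace_id),
                  key=lambda e: e["step"])


chain = trace(entries, "req-7")
print([e["agent"] for e in chain])  # ['router', 'researcher']
```

Here the failing request `req-7` is isolated from unrelated traffic, showing at a glance that the router handed off correctly and the failure originated in the researcher step.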
Yes. The same observability that enables debugging also enables optimization: identifying which agents are slowest, which RAG retrievals have the lowest citation coverage, which semantic blueprints produce the most consistent outputs, and which safeguard triggers indicate unaddressed edge cases. Glass-Box logging turns debugging data into a continuous improvement dataset for the entire multi-agent system.
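One of the optimization queries mentioned above, finding the slowest agents, can be sketched as a small aggregation over the timing data in the trace log. Field names are assumptions carried over from the illustrative examples, not a fixed schema.

```python
from collections import defaultdict
from statistics import mean

# Illustrative trace entries with per-step timing.
log = [
    {"agent": "planner",   "duration_ms": 120.0},
    {"agent": "retriever", "duration_ms": 950.0},
    {"agent": "planner",   "duration_ms": 180.0},
    {"agent": "retriever", "duration_ms": 1050.0},
]


def slowest_agents(log):
    """Average per-agent latency, slowest first, from Glass-Box timing data."""
    by_agent = defaultdict(list)
    for entry in log:
        by_agent[entry["agent"]].append(entry["duration_ms"])
    return sorted(((mean(v), k) for k, v in by_agent.items()), reverse=True)


print(slowest_agents(log))  # [(1000.0, 'retriever'), (150.0, 'planner')]
```

The same pattern extends to the other metrics mentioned above: group trace entries by agent, blueprint, or safeguard, then aggregate.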
6 hours. Bestselling AI author. Production context-engineered multi-agent system by the end. Seats are limited.
Register Now → Saturday, April 25 · 9am to 3pm EDT · Online · Packt Publishing · Cohort 2