Most AI agent systems are black boxes: they produce outputs but cannot explain why. The Glass-Box Context Engine makes every decision observable. This live workshop builds a complete Glass-Box AI agent system where every reasoning step, context choice, and agent interaction is logged, traceable, and explainable.
By Packt Publishing · Refunds available up to 10 days before the event
A Glass-Box AI agent system is not just one with logging added. It is architected from the ground up with observability as a first-class concern: every semantic blueprint, every MCP interaction, every RAG retrieval, and every safeguard evaluation is structured to be logged, queryable, and explainable. This workshop builds that architecture.
Context engineering is the discipline of designing systems that give AI the right information, in the right format, to reason and act reliably. It goes beyond prompt engineering to build structured, deterministic systems that scale in production.
A multi-agent system uses multiple specialised AI agents working together — each with a defined role, context, and tools — to complete complex tasks no single agent could handle reliably. Context engineering makes them predictable.
MCP is Anthropic's open standard for connecting AI models to tools, data sources, and other agents. It provides structured agent orchestration with clear context boundaries — making systems transparent and debuggable.
Context engineering requires hands-on practice to truly understand. This live workshop lets you build a working system with a world-class instructor answering your questions in real time.
Six modules. Six hours. A production-ready context-engineered AI system by the time you finish.
Understand why prompts fail at scale and how semantic blueprints give AI structured, goal-driven contextual awareness.
Design and orchestrate multi-agent workflows using the Model Context Protocol. Build transparent, traceable agent systems.
Build RAG pipelines that deliver accurate, cited responses. Engineer memory systems that persist context reliably across agents.
Architect a transparent, explainable context engine where every decision is traceable and debuggable in production.
Implement safeguards against prompt injection and data poisoning. Enforce trust boundaries in multi-agent environments.
Deploy your context-engineered system to production. Apply patterns for scaling, monitoring, and reliability.
Concrete working deliverables — not just theory and slides.
A working Glass-Box Context Engine with transparent, traceable reasoning
Multi-agent workflow orchestrated with the Model Context Protocol
High-fidelity RAG pipeline with memory and citations
Safeguards against prompt injection and data poisoning
Reusable architecture patterns for production AI systems
Certificate of completion from Packt Publishing
Denis Rothman brings decades of production AI engineering experience to this live workshop.
Denis Rothman is a bestselling AI author with over 30 years of experience in artificial intelligence, agent systems, and optimization. He has authored multiple cutting-edge AI books published by Packt and is renowned for making complex AI architecture concepts practical and immediately applicable. He guides you step by step through building production-ready context-engineered multi-agent systems — answering your questions live throughout the 6-hour session.
Intermediate to advanced workshop. Solid Python and basic LLM experience required.
Everything you need to know before registering.
A Glass-Box AI agent system is one where every decision made by every agent is observable, logged with structured metadata, and traceable back to the specific inputs and context that produced it. Unlike a black-box system where you only see inputs and outputs, a Glass-Box system exposes the complete reasoning chain: the semantic blueprint used, the knowledge retrieved from RAG, the MCP interactions between agents, the safeguard evaluations, and the output validation results. This transparency makes the system debuggable, auditable, and trustworthy.
A Glass-Box system is architecturally different from a system with logging bolted on. In a Glass-Box architecture, observability is designed into the component interfaces from the start: every context-passing operation produces a structured log entry as a side effect, trace IDs propagate through all components automatically, and the logging layer is engineered as carefully as the functional layer. Retrofitting logging onto a black-box system produces fragmented, inconsistent telemetry; the Glass-Box architecture produces comprehensive, consistent, queryable observability data.
The Glass-Box architecture makes four categories of AI agent decisions explainable: knowledge decisions (which documents were retrieved from RAG and why they were ranked highest), reasoning decisions (which semantic blueprint guided the agent and how it structured its response), coordination decisions (which MCP tool was invoked and why, what parameters were passed, what result was received), and safety decisions (which safeguard checks were run, what they found, and what action was taken). Together these cover the complete decision surface of a multi-agent AI system.
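The four decision categories above can be captured in a shared record type so every log entry is classifiable and queryable. This is an illustrative sketch with hypothetical names (`DecisionKind`, `DecisionRecord`), not the workshop's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class DecisionKind(Enum):
    KNOWLEDGE = "knowledge"          # RAG retrieval and ranking
    REASONING = "reasoning"          # semantic blueprint selection
    COORDINATION = "coordination"    # MCP tool invocation
    SAFETY = "safety"                # safeguard evaluation

@dataclass
class DecisionRecord:
    kind: DecisionKind
    component: str
    rationale: str                   # human-readable "why"
    evidence: dict = field(default_factory=dict)  # machine-readable basis

# Example: a knowledge decision recording why three documents were chosen.
record = DecisionRecord(
    kind=DecisionKind.KNOWLEDGE,
    component="rag_retriever",
    rationale="top-3 chunks by cosine similarity above 0.8 threshold",
    evidence={"doc_ids": ["d12", "d7", "d31"], "scores": [0.91, 0.88, 0.84]},
)
```

Because every record carries both a rationale and its evidence, any decision can later be filtered by kind and audited against the inputs that produced it.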
Glass-Box transparency significantly simplifies AI regulatory compliance by providing audit trails that document the knowledge basis for every AI decision, the safeguard evaluations that were applied, the semantic boundaries within which the agent operated, and the complete chain of reasoning from input to output. Regulatory frameworks increasingly require AI systems to be explainable and auditable. A Glass-Box architecture provides the technical foundation for demonstrating compliance without requiring after-the-fact reconstruction of decision rationale.
The Python Glass-Box observability layer in this workshop uses structured logging with a custom log formatter that produces JSON-structured log entries with consistent fields: trace ID, span ID, component name, operation type, input summary, output summary, latency, and any relevant metadata. A trace context manager propagates IDs through component calls. The logging layer is implemented as Python decorators and context managers that wrap the functional components without modifying their implementation.
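A custom JSON formatter of the kind described might look like the following. This is a simplified sketch built on the standard `logging` module; the class name `GlassBoxFormatter` and the exact field set are assumptions for illustration:

```python
import json
import logging

class GlassBoxFormatter(logging.Formatter):
    """Render every record as one JSON object with a consistent field set.
    Fields not supplied via `extra=` default to None, so entries stay uniform."""
    FIELDS = ("trace_id", "span_id", "component", "operation",
              "input_summary", "output_summary", "latency_ms")

    def format(self, record: logging.LogRecord) -> str:
        entry = {"level": record.levelname, "message": record.getMessage()}
        for name in self.FIELDS:
            entry[name] = getattr(record, name, None)
        return json.dumps(entry)

logger = logging.getLogger("glassbox")
handler = logging.StreamHandler()
handler.setFormatter(GlassBoxFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Structured fields ride along via the standard `extra` mechanism.
logger.info("retrieval complete", extra={
    "trace_id": "t-123", "span_id": "s-1", "component": "rag",
    "operation": "retrieve", "latency_ms": 42.0,
})
```

Because every entry is a flat JSON object with the same keys, the log stream can be loaded directly into any log-aggregation or query tool without per-component parsing rules.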
Yes. The Glass-Box logging creates a dataset of every agent decision with full context that can be used for systematic quality improvement: identifying which query types have the lowest citation coverage (indicating RAG gaps), which semantic blueprints produce the highest output variance (indicating specification ambiguity), which safeguard triggers indicate unhandled edge cases, and which agent coordination patterns cause the most failures. This data-driven improvement cycle is covered in the production deployment module of the workshop.
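One of the analyses mentioned above, citation coverage per query type, can be sketched as a small pass over JSON log lines. The field names (`operation`, `query_type`, `citations`) are assumed for illustration and would depend on the actual log schema:

```python
import json
from collections import defaultdict

def citation_coverage_by_query_type(log_lines: list[str]) -> dict[str, float]:
    """For each query type, compute the fraction of answers that carried
    at least one citation. Low coverage flags likely RAG gaps."""
    totals: dict[str, int] = defaultdict(int)
    cited: dict[str, int] = defaultdict(int)
    for line in log_lines:
        entry = json.loads(line)
        if entry.get("operation") != "answer":
            continue                       # only answer events count
        qtype = entry.get("query_type", "unknown")
        totals[qtype] += 1
        if entry.get("citations"):
            cited[qtype] += 1
    return {q: cited[q] / totals[q] for q in totals}

logs = [
    '{"operation": "answer", "query_type": "howto", "citations": ["d1"]}',
    '{"operation": "answer", "query_type": "howto", "citations": []}',
    '{"operation": "answer", "query_type": "pricing", "citations": ["d2"]}',
]
# citation_coverage_by_query_type(logs) -> {"howto": 0.5, "pricing": 1.0}
```

The same grouping pattern extends to the other analyses: output variance per blueprint, safeguard-trigger frequency, and failure rate per coordination pattern.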
6 hours. Bestselling AI author. Production context-engineered multi-agent system by the end. Seats are limited.
Register Now → Saturday April 25 · 9am to 3pm EDT · Online · Packt Publishing · Cohort 2