This live workshop takes you from an empty editor to a complete production AI system in 6 hours. The Glass-Box Context Engine you build has everything a production system needs: semantic blueprint orchestration, MCP-coordinated agents, high-fidelity RAG, memory management, safeguards, and deployment configuration.
By Packt Publishing · Refunds available up to 10 days before the event
A production AI system is not just a demo that works once. It serves real users, fails gracefully, withstands adversarial inputs, sustains long-running conversations, keeps pace with evolving knowledge, and supports operational monitoring — reliably and continuously. This workshop builds for all of these requirements from the first line of code.
Context engineering is the discipline of designing systems that give AI the right information, in the right format, to reason and act reliably. It goes beyond prompt engineering — building structured, deterministic systems that scale in production.
A multi-agent system uses multiple specialised AI agents working together — each with a defined role, context, and tools — to complete complex tasks no single agent could handle reliably. Context engineering makes them predictable.
MCP is Anthropic's open standard for connecting AI models to tools, data sources, and other agents. It provides structured agent orchestration with clear context boundaries — making systems transparent and debuggable.
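Concretely, MCP messages follow JSON-RPC 2.0: each tool invocation is a structured request with a method, named parameters, and a correlating id, rather than free-form text between agents. A minimal sketch of building such a request (the tool name and arguments here are illustrative, not from a real server):

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP-style 'tools/call' request. MCP messages follow
    JSON-RPC 2.0, so every request carries a method, params, and an id
    that lets the caller match the response back to the request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool name for illustration only.
msg = make_tool_call(1, "search_documents", {"query": "refund policy"})
```

Because every hop is a typed message like this, a misrouted or malformed call fails loudly and visibly instead of silently corrupting a prompt.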
Context engineering requires hands-on practice to truly understand. This live workshop lets you build a working system with a world-class instructor answering your questions in real time.
Six modules. Six hours. A production-ready context-engineered AI system by the time you finish.
Understand why prompts fail at scale and how semantic blueprints give AI structured, goal-driven contextual awareness.
Design and orchestrate multi-agent workflows using the Model Context Protocol. Build transparent, traceable agent systems.
Build RAG pipelines that deliver accurate, cited responses. Engineer memory systems that persist context reliably across agents.
Architect a transparent, explainable context engine where every decision is traceable and debuggable in production.
Implement safeguards against prompt injection and data poisoning. Enforce trust boundaries in multi-agent environments.
Deploy your context-engineered system to production. Apply patterns for scaling, monitoring, and reliability.
Concrete working deliverables — not just theory and slides.
A working Glass-Box Context Engine with transparent, traceable reasoning
Multi-agent workflow orchestrated with the Model Context Protocol
High-fidelity RAG pipeline with memory and citations
Safeguards against prompt injection and data poisoning
Reusable architecture patterns for production AI systems
Certificate of completion from Packt Publishing
Denis Rothman brings decades of production AI engineering experience to this live workshop.
Denis Rothman is a bestselling AI author with over 30 years of experience in artificial intelligence, agent systems, and optimization. He has authored multiple cutting-edge AI books published by Packt and is renowned for making complex AI architecture concepts practical and immediately applicable. He guides you step by step through building production-ready context-engineered multi-agent systems — answering your questions live throughout the 6-hour session.
Intermediate to advanced workshop. Solid Python and basic LLM experience required.
Everything you need to know before registering.
You will build a Glass-Box Context Engine: a production-grade multi-agent AI system with a semantic blueprint orchestration layer, MCP-coordinated specialist agents (retrieval, synthesis, validation, moderation), a high-fidelity RAG pipeline with citation tracking, episodic and working memory management, prompt injection safeguards and output moderation, and production deployment configuration with health monitoring and Glass-Box observability dashboards.
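The specialist agents above run as a pipeline the orchestrator drives in order. A minimal sketch, with hypothetical stage implementations standing in for the real agents (function names and the state-dict shape are illustrative assumptions, not the workshop's exact code):

```python
from typing import Callable

# Illustrative stand-ins for the four specialist agents: each stage
# takes the running state dict and returns an updated copy.
def retrieval(state: dict) -> dict:
    # A real agent would query the RAG pipeline; here we hardcode a hit.
    state["passages"] = [("doc-1", "Refunds are issued within 10 days.")]
    return state

def synthesis(state: dict) -> dict:
    # Draft an answer grounded in (and citing) the retrieved passage.
    doc_id, text = state["passages"][0]
    state["draft"] = f"{text} [{doc_id}]"
    return state

def validation(state: dict) -> dict:
    # Reject drafts that cite nothing.
    state["valid"] = "[" in state["draft"]
    return state

def moderation(state: dict) -> dict:
    state["final"] = state["draft"] if state["valid"] else "I can't answer that."
    return state

PIPELINE: list[Callable[[dict], dict]] = [retrieval, synthesis, validation, moderation]

def run(question: str) -> str:
    state: dict = {"question": question}
    for stage in PIPELINE:
        state = stage(state)  # each hop is loggable: Glass-Box tracing
    return state["final"]
```

Because every stage consumes and produces an explicit state object, each hop can be logged and replayed, which is what the Glass-Box observability dashboards build on.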
A chatbot tutorial produces a demo that responds to input. This workshop produces a production system that handles real-world conditions: adversarial inputs are rejected by architectural safeguards rather than prompt instructions, context overflow is prevented by explicit memory management rather than context length limits, agent coordination is reliable through typed MCP interfaces rather than informal text passing, and system behavior is observable through Glass-Box logging rather than hidden inside opaque model calls. The difference is production-grade reliability.
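An "architectural safeguard rather than a prompt instruction" means suspicious input is rejected in code before it ever reaches a model, so the defense holds even if the prompt is overridden. A deliberately simplified sketch (a production gate would layer classifiers, allow-lists, and provenance checks on top of patterns like these):

```python
import re

# Illustrative deny-list only; real systems combine several detection
# layers rather than relying on regexes alone.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |previous |prior )*instructions", re.I),
    re.compile(r"reveal (the |your )?system prompt", re.I),
]

def gate_input(user_text: str) -> str:
    """Reject likely prompt-injection attempts before the text reaches
    any model. The model never sees rejected input, so no cleverly
    worded prompt can talk its way past this check."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_text):
            raise ValueError("rejected: possible prompt injection")
    return user_text
```

The key property is placement: the check runs in deterministic code at the trust boundary, not inside the model's instructions.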
The Glass-Box Context Engine architecture is specifically designed for production readiness. At the end of the workshop your system has: typed MCP interfaces that prevent agent communication errors, citation-grounded RAG that prevents hallucination, prompt injection detection that prevents adversarial overrides, Glass-Box logging that enables debugging and auditing, and Docker deployment configuration that makes the system reproducibly deployable. These are the properties that define production readiness for an AI system.
The context routing layer is typically the most conceptually challenging: designing how the orchestrator assembles the right context package for each agent from the available sources (task state, RAG retrievals, episodic memory, inter-agent results) while respecting semantic blueprint-defined budgets and boundaries. Denis Rothman spends significant time on this module because getting context routing right is what makes the difference between a fragile demo and a reliable production system.
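The core idea of context routing can be sketched in a few lines: pack sources in priority order until a budget is spent, and drop lower-priority sources whole rather than truncating them mid-thought. This is a minimal illustration under assumed names (a character budget stands in for a token budget; the source names mirror those above but the schema is hypothetical):

```python
def assemble_context(sources: dict[str, str], budget_chars: int,
                     priority: list[str]) -> str:
    """Pack context sources in priority order until the budget is spent.
    Sources that do not fit are dropped entirely, so the agent never
    receives a half-truncated memory or retrieval."""
    parts, used = [], 0
    for name in priority:
        chunk = sources.get(name, "")
        if chunk and used + len(chunk) <= budget_chars:
            parts.append(f"## {name}\n{chunk}")
            used += len(chunk)
    return "\n\n".join(parts)

package = assemble_context(
    sources={
        "task_state": "Goal: answer the user's refund question.",
        "rag_retrievals": "Refunds are issued within 10 days. [doc-1]",
        "episodic_memory": "User previously asked about pricing tiers.",
    },
    budget_chars=120,
    priority=["task_state", "rag_retrievals", "episodic_memory"],
)
```

With a 120-character budget, the lower-priority episodic memory is dropped while the task state and retrievals survive intact — a deterministic, inspectable decision rather than silent truncation by the model's context window.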
Deployment is covered in the final module: containerising each MCP agent server using Docker, configuring the service registry that the orchestrator uses to find agent servers, setting up the Glass-Box logging infrastructure, configuring health checks and restart policies, and establishing a CI/CD pipeline for updating components without disrupting running interactions. You leave the workshop with deployment configuration files ready to use in your own infrastructure.
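The health checks mentioned above amount to each agent container exposing a small status endpoint that Docker can probe. A stdlib-only sketch of what such an endpoint might look like (the `/health` path, port, and payload shape are illustrative assumptions, not the workshop's configuration):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def health_payload(ready: bool) -> bytes:
    """JSON body the health endpoint returns; a Docker HEALTHCHECK
    (e.g. `curl -f http://localhost:8080/health`) would probe it."""
    return json.dumps({"status": "ok" if ready else "starting"}).encode()

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = health_payload(ready=True)
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To serve: HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```

Restart policies then act on this signal: a container whose health check fails repeatedly is restarted without the orchestrator needing any special-case logic.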
Yes. The Glass-Box Context Engine is designed as an adaptable architecture. Adapting it to a specific use case involves: writing semantic blueprints for your domain-specific agents, populating the RAG knowledge base with your documents, implementing MCP tool servers for your domain-specific capabilities, and configuring the safeguard rules for your specific risk profile. The core orchestration, context management, and observability infrastructure transfers directly to any use case.
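Writing a semantic blueprint for a domain-specific agent is essentially declaring its role, goal, tool access, and context budget as data. A hypothetical sketch of what such a declaration could look like (the class name and fields are illustrative, not the workshop's exact schema):

```python
from dataclasses import dataclass, field

@dataclass
class SemanticBlueprint:
    """Illustrative blueprint shape: everything the orchestrator needs
    to know about an agent, declared as data rather than prose."""
    agent_role: str                  # what the agent is for
    goal: str                        # the outcome it must produce
    allowed_tools: list[str]         # MCP tools it may call
    context_budget_tokens: int       # hard cap on context it receives
    guardrails: list[str] = field(default_factory=list)

# Example: a retrieval agent adapted to a customer-support domain.
support_agent = SemanticBlueprint(
    agent_role="retrieval",
    goal="Find cited passages answering the customer's question.",
    allowed_tools=["search_documents"],
    context_budget_tokens=2000,
    guardrails=["never answer without a citation"],
)
```

Because the blueprint is data, swapping domains means editing these declarations and the knowledge base, while the orchestration and observability code stays untouched.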
6 hours. Bestselling AI author. Production context-engineered multi-agent system by the end. Seats are limited.
Register Now → Saturday April 25 · 9am to 3pm EDT · Online · Packt Publishing · Cohort 2