An LLM orchestrator that works reliably in production coordinates agents with typed interfaces, explicit context management, and transparent decision logging. This live Python workshop shows you how to build one using the Model Context Protocol and Glass-Box architecture.
By Packt Publishing · Refunds up to 10 days before the event
Most Python LLM orchestrators are fragile: they pass raw text between agents and rely on the LLM to figure out coordination. A production Python LLM orchestrator uses MCP for typed agent communication, semantic blueprints for structured task dispatch, and the Glass-Box layer to make every orchestration decision observable and debuggable.
Context engineering is the discipline of designing systems that give AI the right information, in the right format, to reason and act reliably. It goes beyond prompt engineering to building structured, deterministic systems that scale in production.
A multi-agent system uses multiple specialised AI agents working together — each with a defined role, context, and tools — to complete complex tasks no single agent could handle reliably. Context engineering makes them predictable.
MCP is Anthropic's open standard for connecting AI models to tools, data sources, and other agents. It provides structured agent orchestration with clear context boundaries — making systems transparent and debuggable.
Context engineering can only be truly understood through hands-on practice. This live workshop lets you build a working system with a world-class instructor answering your questions in real time.
Six modules. Six hours. A production-ready context-engineered AI system by the time you finish.
Understand why prompts fail at scale and how semantic blueprints give AI structured, goal-driven contextual awareness.
Design and orchestrate multi-agent workflows using the Model Context Protocol. Build transparent, traceable agent systems.
Build RAG pipelines that deliver accurate, cited responses. Engineer memory systems that persist context reliably across agents.
Architect a transparent, explainable context engine where every decision is traceable and debuggable in production.
Implement safeguards against prompt injection and data poisoning. Enforce trust boundaries in multi-agent environments.
Deploy your context-engineered system to production. Apply patterns for scaling, monitoring, and reliability.
Concrete working deliverables — not just theory and slides.
A working Glass-Box Context Engine with transparent, traceable reasoning
Multi-agent workflow orchestrated with the Model Context Protocol
High-fidelity RAG pipeline with memory and citations
Safeguards against prompt injection and data poisoning
Reusable architecture patterns for production AI systems
Certificate of completion from Packt Publishing
Denis Rothman brings decades of production AI engineering experience to this live workshop.
Denis Rothman is a bestselling AI author with over 30 years of experience in artificial intelligence, agent systems, and optimization. He has authored multiple cutting-edge AI books published by Packt and is renowned for making complex AI architecture concepts practical and immediately applicable. He guides you step by step through building production-ready context-engineered multi-agent systems — answering your questions live throughout the 6-hour session.
This is an intermediate to advanced workshop. Solid Python and basic LLM experience required.
Everything you need to know before registering.
A production Python LLM orchestrator has four layers: a task decomposition layer that converts a high-level goal into agent-specific subtasks with semantic blueprints; an MCP coordination layer that dispatches tasks to specialised agent servers and collects typed responses; a context management layer that maintains orchestrator-level state and routes shared context; and a Glass-Box logging layer that records every orchestration decision.
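The four layers can be sketched as plain Python components wired into one orchestrator. This is a minimal illustration, not the workshop's reference implementation: the `SubTask` shape and the planner/dispatcher/store/log interfaces are assumptions made for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class SubTask:
    agent: str          # which agent server handles this subtask
    goal: str           # agent-specific goal from the semantic blueprint
    depends_on: list[str] = field(default_factory=list)

class Orchestrator:
    def __init__(self, planner, dispatcher, context_store, decision_log):
        self.planner = planner        # task decomposition layer
        self.dispatcher = dispatcher  # MCP coordination layer
        self.context = context_store  # context management layer
        self.log = decision_log       # Glass-Box logging layer

    def run(self, goal: str) -> dict:
        tasks = self.planner.decompose(goal)
        self.log.record("plan", [t.agent for t in tasks])
        results = {}
        for task in tasks:
            # route only the shared context this subtask depends on
            shared = {dep: results[dep] for dep in task.depends_on}
            self.context.push(task.agent, shared)
            results[task.agent] = self.dispatcher.dispatch(task)
            self.log.record("dispatch", task.agent)
        return results
```

Each layer stays swappable: in tests the dispatcher is a mock, in production it wraps MCP client sessions.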
Task decomposition converts a complex user request into a directed graph of agent subtasks. The orchestrator uses a planner LLM call with a semantic blueprint that defines the available agents and their capabilities to generate this task graph. The workshop covers implementing this planner as a Python component with validation that ensures the generated task graph is executable before dispatching.
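Validating the planner's output before dispatch can be done with the standard library's `graphlib`. The graph encoding here (subtask id mapped to an agent name and its dependency ids) is an assumption for this sketch:

```python
from graphlib import TopologicalSorter, CycleError

def validate_task_graph(graph: dict, known_agents: set) -> list:
    """Return subtask ids in an executable order, or raise ValueError.

    graph maps task_id -> (agent_name, [dependency task_ids]).
    """
    for task_id, (agent, deps) in graph.items():
        if agent not in known_agents:
            raise ValueError(f"unknown agent: {agent}")
        for dep in deps:
            if dep not in graph:
                raise ValueError(f"missing dependency: {dep}")
    # topological sort also rejects cyclic plans
    ts = TopologicalSorter({tid: deps for tid, (_, deps) in graph.items()})
    try:
        return list(ts.static_order())
    except CycleError as exc:
        raise ValueError(f"cyclic task graph: {exc}") from exc
```

Because LLM planners can hallucinate agents or produce cycles, rejecting a bad plan here is far cheaper than discovering the problem mid-dispatch.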
The Python LLM orchestrator implements failure handling at multiple levels: MCP error types for structured failure communication from agents, retry logic with exponential backoff for transient failures, circuit breakers for agents that are consistently failing, fallback agent routing when a primary agent is unavailable, and partial result handling when only some agents in a task graph complete successfully.
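Two of those levels, retry with exponential backoff and a circuit breaker, fit in a few lines. The failure threshold, delay values, and the `TransientError` type are illustrative choices for this sketch, not workshop-prescribed settings:

```python
import time

class TransientError(Exception):
    """Raised by an agent call for retryable failures."""

class CircuitBreaker:
    def __init__(self, max_failures: int = 3):
        self.failures = 0
        self.max_failures = max_failures

    @property
    def open(self) -> bool:
        return self.failures >= self.max_failures

    def record(self, ok: bool):
        # any success resets the count; failures accumulate
        self.failures = 0 if ok else self.failures + 1

def dispatch_with_retry(call, breaker, retries=3, base_delay=0.01):
    if breaker.open:
        raise RuntimeError("circuit open: route to fallback agent")
    for attempt in range(retries):
        try:
            result = call()
            breaker.record(ok=True)
            return result
        except TransientError:
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
    breaker.record(ok=False)
    raise RuntimeError("agent failed after retries")
```

When the breaker opens, the orchestrator's fallback routing takes over instead of hammering a dead agent server.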
The Glass-Box observability layer uses structured Python logging to record every orchestrator decision: task graph generation, agent dispatch decisions, context routing choices, error handling actions, and final synthesis. This creates a complete audit trail of every orchestration run that can be replayed for debugging and used to improve orchestration quality over time.
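A minimal version of that audit trail can be built on the stdlib `logging` module with JSON payloads. The event names and the in-memory trail are assumptions for this sketch; a production system would write to a durable sink:

```python
import json
import logging

logger = logging.getLogger("orchestrator.glassbox")

class DecisionLog:
    def __init__(self):
        self.events = []  # in-memory trail; swap for a durable sink

    def record(self, run_id: str, event: str, **detail):
        entry = {"run_id": run_id, "event": event, **detail}
        self.events.append(entry)
        logger.info(json.dumps(entry))  # structured line per decision

    def replay(self, run_id: str) -> list:
        """Return the ordered decision trail for one orchestration run."""
        return [e for e in self.events if e["run_id"] == run_id]
```

Keying every entry by `run_id` is what makes replay possible: one filter reconstructs the full sequence of decisions behind any output.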
Yes. The orchestrator architecture taught in this workshop is designed to be workflow-agnostic. It discovers available agents through MCP, generates task graphs based on the capabilities those agents expose, and coordinates them dynamically. The same Python orchestrator can manage many different multi-agent workflows without code changes, simply by connecting to different combinations of MCP agent servers.
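The discovery step can be sketched as a capability registry the planner reads from. Note that `list_capabilities` stands in for whatever the MCP client wrapper exposes; it is an assumed method for this sketch, not a literal MCP SDK call:

```python
class AgentRegistry:
    """Collects the capabilities each connected agent server exposes."""

    def __init__(self):
        self.agents = {}

    def connect(self, name: str, client):
        # query the agent server for what it can do
        self.agents[name] = client.list_capabilities()

    def blueprint(self) -> str:
        """Render discovered capabilities for the planner prompt."""
        return "\n".join(
            f"{name}: {', '.join(caps)}"
            for name, caps in self.agents.items()
        )
```

Because the planner's blueprint is generated from whatever is registered, swapping in a different set of agent servers changes the workflows the orchestrator can plan without touching its code.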
Testing a Python LLM orchestrator requires mocking the LLM calls for the planner component and the MCP agent servers for the coordination component, while testing the orchestration logic in isolation. The workshop covers pytest fixtures for MCP server mocking, golden test patterns for orchestration flows, and how to run integration tests against real agent servers in a controlled environment.
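A mocked-server golden test looks roughly like this. The `MockAgentServer`, the toy orchestration loop, and the expected "golden" outputs are all illustrative; in the workshop the mocks would be pytest fixtures standing in for real MCP sessions:

```python
class MockAgentServer:
    """Stands in for an MCP agent server and records every call."""

    def __init__(self, name: str, canned: str):
        self.name = name
        self.canned = canned
        self.calls = []

    def dispatch(self, goal: str) -> str:
        self.calls.append(goal)
        return self.canned

def run_orchestration(plan, servers):
    """Toy orchestration loop: dispatch each planned step in order."""
    return [servers[agent].dispatch(goal) for agent, goal in plan]

def test_golden_flow():
    servers = {
        "research": MockAgentServer("research", "facts"),
        "write": MockAgentServer("write", "draft"),
    }
    plan = [("research", "gather sources"), ("write", "draft report")]
    results = run_orchestration(plan, servers)
    assert results == ["facts", "draft"]              # golden output
    assert servers["research"].calls == ["gather sources"]
```

Because the mocks record their calls, the test asserts both the final output and that the orchestration dispatched the right goals to the right agents.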
6 hours. Bestselling AI author. Production context-engineered multi-agent system by the end. Seats are limited.
Register Now → Saturday, April 25 · 9am to 3pm EDT · Online · Packt Publishing · Cohort 2