A multi-agent workflow that works reliably in production is not just a sequence of agent calls. It is a carefully designed task graph with typed interfaces, explicit context management, failure recovery, and Glass-Box observability. This live Python workshop designs and builds one from scratch.
By Packt Publishing · Refunds available up to 10 days before the event
Production multi-agent workflows in Python require: a task graph design that makes agent dependencies explicit, typed MCP interfaces that validate every agent interaction, a context routing layer that prevents context pollution, failure handling that recovers gracefully from individual agent errors, and Glass-Box logging that makes the entire workflow observable. This workshop builds all of these.
Context engineering is the discipline of designing systems that give AI the right information, in the right format, to reason and act reliably. It goes beyond prompt engineering — building structured, deterministic systems that scale in production.
A multi-agent system uses multiple specialised AI agents working together — each with a defined role, context, and tools — to complete complex tasks no single agent could handle reliably. Context engineering makes them predictable.
MCP is Anthropic's open standard for connecting AI models to tools, data sources, and other agents. It provides structured agent orchestration with clear context boundaries — making systems transparent and debuggable.
Context engineering requires hands-on practice to truly understand. This live workshop lets you build a working system with a world-class instructor answering your questions in real time.
Six modules. Six hours. A production-ready context-engineered AI system by the time you finish.
Understand why prompts fail at scale and how semantic blueprints give AI structured, goal-driven contextual awareness.
Design and orchestrate multi-agent workflows using the Model Context Protocol. Build transparent, traceable agent systems.
Build RAG pipelines that deliver accurate, cited responses. Engineer memory systems that persist context reliably across agents.
Architect a transparent, explainable context engine where every decision is traceable and debuggable in production.
Implement safeguards against prompt injection and data poisoning. Enforce trust boundaries in multi-agent environments.
Deploy your context-engineered system to production. Apply patterns for scaling, monitoring, and reliability.
Concrete working deliverables — not just theory and slides.
A working Glass-Box Context Engine with transparent, traceable reasoning
Multi-agent workflow orchestrated with the Model Context Protocol
High-fidelity RAG pipeline with memory and citations
Safeguards against prompt injection and data poisoning
Reusable architecture patterns for production AI systems
Certificate of completion from Packt Publishing
Denis Rothman brings decades of production AI engineering experience to this live workshop.
Denis Rothman is a bestselling AI author with over 30 years of experience in artificial intelligence, agent systems, and optimization. He has authored multiple cutting-edge AI books published by Packt and is renowned for making complex AI architecture concepts practical and immediately applicable. He guides you step by step through building production-ready context-engineered multi-agent systems — answering your questions live throughout the 6-hour session.
Intermediate to advanced workshop. Solid Python and basic LLM experience required.
Everything you need to know before registering.
A multi-agent workflow is represented in Python as a directed acyclic graph where each node is an agent task and each edge is a data dependency between tasks. The task graph is a Python data structure (typically a dictionary of task objects with dependency lists) that the orchestrator traverses to determine the correct execution order. The workshop covers implementing a task graph executor that identifies which tasks can run in parallel versus which must run sequentially based on their declared dependencies.
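As a minimal sketch of this idea, the dictionary-of-dependencies structure and the parallel-versus-sequential grouping described above might look like the following (task names are illustrative, not the workshop's exact API):

```python
from collections import deque

# Task graph as a plain dict: each task name maps to the list of
# tasks it depends on. Names are illustrative.
TASKS = {
    "fetch_docs": [],
    "fetch_news": [],
    "summarize": ["fetch_docs", "fetch_news"],
    "review": ["summarize"],
}

def execution_levels(tasks):
    """Group tasks into levels: tasks within a level have no unmet
    dependencies on each other and may run in parallel; levels run
    sequentially. Raises on a cyclic graph."""
    remaining = {name: set(deps) for name, deps in tasks.items()}
    levels, done = [], set()
    while remaining:
        # A task is ready once all of its dependencies have completed.
        ready = [n for n, deps in remaining.items() if deps <= done]
        if not ready:
            raise ValueError("cycle detected in task graph")
        levels.append(sorted(ready))
        done.update(ready)
        for n in ready:
            del remaining[n]
    return levels

print(execution_levels(TASKS))
# [['fetch_docs', 'fetch_news'], ['summarize'], ['review']]
```

The two fetch tasks land in the same level (parallelizable), while summarization and review are forced into later sequential levels by their declared dependencies.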
The most effective Python patterns for multi-agent workflow orchestration are: the pipeline pattern for sequential workflows where each step's output feeds the next, the fan-out/fan-in pattern for parallel workflows where a task is distributed to multiple agents simultaneously and results are collected, and the conditional routing pattern where workflow branches are selected based on agent output content. The workshop implements all three patterns as reusable Python orchestrator components.
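The three patterns can be sketched as reusable async functions, with agents mocked here as plain coroutines (the agent names and stand-in logic are illustrative):

```python
import asyncio

async def pipeline(steps, data):
    """Sequential pattern: each step's output feeds the next step."""
    for step in steps:
        data = await step(data)
    return data

async def fan_out_fan_in(agents, data):
    """Parallel pattern: dispatch the same input to every agent at
    once, then collect all results."""
    return await asyncio.gather(*(agent(data) for agent in agents))

async def route(classifier, branches, data):
    """Conditional routing: pick a branch based on agent output."""
    label = await classifier(data)
    return await branches[label](data)

# Trivial stand-in agents for demonstration:
async def upper(x): return x.upper()
async def exclaim(x): return x + "!"
async def pick(x): return "short" if len(x) < 10 else "long"

async def main():
    print(await pipeline([upper, exclaim], "hello"))      # HELLO!
    print(await fan_out_fan_in([upper, exclaim], "hi"))   # ['HI', 'hi!']
    print(await route(pick, {"short": upper, "long": exclaim}, "hi"))  # HI

asyncio.run(main())
```

In a real orchestrator each stand-in coroutine would be an MCP call to an agent server, but the control flow of the three patterns is exactly this.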
Python interfaces between agents in a multi-agent workflow use Pydantic models to define typed input and output schemas for each agent. The workflow orchestrator validates that the output schema of one agent is compatible with the input schema of the next agent in the dependency graph before executing the workflow. This schema compatibility checking catches workflow design errors before runtime and produces clear error messages when incompatibilities are found.
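The compatibility check can be sketched as follows. The workshop uses Pydantic models; this self-contained version substitutes standard-library dataclasses to show the same idea, and the schema names are illustrative:

```python
from dataclasses import dataclass, fields

@dataclass
class ResearchOutput:
    query: str
    documents: list

@dataclass
class SummarizeInput:
    documents: list

@dataclass
class ReviewInput:
    summary: str

def compatible(output_schema, input_schema):
    """True if every field the downstream agent requires exists in the
    upstream agent's output with the same type."""
    out = {f.name: f.type for f in fields(output_schema)}
    return all(
        f.name in out and out[f.name] == f.type
        for f in fields(input_schema)
    )

print(compatible(ResearchOutput, SummarizeInput))  # True
print(compatible(ResearchOutput, ReviewInput))     # False
```

Running this check over every edge of the task graph before execution is what turns a schema mismatch from a confusing runtime failure into a clear design-time error.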
Partial completion handling in Python multi-agent workflows uses checkpoint-based execution: the orchestrator records each successfully completed task's output in a workflow state store before dispatching the next task. When an agent fails, the orchestrator can retry the failed task, route to a fallback agent, or return a partial result using the outputs of the agents that completed successfully. The Glass-Box logging records which tasks completed and which failed, providing a complete picture of the workflow execution.
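A minimal sketch of checkpoint-based execution, with agents as plain functions and the state store as an in-memory dict (the names and return shape are illustrative, not the workshop's exact API):

```python
def run_with_checkpoints(tasks, state=None, retries=1):
    """Run tasks in order, checkpointing each success in `state`.
    On failure, retry up to `retries` times; if exhausted, return the
    partial result accumulated so far plus the failed task's name."""
    state = state if state is not None else {}
    for name, task in tasks:
        if name in state:  # already checkpointed on a prior run
            continue
        for attempt in range(retries + 1):
            try:
                state[name] = task(state)
                break
            except Exception:
                if attempt == retries:
                    return {"completed": state, "failed": name}
    return {"completed": state, "failed": None}

# A flaky agent that fails once, then succeeds on retry:
calls = {"n": 0}
def flaky(state):
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("transient")
    return "ok"

result = run_with_checkpoints([("a", lambda s: 1), ("b", flaky)])
print(result)  # {'completed': {'a': 1, 'b': 'ok'}, 'failed': None}
```

Because completed outputs live in `state`, a re-run after a failure skips the checkpointed tasks, and a fallback agent could be substituted for the failed one using the same mechanism.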
Multi-agent workflow design testing in Python uses a workflow simulator that replaces real MCP agent servers with mock implementations that return configurable test responses. The simulator runs the complete workflow with controlled inputs and verifies the task graph execution order, context routing correctness, failure handling behavior, and output schema conformance at each step. The workshop covers building a workflow simulator that makes workflow design testing fast and deterministic.
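The simulator idea can be sketched like this: real MCP agent servers are swapped for mock callables that return canned responses and record the order in which they ran (agent names and responses are illustrative):

```python
def make_mock(name, response, trace):
    """Build a mock agent that logs its name and returns a canned
    response, standing in for a real MCP agent server."""
    def agent(payload):
        trace.append(name)  # record execution order for verification
        return response
    return agent

def simulate(workflow, mocks, payload):
    """Run `workflow` (an ordered list of agent names) against mock
    agents; return the final output and the observed execution trace."""
    trace = []
    agents = {n: make_mock(n, r, trace) for n, r in mocks.items()}
    for name in workflow:
        payload = agents[name](payload)
    return payload, trace

output, trace = simulate(
    ["retrieve", "summarize"],
    {"retrieve": {"docs": ["d1"]}, "summarize": {"summary": "one doc"}},
    {"query": "q"},
)
print(trace)   # ['retrieve', 'summarize']
print(output)  # {'summary': 'one doc'}
```

Because every response is configured up front, a test can assert on execution order, routing, and output shape deterministically, with no network calls or model latency.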
Yes. The workshop covers both imperative workflow definition (Python code that directly builds the task graph) and declarative workflow definition (a YAML or JSON configuration file that describes the task graph, which the orchestrator loads and executes). Declarative workflow definition makes it easier to modify workflows without code changes and enables non-developer team members to adjust workflow configurations within defined constraints.
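As a sketch of the declarative side, the config below uses JSON to stay within the standard library (the workshop also covers YAML); the orchestrator loads the file and derives an execution order from the declared dependencies:

```python
import json

# A declarative workflow definition, as it might appear in a config
# file. Task names are illustrative.
config = json.loads("""
{
  "tasks": [
    {"name": "fetch",     "depends_on": []},
    {"name": "summarize", "depends_on": ["fetch"]},
    {"name": "review",    "depends_on": ["summarize"]}
  ]
}
""")

def load_order(cfg):
    """Return task names in dependency order via a simple
    topological sort; raises on cyclic configs."""
    deps = {t["name"]: set(t["depends_on"]) for t in cfg["tasks"]}
    order, done = [], set()
    while deps:
        ready = [n for n, d in deps.items() if d <= done]
        if not ready:
            raise ValueError("cycle in workflow config")
        for n in ready:
            order.append(n)
            done.add(n)
            del deps[n]
    return order

print(load_order(config))  # ['fetch', 'summarize', 'review']
```

Editing the JSON reorders or extends the workflow without touching orchestrator code, which is exactly what makes the declarative form safe for constrained, non-developer changes.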
6 hours. Bestselling AI author. Production context-engineered multi-agent system by the end. Seats are limited.
Register Now → Saturday April 25 · 9am to 3pm EDT · Online · Packt Publishing · Cohort 2