Production AI Systems Engineering · Live · April 25

The Production AI Systems Engineering Course — Build, Deploy, Operate

Building a production AI system requires engineering discipline at every stage: architecture design, component implementation, testing, deployment, monitoring, and continuous improvement. This live course covers the complete production AI systems engineering lifecycle using the Glass-Box Context Engine.

Saturday, April 25, 2026 · 9am – 3pm EDT
6 Hours · Hands-on coding
Cohort 2 · Intermediate to Advanced

Workshop Details

📅
Date & Time
Saturday, April 25, 2026
9:00am – 3:00pm EDT
⏱
Duration
6 Hours · Hands-on
💻
Format
Live Online · Interactive
📚
Level
Intermediate to Advanced
🎓
Includes
Certificate of Completion
Register on Eventbrite →

By Packt Publishing · Refunds up to 10 days before the event

Why Trust Packt

Over 20 Years of Helping Developers Build Real Skills

7,500+
Books and video courses published
108
Live workshops hosted on Eventbrite
30+
Years of AI experience — Denis Rothman
100%
Hands-on — real code every session
About This Workshop

What Production AI Systems Engineering Covers

Production AI systems engineering is the discipline of building AI systems that operate reliably for real users over extended periods. It covers architecture (the Glass-Box Context Engine), implementation (semantic blueprints, MCP, RAG), testing (unit, integration, adversarial), deployment (containerisation, monitoring), and operations (incident response, continuous improvement). This course treats each of these areas as an engineering discipline in its own right.

🧠

What is Context Engineering?

Context engineering is the discipline of designing systems that give AI the right information, in the right format, to reason and act reliably. It goes beyond prompt engineering — building structured, deterministic systems that scale in production.

🤖

What is a Multi-Agent System?

A multi-agent system uses multiple specialised AI agents working together — each with a defined role, context, and tools — to complete complex tasks no single agent could handle reliably. Context engineering makes them predictable.

🔗

What is the Model Context Protocol?

MCP is Anthropic's open standard for connecting AI models to tools, data sources, and other agents. It provides structured agent orchestration with clear context boundaries — making systems transparent and debuggable.

🎯

Why Attend as a Live Workshop?

Context engineering requires hands-on practice to master. This live workshop lets you build a working system with a world-class instructor answering your questions in real time.

Workshop Curriculum

What This 6-Hour Workshop Covers

Six modules. Six hours. A production-ready context-engineered AI system by the time you finish.

01

From Prompts to Semantic Blueprints

Understand why prompts fail at scale and how semantic blueprints give AI structured, goal-driven contextual awareness.

02

Multi-Agent Orchestration With MCP

Design and orchestrate multi-agent workflows using the Model Context Protocol. Build transparent, traceable agent systems.

03

High-Fidelity RAG With Citations

Build RAG pipelines that deliver accurate, cited responses. Engineer memory systems that persist context reliably across agents.

04

The Glass-Box Context Engine

Architect a transparent, explainable context engine where every decision is traceable and debuggable in production.

05

Safeguards and Trust

Implement safeguards against prompt injection and data poisoning. Enforce trust boundaries in multi-agent environments.

06

Production Deployment and Scaling

Deploy your context-engineered system to production. Apply patterns for scaling, monitoring, and reliability.

What You Walk Away With

By the End of This Workshop You Will Have

Concrete working deliverables — not just theory and slides.

A working Glass-Box Context Engine with transparent, traceable reasoning

Multi-agent workflow orchestrated with the Model Context Protocol

High-fidelity RAG pipeline with memory and citations

Safeguards against prompt injection and data poisoning

Reusable architecture patterns for production AI systems

Certificate of completion from Packt Publishing

Your Instructor

Learn From a Bestselling AI Author With 30+ Years of Experience

Denis Rothman brings decades of production AI engineering experience to this live workshop.

Denis Rothman

Workshop Instructor · April 25, 2026

Denis Rothman is a bestselling AI author with over 30 years of experience in artificial intelligence, agent systems, and optimisation. He has authored multiple cutting-edge AI books published by Packt and is renowned for making complex AI architecture concepts practical and immediately applicable. He guides you step by step through building production-ready context-engineered multi-agent systems — answering your questions live throughout the 6-hour session.

Prerequisites

Who Is This Workshop For?

Intermediate to advanced workshop. Solid Python and basic LLM experience required.

Frequently Asked Questions

Common Questions About Production AI Systems Engineering

Everything you need to know before registering.

What is production AI systems engineering and why does it matter?

Production AI systems engineering is the application of software engineering discipline to AI systems: treating reliability, observability, and maintainability as requirements that must be designed for rather than properties that emerge naturally. It matters because AI systems that are not engineered for production fail in predictable ways (context overflow, hallucination, coordination failures, and inability to diagnose problems) that become expensive and damaging at real-world scale. Engineering discipline prevents these failures.

How does production AI systems engineering apply to multi-agent systems?

For multi-agent systems, production engineering principles apply at every level: architectural design using the Glass-Box Context Engine pattern (reliability), MCP-based typed communication (testability), semantic blueprint versioning (maintainability), Glass-Box logging (observability), and incremental deployment with backward-compatible schemas (deployability). Each engineering discipline maps directly to specific architectural decisions in the multi-agent system design.
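The typed-communication point can be illustrated with a minimal Python sketch. The field names below are illustrative assumptions, not the actual MCP schema; the idea is simply that a typed message makes malformed agent requests fail fast in tests rather than silently in production.

```python
from dataclasses import dataclass

# Hypothetical typed agent request; field names are illustrative,
# not the real MCP message schema.
@dataclass(frozen=True)
class AgentRequest:
    agent: str                          # which specialised agent should act
    task: str                           # what it should do
    context_ids: tuple[str, ...] = ()   # explicit context boundary

    def __post_init__(self):
        # Fail fast: a malformed request never reaches an agent.
        if not self.agent or not self.task:
            raise ValueError("agent and task are required")

req = AgentRequest(agent="rag", task="answer query", context_ids=("doc-1",))
print(req.agent)  # -> rag
```

Because the message is a frozen dataclass, unit tests can construct and compare requests directly, which is what makes the communication layer testable.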

What is the testing pyramid for production AI systems?

The testing pyramid for production AI systems has four layers: unit tests at the base (testing individual components like the blueprint generator and context router with mocked LLM responses), integration tests (testing component interactions with controlled test LLM responses), system tests (testing complete agent workflows end-to-end with a realistic test environment), and adversarial tests at the top (testing safeguards and failure handling under intentionally challenging inputs). The workshop covers implementing tests at every layer.
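The base of the pyramid can be sketched in a few lines of Python. The `BlueprintGenerator` class and its parsing logic here are hypothetical stand-ins for the workshop's components; the point is that mocking the LLM makes the unit test fast and deterministic.

```python
from unittest.mock import Mock

# Hypothetical component under test; name and parsing format are
# illustrative, not the workshop's actual API.
class BlueprintGenerator:
    def __init__(self, llm_client):
        self.llm = llm_client

    def generate(self, goal: str) -> dict:
        # The real system would call a provider; tests inject a mock instead.
        raw = self.llm.complete(f"Blueprint for: {goal}")
        return {"goal": goal, "steps": [s.strip() for s in raw.split(";") if s.strip()]}

def test_generate_parses_steps():
    mock_llm = Mock()
    mock_llm.complete.return_value = "retrieve docs; draft answer; cite sources"
    gen = BlueprintGenerator(mock_llm)
    blueprint = gen.generate("answer a support ticket")
    assert blueprint["steps"] == ["retrieve docs", "draft answer", "cite sources"]

test_generate_parses_steps()
```

The same pattern scales up the pyramid: integration tests swap the mock for a controlled test LLM, and system tests exercise the full workflow.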

How do I implement continuous integration for production AI systems?

Continuous integration for production AI systems runs the test pyramid automatically on every code change: unit tests and integration tests on pull requests (fast feedback), system tests and golden tests on merge to main (comprehensive verification), and adversarial tests on a scheduled basis (safeguard effectiveness verification). The Glass-Box logging provides the ground truth for what the system actually did during CI runs, making test failures informative rather than opaque.
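The trigger-to-layer mapping described above can be expressed as a small lookup. The trigger and layer names below are illustrative assumptions rather than any specific CI provider's API; in practice each layer would map to a test-selection command (for example, a pytest marker expression).

```python
# Hypothetical mapping from CI trigger to test-pyramid layers.
# Trigger names are illustrative, not a specific CI provider's events.
LAYERS_BY_TRIGGER = {
    "pull_request": ["unit", "integration"],                      # fast feedback
    "merge_to_main": ["unit", "integration", "system", "golden"], # full verification
    "schedule": ["adversarial"],                                  # safeguard checks
}

def layers_for(trigger: str) -> list[str]:
    try:
        return LAYERS_BY_TRIGGER[trigger]
    except KeyError:
        raise ValueError(f"unknown CI trigger: {trigger}")

print(layers_for("pull_request"))  # -> ['unit', 'integration']
```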

What operational runbook should I maintain for a production AI system?

A production AI system operational runbook covers: health check verification procedures (how to verify all MCP servers are healthy), common failure diagnosis steps (how to use Glass-Box traces to diagnose specific failure patterns), incident escalation procedures (when to engage senior engineers or model providers), scheduled maintenance procedures (how to update LLM versions, reindex RAG knowledge bases, and archive episodic memory), and change management procedures (how to deploy semantic blueprint updates without disrupting active sessions).
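The first runbook item, health-check verification, is the kind of step worth scripting. The server names, ports, and `/health` endpoints below are assumptions for illustration, not the workshop's actual deployment layout.

```python
import urllib.request

# Hypothetical MCP server inventory; names, ports, and the /health
# convention are assumptions, not the workshop's real topology.
SERVERS = {
    "blueprint": "http://localhost:8001/health",
    "rag": "http://localhost:8002/health",
    "memory": "http://localhost:8003/health",
}

def check_all(servers: dict, timeout: float = 2.0) -> dict:
    """Return {server_name: is_healthy} for every server in the inventory."""
    results = {}
    for name, url in servers.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results[name] = (resp.status == 200)
        except OSError:
            # Connection refused or timed out: the server is down.
            results[name] = False
    return results

status = check_all(SERVERS, timeout=1.0)
print(status)
```

Running a sweep like this at the top of every incident gives responders an immediate picture before they dig into Glass-Box traces.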

How does production AI systems engineering handle model API changes from providers?

Model API changes from providers are handled through abstraction: the LLM client layer in the Glass-Box Context Engine wraps the provider API, so provider-specific changes are isolated to one component. Schema changes to model APIs are caught by integration tests before deployment. Model capability changes (new features, deprecated parameters) are detected through the Glass-Box monitoring layer that tracks which model API features are actually used in production, enabling proactive migration planning.
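The abstraction idea can be sketched as a thin wrapper. The class names and the provider's `chat` signature below are hypothetical; the point is that a provider change touches exactly one adapter class.

```python
from abc import ABC, abstractmethod

# Hypothetical client interface; names and signatures are illustrative.
class LLMClient(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ProviderAClient(LLMClient):
    """Adapter for one provider's SDK (a stand-in object here)."""
    def __init__(self, api):
        self.api = api  # the real provider SDK would be injected

    def complete(self, prompt: str) -> str:
        # If the provider renames fields or parameters, only this
        # method changes; the rest of the engine is untouched.
        return self.api.chat(messages=[{"role": "user", "content": prompt}])

# Stand-in provider so the sketch runs without network access.
class FakeProvider:
    def chat(self, messages):
        return f"echo: {messages[-1]['content']}"

client: LLMClient = ProviderAClient(FakeProvider())
print(client.complete("ping"))  # -> echo: ping
```

Integration tests then target `ProviderAClient` alone, which is how schema changes get caught before deployment.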

Context Engineering for Multi-Agent Systems · Cohort 2 · April 25, 2026

Ready to Build Production AI With Context Engineering?

6 hours. Bestselling AI author. Production context-engineered multi-agent system by the end. Seats are limited.

Register Now →

Saturday April 25 · 9am to 3pm EDT · Online · Packt Publishing · Cohort 2