Software engineers have a significant advantage in building agentic AI: you already understand system design, API contracts, observability, and production operations. This live workshop shows you how to apply those skills to agentic AI using context engineering, MCP orchestration, and the Glass-Box architecture.
By Packt Publishing · Refunds available up to 10 days before the event
Software engineers approach agentic AI with discipline that improves the result: they design interfaces before implementation, test components before integration, instrument systems for observability, and design for failure modes from the start. Context engineering channels this discipline into a principled agentic AI architecture. This workshop is that channel.
Context engineering is the discipline of designing systems that give AI the right information, in the right format, to reason and act reliably. It goes beyond prompt engineering: instead of crafting one-off prompts, you build structured, deterministic systems that scale in production.
A multi-agent system uses multiple specialized AI agents working together, each with a defined role, context, and tools, to complete complex tasks no single agent could handle reliably. Context engineering makes them predictable.
MCP (the Model Context Protocol) is Anthropic's open standard for connecting AI models to tools, data sources, and other agents. It provides structured agent orchestration with clear context boundaries, making systems transparent and debuggable.
Context engineering requires hands-on practice to truly understand. This live workshop lets you build a working system with a world-class instructor answering your questions in real time.
Six modules. Six hours. A production-ready context-engineered AI system by the time you finish.
Understand why prompts fail at scale and how semantic blueprints give AI structured, goal-driven contextual awareness.
Design and orchestrate multi-agent workflows using the Model Context Protocol. Build transparent, traceable agent systems.
Build RAG pipelines that deliver accurate, cited responses. Engineer memory systems that persist context reliably across agents.
Architect a transparent, explainable context engine where every decision is traceable and debuggable in production.
Implement safeguards against prompt injection and data poisoning. Enforce trust boundaries in multi-agent environments.
Deploy your context-engineered system to production. Apply patterns for scaling, monitoring, and reliability.
Concrete, working deliverables, not just theory and slides.
A working Glass-Box Context Engine with transparent, traceable reasoning
Multi-agent workflow orchestrated with the Model Context Protocol
High-fidelity RAG pipeline with memory and citations
Safeguards against prompt injection and data poisoning
Reusable architecture patterns for production AI systems
Certificate of completion from Packt Publishing
Denis Rothman brings decades of production AI engineering experience to this live workshop.
Denis Rothman is a bestselling AI author with over 30 years of experience in artificial intelligence, agent systems, and optimization. He has authored multiple cutting-edge AI books published by Packt and is renowned for making complex AI architecture concepts practical and immediately applicable. He guides you step by step through building production-ready, context-engineered multi-agent systems, answering your questions live throughout the 6-hour session.
Intermediate to advanced workshop. Solid Python and basic LLM experience required.
Everything you need to know before registering.
Software engineering skills transfer directly to agentic AI development: system design maps to context engine architecture, API design maps to MCP tool schema design, testing discipline maps to LLM component testing with mocked responses, observability engineering maps to Glass-Box logging implementation, and production operations maps to AI system deployment and monitoring. The discipline is the same; the domain is new. This workshop shows the mapping at each point.
Software engineers find these agentic AI concepts immediately intuitive: MCP typed interfaces (analogous to API contracts), the Glass-Box observability layer (analogous to distributed tracing), context window budget management (analogous to memory management), semantic blueprint versioning (analogous to schema versioning), and the hub-and-spoke multi-agent pattern (analogous to microservices with an API gateway). The workshop leverages these analogies to accelerate understanding.
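To make the API-contract analogy concrete, here is a minimal sketch of a typed MCP-style tool definition. The field names follow the MCP tool-listing shape (`name`, `description`, `inputSchema`), but `SEARCH_TOOL` and the `validate_call` helper are illustrative assumptions for this page, not part of any official SDK:

```python
# Illustrative sketch: an MCP-style tool definition is an API contract.
# Field names follow the MCP tools/list shape; the validation helper is
# invented here for illustration, not taken from an SDK.

SEARCH_TOOL = {
    "name": "search_docs",
    "description": "Search the knowledge base and return cited passages.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "top_k": {"type": "integer"},
        },
        "required": ["query"],
    },
}

def validate_call(tool: dict, arguments: dict) -> list[str]:
    """Check a tool call against its declared schema, like validating a request body."""
    schema = tool["inputSchema"]
    type_map = {"string": str, "integer": int}
    errors = []
    for field in schema.get("required", []):
        if field not in arguments:
            errors.append(f"missing required field: {field}")
    for field, value in arguments.items():
        spec = schema["properties"].get(field)
        if spec is None:
            errors.append(f"unknown field: {field}")
        elif not isinstance(value, type_map[spec["type"]]):
            errors.append(f"wrong type for {field}: expected {spec['type']}")
    return errors

print(validate_call(SEARCH_TOOL, {"query": "context rot"}))  # → []
print(validate_call(SEARCH_TOOL, {"top_k": 3}))              # → ['missing required field: query']
```

Just as an API gateway rejects a malformed request before it reaches a service, the contract lets the orchestrator reject a malformed tool call before it reaches an agent.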
The genuinely new concepts for software engineers are: semantic blueprint design (how to structure instructions that reliably guide LLM behavior), RAG pipeline engineering (how embedding-based retrieval works and how to build production retrieval systems), hallucination prevention (how to ground LLM outputs in verified sources), context rot (how LLM performance degrades as context accumulates), and prompt injection (how adversarial text can override agent instructions). The workshop introduces each of these clearly before implementing them.
Software engineering testing discipline applies to agentic AI by treating LLM responses as external dependencies that should be mocked in unit tests, similar to database or API calls. Unit tests verify that the non-LLM logic (context routing, citation parsing, schema validation) works correctly with controlled mock responses. Integration tests verify component interactions with a test LLM that produces predictable outputs. This approach makes agentic AI systems testable and regression-proof in the same way as other complex software systems.
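A minimal sketch of that approach using only the standard library: the LLM client is a `unittest.mock.Mock`, and the test exercises hypothetical non-LLM logic (`extract_citations` and `answer_with_citations`, both invented for this example) against a controlled response:

```python
import re
import unittest
from unittest.mock import Mock

def extract_citations(llm_response: str) -> list[str]:
    """Non-LLM logic under test: pull [source:...] markers from a response."""
    return re.findall(r"\[source:([^\]]+)\]", llm_response)

def answer_with_citations(llm_client, question: str) -> dict:
    """Orchestration logic: call the model, then parse citations from its output."""
    raw = llm_client.complete(question)
    return {"answer": raw, "citations": extract_citations(raw)}

class TestCitationParsing(unittest.TestCase):
    def test_citations_extracted_from_mocked_response(self):
        llm = Mock()  # the LLM is an external dependency, so mock it
        llm.complete.return_value = "Context rot degrades accuracy [source:doc-7]."
        result = answer_with_citations(llm, "What is context rot?")
        self.assertEqual(result["citations"], ["doc-7"])
        llm.complete.assert_called_once_with("What is context rot?")
```

Run with `python -m unittest`. The parsing logic is verified deterministically, with no API key, no network call, and no flaky model output.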
The direct pattern analogies are: the Glass-Box Context Engine is a pipeline architecture (familiar from data engineering), MCP servers are microservices with typed RPC interfaces (familiar from service-oriented architecture), semantic blueprints are structured configuration artifacts (familiar from configuration-driven systems), RAG is a hybrid search system (familiar from search engineering), and episodic memory is an event store (familiar from event-driven architecture). Denis Rothman explicitly draws these analogies throughout the workshop.
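To illustrate the pipeline analogy, here is a toy glass-box pipeline in which every stage records its input and output, so each decision remains traceable after the fact. The `GlassBoxPipeline` class is our sketch of the idea, not the workshop's actual Context Engine:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class GlassBoxPipeline:
    """A pipeline whose every stage leaves an inspectable trace entry."""
    stages: list[tuple[str, Callable[[Any], Any]]]
    trace: list[dict] = field(default_factory=list)

    def run(self, data: Any) -> Any:
        for name, stage in self.stages:
            result = stage(data)
            # Record what each stage saw and produced, like a tracing span.
            self.trace.append({"stage": name, "input": data, "output": result})
            data = result
        return data

pipeline = GlassBoxPipeline(stages=[
    ("normalize", str.lower),
    ("tokenize", str.split),
])
print(pipeline.run("Context Engineering"))  # → ['context', 'engineering']
print(len(pipeline.trace))                  # → 2
```

Swap the toy stages for context assembly, retrieval, and LLM calls, and the trace becomes the distributed-tracing equivalent for an agent's reasoning.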
Software engineers are typically productive with agentic AI immediately after this workshop because they already have the foundational skills: Python, system design thinking, API design, and testing discipline. The workshop fills the knowledge gaps specific to agentic AI: context engineering patterns, MCP implementation, RAG pipeline design, and production LLM system operations. Most software engineers who attend report being able to meaningfully contribute to agentic AI projects the week after the workshop.
6 hours. Bestselling AI author. Production context-engineered multi-agent system by the end. Seats are limited.
Register Now → Saturday, April 25 · 9am to 3pm EDT · Online · Packt Publishing · Cohort 2