AI agents that forget context as conversations grow frustrate users and produce incoherent outputs. This live workshop teaches the memory engineering architecture that gives agents persistent, reliable context across conversations of any length without overwhelming the context window.
By Packt Publishing · Refunds available up to 10 days before the event
AI agents lose context in long conversations because conversation history grows faster than they can manage it. The three-layer memory architecture taught in this workshop gives agents persistent context at every conversation length without context window overflow.
Context engineering is the discipline of designing systems that give AI the right information, in the right format, to reason and act reliably. It goes beyond prompt engineering by building structured, deterministic systems that scale in production.
A multi-agent system uses multiple specialised AI agents working together — each with a defined role, context, and tools — to complete complex tasks no single agent could handle reliably. Context engineering makes them predictable.
MCP is Anthropic's open standard for connecting AI models to tools, data sources, and other agents. It provides structured agent orchestration with clear context boundaries — making systems transparent and debuggable.
Context engineering requires hands-on practice to truly understand. This live workshop lets you build a working system with a world-class instructor answering your questions in real time.
Six modules. Six hours. A production-ready context-engineered AI system by the time you finish.
Understand why prompts fail at scale and how semantic blueprints give AI structured, goal-driven contextual awareness.
Design and orchestrate multi-agent workflows using the Model Context Protocol. Build transparent, traceable agent systems.
Build RAG pipelines that deliver accurate, cited responses. Engineer memory systems that persist context reliably across agents.
Architect a transparent, explainable context engine where every decision is traceable and debuggable in production.
Implement safeguards against prompt injection and data poisoning. Enforce trust boundaries in multi-agent environments.
Deploy your context-engineered system to production. Apply patterns for scaling, monitoring, and reliability.
Concrete working deliverables — not just theory and slides.
A working Glass-Box Context Engine with transparent, traceable reasoning
Multi-agent workflow orchestrated with the Model Context Protocol
High-fidelity RAG pipeline with memory and citations
Safeguards against prompt injection and data poisoning
Reusable architecture patterns for production AI systems
Certificate of completion from Packt Publishing
Denis Rothman brings decades of production AI engineering experience to this live workshop.
Denis Rothman is a bestselling AI author with over 30 years of experience in artificial intelligence, agent systems, and optimization. He has authored multiple cutting-edge AI books published by Packt and is renowned for making complex AI architecture concepts practical and immediately applicable. He guides you step by step through building production-ready context-engineered multi-agent systems — answering your questions live throughout the 6-hour session.
This is an intermediate to advanced workshop. Solid Python and basic LLM experience required.
Everything you need to know before registering.
AI agents lose context in long conversations for two reasons: fixed context window size means older content gets pushed out as new content arrives, and there is typically no memory management system that compresses and stores important past context for retrieval when needed. Without explicit memory engineering, the agent's effective memory is limited to the most recent portion of the conversation.
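The overflow mechanism described above can be sketched in a few lines of Python. The token budget and the word-count "tokenizer" here are simulations for illustration only; a real system would use the model's actual tokenizer and limits:

```python
# Minimal sketch of why fixed context windows lose early turns.
# Token counts are simulated with word counts; names are illustrative.

CONTEXT_BUDGET = 20  # pretend the model fits only 20 "tokens"

def fit_to_window(turns, budget=CONTEXT_BUDGET):
    """Keep the most recent turns that fit the budget; drop the rest."""
    kept, used = [], 0
    for turn in reversed(turns):           # walk newest-first
        cost = len(turn.split())
        if used + cost > budget:
            break                          # older turns are simply lost
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = [
    "user: my project is codenamed falcon",
    "assistant: noted, falcon it is",
    "user: please summarize our last quarterly report in detail",
    "assistant: here is the detailed summary you asked for today",
]
window = fit_to_window(history)
# the early "falcon" turn no longer fits and is silently dropped
```

Without a memory layer, anything outside `window` is unrecoverable, which is exactly the failure the three-layer architecture is designed to prevent.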
Episodic memory stores compressed records of past conversation turns in a retrievable format. When a new turn references an earlier topic, the memory manager retrieves the relevant episodic memory and injects it into the working context. This gives the agent access to past context without keeping the full conversation history in the context window. The workshop implements episodic memory as a production-ready component of the Glass-Box Context Engine.
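A hedged sketch of that retrieve-and-inject loop, not the workshop's exact implementation: records are compressed summaries tagged with keywords, and retrieval is naive keyword overlap (a production system would use embeddings):

```python
# Illustrative episodic store: compressed summaries with keyword tags.
episodic_memory = [
    {"summary": "User's project codename is falcon.",
     "tags": {"project", "falcon", "codename"}},
    {"summary": "User prefers answers in bullet points.",
     "tags": {"format", "preference", "bullets"}},
]

def retrieve(query, memory, top_k=1):
    """Rank records by keyword overlap with the new turn."""
    words = set(query.lower().replace("?", "").split())
    scored = sorted(memory, key=lambda rec: len(rec["tags"] & words),
                    reverse=True)
    return [rec["summary"] for rec in scored[:top_k] if rec["tags"] & words]

def build_working_context(query, memory):
    """Inject only the relevant memory, not the full history."""
    return {"recalled": retrieve(query, memory), "current_turn": query}

ctx = build_working_context("what was the project codename again?",
                            episodic_memory)
```

The point of the pattern is the asymmetry: the full history can grow without bound while the working context stays small, because only matching summaries are injected.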
Working memory contains the current active context: the semantic blueprint, recent conversation turns, and current RAG retrievals. Episodic memory stores important decisions, user preferences, task outcomes, and key facts established in past turns. The memory manager decides what to move from working to episodic memory based on recency and importance scoring.
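One way to make that recency-and-importance decision concrete is a weighted score. The weights, threshold, and scoring formula below are assumptions for the sketch, not the workshop's values:

```python
# Illustrative scoring for deciding which working-memory turns get
# compressed into episodic memory. All weights are assumed values.

def score(turn_index, total_turns, importance,
          recency_weight=0.4, importance_weight=0.6):
    recency = (turn_index + 1) / total_turns   # newer -> closer to 1.0
    return recency_weight * recency + importance_weight * importance

def select_for_episodic(turns, keep_threshold=0.5):
    """Turns scoring below the threshold leave working memory."""
    total = len(turns)
    moved, kept = [], []
    for i, (text, importance) in enumerate(turns):
        if score(i, total, importance) < keep_threshold:
            moved.append(text)   # candidate for compression + episodic store
        else:
            kept.append(text)
    return moved, kept

turns = [  # (turn text, importance in [0, 1])
    ("user asked about the weather", 0.1),
    ("user set deadline to June 1", 0.9),
    ("assistant confirmed the deadline", 0.5),
    ("small talk about lunch", 0.1),
]
moved, kept = select_for_episodic(turns)
# high-importance turns stay in working memory; chit-chat moves out
```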
User preferences stored as structured records in episodic memory can be retrieved at the start of each session. The memory manager retrieves relevant preferences based on the conversation topic and injects them into the working context through the semantic blueprint. This gives the agent appropriate personalization without requiring the user to re-specify preferences in every interaction.
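A minimal sketch of that session-start injection, assuming a simple topic-matching rule and an invented blueprint structure (field names are illustrative, not the workshop's schema):

```python
# Stored preferences as structured records (illustrative schema).
preferences = [
    {"topic": "code", "rule": "show examples in Python"},
    {"topic": "writing", "rule": "keep answers under 200 words"},
]

def start_session(conversation_topic, prefs):
    """Inject only topic-relevant preferences into the blueprint."""
    relevant = [p["rule"] for p in prefs if p["topic"] == conversation_topic]
    return {
        "goal": f"assist with {conversation_topic}",
        "constraints": relevant,  # personalization without re-asking the user
    }

bp = start_session("code", preferences)
```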
Yes. Session-persistent episodic memory is a core feature of the three-layer memory architecture. The workshop covers implementing session persistence so agents can retrieve relevant context from past sessions (user history, previous task outcomes, established facts) at the start of each new session.
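At its simplest, session persistence is serialize-on-exit, reload-on-start. The file format and field names below are assumptions for the sketch, not the workshop's storage layer:

```python
import json
import os
import tempfile

# Minimal sketch of session-persistent episodic memory: write records
# to disk at session end, reload them when the next session starts.

def save_memory(records, path):
    with open(path, "w", encoding="utf-8") as f:
        json.dump(records, f)

def load_memory(path):
    if not os.path.exists(path):
        return []                      # first session: nothing to recall
    with open(path, encoding="utf-8") as f:
        return json.load(f)

store = os.path.join(tempfile.gettempdir(), "episodic_demo.json")
session_one = [{"fact": "user works in EDT timezone"}]
save_memory(session_one, store)        # end of session one

recalled = load_memory(store)          # start of session two
```

A production version would add write locks, schema versioning, and a proper store, but the round trip is the essential contract.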
The workshop covers long-conversation testing patterns: scripted test conversations with known context dependencies that should be retained, memory retrieval verification tests that check episodic memory contents after compression, cross-session consistency tests that verify preferences and facts persist correctly, and context coherence tests that check for contradictions between early and late conversation turns.
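The first of those patterns, a scripted conversation with a known context dependency, can be sketched as follows. The tiny agent here is a stand-in so the test shape is concrete; every name is illustrative:

```python
# A scripted long-conversation retention test: plant a fact early,
# pad with filler turns, then verify the fact is still retrievable.

class TinyMemoryAgent:
    """Toy agent that stores 'X is Y' facts in episodic memory."""
    def __init__(self):
        self.episodic = {}

    def turn(self, user_text):
        if " is " in user_text and not user_text.endswith("?"):
            key, value = user_text.split(" is ", 1)
            self.episodic[key.strip().lower()] = value.strip()
            return "noted"
        if user_text.lower().startswith("what is "):
            key = user_text[len("what is "):].rstrip("?").strip().lower()
            return self.episodic.get(key, "unknown")
        return "ok"

def run_retention_test(agent, filler_turns=50):
    agent.turn("the launch date is June 1")
    for i in range(filler_turns):   # padding that would evict a naive window
        agent.turn(f"filler message number {i}")
    return agent.turn("what is the launch date?")

answer = run_retention_test(TinyMemoryAgent())
```

The same harness generalizes to the other patterns: swap the final assertion for a check of episodic store contents after compression, or rerun the probe in a fresh session to verify cross-session consistency.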
6 hours. Bestselling AI author. Production context-engineered multi-agent system by the end. Seats are limited.
Register Now → Saturday April 25 · 9am to 3pm EDT · Online · Packt Publishing · Cohort 2