How to Design AI Agent Memory · Live · April 25

How to Design AI Agent Memory That Works Across Long Conversations

AI agent memory is one of the hardest engineering problems in production AI. This live workshop teaches the three-layer memory architecture that makes agents retain useful context without context window overflow.

Saturday, April 25  9am – 3pm EDT
6 Hours  Hands-on coding
Cohort 2  Intermediate to Advanced

Workshop Details

📅
Date & Time
Saturday, April 25, 2026
9:00am – 3:00pm EDT
Duration
6 Hours · Hands-on
💻
Format
Live Online · Interactive
📚
Level
Intermediate to Advanced
🎓
Includes
Certificate of Completion
Register on Eventbrite →

By Packt Publishing · Refunds up to 10 days before the event

Why Trust Packt

Over 20 Years of Helping Developers Build Real Skills

7,500+
Books and video courses published for developers worldwide
108
Live workshops and events hosted on Eventbrite
30+
Years of AI experience from your instructor Denis Rothman
100%
Hands-on — every session involves real code and live building
About This Workshop

Why AI Agent Memory Is the Hardest Part of Multi-Agent Engineering

Most AI agent memory approaches are either too simple (retaining only the current context) or too complex (storing everything indefinitely). This workshop teaches a memory engineering approach: a three-layer system that gives agents precisely the memory they need for reliable, long-running interactions.

🧠

What is Context Engineering?

Context engineering is the discipline of designing systems that give AI the right information, in the right format, to reason and act reliably. It goes beyond prompt engineering — building structured, deterministic systems that scale in production.

🤖

What is a Multi-Agent System?

A multi-agent system uses multiple specialised AI agents working together — each with a defined role, context, and tools — to complete complex tasks no single agent could handle reliably. Context engineering makes them predictable.

🔗

What is the Model Context Protocol?

MCP is Anthropic's open standard for connecting AI models to tools, data sources, and other agents. It provides structured agent orchestration with clear context boundaries — making systems transparent and debuggable.

🎯

Why Attend as a Live Workshop?

Context engineering requires hands-on practice to truly understand. This live workshop lets you build a working system with a world-class instructor answering your questions in real time.

Workshop Curriculum

What This 6-Hour Workshop Covers

Six modules. Six hours. A production-ready context-engineered AI system by the time you finish.

01

From Prompts to Semantic Blueprints

Understand why prompts fail at scale and how semantic blueprints give AI structured, goal-driven contextual awareness.

02

Multi-Agent Orchestration With MCP

Design and orchestrate multi-agent workflows using the Model Context Protocol. Build transparent, traceable agent systems.

03

High-Fidelity RAG With Citations

Build RAG pipelines that deliver accurate, cited responses. Engineer memory systems that persist context reliably across agents.

04

The Glass-Box Context Engine

Architect a transparent, explainable context engine where every decision is traceable and debuggable in production.

05

Safeguards and Trust

Implement safeguards against prompt injection and data poisoning. Enforce trust boundaries in multi-agent environments.

06

Production Deployment and Scaling

Deploy your context-engineered system to production. Apply patterns for scaling, monitoring, and reliability.

What You Walk Away With

By the End of This Workshop You Will Have

Concrete working deliverables — not just theory and slides.

A working Glass-Box Context Engine with transparent, traceable reasoning

Multi-agent workflow orchestrated with the Model Context Protocol

High-fidelity RAG pipeline with memory and citations

Safeguards against prompt injection and data poisoning

Reusable architecture patterns for production AI systems

Certificate of completion from Packt Publishing

Your Instructor

Learn From a Bestselling AI Author With 30+ Years of Experience

Denis Rothman brings decades of production AI engineering experience to this live workshop.

Denis Rothman

Workshop Instructor · April 25, 2026

Denis Rothman is a bestselling AI author with over 30 years of experience in artificial intelligence, agent systems, and optimization. He has authored multiple cutting-edge AI books published by Packt and is renowned for making complex AI architecture concepts practical and immediately applicable. He guides you step by step through building production-ready context-engineered multi-agent systems — answering your questions live throughout the 6-hour session.

Prerequisites

Who Is This Workshop For?

This is an intermediate to advanced workshop. Solid Python skills and basic experience working with LLMs are required.

Frequently Asked Questions

Common Questions About AI Agent Memory Design

Everything you need to know before registering.

What are the three layers of AI agent memory covered in this workshop?

This workshop covers three memory layers: working memory (the active context window for the current task), episodic memory (a compressed record of past interactions that can be selectively retrieved), and semantic memory (the embedded knowledge base accessed through the RAG pipeline). Designing these three layers to work together is what gives AI agents reliable long-term context without context window overflow.
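The three layers can be pictured as a simple data structure. The sketch below is illustrative only, not code from the workshop; the class and method names (`AgentMemory`, `consolidate`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Hypothetical three-layer store: working, episodic, semantic."""
    working: list = field(default_factory=list)   # active context for the current task
    episodic: list = field(default_factory=list)  # compressed records of past interactions
    semantic: dict = field(default_factory=dict)  # knowledge base keyed by document id

    def remember(self, message: str) -> None:
        """Add a message to working memory (the live context window)."""
        self.working.append(message)

    def consolidate(self, summary: str) -> None:
        """Compress current working memory into one episodic summary."""
        self.episodic.append(summary)
        self.working.clear()

memory = AgentMemory()
memory.remember("User asked about the refund policy.")
memory.remember("Agent quoted the 10-day refund window.")
memory.consolidate("Refund discussion: 10-day window confirmed.")
print(len(memory.working), len(memory.episodic))  # 0 1
```

The key design point is that `consolidate` moves information downward: working memory stays small, while episodic memory grows only by compressed summaries.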

How do I prevent AI agent memory from overflowing the context window?

Context window overflow is prevented through active memory management: summarising and compressing episodic memory rather than retaining raw transcripts, using selective retrieval to pull only relevant memories into working memory, and setting explicit per-agent context budgets that trigger compression as the threshold approaches. The workshop covers all three techniques with practical Python implementations.
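A context budget that triggers compression can be sketched like this. It is a minimal illustration under simplifying assumptions: the whitespace token estimate stands in for a real tokenizer, and the `summarise` stub stands in for an LLM call.

```python
def rough_tokens(message: str) -> int:
    """Crude whitespace token estimate; production code would use a tokenizer."""
    return len(message.split())

def enforce_budget(messages, budget, summarise):
    """Keep estimated tokens under `budget`: when over, replace the
    oldest half of the messages with a single summary entry."""
    if sum(rough_tokens(m) for m in messages) <= budget:
        return messages
    half = max(1, len(messages) // 2)
    return [summarise(messages[:half])] + messages[half:]

# Stub summariser; a real system would call an LLM here.
summarise = lambda msgs: f"SUMMARY of {len(msgs)} messages"

history = ["one two three", "four five six", "seven eight nine"]
print(enforce_budget(history, budget=5, summarise=summarise))
```

Running `enforce_budget` on every turn keeps recent messages verbatim while older context degrades gracefully into summaries instead of being truncated.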

What is the difference between short-term and long-term AI agent memory?

Short-term AI agent memory is working memory: the current context window contents available for immediate use. Long-term memory is episodic and semantic memory: past interactions stored in compressed form and a knowledge base, both accessible through retrieval. The engineering challenge is moving information efficiently between these layers without losing important context or overwhelming working memory.

How does RAG function as part of an AI agent memory system?

RAG serves as the retrieval interface to semantic memory. When an agent needs domain knowledge or past context not in its current working memory, it queries the RAG pipeline, which retrieves relevant content from the embedded knowledge base. The retrieved content is injected into working memory with structured citations, keeping context relevant and verifiable.
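The retrieval-with-citations step can be illustrated with a toy scorer. The word-overlap ranking below is a stand-in for real embedding similarity, and all names are hypothetical; the point is only the shape of the interface: query in, citation-prefixed snippets out.

```python
def retrieve_with_citations(query, knowledge_base, top_k=2):
    """Rank entries by naive word overlap with the query and return
    snippets prefixed with their source id as a citation."""
    q_words = set(query.lower().split())

    def score(text):
        return len(q_words & set(text.lower().split()))

    ranked = sorted(knowledge_base.items(),
                    key=lambda kv: score(kv[1]), reverse=True)
    return [f"[{doc_id}] {text}"
            for doc_id, text in ranked[:top_k] if score(text) > 0]

kb = {
    "doc-1": "refunds are available up to 10 days before the event",
    "doc-2": "the workshop runs for six hours on a saturday",
}
print(retrieve_with_citations("when are refunds available", kb, top_k=1))
```

Because each snippet carries its source id, anything injected into working memory remains traceable back to the knowledge base.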

Can AI agent memory persist across sessions?

Yes. The episodic memory layer is designed to persist across sessions, storing compressed conversation summaries and key decisions in a retrievable format. When a new session begins, the memory system retrieves relevant episodic memories to give the agent appropriate context from past interactions. The workshop covers session persistence implementation for production systems.
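Session persistence can be as simple as serialising episodic summaries to durable storage and reloading them at session start. This sketch uses local JSON files for illustration; a production system would likely use a database, and the function names are hypothetical.

```python
import json
import os
import tempfile
from pathlib import Path

def save_episodic(path, summaries):
    """Persist compressed episodic summaries between sessions as JSON."""
    Path(path).write_text(json.dumps(summaries))

def load_episodic(path):
    """Restore summaries at session start; empty list if none exist yet."""
    p = Path(path)
    return json.loads(p.read_text()) if p.exists() else []

store = os.path.join(tempfile.mkdtemp(), "episodic.json")
assert load_episodic(store) == []  # fresh session, no history yet
save_episodic(store, ["Refund discussion: 10-day window confirmed."])
print(load_episodic(store))
```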

How do I implement memory sharing between multiple agents safely?

Memory sharing between agents requires explicit access controls and clear versioning to prevent one agent's writes from corrupting another's state. The workshop covers a shared memory architecture using the MCP resource system, which provides typed, validated read and write access to shared memory stores. Each agent's access is logged by the Glass-Box layer, making memory interactions auditable.
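The access-control-plus-audit pattern can be sketched in a few lines. This toy class is not the MCP resource system itself, only an illustration of the idea: every read and write is permission-checked and logged.

```python
class SharedMemory:
    """Toy shared store with per-agent permissions and an audit log
    (a stand-in for typed, validated MCP resource access)."""

    def __init__(self, permissions):
        self._store = {}
        self._permissions = permissions  # agent name -> {"read", "write"}
        self.audit_log = []              # every access, for the Glass-Box layer

    def write(self, agent, key, value):
        if "write" not in self._permissions.get(agent, set()):
            raise PermissionError(f"{agent} has no write access")
        self._store[key] = value
        self.audit_log.append((agent, "write", key))

    def read(self, agent, key):
        if "read" not in self._permissions.get(agent, set()):
            raise PermissionError(f"{agent} has no read access")
        self.audit_log.append((agent, "read", key))
        return self._store[key]

mem = SharedMemory({"planner": {"read", "write"}, "reviewer": {"read"}})
mem.write("planner", "plan", "draft v1")
print(mem.read("reviewer", "plan"))  # draft v1
print(mem.audit_log)
```

Because the audit log records who touched which key and how, a misbehaving agent's memory writes can be traced rather than silently corrupting another agent's state.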

Context Engineering for Multi-Agent Systems · Cohort 2 · April 25, 2026

Ready to Build Production AI With Context Engineering?

6 hours. Bestselling AI author. Production context-engineered multi-agent system by the end. Seats are limited.

Register Now →

Saturday April 25 · 9am to 3pm EDT · Online · Packt Publishing · Cohort 2