RAG without citations is RAG you cannot trust. This live Python workshop shows you how to build a retrieval pipeline where every factual claim in the output has a verified, traceable source, making your AI system accountable and your outputs defensible.
By Packt Publishing · Refunds available up to 10 days before the event
In production AI systems, knowing where a claim comes from is as important as the claim itself. Citation tracking lets you verify outputs, audit AI decisions, detect hallucination events, and build user trust. This workshop implements full citation tracking throughout a Python RAG pipeline for multi-agent systems.
Context engineering is the discipline of designing systems that give AI the right information, in the right format, to reason and act reliably. It goes beyond prompt engineering — building structured, deterministic systems that scale in production.
A multi-agent system uses multiple specialised AI agents working together — each with a defined role, context, and tools — to complete complex tasks no single agent could handle reliably. Context engineering makes them predictable.
MCP is Anthropic's open standard for connecting AI models to tools, data sources, and other agents. It provides structured agent orchestration with clear context boundaries — making systems transparent and debuggable.
Context engineering requires hands-on practice to truly understand. This live workshop lets you build a working system with a world-class instructor answering your questions in real time.
Six modules. Six hours. A production-ready context-engineered AI system by the time you finish.
Understand why prompts fail at scale and how semantic blueprints give AI structured, goal-driven contextual awareness.
Design and orchestrate multi-agent workflows using the Model Context Protocol. Build transparent, traceable agent systems.
Build RAG pipelines that deliver accurate, cited responses. Engineer memory systems that persist context reliably across agents.
Architect a transparent, explainable context engine where every decision is traceable and debuggable in production.
Implement safeguards against prompt injection and data poisoning. Enforce trust boundaries in multi-agent environments.
Deploy your context-engineered system to production. Apply patterns for scaling, monitoring, and reliability.
Concrete working deliverables — not just theory and slides.
A working Glass-Box Context Engine with transparent, traceable reasoning
Multi-agent workflow orchestrated with the Model Context Protocol
High-fidelity RAG pipeline with memory and citations
Safeguards against prompt injection and data poisoning
Reusable architecture patterns for production AI systems
Certificate of completion from Packt Publishing
Denis Rothman brings decades of production AI engineering experience to this live workshop.
Denis Rothman is a bestselling AI author with over 30 years of experience in artificial intelligence, agent systems, and optimization. He has authored multiple cutting-edge AI books published by Packt and is renowned for making complex AI architecture concepts practical and immediately applicable. He guides you step by step through building production-ready context-engineered multi-agent systems — answering your questions live throughout the 6-hour session.
This is an intermediate to advanced workshop. Solid Python and basic LLM experience required.
Everything you need to know before registering.
Citation tracking in Python RAG involves four steps: (1) attaching source metadata (document ID, section, URL, confidence score) to each retrieved chunk; (2) carrying that metadata through LLM generation using structured prompt templates that require in-text citation; (3) extracting citations from the generated output with a citation parsing component; and (4) verifying that each extracted citation references a document that was actually retrieved. The workshop implements all four steps.
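The four steps can be sketched in a few lines of Python. This is a minimal illustration, not the workshop's implementation: the `Chunk` dataclass and the `[doc_id:section]` in-text citation format are assumptions made for this example.

```python
import re
from dataclasses import dataclass

# Step 1: source metadata attached to each retrieved chunk (illustrative schema).
@dataclass
class Chunk:
    doc_id: str
    section: str
    score: float
    text: str

def build_prompt(question: str, chunks: list[Chunk]) -> str:
    # Step 2: carry the metadata into generation with a structured template
    # that requires in-text citations of the form [doc_id:section].
    sources = "\n".join(
        f"[{c.doc_id}:{c.section}] (score={c.score:.2f}) {c.text}" for c in chunks
    )
    return (
        "Answer using ONLY the sources below. Cite every claim "
        "in-text as [doc_id:section].\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

CITATION_RE = re.compile(r"\[([\w-]+):([\w.-]+)\]")

def extract_citations(answer: str) -> set[tuple[str, str]]:
    # Step 3: parse [doc_id:section] markers out of the generated answer.
    return set(CITATION_RE.findall(answer))

def verify_citations(answer: str, chunks: list[Chunk]):
    # Step 4: every extracted citation must reference a chunk that was
    # actually retrieved; anything else is flagged as unverified.
    retrieved = {(c.doc_id, c.section) for c in chunks}
    cited = extract_citations(answer)
    return cited <= retrieved, cited - retrieved
```

A citation that points at a document outside the retrieval set is the classic signature of a hallucinated source, which is exactly what step 4 catches.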
A production citation in a RAG pipeline should contain: the source document identifier, the specific section or passage being referenced, the confidence score of the retrieval match, the retrieval timestamp for freshness verification, and optionally a direct URL to the source. This structured citation metadata is what lets you verify claims, audit responses, and maintain a defensible chain of evidence for AI outputs.
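The fields above map naturally onto a small record type. A sketch of one possible shape (field names and the example values are illustrative):

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Citation:
    doc_id: str                # source document identifier
    section: str               # specific section or passage referenced
    confidence: float          # retrieval match score
    retrieved_at: str          # ISO timestamp for freshness verification
    url: Optional[str] = None  # direct link to the source, when available

cite = Citation(
    doc_id="policy-2024-07",
    section="3.2",
    confidence=0.87,
    retrieved_at=datetime.now(timezone.utc).isoformat(),
    url="https://example.com/policy-2024-07#3.2",
)
record = asdict(cite)  # serialisable form for logging and audit trails
```

Serialising every citation alongside the response is what builds the defensible chain of evidence: each claim in a stored answer can later be traced back to a document, a passage, a score, and a point in time.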
Citation verification involves cross-checking each extracted citation against the retrieved documents: confirming the cited document was in the retrieval set, verifying that the cited section supports the attributed claim, and checking that the confidence level claimed in the citation matches the retrieval score. The workshop covers implementing an automated citation verifier as part of the RAG output validation pipeline.
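A minimal verifier for these three checks might look like the following. Note the support check here is a crude lexical-overlap placeholder; a real verifier would use a semantic entailment check. All names and the tolerance value are assumptions for this sketch:

```python
def verify_citation(cite: dict, retrieved: dict, tolerance: float = 0.05):
    """Cross-check one extracted citation against the retrieval set.

    cite:      {"doc_id", "section", "confidence", "claim"}
    retrieved: maps (doc_id, section) -> {"score", "text"}
    """
    key = (cite["doc_id"], cite["section"])
    hit = retrieved.get(key)
    # Check 1: the cited document must be in the retrieval set.
    if hit is None:
        return False, "document not in retrieval set"
    # Check 2: the cited section should support the attributed claim.
    # (Word overlap is a stand-in for a proper entailment model.)
    claim_terms = set(cite["claim"].lower().split())
    if not claim_terms & set(hit["text"].lower().split()):
        return False, "cited section does not mention the claim"
    # Check 3: the claimed confidence should match the retrieval score.
    if abs(cite["confidence"] - hit["score"]) > tolerance:
        return False, "claimed confidence mismatches retrieval score"
    return True, "verified"
```

Returning a reason string rather than a bare boolean makes failed verifications auditable: the validation pipeline can log exactly which check a hallucinated or inflated citation failed.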
When the RAG pipeline cannot find sufficiently relevant documents to ground a response, the system should explicitly signal this rather than allowing the LLM to generate an uncited answer. The workshop covers implementing a retrieval confidence threshold below which the agent returns an explicit uncertainty response rather than a confident but unsupported answer.
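That threshold logic is simple to express. In this sketch the generation step is passed in as a callable and the threshold value is arbitrary; both are assumptions, not the workshop's actual defaults:

```python
INSUFFICIENT = "I could not find sources relevant enough to answer this reliably."

def answer_or_abstain(question, chunks, generate, threshold=0.6):
    """Abstain explicitly when retrieval cannot ground an answer.

    chunks:   [{"doc_id", "section", "score", "text"}, ...]
    generate: callable(question, grounded_chunks) -> cited answer text
    """
    # Keep only chunks whose retrieval score clears the threshold.
    grounded = [c for c in chunks if c["score"] >= threshold]
    if not grounded:
        # Explicit uncertainty response instead of a confident,
        # unsupported answer from the LLM.
        return {"answer": INSUFFICIENT, "citations": [], "grounded": False}
    return {
        "answer": generate(question, grounded),
        "citations": [(c["doc_id"], c["section"]) for c in grounded],
        "grounded": True,
    }
```

The key design point is that the `grounded` flag travels with the response, so downstream agents and UIs can treat an abstention differently from an answer instead of rendering both the same way.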
Yes. Citation tracking can be retrofitted into an existing RAG pipeline by adding source metadata to the embedding store, modifying the generation prompt to require in-text citations, and adding a citation extraction and verification step to the output pipeline. The workshop covers both new-build and retrofit implementation patterns for Python RAG citation tracking.
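One low-friction retrofit pattern is to wrap the existing retriever so every result gains citation metadata without touching the underlying store. A sketch, assuming the legacy retriever returns `(text, score, doc_id)` tuples (that signature is an assumption):

```python
def with_citation_metadata(retrieve):
    """Wrap an existing retriever so each result carries citation metadata."""
    def retrieve_cited(query, k=5):
        results = retrieve(query, k)  # legacy pipeline: (text, score, doc_id)
        return [
            {
                "text": text,
                "score": score,
                "doc_id": doc_id,
                "tag": f"[{doc_id}]",  # in-text marker for the citation prompt
            }
            for (text, score, doc_id) in results
        ]
    return retrieve_cited
```

From there the remaining retrofit steps are prompt-level and output-level changes: require the `[doc_id]` markers in the generation template, and add extraction plus verification on the way out, leaving the embedding store and index untouched.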
Citations give users the ability to verify AI outputs directly, which significantly increases trust in AI-generated content. When every claim in an AI response can be traced to a specific, accessible source, users can evaluate the quality of the AI's reasoning rather than accepting outputs on faith. This transparency is increasingly important for professional and regulated use cases where AI outputs must be defensible.
6 hours. Bestselling AI author. Production context-engineered multi-agent system by the end. Seats are limited.
Register Now → Saturday April 25 · 9am to 3pm EDT · Online · Packt Publishing · Cohort 2