Open weight LLMs are now capable enough to drive a real personal AI assistant. Docker Model Runner makes running them locally straightforward. This live workshop teaches you to run open weight models with Docker and connect them to OpenClaw to build a working private AI assistant.
By Packt Publishing · Refunds available up to 10 days before the workshop
Open weight models like Llama, Mistral and Phi have reached quality levels that make them genuinely useful for personal AI assistant tasks. Docker Model Runner makes running them locally simple and reliable. This workshop combines both into a complete private AI stack.
OpenClaw is the open-source personal AI assistant that went viral in early 2026 with 200K+ GitHub stars. It runs on your own devices and connects to WhatsApp, Telegram, Slack and more. No subscription. No data leaving your machine.
Docker Model Runner is Docker's native feature for running large language models locally on your machine. It gives you an OpenAI-compatible API that OpenClaw uses as its AI brain — complete data privacy, no cloud costs.
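Because the API is OpenAI-compatible, any OpenAI-style client can talk to your local model. As a minimal sketch (the port, path, and model name below are assumptions, not details from the workshop — check your own Model Runner configuration), the request body OpenClaw would send looks like this:

```python
import json

# Assumed endpoint: Docker Model Runner's OpenAI-compatible API exposed on
# the host. Port, path, and model name are illustrative placeholders.
BASE_URL = "http://localhost:12434/engines/v1"

def chat_payload(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = chat_payload("ai/llama3.2", "Summarise my unread messages.")
print(json.dumps(payload, indent=2))
# To actually send it, POST this JSON to {BASE_URL}/chat/completions
# once Model Runner's host access is enabled.
```

The point is that nothing OpenClaw-specific is needed on the wire: it is the same request shape a cloud OpenAI client would send, just pointed at your machine.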
OpenClaw gives you the assistant interface and messaging integrations. Docker Model Runner gives you the AI brain running privately on your machine. Together they create a production-grade private AI assistant you fully own.
Setting this up from scattered documentation takes days of debugging. This live workshop gives you a complete guided build in 4 hours with a live instructor answering your questions. Packt has delivered 108 workshops worldwide.
Six modules covering model selection, Docker setup, and building a complete private AI assistant.
Understand the Gateway, channels, and skills architecture. Set up and configure OpenClaw locally from scratch.
Run and manage local LLMs using Docker Model Runner. Pull models, configure memory, and understand the OpenAI-compatible API.
Configure DM pairing, allowlists, sandbox mode, and proper access controls for your local AI deployment.
Deploy your AI assistant to real messaging platforms without sending data to any third party cloud service.
Design an extensible assistant architecture. Add skills, configure personality, and set up proactive automation.
Deploy your OpenClaw and Docker setup to a VPS for always-on, 24/7 availability.
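The pull-and-test loop from the Docker Model Runner module can be sketched with the CLI (the model name is an example, and the `command -v` guard keeps the sketch a no-op on machines without Docker installed):

```shell
# Sketch of the Module 2 workflow; the model name is illustrative.
if command -v docker >/dev/null 2>&1; then
  docker model pull ai/llama3.2          # download the model once
  docker model list                      # confirm it is cached locally
  docker model run ai/llama3.2 "Hello"   # quick smoke test from the CLI
fi
status="model quickstart sketched"
echo "$status"
```

After the smoke test succeeds, OpenClaw is pointed at the same model through the OpenAI-compatible API rather than the CLI.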
Open weight LLMs running locally in Docker powering a working private AI assistant.
A fully functional local AI assistant running on your machine
Docker Model Runner configured with your chosen LLM model
OpenClaw connected to WhatsApp or Telegram
Security and privacy configuration you can trust
A reusable architecture for future AI assistant projects
Certificate of completion from Packt Publishing
Rami Krispin is a Docker Captain with production experience running open weight models with Docker.
Rami is a Senior Manager of Data Science and Engineering, Docker Captain, and LinkedIn Learning Instructor with deep expertise in building and deploying production AI systems. He guides you step by step from a blank terminal to a fully deployed private AI assistant — answering your questions live throughout the 4-hour session.
Developers who want to run open weight LLMs locally using Docker and build something real with them.
Everything you need to know about running open weight models locally with Docker.
Docker Model Runner supports a growing library of open weight models. In this workshop you will work with models including Llama 3, Mistral 7B, Phi-3, and Gemma. The instructor covers the trade-offs between different model sizes and helps you select the best model for your hardware during the live session.
For personal assistant use cases, modern open weight LLMs in the 7B to 13B parameter range are excellent alternatives to proprietary models. They handle conversational tasks, code assistance, summarisation, and question answering very well. The quality gap between open weight and proprietary models has narrowed significantly in 2026.
A minimum of 16GB RAM is recommended for a smooth experience with 7B parameter models. The instructor covers model selection for different hardware configurations — including options for machines with 8GB RAM that still deliver reasonable performance.
Most open weight models have licences that permit personal use, and many permit commercial use as well. Llama 3, Mistral, and Phi are all available under licences that allow personal assistant use. The instructor covers the relevant licence considerations for each model used in the workshop.
Updating open weight models in Docker Model Runner is straightforward — you pull the newer version of the model using the Docker CLI and restart your OpenClaw assistant to use it. The instructor covers the model update process during the workshop.
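As a sketch of that update flow (the model name is an example; the guard keeps this a no-op on machines without Docker):

```shell
# Re-pull the model, then restart the assistant so it picks up the update.
if command -v docker >/dev/null 2>&1; then
  docker model pull ai/llama3.2   # fetches the newest published version
  docker model list               # verify the refreshed model is cached
  # then restart OpenClaw so it reconnects to the updated model
fi
updated="model refresh sketched"
echo "$updated"
```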
Docker Model Runner can manage multiple models but running them simultaneously requires significant RAM. The workshop covers how to configure your setup to switch between models efficiently and how to allocate resources appropriately for your hardware.
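Switching between models to stay inside a RAM and disk budget might look like this (model names are examples only; the guard keeps the sketch a no-op without Docker):

```shell
# Evict one cached model and cache another; names are illustrative.
if command -v docker >/dev/null 2>&1; then
  docker model list               # see what is cached locally
  docker model rm ai/mistral      # remove the model you are switching from
  docker model pull ai/phi4       # cache the model you are switching to
fi
switched="model switch sketched"
echo "$switched"
```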
4 hours. Live Docker Captain instructor. Open weight LLMs running locally by the end. Seats are limited.
Register Now → Sunday April 26 · 9am to 1pm EDT · Online · Packt Publishing