Running an open-weight LLM on your own machine gives you a private AI with no cloud costs and no data leaving your hardware. This live workshop shows you how to do it properly with Docker Model Runner, building a complete private AI assistant along the way.
By Packt Publishing · Refunds up to 10 days before
In 2026, running capable open-weight LLMs on a standard developer laptop is practical and straightforward. Docker Model Runner handles the complexity. This workshop takes you from zero to a fully deployed private AI assistant powered by a locally running open-weight model.
OpenClaw is the open-source personal AI assistant that went viral in early 2026 with 200K+ GitHub stars. It runs on your own devices and connects to WhatsApp, Telegram, Slack and more. No subscription. No data leaving your machine.
Docker Model Runner is Docker's native feature for running large language models on your own machine. It gives you an OpenAI-compatible API that OpenClaw uses as its AI brain: complete data privacy, no cloud costs.
OpenClaw gives you the assistant interface and messaging integrations. Docker Model Runner gives you the AI brain running privately on your machine. Together they create a production-grade private AI assistant you fully own.
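Here is what that OpenAI-compatible API looks like in practice. This is a minimal Python sketch, assuming Docker Model Runner's TCP host access is enabled in Docker Desktop; the base URL below is the commonly documented default, and the model name is only an example, so adjust both to match your setup.

```python
from openai import OpenAI  # pip install openai

# Docker Model Runner exposes an OpenAI-compatible endpoint on the host.
# The base URL is the commonly documented default once TCP host access
# is enabled in Docker Desktop; yours may differ.
client = OpenAI(
    base_url="http://localhost:12434/engines/v1",
    api_key="not-needed",  # local endpoint, no API key required
)

response = client.chat.completions.create(
    model="ai/llama3.2",  # example model name from Docker Hub's ai/ namespace
    messages=[{"role": "user", "content": "Say hello from my laptop."}],
)
print(response.choices[0].message.content)
```

Because the API follows the OpenAI wire format, any client or framework that speaks OpenAI can point at the local endpoint unchanged, which is exactly how OpenClaw plugs in.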
Setting this up from scattered documentation takes days of debugging. This workshop compresses it into a complete guided build in 4 hours, with a live instructor answering your questions. Packt has delivered 108 workshops worldwide.
Six modules covering local model setup, OpenClaw integration, and production deployment.
Understand the Gateway, channels, and skills architecture. Set up and configure OpenClaw locally from scratch.
Run and manage local LLMs using Docker Model Runner. Pull models, configure memory, and understand the OpenAI-compatible API (see the CLI sketch after this module list).
Configure DM pairing, allowlists, sandbox mode, and proper access controls for your local AI deployment.
Deploy your AI assistant to real messaging platforms without sending data to any third party cloud service.
Design an extensible assistant architecture. Add skills, configure personality, and set up proactive automation.
Deploy your OpenClaw and Docker setup to a VPS for always-on, 24/7 availability.
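To make the model-management module concrete, here is a minimal sketch of the Docker Model Runner CLI workflow, driven from Python. The `docker model pull` and `docker model ls` subcommands are part of Docker Model Runner's CLI; the model name is only an example, so pick one that fits your hardware.

```python
import subprocess

def docker_model(*args: str) -> str:
    """Run a `docker model` subcommand and return its output."""
    result = subprocess.run(
        ["docker", "model", *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Pull an open-weight model from Docker Hub's ai/ namespace
# (example name; choose a size that fits your RAM).
print(docker_model("pull", "ai/llama3.2"))

# List the models available locally.
print(docker_model("ls"))
```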
An open-weight LLM running on your own machine, powering a private AI assistant.
A fully functional local AI assistant running on your machine
Docker Model Runner configured with your chosen LLM
OpenClaw connected to WhatsApp or Telegram
Security and privacy configuration you can trust
A reusable architecture for future AI assistant projects
Certificate of completion from Packt Publishing
Rami Krispin has run open-weight LLMs on local machines in production environments.
Rami is a Senior Manager of Data Science and Engineering, Docker Captain, and LinkedIn Learning Instructor with deep expertise in building and deploying production AI systems. He guides you step by step from a blank terminal to a fully deployed private AI assistant — answering your questions live throughout the 4-hour session.
Developers who want to run capable AI models on their own machine with full control.
Everything you need to know about local open-weight LLM deployment.
To run open-weight LLMs comfortably on your own machine, 16GB of RAM is recommended. The instructor covers model selection for machines with different specs, including options that run on 8GB machines at reduced but still useful performance. No dedicated GPU is required for the models used in this workshop.
For a typical developer laptop with 16GB RAM, models in the 3B to 8B parameter range offer the best balance of quality and performance. Phi-3 Mini, Mistral 7B, and Llama 3 8B are all excellent choices. The instructor covers performance benchmarks for each during the live session.
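A rough back-of-envelope estimate shows why models in this range fit comfortably in 16GB: at 4-bit quantization, the weights alone need roughly half a gigabyte per billion parameters. The sketch below is a rule of thumb, not a benchmark; the 20% overhead factor for KV cache and runtime buffers is an assumption that varies by runtime.

```python
def approx_ram_gb(params_billion: float, bits: int = 4, overhead: float = 1.2) -> float:
    """Rough memory estimate for a quantized model: weight size plus
    ~20% for KV cache and runtime buffers (assumed; varies by runtime)."""
    weights_gb = params_billion * bits / 8  # e.g. 7B at 4-bit ~= 3.5 GB of weights
    return weights_gb * overhead

for name, size_b in [("Phi-3 Mini", 3.8), ("Mistral 7B", 7.3), ("Llama 3 8B", 8.0)]:
    print(f"{name}: ~{approx_ram_gb(size_b):.1f} GB at 4-bit quantization")
```

Even the largest of these lands around 5GB, leaving plenty of headroom on a 16GB machine and explaining why 3B to 4B models remain workable on 8GB.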
The impact on your other work depends on the model size and your hardware. Smaller models (3B to 4B parameters) have a minimal impact on system performance. The instructor covers how to configure Docker Model Runner resource limits to ensure your local LLM does not interfere with your other workloads.
You can run open-weight LLMs without Docker using tools like Ollama or direct Python inference libraries. This workshop uses Docker Model Runner because it provides the cleanest integration with OpenClaw and the most straightforward setup for developers already in the Docker ecosystem.
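As an illustration of that portability, Ollama also serves an OpenAI-compatible API, so the same client code shown earlier works by swapping the base URL. A minimal sketch, assuming Ollama is running locally with a model already pulled:

```python
from openai import OpenAI

# Ollama's OpenAI-compatible endpoint lives at this default address;
# the api_key is required by the client but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3.2",  # assumes you've run `ollama pull llama3.2` beforehand
    messages=[{"role": "user", "content": "Hello from Ollama."}],
)
print(response.choices[0].message.content)
```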
The instructor covers model evaluation during the workshop — testing different models for the personal AI assistant use case and showing you how to compare their responses. This gives you a practical framework for choosing the right model for your specific needs.
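A minimal version of that evaluation loop might look like the sketch below: send the same assistant-style prompt to each candidate model and compare the responses side by side. The model names are examples only; substitute whatever you have pulled locally, and adjust the endpoint to your setup.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="not-needed")

# Example candidates; replace with models you have pulled locally.
MODELS = ["ai/phi3", "ai/mistral", "ai/llama3.2"]
PROMPT = "You are a personal assistant. Draft a polite two-line reply declining a meeting."

for model in MODELS:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {model} ---\n{reply.choices[0].message.content}\n")
```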
When you close Docker Desktop or stop Docker Model Runner, your local LLM stops running. The workshop covers how to configure automatic startup and how to deploy your setup to a VPS for always-on availability, so your private AI assistant keeps running even when your laptop is off.
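For the always-on scenario, a simple liveness probe can confirm the model endpoint is still answering after a reboot or on a VPS. A minimal sketch, assuming the default local endpoint from the earlier examples; the /models path follows the OpenAI convention and may differ on your install.

```python
import time
import urllib.request

# Poll the local model endpoint and log when it stops answering.
# URL and port are assumptions based on the default setup shown earlier.
URL = "http://localhost:12434/engines/v1/models"

while True:
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            print("model runner up, HTTP", resp.status)
    except OSError as exc:
        print("model runner down:", exc)
    time.sleep(60)
```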
4 hours. Live instructor. A local open-weight LLM running by the end. Seats are limited.
Register Now → Sunday, April 26 · 9am to 1pm EDT · Online · Packt Publishing