Running a self-hosted LLM in 2026 is more accessible than ever. Docker Model Runner removes most of the complexity. This live workshop shows you how to self-host an LLM properly — configured, secured, and powering a complete private AI assistant connected to WhatsApp or Telegram.
By Packt Publishing · Refunds available up to 10 days before the event
The combination of capable open-weight models and Docker Model Runner has made self-hosting an LLM a practical option for any developer in 2026. This workshop covers the complete self-hosted LLM setup — from model selection to production deployment.
OpenClaw is the open-source personal AI assistant that went viral in early 2026 with 200K+ GitHub stars. It runs on your own devices and connects to WhatsApp, Telegram, Slack and more. No subscription. No data leaving your machine.
Docker Model Runner is Docker's native feature for running large language models locally on your machine. It gives you an OpenAI-compatible API that OpenClaw uses as its AI brain — complete data privacy, no cloud costs.
OpenClaw gives you the assistant interface and messaging integrations. Docker Model Runner gives you the AI brain running privately on your machine. Together they create a production-grade private AI assistant you fully own.
Setting this up from scattered documentation takes days of debugging. This live workshop gives you a complete guided build in 4 hours with a live instructor answering your questions. Packt has delivered 108 workshops worldwide.
Six modules covering the complete self-hosted LLM stack for 2026.
Understand the Gateway, channels, and skills architecture. Set up and configure OpenClaw locally from scratch.
Run and manage local LLMs using Docker Model Runner. Pull models, configure memory, and understand the OpenAI-compatible API.
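To make the "OpenAI-compatible API" concrete, here is a minimal sketch of building a chat completion request against a local Model Runner endpoint. The port (12434), URL path, and model name are assumptions for illustration — check `docker model list` and your Docker settings for the actual values on your machine.

```python
import json
from urllib import request

# Assumed local endpoint exposed by Docker Model Runner; verify the port
# and path in your own Docker configuration before use.
BASE_URL = "http://localhost:12434/engines/v1"

def build_chat_request(model: str, prompt: str) -> request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# "ai/llama3.2" is a placeholder model name, not a guaranteed identifier.
req = build_chat_request("ai/llama3.2", "Summarise my unread messages.")
# Sending is left to the caller: urllib.request.urlopen(req) would return
# the familiar OpenAI response shape (choices[0].message.content).
```

Because the request format matches the OpenAI API, any OpenAI-compatible client (including OpenClaw) can point at this base URL instead of a cloud provider.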
Configure DM pairing, allowlists, sandbox mode, and proper access controls for your local AI deployment.
Deploy your AI assistant to real messaging platforms without sending data to any third party cloud service.
Design an extensible assistant architecture. Add skills, configure personality, and set up proactive automation.
Deploy your OpenClaw and Docker setup to a VPS for always-on, 24/7 availability.
A properly self-hosted LLM powering a complete private AI assistant in 2026.
A fully functional local AI assistant running on your machine
Docker Model Runner configured with your chosen LLM
OpenClaw connected to WhatsApp or Telegram
Security and privacy configuration you can trust
A reusable architecture for future AI assistant projects
Certificate of completion from Packt Publishing
Rami Krispin has deployed self-hosted LLMs in production environments using Docker.
Rami is a Senior Manager of Data Science and Engineering, Docker Captain, and LinkedIn Learning Instructor with deep expertise in building and deploying production AI systems. He guides you step by step from a blank terminal to a fully deployed private AI assistant — answering your questions live throughout the 4-hour session.
Developers who want to properly self-host an LLM in 2026 and build something useful with it.
Everything you need to know about running and deploying a self-hosted LLM in 2026.
Self-hosting an LLM in 2026 means running a large language model on hardware you own or control — your laptop, desktop, or a VPS — using tools like Docker Model Runner. Your model runs locally, processes all requests on your own hardware, and sends no data to external AI providers. In this workshop you self-host an LLM and connect it to OpenClaw to build a complete private AI assistant.
The best self-hosted LLMs in 2026 for most developer setups are Llama 3 8B, Mistral 7B Instruct, and Phi-3 Mini. All are available through Docker Model Runner, all are free, and all deliver strong performance for personal AI assistant use cases. The instructor covers the trade-offs between each during the workshop.
Self-hosted LLMs require significant disk space — typically 4GB to 8GB per model for quantised versions of 7B to 8B parameter models. The instructor covers storage requirements for different models and how to manage model storage efficiently during the workshop.
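As a rough back-of-envelope check of those figures, on-disk size scales with parameter count times bits per weight. The 10% overhead factor below is an assumption to account for tokenizer and metadata files, not a measured constant:

```python
def approx_model_size_gb(n_params_billion: float, bits_per_weight: int,
                         overhead: float = 1.1) -> float:
    """Rough on-disk size of a quantised model: parameters x bits per
    weight, plus an assumed ~10% overhead. Illustrative estimate only."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return round(bytes_total * overhead / 1e9, 1)

# An 8B-parameter model at 4-bit quantisation lands in the stated range:
print(approx_model_size_gb(8, 4))  # ~4.4 GB
# The same model unquantised at 8 bits per weight roughly doubles that:
print(approx_model_size_gb(8, 8))  # ~8.8 GB
```

This is why quantised 7B–8B models fit the 4GB–8GB range quoted above, while full-precision versions of the same models would not.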
You can have multiple LLMs installed through Docker Model Runner but running them simultaneously requires sufficient RAM for each. The workshop covers how to manage multiple models efficiently and how to configure OpenClaw to switch between them for different use cases.
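One rough way to budget RAM for keeping several models loaded at once: each model needs its weights in memory plus some working space for its inference cache. The per-model 1 GB cache allowance here is an assumed rule of thumb, not a measured value:

```python
def concurrent_ram_gb(model_sizes_gb: list[float],
                      cache_gb_each: float = 1.0) -> float:
    """Rule-of-thumb RAM needed to keep several quantised models loaded
    simultaneously: weights plus an assumed per-model cache allowance."""
    return sum(size + cache_gb_each for size in model_sizes_gb)

# e.g. a ~4.4 GB 8B model loaded alongside a ~2.2 GB smaller model:
print(round(concurrent_ram_gb([4.4, 2.2]), 1))  # ~8.6 GB, before OS overhead
```

If the total exceeds your available RAM, loading one model at a time and switching between them (as covered in the workshop) is the practical alternative.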
Self-hosting an LLM is more accessible in 2026 than it has ever been, but it still requires comfort with Docker, command-line tools, and basic system administration. This workshop is designed for developers — not beginners. If you are comfortable with Python and basic terminal usage, you have the foundation needed to follow this workshop successfully.
Keeping your self-hosted LLM current in 2026 involves pulling updated model versions through Docker Model Runner when they are released. New versions of popular models are released regularly. The instructor covers the update process and how to evaluate whether a new model version is worth upgrading to during the workshop.
4 hours. Live Docker Captain instructor. Self-hosted LLM running in production by the end. Seats are limited.
Register Now → Sunday April 26 · 9am to 1pm EDT · Online · Packt Publishing