Setting up Docker Model Runner properly is the foundation of a reliable local AI setup. This live guide covers the complete setup — model selection, memory configuration, API endpoint setup, and connecting it to OpenClaw for a working private AI assistant.
By Packt Publishing · Refunds up to 10 days before the event
Installing Docker Model Runner takes minutes. Configuring it correctly for production use — right model size, memory limits, API settings, and OpenClaw integration — is what this live setup guide covers in full.
OpenClaw is the open-source personal AI assistant that went viral in early 2026 with 200K+ GitHub stars. It runs on your own devices and connects to WhatsApp, Telegram, Slack, and more. No subscription. No data leaving your machine.
Docker Model Runner is Docker's native feature for running large language models locally on your machine. It gives you an OpenAI-compatible API that OpenClaw uses as its AI brain — complete data privacy, no cloud costs.
OpenClaw gives you the assistant interface and messaging integrations. Docker Model Runner gives you the AI brain running privately on your machine. Together they create a production-grade private AI assistant you fully own.
Setting this up from scattered documentation takes days of debugging. This live workshop gives you a complete guided build in 4 hours with a live instructor answering your questions. Packt has delivered 108 workshops worldwide.
Six modules. From Docker Model Runner installation to a fully configured private AI assistant.
Understand the Gateway, channels, and skills architecture. Set up and configure OpenClaw locally from scratch.
Run and manage local LLMs using Docker Model Runner. Pull models, configure memory, and understand the OpenAI-compatible API.
Configure DM pairing, allowlists, sandbox mode, and proper access controls for your local AI deployment.
Deploy your AI assistant to real messaging platforms without sending data to any third-party cloud service.
Design an extensible assistant architecture. Add skills, configure personality, and set up proactive automation.
Deploy your OpenClaw and Docker setup to a VPS for always-on, 24/7 availability.
A properly configured Docker Model Runner powering a working OpenClaw assistant.
A fully functional local AI assistant running on your machine
Docker Model Runner configured with your chosen LLM
OpenClaw connected to WhatsApp or Telegram
Security and privacy configuration you can trust
A reusable architecture for future AI assistant projects
Certificate of completion from Packt Publishing
Rami Krispin is a Docker Captain with real production Docker Model Runner experience.
Rami is a Senior Manager of Data Science and Engineering, Docker Captain, and LinkedIn Learning Instructor with deep expertise in building and deploying production AI systems. He guides you step by step from a blank terminal to a fully deployed private AI assistant — answering your questions live throughout the 4-hour session.
Developers who want to set up Docker Model Runner correctly for local AI inference.
Everything you need to know about setting up Docker Model Runner correctly.
The correct Docker Model Runner setup sequence is: install or update Docker Desktop to a version that includes Model Runner, enable the Model Runner feature in Docker Desktop settings, pull your chosen open-weight model with the Docker Model Runner CLI, verify that the OpenAI-compatible API endpoint is reachable, then connect your application (in this case OpenClaw) to the local endpoint.
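In terminal terms, that sequence looks roughly like the sketch below. The model name is an illustrative choice, and exact commands may vary slightly between Docker Desktop versions:

```shell
# Sketch of the setup sequence. Assumes a recent Docker Desktop
# with Model Runner enabled under Settings.
docker model status                    # confirm Model Runner is enabled and running
docker model pull ai/smollm2           # pull an open-weight model (illustrative choice)
docker model run ai/smollm2 "Hello"    # one-shot smoke test of local inference
```

If the smoke test returns a completion, the model is running locally and you can move on to wiring up the API endpoint.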
Docker Model Runner requires a recent version of Docker Desktop. The instructor covers the minimum required version and how to update Docker Desktop during the first module of this live setup guide.
Memory configuration is a critical part of Docker Model Runner setup. The instructor covers how to set memory limits appropriate for your hardware, how to choose models that fit within your available RAM, and how to monitor memory usage during inference.
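As a rough rule of thumb (a back-of-the-envelope estimate, not Docker Model Runner's own accounting): a model needs about parameters × bits-per-weight / 8 bytes of RAM for its weights, plus overhead for the KV cache and runtime. A quick sketch:

```shell
# Rough RAM estimate for a quantized model.
# bytes ≈ parameters × bits_per_weight / 8, plus ~20% overhead
# for KV cache and runtime (the 20% figure is an assumption).
estimate_gb() {
  params_b=$1   # parameters, in billions
  bits=$2       # quantization bits per weight (e.g. 4 for Q4)
  awk -v p="$params_b" -v b="$bits" 'BEGIN { printf "%.1f\n", p * b / 8 * 1.2 }'
}

estimate_gb 7 4    # a 7B model at 4-bit: roughly 4 GB
estimate_gb 7 16   # the same model at fp16: roughly 17 GB
```

This is why quantized models are the default choice for laptops: the same 7B model that fits comfortably at 4-bit would crowd out most 16 GB machines at fp16.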
Docker Model Runner exposes an OpenAI-compatible API endpoint on a local port. This setup guide covers the default port configuration, how to change it if needed, and how to configure OpenClaw to connect to the correct endpoint. The instructor covers all networking aspects of the Docker Model Runner setup.
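For illustration, a chat-completion request against the local endpoint might look like this. Port 12434 is the commonly documented default for host-side TCP access, but verify it in your Docker Desktop settings; the model name is an example:

```shell
# OpenAI-compatible chat completion against the local Model Runner.
# Assumes host TCP access is enabled in Docker Desktop (default port 12434).
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/smollm2",
        "messages": [{"role": "user", "content": "Say hello in one word."}]
      }'
```

Any OpenAI-compatible client, OpenClaw included, can then be pointed at that base URL instead of the OpenAI cloud API.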
This setup guide covers verification steps for each stage of the Docker Model Runner configuration. You will learn to test the model inference directly, verify the API endpoint responds correctly, and confirm OpenClaw is successfully communicating with Docker Model Runner before moving to messaging platform integration.
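A minimal verification pass, assuming the defaults above and an illustrative model name, could be:

```shell
docker model list                                   # 1. model pulled and present?
docker model run ai/smollm2 "Reply with OK"         # 2. inference itself works?
curl -s http://localhost:12434/engines/v1/models    # 3. OpenAI-compatible API reachable?
```

Checking each layer in order (model, inference, API) makes it much easier to localize a failure before bringing OpenClaw into the picture.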
Yes. Docker Model Runner supports running and switching between multiple open-weight models. This setup guide covers how to manage multiple models, switch between them in OpenClaw, and configure different models for different use cases during the live session.
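Managing several models is mostly a matter of pulling each one and selecting it by name per request. The model names below are illustrative:

```shell
docker model pull ai/llama3.2
docker model pull ai/qwen2.5
docker model list              # both models now available locally

# The "model" field in each API request picks which one answers:
curl -s http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ai/qwen2.5", "messages": [{"role": "user", "content": "Hi"}]}'
```

Because selection happens per request, one Model Runner instance can serve a small fast model for chat and a larger one for heavier tasks.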
4 hours. Live Docker Captain instructor. Complete setup by the end. Seats are limited.
Register Now → Sunday April 26 · 9am to 1pm EDT · Online · Packt Publishing