Your own locally running coding assistant gives you capable AI help for development work without sending your code to GitHub Copilot or ChatGPT. This live workshop shows you how to build one using OpenClaw and Docker Model Runner — private, free, and working in 4 hours.
By Packt Publishing · Refunds available up to 10 days before the event
Open-weight models in 2026 are good enough for most coding assistance tasks, and Docker Model Runner makes running them locally straightforward. Building your own coding assistant takes one 4-hour workshop and saves you both subscription costs and the IP risk of sending proprietary code to cloud AI.
OpenClaw is the open-source personal AI assistant that went viral in early 2026 with 200K+ GitHub stars. It runs on your own devices and connects to WhatsApp, Telegram, Slack and more. No subscription. No data leaving your machine.
Docker Model Runner is Docker's native feature for running large language models locally on your machine. It gives you an OpenAI-compatible API that OpenClaw uses as its AI brain — complete data privacy, no cloud costs.
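To make "OpenAI-compatible API" concrete, here is a minimal sketch of how a client could talk to a locally running model. The base URL, port, and model name are assumptions for illustration — the actual values depend on how Model Runner is configured on your machine, which the workshop walks through.

```python
import json
import urllib.request

# Assumed local endpoint -- Docker Model Runner exposes an OpenAI-compatible
# API; the exact host and port depend on your configuration.
BASE_URL = "http://localhost:12434/engines/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(model: str, prompt: str) -> str:
    """POST the payload to the local endpoint and return the reply text."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: first choice, message content
    return body["choices"][0]["message"]["content"]

# Example (model name is a placeholder):
#   ask("ai/llama3", "Explain this regex: ^\\d{3}-\\d{4}$")
```

Because the API follows the OpenAI wire format, any OpenAI-compatible client — including OpenClaw — can point at this endpoint with no cloud account involved.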
OpenClaw gives you the assistant interface and messaging integrations. Docker Model Runner gives you the AI brain running privately on your machine. Together they create a production-grade private AI assistant you fully own.
Setting this up from scattered documentation takes days of debugging. This live workshop gives you a complete guided build in 4 hours with a live instructor answering your questions. Packt has delivered 108 workshops worldwide.
Six modules. Four hours. One working private AI assistant by the time you finish.
Understand the Gateway, channels, and skills architecture. Set up and configure OpenClaw locally from scratch.
Run and manage local LLMs using Docker Model Runner. Pull models, configure memory, and understand the OpenAI-compatible API.
Configure DM pairing, allowlists, sandbox mode, and proper access controls for your local AI deployment.
Deploy your AI assistant to real messaging platforms without sending data to any third party cloud service.
Design an extensible assistant architecture. Add skills, configure personality, and set up proactive automation.
Deploy your OpenClaw and Docker setup to a VPS for always-on, 24/7 availability.
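As a taste of the final module, an always-on VPS deployment can be expressed as a Compose file with a restart policy. This is an illustrative sketch only — the image name, environment variable, and endpoint are placeholders, not OpenClaw's actual configuration; the workshop covers the real values.

```yaml
# Illustrative docker-compose sketch for an always-on VPS deployment.
# Image name and settings are placeholders -- consult the OpenClaw and
# Docker Model Runner documentation for the real values.
services:
  openclaw:
    image: openclaw/openclaw:latest          # placeholder image name
    restart: unless-stopped                  # survive reboots for 24/7 uptime
    environment:
      # Point the assistant at the host's OpenAI-compatible endpoint
      # (port and path assumed; match your Model Runner configuration)
      OPENAI_BASE_URL: http://host.docker.internal:12434/engines/v1
    extra_hosts:
      - "host.docker.internal:host-gateway"  # reach the host from the container on Linux
```

The key idea is `restart: unless-stopped`: Docker brings the assistant back up automatically after crashes or server reboots, which is what turns a laptop demo into an always-on service.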
Concrete working deliverables — not just theory.
A fully functional local AI assistant running on your machine
Docker Model Runner configured with your chosen LLM model
OpenClaw connected to WhatsApp or Telegram
Security and privacy configuration you can trust
A reusable architecture for future AI assistant projects
Certificate of completion from Packt Publishing
Rami Krispin has built his own locally running coding assistant and uses it in daily development work.
Rami is a Senior Manager of Data Science and Engineering, Docker Captain, and LinkedIn Learning Instructor with deep expertise in building and deploying production AI systems. He guides you step by step from a blank terminal to a fully deployed private AI assistant — answering your questions live throughout the 4-hour session.
You do not need to be an expert. You do need the basics.
Common questions about the workshop, what to expect, and how to prepare.
Your locally built coding assistant handles code review and suggestions, explains unfamiliar code, helps debug errors, generates code from descriptions, writes tests, explains error messages, helps with regex and SQL, assists with documentation, and answers technical questions — all without sending your code to any external AI service.
Your coding assistant is accessible through WhatsApp or Telegram — you paste code or ask questions in chat and receive AI responses. This conversational interface makes it easy to have back-and-forth technical discussions and iterate on code suggestions without leaving your messaging app.
It depends on your use case. If inline IDE autocomplete is your primary need, Copilot has a workflow advantage. If you want a conversational coding assistant for code review, explaining code, and answering technical questions — particularly for proprietary or sensitive code — building your own local coding assistant offers better privacy, zero ongoing cost, and full control.
Llama 3 8B and Mistral 7B Instruct are strong general-purpose choices that handle coding tasks well. The instructor compares model performance for coding tasks during the workshop to help you choose the right model.
Yes. OpenClaw's skills system lets you add capabilities beyond conversational coding assistance — such as querying your local documentation or automating code review workflows. The instructor covers the skills architecture so you can extend your coding assistant after the workshop.
Response time depends on query complexity and hardware. Simple questions receive responses in 3 to 10 seconds on a laptop with 16GB RAM. Complex code review tasks take 10 to 30 seconds. Phi-3 Mini provides quicker responses at slightly reduced quality.
4 hours. Live instructor. Your own local coding assistant by the end. Seats are limited.
Register Now → Sunday April 26 · 9am to 1pm EDT · Online · Packt Publishing