Build Your Own Local Coding Assistant · April 26

Build Your Own Coding Assistant Running Locally — Code Stays on Your Machine

Your own locally running coding assistant gives you capable AI help for development work without sending your code to GitHub Copilot or ChatGPT. This live workshop shows you how to build one using OpenClaw and Docker Model Runner — private, free, and working in 4 hours.

Sunday, April 26   9am to 1pm EDT
4 Hours   Hands-on coding
Live Online   Interactive

Workshop Details

📅
Date and Time
Sunday, April 26, 2026
9:00am to 1:00pm EDT
Duration
4 Hours · Hands-on
💻
Format
Live Online · Interactive
🎓
Includes
Certificate of Completion
🔒
Privacy
100% Local · No Cloud Required
Register on Eventbrite →

By Packt Publishing · Refunds up to 10 days before the event

OpenClaw — 200K+ GitHub Stars
4 Hours Live Hands-On Coding
✦ By Packt Publishing
No Cloud Dependency Required
Certificate of Completion
Why Trust Packt

Over 20 Years of Helping Developers Build Real Skills

7,500+
Books and video courses published for developers worldwide
108
Live workshops and events hosted on Eventbrite
200K+
GitHub stars for OpenClaw — the tool you will master
100%
Hands-on — every session involves real code and live building
About This Workshop

Why Building Your Own Local Coding Assistant Makes Sense in 2026

Open-weight models in 2026 are good enough for most coding-assistance tasks. Docker Model Runner makes running them locally straightforward. Building your own coding assistant takes one 4-hour workshop and spares you both the subscription costs and the IP risk of sending proprietary code to a cloud AI.

🖥

What is OpenClaw?

OpenClaw is the open-source personal AI assistant that went viral in early 2026 with 200K+ GitHub stars. It runs on your own devices and connects to WhatsApp, Telegram, Slack, and more. No subscription. No data leaving your machine.

🐳

What is Docker Model Runner?

Docker Model Runner is Docker's native feature for running large language models locally on your machine. It gives you an OpenAI-compatible API that OpenClaw uses as its AI brain — complete data privacy, no cloud costs.
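As a sketch of what "OpenAI-compatible API" means in practice, the snippet below builds a standard chat-completions request and sends it to a locally running model. The host address, port, and model name here are assumptions for illustration; check your own Docker Desktop Model Runner settings for the actual endpoint.

```python
import json
import urllib.request

# Assumed local endpoint for Docker Model Runner's OpenAI-compatible API.
# The exact address depends on your Docker setup; verify it locally.
BASE_URL = "http://localhost:12434/engines/v1"

def chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(model: str, prompt: str) -> str:
    """POST the payload to the local endpoint and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(chat_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example call (requires a model pulled and running locally):
# print(ask("ai/llama3.2", "Explain this regex: ^\\d{3}-\\d{4}$"))
```

Because the request shape matches OpenAI's API, any client library or tool that speaks that API can point at your local endpoint instead of the cloud.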

🔗

Why Combine OpenClaw and Docker?

OpenClaw gives you the assistant interface and messaging integrations. Docker Model Runner gives you the AI brain running privately on your machine. Together they create a production-grade private AI assistant you fully own.

🎯

Why Attend as a Live Workshop?

Setting this up from scattered documentation takes days of debugging. This live workshop gives you a complete guided build in 4 hours with a live instructor answering your questions. Packt has delivered 108 workshops worldwide.

Workshop Curriculum

How to Build Your Own Local Coding Assistant

Six modules. Four hours. One working private AI assistant by the time you finish.

01

How OpenClaw Works

Understand the Gateway, channels, and skills architecture. Set up and configure OpenClaw locally from scratch.

02

Docker Model Runner Setup

Run and manage local LLMs using Docker Model Runner. Pull models, configure memory, and understand the OpenAI-compatible API.

03

Security and Privacy

Configure DM pairing, allowlists, sandbox mode, and proper access controls for your local AI deployment.

04

Connect to WhatsApp or Telegram

Deploy your AI assistant to real messaging platforms without sending data to any third-party cloud service.

05

Scalable Architecture

Design an extensible assistant architecture. Add skills, configure personality, and set up proactive automation.

06

Production Deployment

Deploy your OpenClaw and Docker setup to a VPS for always-on, 24/7 availability.

What You Walk Away With

By the End of This Workshop You Will Have

Concrete working deliverables — not just theory.

A fully functional local AI assistant running on your machine

Docker Model Runner configured with your chosen LLM

OpenClaw connected to WhatsApp or Telegram

Security and privacy configuration you can trust

A reusable architecture for future AI assistant projects

Certificate of completion from Packt Publishing

Your Instructor

Learn to Build a Local Coding Assistant From a Developer Who Uses One

Rami Krispin has built his own locally running coding assistant and uses it in daily development work.

Rami Krispin

Workshop Instructor · April 26, 2026

Rami is a Senior Manager of Data Science and Engineering, Docker Captain, and LinkedIn Learning Instructor with deep expertise in building and deploying production AI systems. He guides you step by step from a blank terminal to a fully deployed private AI assistant — answering your questions live throughout the 4-hour session.

Prerequisites

Who Is This Workshop For?

You do not need to be an expert, but you do need the basics: comfort with the command line and a machine that can run Docker.

Frequently Asked Questions

Common Questions About Building a Local Coding Assistant

Common questions about the workshop, what to expect, and how to prepare.

What can my own locally built coding assistant do?

Your locally built coding assistant handles code review and suggestions, explains unfamiliar code, helps debug errors, generates code from descriptions, writes tests, explains error messages, helps with regex and SQL, assists with documentation, and answers technical questions — all without sending your code to any external AI service.

How do I interact with my locally built coding assistant?

Your coding assistant is accessible through WhatsApp or Telegram — you paste code or ask questions in chat and receive AI responses. This conversational interface makes it easy to have back-and-forth technical discussions and iterate on code suggestions without leaving your messaging app.

Is building my own local coding assistant better than paying for Copilot?

It depends on your use case. If inline IDE autocomplete is your primary need, Copilot has a workflow advantage. If you want a conversational coding assistant for code review, explaining code, and answering technical questions — particularly for proprietary or sensitive code — building your own local coding assistant offers better privacy, zero ongoing cost, and full control.

What open-weight models perform best for a locally built coding assistant?

Llama 3 8B and Mistral 7B Instruct are strong general-purpose choices that handle coding tasks well. The instructor compares model performance for coding tasks during the workshop to help you choose the right model.
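As a rough sizing guide (an estimate, not a figure from the workshop materials): a 4-bit-quantized model needs about half a byte of memory per parameter, plus runtime overhead for the KV cache and inference engine. A quick back-of-envelope calculation:

```python
def est_memory_gb(params_billion: float, bytes_per_weight: float,
                  overhead: float = 1.2) -> float:
    """Rough RAM estimate: model weights plus ~20% runtime overhead.

    The 20% overhead factor is a ballpark assumption, not a measured value.
    """
    return params_billion * bytes_per_weight * overhead

# 4-bit quantization stores roughly 0.5 bytes per weight.
llama3_8b_q4 = est_memory_gb(8, 0.5)    # ~4.8 GB
mistral_7b_q4 = est_memory_gb(7, 0.5)   # ~4.2 GB
phi3_mini_q4 = est_memory_gb(3.8, 0.5)  # ~2.3 GB
```

By this estimate, any of these quantized models fits comfortably on a 16GB laptop, which is why they are common starting points for local assistants.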

Can I extend my locally built coding assistant with additional capabilities?

Yes. OpenClaw's skills system lets you add capabilities beyond conversational coding assistance — such as querying your local documentation or automating code review workflows. The instructor covers the skills architecture so you can extend your coding assistant after the workshop.

How long does my own locally built coding assistant take to respond to code questions?

Response time depends on query complexity and hardware. Simple questions receive responses in 3 to 10 seconds on a laptop with 16GB RAM. Complex code review tasks take 10 to 30 seconds. Phi-3 Mini provides quicker responses at slightly reduced quality.

Build Local Coding Assistant · April 26, 2026

Ready to Build Your Own Locally Running Coding Assistant?

4 hours. Live instructor. Your own local coding assistant by the end. Seats are limited.

Register Now →

Sunday April 26 · 9am to 1pm EDT · Online · Packt Publishing