Most WhatsApp AI bots use the OpenAI API. This live workshop shows you how to build a WhatsApp bot powered by a local LLM running through Docker Model Runner — no API costs, no data sent to OpenAI, and a genuinely private AI assistant in your WhatsApp.
By Packt Publishing · Refunds available up to 10 days before the event
An OpenAI-powered WhatsApp bot costs money per message and sends your conversations to OpenAI's servers. A WhatsApp bot powered by a local LLM through Docker Model Runner costs nothing per message and processes everything on your own hardware.
OpenClaw is the open-source personal AI assistant that went viral in early 2026 with 200K+ GitHub stars. It runs on your own devices and connects to WhatsApp, Telegram, Slack and more. No subscription. No data leaving your machine.
Docker Model Runner is Docker's native feature for running large language models locally on your machine. It gives you an OpenAI-compatible API that OpenClaw uses as its AI brain — complete data privacy, no cloud costs.
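Because the API is OpenAI-compatible, talking to it looks just like talking to OpenAI, only pointed at your own machine. Here is a minimal sketch; the base URL, port, and model tag are assumptions (check `docker model ls` and your Docker Desktop settings for the actual values):

```python
# Sketch: calling Docker Model Runner's OpenAI-compatible chat endpoint.
# BASE_URL and MODEL below are assumptions, not guaranteed defaults.
import json
import urllib.request

BASE_URL = "http://localhost:12434/engines/v1"  # assumed host-side TCP port
MODEL = "ai/llama3.2"                           # assumed model tag

def build_chat_request(user_message: str) -> dict:
    """Build a standard OpenAI-style chat completion payload."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "You are a private WhatsApp assistant."},
            {"role": "user", "content": user_message},
        ],
    }

def ask(user_message: str) -> str:
    """POST the payload to the local runner and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(user_message)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Any OpenAI client library works the same way once you point its base URL at the local endpoint — which is exactly how OpenClaw plugs in.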
OpenClaw gives you the assistant interface and messaging integrations. Docker Model Runner gives you the AI brain running privately on your machine. Together they create a production-grade private AI assistant you fully own.
Setting this up from scattered documentation takes days of debugging. This live workshop gives you a complete guided build in 4 hours with a live instructor answering your questions. Packt has delivered 108 workshops worldwide.
Six modules. From local LLM setup to a fully working private WhatsApp AI bot.
Understand the Gateway, channels, and skills architecture. Set up and configure OpenClaw locally from scratch.
Run and manage local LLMs using Docker Model Runner. Pull models, configure memory, and understand the OpenAI-compatible API.
Configure DM pairing, allowlists, sandbox mode, and proper access controls for your local AI deployment.
Deploy your AI assistant to real messaging platforms without sending data to any third-party cloud service.
Design an extensible assistant architecture. Add skills, configure personality, and set up proactive automation.
Deploy your OpenClaw and Docker setup to a VPS for always-on, 24/7 availability.
A WhatsApp bot powered by a local LLM — private, fast, and free to run.
A fully functional local AI assistant running on your machine
Docker Model Runner configured with your chosen LLM model
OpenClaw connected to WhatsApp or Telegram
Security and privacy configuration you can trust
A reusable architecture for future AI assistant projects
Certificate of completion from Packt Publishing
Rami Krispin has built local LLM WhatsApp integrations in production environments.
Rami is a Senior Manager of Data Science and Engineering, Docker Captain, and LinkedIn Learning Instructor with deep expertise in building and deploying production AI systems. He guides you step by step from a blank terminal to a fully deployed private AI assistant — answering your questions live throughout the 4-hour session.
Developers who want a WhatsApp AI bot powered by a local LLM — not a cloud API.
Everything you need to know about local LLM WhatsApp bot development.
When you send a WhatsApp message to your bot, OpenClaw receives it through its WhatsApp channel integration and passes it to Docker Model Runner's local API, which processes it with your locally running open-weight LLM. The response comes back to OpenClaw, which delivers it to your WhatsApp. The entire AI processing chain runs on your own hardware.
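The chain above can be sketched in a few lines. All names here are hypothetical and purely illustrative, not OpenClaw's real API — the point is that the handler only ever talks to a local model function and a local channel sender:

```python
# Illustrative message-flow sketch: WhatsApp channel -> local LLM -> channel.
# IncomingMessage and handle_whatsapp_message are hypothetical names.
from dataclasses import dataclass
from typing import Callable

@dataclass
class IncomingMessage:
    sender: str  # WhatsApp number of the contact
    text: str    # message body

def handle_whatsapp_message(
    msg: IncomingMessage,
    generate_reply: Callable[[str], str],   # e.g. a call to the local runner
    send_reply: Callable[[str, str], None], # back out through the channel
) -> str:
    reply = generate_reply(msg.text)  # AI inference happens locally
    send_reply(msg.sender, reply)     # response returns to WhatsApp
    return reply
```

Swapping `generate_reply` between a cloud API and a local endpoint is the whole difference between a cloud bot and a private one.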
Yes — local LLM inference is generally slower than the OpenAI API on typical hardware. On a laptop with 16 GB of RAM, expect responses in 5 to 20 seconds for most messages. That is acceptable for a personal assistant, where instant responses are not expected. Deploying to a VPS with more powerful hardware improves response times.
Yes. OpenClaw's allowlist system lets you add multiple authorised WhatsApp contacts who can all interact with your local LLM WhatsApp bot. Each contact sends messages to the same connected WhatsApp number and receives AI responses powered by your local model.
For WhatsApp bot use cases where response time matters, smaller and faster models like Phi-3 Mini (3.8B) or Mistral 7B work well. For higher response quality at the cost of speed, Llama 3 8B is an excellent choice. The instructor covers model selection and performance trade-offs during the workshop.
OpenClaw maintains conversation context within a session. The instructor covers how OpenClaw manages context and what options are available for configuring context window length and memory retention for your local LLM WhatsApp bot during the workshop.
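A common pattern for bounded conversation memory, regardless of framework, is a rolling window: keep only the most recent turns so the prompt stays inside the local model's context window. This sketch shows the idea only; it is not OpenClaw's implementation:

```python
# Hypothetical rolling-window memory: old turns fall off automatically
# once the deque reaches its maximum length.
from collections import deque

def make_history(max_turns: int = 8) -> deque:
    return deque(maxlen=max_turns)

history = make_history(max_turns=2)
history.append({"role": "user", "content": "hi"})
history.append({"role": "assistant", "content": "hello"})
history.append({"role": "user", "content": "what's the weather?"})
# only the 2 most recent turns remain in `history`
```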
The final module of this workshop covers deploying your OpenClaw setup to a VPS for always-on availability. Once deployed, your local LLM WhatsApp bot runs 24 hours a day — responding to messages even when your laptop is off.
4 hours. Live instructor. Working WhatsApp bot with local LLM by the end. Seats are limited.
Register Now → Sunday, April 26 · 9am to 1pm EDT · Online · Packt Publishing