Building a WhatsApp AI bot locally means your AI model runs on your own machine, your conversations stay private, and you pay nothing per message. This live workshop shows you how to build one using OpenClaw and Docker Model Runner in 4 hours.
By Packt Publishing · Refunds available up to 10 days before the event
A locally built WhatsApp AI bot gives you complete control — you choose the model, you own the data, you set the rules. No cloud AI provider can change the terms, raise prices, or access your conversations. This workshop shows you how to build it properly.
OpenClaw is the open-source personal AI assistant that went viral in early 2026 with 200K+ GitHub stars. It runs on your own devices and connects to WhatsApp, Telegram, Slack and more. No subscription. No data leaving your machine.
Docker Model Runner is Docker's native feature for running large language models locally on your own machine. It exposes an OpenAI-compatible API that OpenClaw uses as its AI brain — complete data privacy, no cloud costs.
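Because the API is OpenAI-compatible, any standard HTTP client can talk to it. The sketch below is a minimal illustration using only the Python standard library; the port 12434 base URL and the `ai/smollm2` model tag are assumptions — check your own Model Runner configuration before relying on them.

```python
import json
import urllib.request

# Docker Model Runner exposes an OpenAI-compatible API once TCP access is
# enabled; port 12434 is a commonly documented default (verify on your setup).
BASE_URL = "http://localhost:12434/engines/v1"

def build_chat_request(model: str, user_message: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for a chat-completions call."""
    payload = {
        "model": model,  # e.g. "ai/smollm2" -- whichever model you pulled
        "messages": [
            {"role": "system", "content": "You are a private assistant."},
            {"role": "user", "content": user_message},
        ],
    }
    return f"{BASE_URL}/chat/completions", json.dumps(payload).encode()

def ask(model: str, user_message: str) -> str:
    """Send the request and return the assistant's reply text."""
    url, body = build_chat_request(model, user_message)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

This is the same request shape OpenClaw sends under the hood, which is why any OpenAI-compatible backend can serve as its brain.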
OpenClaw gives you the assistant interface and messaging integrations. Docker Model Runner gives you the AI brain running privately on your machine. Together they create a production-grade private AI assistant you fully own.
Setting this up from scattered documentation takes days of debugging. This live workshop gives you a complete guided build in 4 hours with a live instructor answering your questions. Packt has delivered 108 workshops worldwide.
Six modules covering the complete local WhatsApp AI bot from setup to deployment.
Understand the Gateway, channels, and skills architecture. Set up and configure OpenClaw locally from scratch.
Run and manage local LLMs using Docker Model Runner. Pull models, configure memory, and understand the OpenAI-compatible API.
Configure DM pairing, allowlists, sandbox mode, and proper access controls for your local AI deployment.
Deploy your AI assistant to real messaging platforms without sending data to any third-party cloud service.
Design an extensible assistant architecture. Add skills, configure personality, and set up proactive automation.
Deploy your OpenClaw and Docker setup to a VPS for always-on, 24/7 availability.
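Module two's model-management steps run through the `docker model` CLI subcommand. The sketch below only assembles the command lines rather than executing them; the `ai/smollm2` tag is an example model, so substitute whatever you plan to pull.

```python
import subprocess

def docker_model(*args: str) -> list[str]:
    """Build a `docker model ...` command line as an argument list."""
    return ["docker", "model", *args]

# Pull an open-weight model (example tag -- pick one that fits your RAM):
pull_cmd = docker_model("pull", "ai/smollm2")

# List the models already available locally:
ls_cmd = docker_model("ls")

# Uncomment to actually run (requires Docker with Model Runner enabled):
# subprocess.run(pull_cmd, check=True)
# subprocess.run(ls_cmd, check=True)
```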
A WhatsApp AI bot running locally — deployed, secured, and responding in your chats.
A fully functional local AI assistant running on your machine
Docker Model Runner configured with your chosen LLM model
OpenClaw connected to WhatsApp or Telegram
Security and privacy configuration you can trust
A reusable architecture for future AI assistant projects
Certificate of completion from Packt Publishing
Rami Krispin has built and deployed local WhatsApp AI bots in production.
Rami is a Senior Manager of Data Science and Engineering, Docker Captain, and LinkedIn Learning Instructor with deep expertise in building and deploying production AI systems. He guides you step by step from a blank terminal to a fully deployed private AI assistant — answering your questions live throughout the 4-hour session.
Developers who want to build a real WhatsApp AI bot running locally on their own hardware.
Everything you need to know about local WhatsApp AI bot development.
Building a WhatsApp AI bot locally involves three main components: Docker Model Runner to run your chosen open-weight LLM, OpenClaw to handle the WhatsApp channel integration and message routing, and a VPS or always-on local machine to keep it running. This workshop covers all three and connects them into a working private WhatsApp AI bot.
OpenClaw uses WhatsApp's standard personal messaging protocols for the connection. Building a WhatsApp AI bot locally for personal use is generally acceptable under WhatsApp's terms. For commercial use at scale, Meta's official WhatsApp Business API is the appropriate route. The instructor covers terms of service considerations during the workshop.
WhatsApp's built-in AI features send your conversations to Meta's AI infrastructure. A locally built WhatsApp AI bot powered by Docker Model Runner processes your messages entirely on your own hardware — your conversations never reach Meta's AI systems. You also have full control over the AI model, its behaviour, and its capabilities.
The workshop focuses on text-based WhatsApp AI bot functionality using open-weight text models through Docker Model Runner. Multimodal capabilities (images, voice) are a more advanced topic beyond the scope of this 4-hour session. The instructor briefly covers what is possible for extension after the workshop.
There is no imposed limit on your locally built WhatsApp AI bot. The practical limit is your hardware's processing speed. A 7B parameter model on 16GB RAM typically handles personal assistant conversation volumes — a few dozen messages per day — without any performance issues.
Security configuration is covered in module three of this workshop. You will configure DM pairing so only authorised numbers can interact with your bot, set up an allowlist of permitted WhatsApp contacts, enable sandbox mode for safe testing, and configure proper access controls before connecting to WhatsApp.
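At its core, the allowlist control covered in module three is a membership check performed before any message reaches the model. The sketch below illustrates that idea only — the function, field names, and phone numbers are hypothetical, not OpenClaw's real configuration format, which the workshop covers in detail.

```python
# Hypothetical allowlist gate -- illustrates the access-control idea,
# not OpenClaw's actual API or config schema.
ALLOWED_NUMBERS = {"+15551234567", "+15557654321"}  # example paired contacts

def should_handle(sender: str, sandbox: bool = False) -> bool:
    """Only process messages from allowlisted senders.

    In sandbox mode every message is handled, but against a test
    environment rather than the live account."""
    if sandbox:
        return True
    return sender in ALLOWED_NUMBERS

print(should_handle("+15551234567"))  # True
print(should_handle("+19990000000"))  # False
```

The important design point is ordering: the gate runs before the model is invoked, so unknown senders never consume inference time or see a reply.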
4 hours. Live instructor. Local WhatsApp AI bot working by the end. Seats are limited.
Register Now → Sunday, April 26 · 9am to 1pm EDT · Online · Packt Publishing