WhatsApp Bot With Local LLM · Live · April 26

Build a WhatsApp Bot Powered by a Local LLM — Not the OpenAI API

Most WhatsApp AI bots use the OpenAI API. This live workshop shows you how to build a WhatsApp bot powered by a local LLM running through Docker Model Runner — no API costs, no data sent to OpenAI, and a genuinely private AI assistant in your WhatsApp.

Sunday, April 26   9am to 1pm EDT
4 Hours   Hands-on coding
Live Online   Interactive

Workshop Details

📅
Date and Time
Sunday, April 26, 2026
9:00am to 1:00pm EDT
Duration
4 Hours · Hands-on
💻
Format
Live Online · Interactive
🎓
Includes
Certificate of Completion
🔒
Privacy
100% Local · No Cloud Required
Register on Eventbrite →

By Packt Publishing · Refunds up to 10 days before

OpenClaw — 200K+ GitHub Stars
4 Hours Live Hands-On Coding
✦ By Packt Publishing
No Cloud Dependency Required
Certificate of Completion
Why Trust Packt

Over 20 Years of Helping Developers Build Real Skills

7,500+
Books and video courses published for developers worldwide
108
Live workshops and events hosted on Eventbrite
200K+
GitHub stars for OpenClaw — the tool you will master
100%
Hands-on — every session involves real code and live building
About This Workshop

Why a Local LLM Beats the OpenAI API for a Private WhatsApp Bot

An OpenAI-powered WhatsApp bot costs money per message and sends your conversations to OpenAI's servers. A WhatsApp bot powered by a local LLM through Docker Model Runner costs nothing per message and processes everything on your own hardware.

🖥

What is OpenClaw?

OpenClaw is the open-source personal AI assistant that went viral in early 2026 with 200K+ GitHub stars. It runs on your own devices and connects to WhatsApp, Telegram, Slack, and more. No subscription. No data leaving your machine.

🐳

What is Docker Model Runner?

Docker Model Runner is Docker's native feature for running large language models locally on your machine. It gives you an OpenAI-compatible API that OpenClaw uses as its AI brain — complete data privacy, no cloud costs.
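To give a feel for what this looks like in practice, here is a minimal sketch of a Docker Model Runner session. The model tag and the port are illustrative — your own setup may differ, and the HTTP endpoint is only reachable when TCP host access is enabled in Docker Desktop.

```shell
# Illustrative session -- model tag and port may differ on your machine.
docker model pull ai/llama3.2        # download an open-weight model
docker model list                    # confirm it is available locally
docker model run ai/llama3.2 "Hi!"   # quick one-shot sanity check

# Docker Model Runner also exposes an OpenAI-compatible HTTP API
# (commonly on localhost:12434 when TCP host access is enabled):
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ai/llama3.2",
       "messages": [{"role": "user", "content": "Hello"}]}'
```

Because the API speaks the OpenAI wire format, any OpenAI-compatible client — including OpenClaw — can point at this local endpoint instead of the cloud.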

🔗

Why Combine OpenClaw and Docker?

OpenClaw gives you the assistant interface and messaging integrations. Docker Model Runner gives you the AI brain running privately on your machine. Together they create a production-grade private AI assistant you fully own.

🎯

Why Attend as a Live Workshop?

Setting this up from scattered documentation takes days of debugging. This live workshop gives you a complete guided build in 4 hours with a live instructor answering your questions. Packt has delivered 108 workshops worldwide.

Workshop Curriculum

How to Build a WhatsApp Bot With a Local LLM

Six modules. From local LLM setup to a fully working private WhatsApp AI bot.

01

How OpenClaw Works

Understand the Gateway, channels, and skills architecture. Set up and configure OpenClaw locally from scratch.

02

Docker Model Runner Setup

Run and manage local LLMs using Docker Model Runner. Pull models, configure memory, and understand the OpenAI-compatible API.

03

Security and Privacy

Configure DM pairing, allowlists, sandbox mode, and proper access controls for your local AI deployment.

04

Connect to WhatsApp or Telegram

Deploy your AI assistant to real messaging platforms without sending data to any third-party cloud service.

05

Scalable Architecture

Design an extensible assistant architecture. Add skills, configure personality, and set up proactive automation.

06

Production Deployment

Deploy your OpenClaw and Docker Model Runner setup to a VPS so your assistant stays available 24 hours a day, even when your laptop is off.

What You Walk Away With

By the End of This Workshop You Will Have

A WhatsApp bot powered by a local LLM — private, fast, and free to run.

A fully functional local AI assistant running on your machine

Docker Model Runner configured with your chosen LLM model

OpenClaw connected to WhatsApp or Telegram

Security and privacy configuration you can trust

A reusable architecture for future AI assistant projects

Certificate of completion from Packt Publishing

Your Instructor

Learn WhatsApp Bot Development With Local LLMs From a Docker Captain

Rami Krispin has built local LLM WhatsApp integrations in production environments.

Rami Krispin

Workshop Instructor · April 26, 2026

Rami is a Senior Manager of Data Science and Engineering, Docker Captain, and LinkedIn Learning Instructor with deep expertise in building and deploying production AI systems. He guides you step by step from a blank terminal to a fully deployed private AI assistant — answering your questions live throughout the 4-hour session.

Prerequisites

Who Is This Workshop For?

Developers who want a WhatsApp AI bot powered by a local LLM — not a cloud API.

Frequently Asked Questions

Common Questions About Building a WhatsApp Bot With a Local LLM

Everything you need to know about local LLM WhatsApp bot development.

How does a WhatsApp bot powered by a local LLM actually work? +

When you send a WhatsApp message to your bot, OpenClaw receives it through its WhatsApp channel integration and passes it to Docker Model Runner's local API, which processes it using your locally running open-weight LLM. The response comes back to OpenClaw, which sends it to your WhatsApp. The entire AI processing chain runs on your own hardware.
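The chain above can be sketched in a few lines of Python. This is a simplified illustration, not OpenClaw's actual code — the endpoint URL, port, and model tag are assumptions you would replace with your own configuration.

```python
# Sketch of the message flow: incoming text -> local LLM -> reply text.
# Endpoint, port, and model tag are illustrative, not OpenClaw internals.
import json
import urllib.request

MODEL_RUNNER_URL = "http://localhost:12434/engines/v1/chat/completions"
MODEL = "ai/llama3.2"  # any model tag you have pulled locally


def build_request(incoming_text: str) -> dict:
    """Package an incoming WhatsApp message as an OpenAI-style chat request."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "You are a private WhatsApp assistant."},
            {"role": "user", "content": incoming_text},
        ],
    }


def ask_local_llm(incoming_text: str) -> str:
    """POST the request to the local OpenAI-compatible API and return the reply."""
    payload = json.dumps(build_request(incoming_text)).encode()
    req = urllib.request.Request(
        MODEL_RUNNER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Nothing in this loop ever leaves your machine: the request goes to localhost, and the reply comes from the model running in Docker.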

Is a local LLM WhatsApp bot slower than an OpenAI-powered bot? +

Yes — local LLM inference is generally slower than the OpenAI API for typical hardware configurations. On a laptop with 16GB RAM you can expect responses in 5 to 20 seconds for most messages. This is acceptable for personal assistant use where you are not expecting instant responses. VPS deployment with more powerful hardware improves response times.

Can I use my WhatsApp bot with multiple contacts? +

Yes. OpenClaw's allowlist system lets you add multiple authorised WhatsApp contacts who can all interact with your local LLM WhatsApp bot. Each contact sends messages to the same connected WhatsApp number and receives AI responses powered by your local model.

What open-weight models work best for a local LLM WhatsApp bot? +

For WhatsApp bot use cases where response time matters, smaller and faster models like Phi-3 Mini (3.8B) or Mistral 7B work well. For higher response quality at the cost of speed, Llama 3 8B is an excellent choice. The instructor covers model selection and performance trade-offs during the workshop.

Will my WhatsApp bot lose context between different conversations? +

OpenClaw maintains conversation context within a session. The instructor covers how OpenClaw manages context and what options are available for configuring context window length and memory retention for your local LLM WhatsApp bot during the workshop.

How do I keep my WhatsApp bot running when I close my laptop? +

The final module of this workshop covers deploying your OpenClaw setup to a VPS for always-on availability. Once deployed, your local LLM WhatsApp bot runs 24 hours a day — responding to messages even when your laptop is off.

WhatsApp Bot With Local LLM · April 26, 2026

Ready to Build Your WhatsApp Bot With a Local LLM?

4 hours. Live instructor. Working WhatsApp bot with local LLM by the end. Seats are limited.

Register Now →

Sunday April 26 · 9am to 1pm EDT · Online · Packt Publishing