A local LLM Telegram bot powered by Docker Model Runner and OpenClaw processes every message on your own hardware with no cloud AI costs and no data leaving your machine. This live workshop shows you how to build and deploy one in 4 hours.
By Packt Publishing · Refunds available up to 10 days before the event
Cloud AI Telegram bots charge per token and send your conversations to external AI servers. A local LLM Telegram bot processes everything on your own hardware — zero API costs, complete privacy, and full control over which AI model powers your bot.
OpenClaw is the open-source personal AI assistant that went viral in early 2026 with 200K+ GitHub stars. It runs on your own devices and connects to WhatsApp, Telegram, Slack and more. No subscription. No data leaving your machine.
Docker Model Runner is Docker's native feature for running large language models locally on your machine. It gives you an OpenAI-compatible API that OpenClaw uses as its AI brain — complete data privacy, no cloud costs.
OpenClaw gives you the assistant interface and messaging integrations. Docker Model Runner gives you the AI brain running privately on your machine. Together they create a production-grade private AI assistant you fully own.
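To give a flavour of what that OpenAI-compatible API looks like in practice, here is a minimal sketch of a chat request sent straight to Docker Model Runner from Python. The port, path, and model name are assumptions — they depend on how host access is enabled and which model you pull — and the workshop walks through the exact values for your setup.

```python
# Minimal sketch of an OpenAI-compatible chat request to Docker Model Runner.
# Assumes host TCP access is enabled on localhost:12434 and that a model such
# as ai/llama3.2 has already been pulled -- adjust both to your setup.
import requests

resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "ai/llama3.2",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```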
Setting this up from scattered documentation takes days of debugging. This live workshop gives you a complete guided build in 4 hours with a live instructor answering your questions. Packt has delivered 108 workshops worldwide.
Six modules. From local LLM setup to a fully deployed private Telegram AI bot.
Understand the Gateway, channels, and skills architecture. Set up and configure OpenClaw locally from scratch.
Run and manage local LLMs using Docker Model Runner. Pull models, configure memory, and understand the OpenAI-compatible API.
Configure DM pairing, allowlists, sandbox mode, and proper access controls for your local AI deployment.
Deploy your AI assistant to real messaging platforms without sending data to any third-party cloud service.
Design an extensible assistant architecture. Add skills, configure personality, and set up proactive automation.
Deploy your OpenClaw and Docker setup to a VPS so your assistant is always on, running 24/7.
A working local LLM Telegram bot — private, free to run, and properly deployed.
A fully functional local AI assistant running on your machine
Docker Model Runner configured with your chosen LLM model
OpenClaw connected to WhatsApp or Telegram
Security and privacy configuration you can trust
A reusable architecture for future AI assistant projects
Certificate of completion from Packt Publishing
Rami Krispin has built local LLM Telegram bots in production environments using Docker.
Rami is a Senior Manager of Data Science and Engineering, Docker Captain, and LinkedIn Learning Instructor with deep expertise in building and deploying production AI systems. He guides you step by step from a blank terminal to a fully deployed private AI assistant — answering your questions live throughout the 4-hour session.
Developers who want to build a private Telegram AI bot powered by a local LLM.
Everything you need to know about local LLM Telegram bot development.
A standard Telegram bot typically calls an external API — usually OpenAI or another cloud AI service — to generate responses. A local LLM Telegram bot uses Docker Model Runner to run an open-weight model on your own machine. This eliminates cloud AI costs, keeps your conversations private, and removes dependency on any external AI service.
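In code, the difference often comes down to a single line: the base URL the client points at. Here is a sketch using the openai Python package, where the local endpoint and model name are assumptions you would adjust to your own Docker Model Runner configuration.

```python
# A cloud-backed bot and a local one can share the same client code; only the
# base URL (and the model name) change. Endpoint and model below are
# assumptions -- adjust them to your Docker Model Runner configuration.
from openai import OpenAI

# Cloud version: client = OpenAI()  # talks to api.openai.com with a paid key
client = OpenAI(
    base_url="http://localhost:12434/engines/v1",  # local Model Runner endpoint
    api_key="not-needed-locally",                  # placeholder; no cloud key required
)

reply = client.chat.completions.create(
    model="ai/llama3.2",
    messages=[{"role": "user", "content": "Why does local inference help privacy?"}],
)
print(reply.choices[0].message.content)
```

Everything else in the bot's logic can stay the same, which is why moving from a cloud-backed bot to a local model is usually a small change.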
You create your Telegram bot through BotFather — Telegram's official bot-creation service. The process takes about 5 minutes and gives you a bot token that you then configure in OpenClaw. The instructor covers the complete BotFather process step by step during module four of the workshop.
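If you want a quick sanity check that your token works before wiring it into OpenClaw, Telegram's getMe endpoint returns the bot's identity for a valid token. A minimal sketch, reading the token from an environment variable so it never lands in your shell history or source code:

```python
# Verify a BotFather token by asking Telegram's getMe endpoint for the bot's
# identity. Export TELEGRAM_BOT_TOKEN in your environment before running.
import os
import requests

token = os.environ["TELEGRAM_BOT_TOKEN"]
resp = requests.get(f"https://api.telegram.org/bot{token}/getMe", timeout=10)
resp.raise_for_status()
print(resp.json()["result"]["username"])  # e.g. my_local_llm_bot
```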
Yes. OpenClaw supports Telegram bot commands — messages starting with / that trigger specific actions. You can configure commands for your local LLM Telegram bot to perform specific tasks, change the model, clear conversation history, or trigger custom skills. The instructor covers command configuration during the workshop.
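As an example, registering the command menu entries with Telegram itself uses the Bot API's setMyCommands method. The command names below are illustrative; how OpenClaw maps each command to an action is configured separately and covered in the workshop.

```python
# Register slash commands with Telegram so they appear in the bot's command
# menu. Command names here are illustrative only -- wiring each one to an
# OpenClaw action is a separate configuration step.
import os
import requests

token = os.environ["TELEGRAM_BOT_TOKEN"]
commands = [
    {"command": "model", "description": "Show or switch the local model"},
    {"command": "clear", "description": "Clear the conversation history"},
    {"command": "help", "description": "List available skills"},
]
resp = requests.post(
    f"https://api.telegram.org/bot{token}/setMyCommands",
    json={"commands": commands},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # {"ok": true, "result": true} on success
```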
Response time depends on your hardware and model size. With a 7B parameter model on 16GB RAM, expect 5 to 20 seconds per response on CPU. Phi-3 Mini (3.8B parameters) delivers faster responses — around 3 to 10 seconds — at slightly reduced quality. The instructor covers performance optimisation during the workshop.
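If you want to measure this on your own hardware before the session, a rough benchmark is just a few timed requests against the local endpoint. The endpoint URL and model name below are assumptions — point them at whatever Docker Model Runner is serving on your machine.

```python
# Rough latency check for a local model: time a few identical chat requests
# and report the average.
import time
import requests

# Endpoint and model are assumptions -- match them to your own setup.
ENDPOINT = "http://localhost:12434/engines/v1/chat/completions"
MODEL = "ai/llama3.2"  # swap for the model you pulled, e.g. a Phi-3 variant

timings = []
for _ in range(3):
    start = time.perf_counter()
    resp = requests.post(
        ENDPOINT,
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": "Give me a one-line fun fact."}],
        },
        timeout=300,
    )
    resp.raise_for_status()
    timings.append(time.perf_counter() - start)

print(f"average response time: {sum(timings) / len(timings):.1f}s over {len(timings)} runs")
```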
Yes. OpenClaw's skills system lets you extend your Telegram bot with custom Python-based capabilities — from web lookups to file operations to external API integrations. The workshop covers the skills architecture so you can build custom skills for your Telegram bot after completing the session.
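As a flavour of what a skill might wrap, here is a plain Python function for a one-line weather lookup. It deliberately shows no OpenClaw API — how a function like this gets registered as a skill is specific to OpenClaw and covered in the workshop.

```python
# Hypothetical example of the kind of capability a custom skill might wrap:
# a one-line weather lookup via the free wttr.in service. This is plain
# Python; the OpenClaw registration mechanism is not shown here.
import requests

def weather_summary(city: str) -> str:
    """Return a one-line weather summary for a city."""
    resp = requests.get(f"https://wttr.in/{city}", params={"format": "3"}, timeout=10)
    resp.raise_for_status()
    return resp.text.strip()  # e.g. "Berlin: partly cloudy +4C"

if __name__ == "__main__":
    print(weather_summary("Berlin"))
```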
The workshop covers monitoring approaches for your local LLM Telegram bot — including how to check the status of Docker Model Runner, OpenClaw's process health, and the Telegram connection. The instructor covers practical monitoring techniques appropriate for both laptop and VPS deployments.
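A minimal example of the idea — suitable for a cron job on a laptop or VPS — is a script that checks whether the local model endpoint answers and whether the Telegram token is still valid. The Model Runner URL is an assumption to match to your own setup; checking OpenClaw's own process health is covered in the session.

```python
# Minimal health check: is the local model endpoint answering, and is the
# Telegram token still valid? Exits non-zero if either check fails, so it can
# be used from cron or a simple alerting script.
import os
import sys
import requests

# URLs are assumptions: adjust the Model Runner address to your setup and
# export TELEGRAM_BOT_TOKEN before running.
checks = {
    "model runner": "http://localhost:12434/engines/v1/models",
    "telegram": f"https://api.telegram.org/bot{os.environ['TELEGRAM_BOT_TOKEN']}/getMe",
}

failed = False
for name, url in checks.items():
    try:
        requests.get(url, timeout=10).raise_for_status()
        print(f"OK    {name}")
    except requests.RequestException as exc:
        print(f"FAIL  {name}: {exc}")
        failed = True

sys.exit(1 if failed else 0)
```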
4 hours. Live instructor. Working local LLM Telegram bot by the end. Seats are limited.
Register Now → Sunday April 26 · 9am to 1pm EDT · Online · Packt Publishing