Deploying a personal AI assistant locally means going from working code to a running service — accessible through WhatsApp or Telegram, stable, and secured. This live workshop covers the complete local deployment from your laptop to an always-on VPS.
By Packt Publishing · Refunds up to 10 days before the event
Deployment goes beyond installation. This workshop walks through the full process — proper configuration, security hardening, messaging platform integration, process management, and VPS deployment for always-on availability.
OpenClaw is the open-source personal AI assistant that went viral in early 2026 with 200K+ GitHub stars. It runs on your own devices and connects to WhatsApp, Telegram, Slack and more. No subscription. No data leaving your machine.
Docker Model Runner is Docker's native feature for running large language models locally on your machine. It gives you an OpenAI-compatible API that OpenClaw uses as its AI brain — complete data privacy, no cloud costs.
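In practice the workflow looks roughly like the session sketch below. The model name is an example, and the host port shown (12434) depends on how TCP access to Docker Model Runner is enabled in your Docker settings — treat both as assumptions, not fixed defaults:

```shell
# Pull a model into Docker Model Runner (model name is an example)
docker model pull ai/llama3.2

# List the models available locally
docker model list

# Query the OpenAI-compatible chat endpoint; the port is an assumption
# and depends on your Docker Model Runner TCP settings
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/llama3.2",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```

Because the endpoint speaks the OpenAI wire format, any OpenAI-compatible client — including OpenClaw — can point at it by swapping the base URL.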
OpenClaw gives you the assistant interface and messaging integrations. Docker Model Runner gives you the AI brain running privately on your machine. Together they create a production-grade private AI assistant you fully own.
Setting this up from scattered documentation takes days of debugging. This live workshop gives you a complete guided build in 4 hours with a live instructor answering your questions. Packt has delivered 108 workshops worldwide.
Six modules covering every aspect of deploying your personal AI assistant locally.
Understand the Gateway, channels, and skills architecture. Set up and configure OpenClaw locally from scratch.
Run and manage local LLMs using Docker Model Runner. Pull models, configure memory, and understand the OpenAI-compatible API.
Configure DM pairing, allowlists, sandbox mode, and proper access controls for your local AI deployment.
Deploy your AI assistant to real messaging platforms without sending data to any third party cloud service.
Design an extensible assistant architecture. Add skills, configure personality, and set up proactive automation.
Deploy your OpenClaw and Docker setup to a VPS for always-on, 24/7 availability.
A personal AI assistant properly deployed locally — stable, secured, and accessible.
A fully functional local AI assistant running on your machine
Docker Model Runner configured with your chosen LLM model
OpenClaw connected to WhatsApp or Telegram
Security and privacy configuration you can trust
A reusable architecture for future AI assistant projects
Certificate of completion from Packt Publishing
Rami Krispin has deployed local AI assistants in production — not just development environments.
Rami is a Senior Manager of Data Science and Engineering, Docker Captain, and LinkedIn Learning Instructor with deep expertise in building and deploying production AI systems. He guides you step by step from a blank terminal to a fully deployed private AI assistant — answering your questions live throughout the 4-hour session.
Developers who want to properly deploy a personal AI assistant locally, not just run it.
Everything you need to know about proper local deployment of a personal AI assistant.
Running means the assistant works when you manually start it. Deploying means it starts automatically, stays running reliably, handles errors gracefully, and is accessible through your chosen interface. This workshop covers the full deployment process — not just getting the code to run once.
Yes — with some caveats. Your laptop deployment only works when your laptop is on and awake. The workshop covers both laptop deployment for personal use and VPS deployment for always-on availability. You can start with your laptop and move to a VPS later as your needs evolve.
A VPS with 16GB RAM and at least 2 CPU cores is recommended for running a 7B parameter model: a quantized 7B model typically needs 4 to 6 GB for the weights alone, and 16GB leaves headroom for the KV cache, OpenClaw itself, and the OS. This typically costs $20 to $40 per month depending on the provider. The instructor covers provider recommendations and configuration during the final module of the workshop.
The workshop covers process management configuration for your local deployment — including how to configure your OpenClaw and Docker Model Runner setup to start automatically when your machine boots. This is covered in the deployment module with both laptop and VPS configurations.
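On a Linux VPS, that kind of auto-start configuration usually takes the shape of a systemd unit. The sketch below is illustrative only — the service name, user, paths, and start command are placeholders, not OpenClaw's actual install layout:

```ini
# /etc/systemd/system/openclaw.service — paths and command are placeholders
[Unit]
Description=OpenClaw gateway
After=network-online.target docker.service
Wants=network-online.target

[Service]
User=openclaw
WorkingDirectory=/home/openclaw
ExecStart=/usr/bin/openclaw gateway
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

With a unit like this in place, `sudo systemctl enable --now openclaw` starts the service immediately and on every boot, and `Restart=on-failure` restarts it if it crashes.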
Security configuration for your locally deployed AI assistant includes: allowlist configuration to restrict who can interact with it, DM pairing for WhatsApp authentication, sandbox mode for testing new skills safely, and proper firewall configuration for VPS deployments. Module three of this workshop covers all of these.
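As an illustration of the allowlist idea only — the key names and structure below are hypothetical and do not reflect OpenClaw's actual configuration schema — a channel config restricting who can message the assistant might look like:

```json
{
  "channels": {
    "whatsapp": {
      "dmPolicy": "pairing",
      "allowlist": ["+15551234567"]
    }
  },
  "sandbox": true
}
```

The principle is the same regardless of the exact schema: deny by default, pair or allowlist specific senders, and test new skills in a sandbox before exposing them.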
Yes. Your laptop or desktop is a valid local deployment for personal use. The workshop covers both scenarios — laptop deployment for always-on-when-home use and VPS deployment for always-on-anywhere access. You choose which deployment model fits your needs.
4 hours. Live instructor. Fully deployed local AI assistant by the end. Seats are limited.
Register Now → Sunday April 26 · 9am to 1pm EDT · Online · Packt Publishing