Self-Hosted LLM 2026 · Live · April 26

How to Run a Self-Hosted LLM in 2026 — From Setup to Production

Running a self-hosted LLM in 2026 is more accessible than ever. Docker Model Runner removes most of the complexity. This live workshop shows you how to self-host an LLM properly — configured, secured, and powering a complete private AI assistant connected to WhatsApp or Telegram.

Sunday, April 26   9am to 1pm EDT
4 Hours   Hands-on coding
Live Online   Interactive

Workshop Details

📅
Date and Time
Sunday, April 26, 2026
9:00am to 1:00pm EDT
Duration
4 Hours · Hands-on
💻
Format
Live Online · Interactive
🎓
Includes
Certificate of Completion
🔒
Privacy
100% Local · No Cloud Required
Register on Eventbrite →

By Packt Publishing · Refunds up to 10 days before

OpenClaw — 200K+ GitHub Stars
4 Hours Live Hands-On Coding
✦ By Packt Publishing
No Cloud Dependency Required
Certificate of Completion
Why Trust Packt

Over 20 Years of Helping Developers Build Real Skills

7,500+
Books and video courses published for developers worldwide
108
Live workshops and events hosted on Eventbrite
200K+
GitHub stars for OpenClaw — the tool you will master
100%
Hands-on — every session involves real code and live building
About This Workshop

Why Self-Hosting an LLM in 2026 Is Now a Practical Choice

The combination of capable open-weight models and Docker Model Runner has made self-hosting an LLM a practical option for any developer in 2026. This workshop covers the complete self-hosted LLM setup, from model selection to production deployment.

🖥

What is OpenClaw?

OpenClaw is the open-source personal AI assistant that went viral in early 2026 with 200K+ GitHub stars. It runs on your own devices and connects to WhatsApp, Telegram, Slack and more. No subscription. No data leaving your machine.

🐳

What is Docker Model Runner?

Docker Model Runner is Docker's native feature for running large language models locally on your machine. It gives you an OpenAI-compatible API that OpenClaw uses as its AI brain — complete data privacy, no cloud costs.
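As a minimal sketch of that workflow (the `ai/llama3.2` model name is illustrative; browse Docker Hub's `ai/` namespace for current options):

```shell
# Pull a model from Docker Hub's ai/ namespace
docker model pull ai/llama3.2

# List the models available locally
docker model list

# Chat with a model interactively from the terminal
docker model run ai/llama3.2
```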

🔗

Why Combine OpenClaw and Docker?

OpenClaw gives you the assistant interface and messaging integrations. Docker Model Runner gives you the AI brain running privately on your machine. Together they create a production-grade private AI assistant you fully own.

🎯

Why Attend as a Live Workshop?

Setting this up from scattered documentation takes days of debugging. This live workshop gives you a complete guided build in 4 hours with a live instructor answering your questions. Packt has delivered 108 workshops worldwide.

Workshop Curriculum

How to Self-Host an LLM in 2026 — The Complete Setup

Six modules covering the complete self-hosted LLM stack for 2026.

01

How OpenClaw Works

Understand the Gateway, channels, and skills architecture. Set up and configure OpenClaw locally from scratch.

02

Docker Model Runner Setup

Run and manage local LLMs using Docker Model Runner. Pull models, configure memory, and understand the OpenAI-compatible API.
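Because the API is OpenAI-compatible, any OpenAI-style client can talk to a locally running model. A minimal sketch of building a chat-completion request body, assuming a host-side endpoint on port 12434 and the `ai/llama3.2` model name (both are assumptions; check your Docker Model Runner settings for the actual address and models):

```python
import json

# Assumed host-side endpoint for Docker Model Runner's
# OpenAI-compatible API -- verify the port in your own setup.
ENDPOINT = "http://localhost:12434/engines/v1/chat/completions"

def chat_request(model: str, prompt: str) -> str:
    """Build the JSON body for an OpenAI-style chat completion call."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

# This body can be POSTed to ENDPOINT with any HTTP client.
body = chat_request("ai/llama3.2", "Summarise my unread messages.")
print(body)
```

OpenClaw points at the same endpoint, which is why swapping models never requires touching the assistant code.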

03

Security and Privacy

Configure DM pairing, allowlists, sandbox mode, and proper access controls for your local AI deployment.

04

Connect to WhatsApp or Telegram

Deploy your AI assistant to real messaging platforms without sending data to any third party cloud service.

05

Scalable Architecture

Design an extensible assistant architecture. Add skills, configure personality, and set up proactive automation.

06

Production Deployment

Deploy your OpenClaw and Docker Model Runner setup to a VPS for always-on, 24/7 availability.

What You Walk Away With

By the End of This Workshop You Will Have

A properly self-hosted LLM powering a complete private AI assistant in 2026.

A fully functional local AI assistant running on your machine

Docker Model Runner configured with your chosen LLM model

OpenClaw connected to WhatsApp or Telegram

Security and privacy configuration you can trust

A reusable architecture for future AI assistant projects

Certificate of completion from Packt Publishing

Your Instructor

Learn Self-Hosted LLM Deployment From a Docker Captain

Rami Krispin has deployed self-hosted LLMs in production environments using Docker.

Rami Krispin

Workshop Instructor · April 26, 2026

Rami is a Senior Manager of Data Science and Engineering, Docker Captain, and LinkedIn Learning Instructor with deep expertise in building and deploying production AI systems. He guides you step by step from a blank terminal to a fully deployed private AI assistant — answering your questions live throughout the 4-hour session.

Prerequisites

Who Is This Workshop For?

Developers who want to properly self-host an LLM in 2026 and build something useful with it.

Frequently Asked Questions

Common Questions About Self-Hosted LLMs in 2026

Everything you need to know about running and deploying a self-hosted LLM in 2026.

What does self-hosting an LLM actually mean in 2026?

Self-hosting an LLM in 2026 means running a large language model on hardware you own or control — your laptop, desktop, or a VPS — using tools like Docker Model Runner. Your model runs locally, processes all requests on your own hardware, and sends no data to external AI providers. In this workshop you self-host an LLM and connect it to OpenClaw to build a complete private AI assistant.

What are the best LLMs to self-host in 2026?

The best self-hosted LLMs in 2026 for most developer setups are Llama 3 8B, Mistral 7B Instruct, and Phi-3 Mini. All are available through Docker Model Runner, all are free, and all deliver strong performance for personal AI assistant use cases. The instructor covers the trade-offs between each during the workshop.

How much storage do I need to self-host an LLM in 2026?

Self-hosted LLMs require significant disk space — typically 4GB to 8GB per model for quantised versions of 7B to 8B parameter models. The instructor covers storage requirements for different models and how to manage model storage efficiently during the workshop.
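The 4GB-to-8GB figure follows from simple arithmetic: on-disk size is roughly parameters × bits-per-weight ÷ 8. A rough sketch of that estimate (a floor, since real checkpoint files add metadata and per-layer overhead):

```python
# Back-of-envelope disk-size estimate for a model checkpoint.
def model_size_gb(params_billions: float, bits_per_weight: int) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

q4 = model_size_gb(7, 4)     # 4-bit quantised 7B model: ~3.5 GB
fp16 = model_size_gb(7, 16)  # unquantised half-precision: ~14 GB
print(f"7B @ 4-bit: ~{q4:.1f} GB, @ FP16: ~{fp16:.1f} GB")
```

This is why quantised models dominate self-hosted setups: the same 7B model shrinks roughly fourfold going from FP16 to 4-bit.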

Can I self-host multiple LLMs simultaneously in 2026?

You can have multiple LLMs installed through Docker Model Runner, but running them simultaneously requires sufficient RAM for each. The workshop covers how to manage multiple models efficiently and how to configure OpenClaw to switch between them for different use cases.

Is self-hosting an LLM in 2026 suitable for beginners?

Self-hosting an LLM is more accessible in 2026 than it has ever been, but it still requires comfort with Docker, command-line tools, and basic system administration. This workshop is designed for developers — not beginners. If you are comfortable with Python and basic terminal usage, you have the foundation needed to follow this workshop successfully.

How do I keep my self-hosted LLM up to date in 2026?

Keeping your self-hosted LLM current in 2026 involves pulling updated model versions through Docker Model Runner when they are released. New versions of popular models are released regularly. The instructor covers the update process and how to evaluate whether a new model version is worth upgrading to during the workshop.

Self-Hosted LLM 2026 · April 26

Ready to Self-Host Your Own LLM in 2026?

4 hours. Live Docker Captain instructor. Self-hosted LLM running in production by the end. Seats are limited.

Register Now →

Sunday April 26 · 9am to 1pm EDT · Online · Packt Publishing