Run Llama Locally With Docker Desktop · April 26

How to Run Llama Locally With Docker Desktop — Setup to Working Assistant

Llama is one of the most capable open-weight models available in 2026. Docker Desktop's Model Runner makes running Llama locally simple and reliable. This live workshop shows you how to run Llama locally and connect it to OpenClaw for a working private AI assistant.

Sunday, April 26 · 9am to 1pm EDT
4 Hours · Hands-on Coding
Live Online · Interactive

Workshop Details

📅
Date and Time
Sunday, April 26, 2026
9:00am to 1:00pm EDT
Duration
4 Hours · Hands-on
💻
Format
Live Online · Interactive
🎓
Includes
Certificate of Completion
🔒
Privacy
100% Local · No Cloud Required
Register on Eventbrite →

By Packt Publishing · Refunds up to 10 days before the event

OpenClaw — 200K+ GitHub Stars
4 Hours Live Hands-On Coding
✦ By Packt Publishing
No Cloud Dependency Required
Certificate of Completion
Why Trust Packt

Over 20 Years of Helping Developers Build Real Skills

7,500+
Books and video courses published for developers worldwide
108
Live workshops and events hosted on Eventbrite
200K+
GitHub stars for OpenClaw — the tool you will master
100%
Hands-on — every session involves real code and live building
About This Workshop

Why Llama With Docker Desktop Is a Powerful Local AI Stack

Meta's Llama models offer excellent performance for personal AI assistant use cases. Docker Model Runner in Docker Desktop makes running Llama locally straightforward. Combined with OpenClaw, this gives you a capable private AI assistant with no cloud dependency.

🖥

What is OpenClaw?

OpenClaw is the open-source personal AI assistant that went viral in early 2026 with 200K+ GitHub stars. It runs on your own devices and connects to WhatsApp, Telegram, Slack and more. No subscription. No data leaving your machine.

🐳

What is Docker Model Runner?

Docker Model Runner is Docker's native feature for running large language models locally on your machine. It gives you an OpenAI-compatible API that OpenClaw uses as its AI brain — complete data privacy, no cloud costs.
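To make that concrete, here is a minimal Python sketch of what a request to the local OpenAI-compatible endpoint looks like. The port, path, and model tag are assumptions based on common Docker Model Runner defaults; your own values may differ, and the instructor walks through the exact configuration during the workshop.

    # Minimal sketch: a chat request against Docker Model Runner's OpenAI-compatible API.
    # ASSUMPTIONS: host TCP access is enabled in Docker Desktop, the API listens on
    # localhost:12434 under /engines/v1, and a Llama model tagged "ai/llama3.2" has
    # already been pulled. Adjust the URL and model name to match your setup.
    import requests

    LOCAL_API = "http://localhost:12434/engines/v1/chat/completions"  # assumed default

    response = requests.post(
        LOCAL_API,
        json={
            "model": "ai/llama3.2",  # assumed model tag; use whichever model you pulled
            "messages": [
                {"role": "system", "content": "You are a helpful local assistant."},
                {"role": "user", "content": "Summarise what Docker Model Runner does."},
            ],
        },
        timeout=120,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])

Because the request body follows the OpenAI chat-completions format, any tool that speaks that format, including OpenClaw, can talk to the local model simply by pointing at this URL.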

🔗

Why Combine OpenClaw and Docker?

OpenClaw gives you the assistant interface and messaging integrations. Docker Model Runner gives you the AI brain running privately on your machine. Together they create a production grade private AI assistant you fully own.

🎯

Why Attend as a Live Workshop?

Setting this up from scattered documentation can take days of debugging. This live workshop gives you a complete guided build in 4 hours with a live instructor answering your questions. Packt has delivered 108 workshops worldwide.

Workshop Curriculum

What You Will Learn Running Llama Locally With Docker Desktop

Six modules. From pulling Llama in Docker Desktop to a deployed private AI assistant.

01

How OpenClaw Works

Understand the Gateway, channels, and skills architecture. Set up and configure OpenClaw locally from scratch.

02

Docker Model Runner Setup

Run and manage local LLMs using Docker Model Runner. Pull models, configure memory, and understand the OpenAI-compatible API.

03

Security and Privacy

Configure DM pairing, allowlists, sandbox mode, and proper access controls for your local AI deployment.

04

Connect to WhatsApp or Telegram

Deploy your AI assistant to real messaging platforms without sending data to any third-party cloud service.

05

Scalable Architecture

Design an extensible assistant architecture. Add skills, configure personality, and set up proactive automation.

06

Production Deployment

Deploy your OpenClaw and Docker setup to a VPS for always-on, 24/7 availability.

What You Walk Away With

By the End of This Workshop You Will Have

Llama running locally in Docker Desktop, powering a working OpenClaw AI assistant.

A fully functional local AI assistant running on your machine

Docker Model Runner configured with your chosen LLM model

OpenClaw connected to WhatsApp or Telegram

Security and privacy configuration you can trust

A reusable architecture for future AI assistant projects

Certificate of completion from Packt Publishing

Your Instructor

Learn Llama Local Deployment From a Docker Captain

Rami Krispin has run Llama in local production environments using Docker Desktop.

Rami Krispin

Workshop Instructor · April 26, 2026

Rami is a Senior Manager of Data Science and Engineering, Docker Captain, and LinkedIn Learning Instructor with deep expertise in building and deploying production AI systems. He guides you step by step from a blank terminal to a fully deployed private AI assistant — answering your questions live throughout the 4-hour session.

Prerequisites

Who Is This Workshop For?

Developers who want to run Llama locally using Docker Desktop and build something real with it.

Frequently Asked Questions

Common Questions About Running Llama Locally With Docker Desktop

Everything you need to know about running Llama locally using Docker Desktop.

Which version of Llama should I run locally with Docker Desktop?

The workshop covers Llama 3 models available through Docker Model Runner. For most developer laptops with 16GB RAM, Llama 3 8B offers the best balance of quality and performance. The instructor covers the different Llama variants and helps you select the right version for your hardware during the live session.

How much RAM do I need to run Llama locally with Docker Desktop?

Running Llama 3 8B locally with Docker Desktop requires approximately 8GB of RAM for the model itself plus system overhead. 16GB total system RAM is recommended for a smooth experience. The instructor covers memory requirements for different Llama model sizes.
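As a rough back-of-the-envelope check, the weight footprint is roughly parameter count times bytes per parameter. The quantisation levels below are common rules of thumb, not measurements of any specific Docker Model Runner build:

    # Rough memory estimate for Llama 3 8B weights at different quantisation levels.
    # ASSUMPTION: the bytes-per-parameter figures are rules of thumb; the model files
    # shipped for Docker Model Runner may use a different quantisation.
    params = 8e9  # Llama 3 8B has roughly 8 billion parameters

    for label, bytes_per_param in [("FP16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
        weights_gb = params * bytes_per_param / 1e9
        # The KV cache, runtime buffers, and the OS add overhead on top of the weights.
        print(f"{label}: ~{weights_gb:.0f} GB for the weights alone")

This is why a quantised 8B model fits comfortably on a 16GB machine while still leaving headroom for the operating system and other applications.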

How fast does Llama run locally compared to the ChatGPT API?

Llama running locally through Docker Desktop is slower than the ChatGPT API on most hardware. On a modern laptop you can expect around 15 to 25 tokens per second with Llama 3 8B on CPU; at 20 tokens per second, a typical 150-token reply arrives in well under ten seconds. That is perfectly usable for a personal AI assistant, and response times feel natural for conversational use.

Can I use Llama running in Docker Desktop as a drop-in replacement for the OpenAI API?

Yes. Docker Model Runner exposes Llama through an OpenAI-compatible API endpoint. This means OpenClaw and any other application that supports the OpenAI API format can use locally running Llama as a drop-in replacement — pointing to the local endpoint instead of OpenAI's servers.
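As an illustration of that swap, here is a short sketch using the official OpenAI Python SDK. The base URL and model tag are assumptions about a typical Docker Model Runner setup and should be adjusted to your own configuration:

    # Sketch of the drop-in replacement idea with the OpenAI Python SDK (v1.x).
    # ASSUMPTIONS: the local endpoint is http://localhost:12434/engines/v1 and a model
    # tagged "ai/llama3.2" has been pulled; adjust both to match your setup.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:12434/engines/v1",  # local endpoint instead of api.openai.com
        api_key="not-needed-locally",  # the SDK requires a value, but no real key is used
    )

    reply = client.chat.completions.create(
        model="ai/llama3.2",
        messages=[{"role": "user", "content": "Hello from my private assistant!"}],
    )
    print(reply.choices[0].message.content)

Applications that read OPENAI_BASE_URL and OPENAI_API_KEY from the environment can usually be redirected the same way without any code changes.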

Is Llama free to use for personal and commercial projects?

Llama 3 is available under Meta's licence, which permits personal use and many commercial use cases. The instructor covers the specific licence terms during the workshop so you understand what is permitted for your intended use case.

Can I run Llama locally on Docker Desktop without a GPU?

Yes. Docker Model Runner supports running Llama on CPU without a GPU. Performance is reasonable for personal AI assistant use cases. The instructor covers CPU versus GPU performance expectations and model size recommendations for CPU-only machines during the live session.

Run Llama Locally With Docker Desktop · April 26, 2026

Ready to Run Llama Locally With Docker Desktop?

4 hours. Live Docker Captain instructor. Llama running locally by the end. Seats are limited.

Register Now →

Sunday, April 26 · 9am to 1pm EDT · Online · Packt Publishing