Run Phi Locally With Docker · Live · April 26

How to Run Phi Locally With Docker — Small Model, Surprisingly Capable

Microsoft's Phi models deliver impressive performance at very small sizes — making them ideal for machines with limited RAM. This live workshop shows you how to run Phi locally with Docker Model Runner and connect it to OpenClaw for a fast, lightweight private AI assistant.

Sunday, April 26 · 9am to 1pm EDT
4 Hours · Hands-on coding
Live Online · Interactive

Workshop Details

📅 Date and Time: Sunday, April 26, 2026 · 9:00am to 1:00pm EDT
Duration: 4 Hours · Hands-on
💻 Format: Live Online · Interactive
🎓 Includes: Certificate of Completion
🔒 Privacy: 100% Local · No Cloud Required
Register on Eventbrite →

By Packt Publishing · Refunds available up to 10 days before the event

OpenClaw — 200K+ GitHub Stars
4 Hours Live Hands-On Coding
✦ By Packt Publishing
No Cloud Dependency Required
Certificate of Completion
Why Trust Packt

Over 20 Years of Helping Developers Build Real Skills

7,500+
Books and video courses published for developers worldwide
108
Live workshops and events hosted on Eventbrite
200K+
GitHub stars for OpenClaw — the tool you will master
100%
Hands-on — every session involves real code and live building
About This Workshop

Why Phi Is the Best Choice for Low-Resource Local AI Deployment

Phi-3 Mini (3.8B parameters) and Phi-3 Small (7B) deliver remarkable quality for their size — Mini runs comfortably on machines with 8GB RAM while still providing genuinely useful AI assistant capabilities. This workshop uses Phi as the engine for an OpenClaw private AI assistant.
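As a rough sanity check on that 8GB claim, here is the back-of-envelope arithmetic, assuming 4-bit quantised weights (a common default for local runners; your quantisation may differ):

```python
# Back-of-envelope memory estimate for Phi-3 Mini, assuming 4-bit quantised
# weights; your runner's quantisation may differ.
params = 3.8e9          # Phi-3 Mini parameter count
bytes_per_param = 0.5   # 4 bits = 0.5 bytes per weight
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.1f} GB of weights")  # ~1.9 GB, leaving headroom on an 8GB machine
```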

🖥

What is OpenClaw?

OpenClaw is the open-source personal AI assistant that went viral in early 2026 with 200K+ GitHub stars. It runs on your own devices and connects to WhatsApp, Telegram, Slack and more. No subscription. No data leaving your machine.

🐳

What is Docker Model Runner?

Docker Model Runner is Docker's native feature for running large language models locally on your machine. It gives you an OpenAI-compatible API that OpenClaw uses as its AI brain — complete data privacy, no cloud costs.
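To give you a feel for it, here is a minimal sketch of calling that API from Python. The endpoint below is the assumed default for host access in recent Docker Desktop builds, and the model tag is illustrative; check `docker model list` on your machine for the real values.

```python
# Minimal sketch: chat with a locally served Phi model through Docker Model
# Runner's OpenAI-compatible API. Endpoint and model tag are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/v1",  # assumed default host endpoint
    api_key="unused",                              # the local runner ignores API keys
)

reply = client.chat.completions.create(
    model="ai/phi3.5",  # illustrative tag; use the model you actually pulled
    messages=[{"role": "user", "content": "In one sentence, what is Docker Model Runner?"}],
)
print(reply.choices[0].message.content)
```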

🔗

Why Combine OpenClaw and Docker?

OpenClaw gives you the assistant interface and messaging integrations. Docker Model Runner gives you the AI brain running privately on your machine. Together they create a production-grade private AI assistant you fully own.

🎯

Why Attend as a Live Workshop?

Setting this up from scattered documentation takes days of debugging. This live workshop gives you a complete guided build in 4 hours with a live instructor answering your questions. Packt has delivered 108 workshops worldwide.

Workshop Curriculum

What You Will Build Running Phi Locally With Docker

Six modules. From running Phi in Docker to a fully deployed lightweight private AI assistant.

01

How OpenClaw Works

Understand the Gateway, channels, and skills architecture. Set up and configure OpenClaw locally from scratch.

02

Docker Model Runner Setup

Run and manage local LLMs using Docker Model Runner. Pull models, configure memory, and understand the OpenAI-compatible API (see the verification sketch after the module list).

03

Security and Privacy

Configure DM pairing, allowlists, sandbox mode, and proper access controls for your local AI deployment.

04

Connect to WhatsApp or Telegram

Deploy your AI assistant to real messaging platforms without sending data to any third-party cloud service.

05

Scalable Architecture

Design an extensible assistant architecture. Add skills, configure personality, and set up proactive automation.

06

Production Deployment

Deploy your OpenClaw and Docker setup to a VPS for always-on, 24/7 availability.
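As promised in Module 02, here is a short verification sketch you can run once Docker Model Runner is up: it lists the models the runner currently serves. The endpoint is the same assumed default as in the sketch above; adjust it if your setup differs.

```python
# Quick health check: ask Docker Model Runner which models it serves.
# The endpoint is an assumed default; adjust if yours differs.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="unused")

for model in client.models.list():
    print(model.id)  # e.g. "ai/phi3.5" once your pull has completed
```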

What You Walk Away With

By the End of This Workshop You Will Have

Phi running locally via Docker, powering a fast, lightweight private AI assistant.

A fully functional local AI assistant running on your machine

Docker Model Runner configured with your chosen LLM

OpenClaw connected to WhatsApp or Telegram

Security and privacy configuration you can trust

A reusable architecture for future AI assistant projects

Certificate of completion from Packt Publishing

Your Instructor

Learn to Run Phi Locally With Docker From a Docker Captain

Rami Krispin has deployed Phi models in local Docker environments for resource-constrained setups.

Rami Krispin

Workshop Instructor · April 26, 2026

Rami is a Senior Manager of Data Science and Engineering, Docker Captain, and LinkedIn Learning Instructor with deep expertise in building and deploying production AI systems. He guides you step by step from a blank terminal to a fully deployed private AI assistant — answering your questions live throughout the 4-hour session.

Prerequisites

Who Is This Workshop For?

Developers who want a capable local AI assistant even on machines with limited RAM.

Frequently Asked Questions

Common Questions About Running Phi Locally With Docker

Everything you need to know about running Phi locally using Docker.

Why choose Phi over Llama or Mistral for local deployment?

Phi-3 Mini at 3.8B parameters delivers surprisingly strong performance at a fraction of the size of Llama 8B or Mistral 7B. If you have a machine with only 8GB RAM or want the fastest possible response times, Phi is an excellent choice. The instructor covers the trade-offs between Phi, Llama, and Mistral during the workshop.

What hardware do I need to run Phi locally with Docker?

Phi-3 Mini can run on machines with as little as 8GB of RAM — making it one of the most accessible local AI models available. This makes it an ideal starting point for developers with older machines or those who want to minimize the performance impact on their system.

How does Phi perform as a personal AI assistant?

Phi-3 Mini and Phi-3 Small are optimised for instruction following and conversational tasks. For a personal AI assistant handling questions, writing assistance, and general conversation, Phi performs very well despite its compact size. The instructor evaluates Phi for assistant tasks during the live session.

Is Phi free to use commercially?

Microsoft's Phi models are released under the MIT licence — one of the most permissive open source licences available. This means you can use Phi freely for both personal and commercial projects, subject only to the licence's simple attribution requirement.

Can I switch from Phi to a larger model later in my OpenClaw setup?

Yes. One of the advantages of building on Docker Model Runner and OpenClaw is the ability to upgrade your model as your hardware improves or your needs change. Switching from Phi to Mistral or Llama requires pulling the new model and updating a single configuration setting in OpenClaw.
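For instance, at the API level the upgrade really is a single changed value, and OpenClaw's config file exposes the equivalent setting. The endpoint and model tags below are illustrative, not prescriptive:

```python
# Hypothetical illustration of a model swap: after pulling the new model with
# `docker model pull`, only the model tag in the request (or in OpenClaw's
# configuration) changes. Endpoint and tags are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="unused")

reply = client.chat.completions.create(
    model="ai/mistral",  # previously "ai/phi3.5"
    messages=[{"role": "user", "content": "Which model are you?"}],
)
print(reply.choices[0].message.content)
```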

How fast is Phi running locally compared to larger models?

Phi-3 Mini is significantly faster than Llama 8B or Mistral 7B on the same hardware because of its smaller size. On a typical developer laptop you can expect around 30 to 50 tokens per second with Phi on CPU — considerably faster than the 15 to 25 tokens per second typical for 7B parameter models.
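To put those figures in perspective, here is the reply-latency arithmetic for a typical medium-length assistant response, using the throughput numbers quoted above:

```python
# Rough reply-latency estimate using the throughput figures quoted above.
reply_tokens = 200  # a typical medium-length assistant reply
for name, tps in [("Phi-3 Mini (CPU)", 40), ("7B model (CPU)", 20)]:
    print(f"{name}: ~{reply_tokens / tps:.0f}s at {tps} tokens/sec")
# Phi-3 Mini (CPU): ~5s at 40 tokens/sec
# 7B model (CPU): ~10s at 20 tokens/sec
```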

Run Phi Locally With Docker · April 26, 2026

Ready to Run Phi Locally With Docker?

4 hours. Live Docker Captain instructor. Phi running locally by the end. Seats are limited.

Register Now →

Sunday April 26 · 9am to 1pm EDT · Online · Packt Publishing