Installing and Running a WebUI for Ollama Chat with Docker

Wrapping Ollama in Open WebUI gives your local models a ChatGPT-style browser interface without giving up privacy or offline use. In this post, I’ll walk you through the Docker setup, how the container talks to Ollama, and how to share the UI with other devices on your network.


1. Introduction – From Terminal to Web Browser

If you’ve been exploring Ollama, you probably know its default charm: it runs beautifully in your terminal. A quick ollama run llama3, and suddenly you’re chatting with an AI model right from the command line. Fast, clean, minimalistic.

But let’s be honest: as much as we love the terminal, sometimes you want more. You want a comfortable chat window where you can scroll back through conversations, switch models from a dropdown menu, or even upload files and let the AI process them. You want something that looks and feels like ChatGPT—but powered entirely by your local machine.

That’s where Open WebUI (and other community-built interfaces) come into play. These projects wrap Ollama in a browser-based UI that makes chatting with AI models intuitive and pleasant, without losing the privacy and offline-first advantages of local hosting.

In this post, I’ll take you on a hands-on journey of installing and running Open WebUI for Ollama using Docker. Along the way, we’ll answer questions like:

  • How does the WebUI actually communicate with my Ollama server?
  • Can I share the interface with other devices on my local network?
  • What’s the easiest way to keep this setup running?

By the end, you’ll have a fully working WebUI, accessible from your browser, powered by your own machine. And yes, we’ll even wrap it in a docker-compose.yaml so that it feels professional, maintainable, and easy to restart whenever you need.

2. Why Add a WebUI to Ollama?

Before we dive into setup, let’s answer a simple question: Why bother with a WebUI if Ollama already works in the terminal?

Here are some reasons:

  • Chat history – Terminals aren’t great at storing multi-day conversations. A web UI can save your chats in a clean, searchable way.
  • Ease of use – Not everyone in your household wants to type commands into a terminal. With a browser-based chat, anyone can use it.
  • Features beyond chat – Many UIs support document uploads, Retrieval-Augmented Generation (RAG), and integration with web search.
  • Multi-model switching – Instead of typing ollama run modelname, you can pick from a dropdown.
  • Accessibility – Once running, you can access it from your desktop, laptop, or even a tablet on the couch.

In short, a WebUI makes Ollama friendly and versatile while still keeping everything private and local.

3. Why Open WebUI?

There are several community-built UIs for Ollama (like Msty.app and Chatbot Ollama), but the one we’ll focus on here is Open WebUI.

Why? Because it’s:

  • Open source – Transparent and trustworthy.
  • Feature-rich – Supports RAG, file uploads, web search, themes, and Markdown.
  • Actively maintained – Has a large user community and regular updates.
  • Docker-friendly – Installation is a breeze with a single docker run command.

With Open WebUI, you’ll get a ChatGPT-like experience, but everything runs on your machine, offline.

4. Prerequisites

Before setting things up, let’s check what we need:

  1. Ollama installed
    • Head to ollama.com and install Ollama for your OS.
    • Verify it works by running:

ollama run llama3

If you see a chat prompt, you’re good to go.

  2. Docker installed
    • On Windows/Mac, download Docker Desktop from docker.com.
    • On Linux, install Docker Engine with:

curl -fsSL https://get.docker.com | sh

    • Test it with:

docker run hello-world

  3. Basic familiarity with the terminal
    • You’ll be running a few commands, nothing too scary.

5. Running Open WebUI with Docker

Here’s where the magic happens. The fastest way to get started is with a single Docker command.

Step 1: Run Open WebUI

Open a terminal and run:

docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

Let’s break this down:

  • -d → Runs in detached (background) mode.
  • -p 3000:8080 → Maps container port 8080 to host port 3000. You’ll access it at http://localhost:3000.
  • --add-host=host.docker.internal:host-gateway → The key! This lets the container talk to Ollama on your host machine.
  • -v open-webui:/app/backend/data → Creates a Docker volume so your chat history and settings persist.
  • --restart always → Restarts automatically if it crashes or if your machine reboots.
  • ghcr.io/open-webui/open-webui:main → The official Docker image.

Step 2: Access the WebUI

Once the container is running, open a browser and go to: http://localhost:3000

You’ll see a login page the first time; this is just for your local account. Create an account and log in.

Open WebUI will automatically detect Ollama running on your host machine. If everything is working, you’ll be able to:

  • Start a new chat.
  • Pick a model (like llama3) from the dropdown.
  • Send messages and get responses, just like ChatGPT.

🎉 Congratulations, you now have a working Ollama WebUI!
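If the model dropdown comes up empty instead, two quick checks from the host confirm both halves of the setup (the container name and ports here match the docker run command above):

```shell
# Is the container up, with the expected port mapping?
docker ps --filter "name=open-webui" --format "{{.Names}}  {{.Status}}  {{.Ports}}"

# Is Ollama answering on the host? /api/tags lists your locally pulled models.
curl -s http://localhost:11434/api/tags
```

If the first command prints nothing, the container isn’t running; if the second returns nothing, Ollama itself isn’t up.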

6. How Does the WebUI Talk to Ollama?

At this point, you might be curious: How does a Docker container running Open WebUI actually connect to my local Ollama installation?

The answer lies in this part of the command:

--add-host=host.docker.internal:host-gateway

Here’s what happens:

  1. Normally, containers can’t access services on your host machine’s localhost. They’re isolated by design.
  2. Docker supports a special DNS name for this: host.docker.internal.
  3. The --add-host flag maps that name to the host’s gateway IP in the container’s /etc/hosts. (Docker Desktop sets this up automatically; on Linux, this flag is what creates the mapping.)
  4. Inside the container, Open WebUI makes API calls to http://host.docker.internal:11434 (the default Ollama port).
  5. Those requests reach your Ollama server running on your host.

It’s a neat trick that makes containers talk to your local machine without breaking isolation.
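You can watch this path in action by making the same API request from both sides. The second command assumes curl is available inside the Open WebUI image, which is worth verifying first:

```shell
# From the host: Ollama's API answers on its default port
curl -s http://localhost:11434/api/tags

# From inside the container: the same server, reached via the mapped name
# (assumes curl exists in the image)
docker exec open-webui curl -s http://host.docker.internal:11434/api/tags
```

Both commands should return the same JSON list of models; if only the second one fails, the --add-host mapping is the first thing to check.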

7. Making the WebUI Available on Your Local Network

So far we’ve only reached the WebUI from the machine it runs on, at http://localhost:3000. But what if you want to access it from your tablet, phone, or another laptop on the same Wi-Fi?

Easy. We just need to make sure it’s exposed properly and allowed through the firewall.

Step 1: Stop the container

docker stop open-webui
docker rm open-webui

Step 2: Rerun with 0.0.0.0

docker run -d \
  -p 0.0.0.0:3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

The change is here:

  • -p 0.0.0.0:3000:8080 → Binds explicitly to all network interfaces. (Docker’s plain -p 3000:8080 already does this by default, but being explicit documents the intent and overrides a daemon that’s configured to publish only on 127.0.0.1.)

Step 3: Find your machine’s IP

On Linux:

hostname -I

On macOS:

ipconfig getifaddr en0

On Windows:

ipconfig

Look for something like 192.168.1.50.

Step 4: Access from another device

On your phone/tablet/laptop, open a browser and go to:

http://192.168.1.50:3000

Now the WebUI is accessible to anyone on your network (assuming your firewall allows it).

💡 Tip: If you want to keep it private, configure a firewall rule or require login from the UI.
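The opposite direction works too: if you decide the UI should stay private to one machine, bind the published port to loopback only. This is a variant of the run command from earlier, not a new Open WebUI option:

```shell
# Publish port 3000 on loopback only: reachable from this machine, not the LAN
docker run -d \
  -p 127.0.0.1:3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

With 127.0.0.1 in front of the port mapping, only browsers on the host itself can reach http://localhost:3000.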

8. Running with Docker Compose

Typing long docker run commands is fine once or twice, but for something permanent, Docker Compose is a better choice.

Here’s a docker-compose.yaml that sets up both Ollama and Open WebUI together:

version: "3.9"
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    restart: always
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    restart: always
    depends_on:
      - ollama
    ports:
      - "3000:8080"
    volumes:
      - open-webui-data:/app/backend/data
    environment:
      # Both services share the Compose network, so point the WebUI
      # at the ollama service by name instead of host.docker.internal
      - OLLAMA_BASE_URL=http://ollama:11434

volumes:
  ollama-data:
  open-webui-data:

Steps:

  1. Create a folder:

mkdir ollama-webui && cd ollama-webui

  2. Create the file docker-compose.yaml with the content above.
  3. Run:

docker compose up -d

  4. Access at http://localhost:3000.

Now you’ve got both Ollama and WebUI running inside containers, neatly managed with Docker Compose.
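Day-to-day management then comes down to a handful of Compose commands, all run from the folder that holds docker-compose.yaml:

```shell
docker compose up -d                           # start (or apply config changes)
docker compose logs -f open-webui              # follow the WebUI's logs
docker compose pull && docker compose up -d    # update to the latest images
docker compose down                            # stop; named volumes (chats, models) survive
```

Because state lives in the named volumes, you can tear the stack down and bring it back up without losing chat history or downloaded models.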

9. Troubleshooting

Even smooth setups hit bumps. Here are common issues:

  • Problem: WebUI says it can’t find Ollama.
    • Fix: Make sure Ollama is running (ollama serve on the host, or the ollama container if you use Compose).
    • If Ollama runs on the host, double-check the --add-host=host.docker.internal:host-gateway flag.
  • Problem: Can’t access from another device.
    • Fix: Use 0.0.0.0 binding and verify your firewall allows port 3000.
  • Problem: Container keeps restarting.
    • Fix: Run docker logs open-webui to see what’s wrong.

10. Conclusion – From CLI to Couch

With just a few commands, we’ve transformed Ollama from a CLI-only AI tool into a full ChatGPT-like experience running in your browser.

  • For simplicity → run Open WebUI as a single container.
  • For robustness → use Docker Compose to manage Ollama and WebUI together.
  • For accessibility → expose it to your local network and enjoy chatting from any device.

And the best part? It’s all running locally, offline, with your privacy intact. No cloud servers, no leaks—just your machine, your models, your data.

So the next time someone asks, “Hey, what if ChatGPT was self-hosted and private?”—you can smile and say, “I’ve got that running at home.”
