8/10/2025

Setting Up OpenWebUI with Ollama in Docker Containers for Seamless AI Chat

Hey there! If you've been dipping your toes into the world of local AI & large language models (LLMs), you've probably heard about the power of running things on your own machine. It's a game-changer for privacy, customization, & just having full control over your AI experiments. But let's be honest, setting everything up can feel a bit daunting.
That's where this guide comes in. We're going to walk through, step-by-step, how to get a killer local AI chat setup running using three pretty cool tools: OpenWebUI, Ollama, & Docker. By the end of this, you'll have your own private, ChatGPT-style interface running on your computer, ready to chat with a whole bunch of different AI models.

So, What's the Big Deal with This Setup?

Before we dive into the "how," let's quickly cover the "why."
  • OpenWebUI: Think of this as the slick, user-friendly face of your local AI. It used to be called Ollama WebUI, & it gives you a really nice, ChatGPT-like interface to interact with your language models. It’s self-hosted, works offline, & is packed with features that make chatting with your AI a breeze. You can even do things like Retrieval Augmented Generation (RAG), which basically means you can upload documents & have your AI use them as a knowledge base. Pretty neat, right?
  • Ollama: This is the engine under the hood. Ollama is a fantastic tool that lets you run & manage LLMs locally on your own machine. It takes care of all the complicated stuff, like model optimization & providing an API for other applications (like OpenWebUI) to connect to. This is what gives you the power to run models from big names like Meta & Google right on your desktop, completely offline.
  • Docker: This is the magic that ties it all together. Docker is a platform for building, shipping, & running applications in "containers." A container is a lightweight, standalone package that includes everything an application needs to run: code, libraries, system tools, you name it. By using Docker, we can avoid a ton of setup headaches & ensure that our OpenWebUI & Ollama setup runs consistently, no matter what our host machine looks like.

Getting Started: What You'll Need

The beauty of this setup is that you don't need a supercomputer. A reasonably modern computer will do just fine. The main thing you'll need to install is Docker Desktop.
You can grab it directly from the Docker website. Just download the version for your operating system (Windows, Mac, or Linux) & follow the installation instructions. You might need to restart your computer after the installation is complete.
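Once Docker Desktop is installed and running, a quick sanity check from a terminal confirms everything is in place. These are standard Docker CLI commands, not specific to this guide:

```shell
# Quick sanity check: confirm Docker & the Compose plugin are on your PATH.
# (If Docker Desktop was just installed, make sure it is actually running first.)
if command -v docker >/dev/null 2>&1; then
  docker --version
  docker compose version
else
  echo "Docker not found - install Docker Desktop first"
fi
```

If both commands print version numbers, you're good to go.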

The Easy Way: Using Docker Compose

The most straightforward way to get OpenWebUI & Ollama running together is with Docker Compose. Compose is a tool for defining & running multi-container Docker applications. With a single file, you can define all the services, networks, & volumes you need.
Here's how to do it:

1. Create a docker-compose.yml File

First, create a new folder on your computer for this project. Inside that folder, create a new file named docker-compose.yml. Now, copy & paste the following code into that file:
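Here's a minimal sketch of what that file might look like. The image names and the in-container ports follow the two projects' published defaults, but the host-side ports (11434 and 3000) are just common choices — adjust them if they clash with something else on your machine:

```yaml
# Minimal sketch of a docker-compose.yml for Ollama + OpenWebUI.
# Named volumes keep your downloaded models & chat history across restarts.
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    volumes:
      - ollama:/root/.ollama       # persists downloaded models
    ports:
      - "11434:11434"              # Ollama's API port
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    depends_on:
      - ollama
    environment:
      # Point the UI at the Ollama container by its service name
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data   # persists chats & settings
    ports:
      - "3000:8080"                # UI served on http://localhost:3000
    restart: unless-stopped

volumes:
  ollama:
  open-webui:
```

From inside the project folder, running `docker compose up -d` should pull both images and start the containers, after which the interface is reachable at http://localhost:3000 in your browser.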

Copyright © Arsturn 2025