8/11/2025

So You Wanna Be a Dungeon Master? How to Build Your Own Local AI Sidekick with Ollama

Alright, let's be honest. Being a Dungeon Master is one of the most rewarding experiences in tabletop gaming. You get to weave epic tales, create unforgettable characters, & watch your friends squirm as they face down a horde of goblins. But here's the thing nobody tells you: it's also a TON of work. World-building, encounter design, stat blocks, remembering that one weird NPC's name from three sessions ago… it’s a lot to juggle.
What if you could have a little help? A creative partner who’s always on, never gets tired, & has an encyclopedic knowledge of D&D lore? Enter the world of local Large Language Models (LLMs). With a tool called Ollama, you can run powerful AI models right on your own machine, crafting a personalized AI Dungeon Master assistant that's all yours. No subscriptions, no internet lag, just pure, unadulterated creativity.
This is gonna be a deep dive, so grab your favorite beverage & get ready to get nerdy. We're going to cover everything from the ground up, so even if you're new to the whole local AI scene, you'll be able to follow along.

Why Go Local? The Magic of Running Your Own AI

Before we get into the nitty-gritty, let's talk about why you'd even want to do this. We've all played around with the big-name AI chatbots, right? They're impressive, for sure. But running your own AI locally has some pretty sweet advantages, especially for something as creative & personal as D&D.
First off, it's private. Your campaign notes, your half-baked ideas, your secret plot twists—they all stay on your computer. No sending data off to some mysterious server in the cloud. It's just you & your AI, brainstorming in your own digital sandbox.
Second, it's infinitely customizable. You're not stuck with a generic, one-size-fits-all AI. With Ollama, you can fine-tune your model's personality, give it specific knowledge about your homebrew world, & even create different AI assistants for different tasks. Want a snarky goblin who gives cryptic clues? You got it. Need a wise old wizard to help with world-building? No problem.
And finally, it's offline. Once you've downloaded the models, you don't need an internet connection to use them. This is HUGE. No more laggy responses or getting cut off mid-session because your Wi-Fi decided to take a nap. Your AI DM is always ready to go, whether you're at home, at a friend's house, or even in a remote cabin in the woods (the perfect setting for a spooky one-shot, just sayin').

What You'll Need: Gearing Up for Your AI Adventure

Okay, convinced? Good. Now, let's talk about the hardware. Running these models locally does take a bit of horsepower, but you might be surprised at what you can do with a decent gaming PC.
Here's a quick rundown of the essentials:
  • A decent computer: Ollama runs on Linux, macOS, & Windows (early versions required WSL2, the Windows Subsystem for Linux, but there's now a native Windows installer). You'll want a machine with at least 8GB of RAM, but honestly, the more the better. 16GB or even 32GB will give you a much smoother experience & let you run larger, more capable models.
  • Storage space: These models are not small. Some can be a few gigabytes, while others can be tens or even hundreds of gigabytes. I'd recommend having at least 128GB of free space to play around with. An SSD will make loading models much faster, too.
  • A GPU (preferably NVIDIA): While you can run these models on just your CPU, a dedicated graphics card will make a night-and-day difference. It's like the difference between walking & taking a high-speed train. An NVIDIA GPU with a good amount of VRAM (the memory on your graphics card) is ideal. 8GB of VRAM is a good starting point, but again, more is always better.
Don't have a beastly gaming rig? Don't despair! You can still get started with smaller models. The beauty of Ollama is that it supports a wide range of models, so you can find one that works for your setup.
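Before you start downloading multi-gigabyte models, it's worth a quick sanity check on the three things above: disk space, RAM, & VRAM. Here's a rough sketch using common command-line tools (Linux commands shown; the macOS equivalents are noted in comments):

```shell
# Free disk space on the current filesystem (works on Linux & macOS):
df -h .

# Total & available RAM (Linux only; on macOS try `sysctl -n hw.memsize`):
command -v free >/dev/null && free -h

# NVIDIA GPU name & VRAM, if you have one (requires the NVIDIA driver):
if command -v nvidia-smi >/dev/null; then
  nvidia-smi --query-gpu=name,memory.total --format=csv
else
  echo "No NVIDIA GPU detected (CPU-only still works for smaller models)"
fi
```

If `nvidia-smi` shows 8GB or more of VRAM, you're in good shape for the mid-size models; less than that & you'll want to stick to the smaller ones.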

Getting Started: Installing Ollama & Your First Model

Alright, enough talk. Let's get our hands dirty. The first step is to install Ollama, & thankfully, the developers have made this ridiculously easy.
  1. Head over to the Ollama website: Just go to ollama.ai & click the download button. It will automatically detect your operating system & give you the right installer.
  2. Follow the instructions: For macOS & Windows, it's a pretty standard installation process. For Linux, it's a single command you run in your terminal. Seriously, it's that simple.
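For the curious, that single Linux command is just the official install script piped into your shell. At the time of writing it looks like this (the script now lives at ollama.com, where the project's site moved), & you can verify the install worked right after:

```shell
# Official one-line Linux install (downloads & runs Ollama's install script):
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the CLI is on your PATH:
ollama --version
```

If you'd rather not pipe a script straight into your shell (a reasonable instinct!), you can download it first, read it over, & then run it.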
Once Ollama is installed, it will run quietly in the background, waiting for your commands. Now, we need to give it a brain. In the world of Ollama, these "brains" are called models. There are tons of them to choose from, each with its own strengths & personality.
For our purposes, a good starting point is a model like Llama 3 or Mistral. These are powerful, general-purpose models that are great at creative writing & following instructions. To download one, just open your terminal (or Command Prompt on Windows) & type:
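Assuming you go with Llama 3, the standard Ollama commands look like this. `pull` just downloads the model; `run` downloads it if needed & then drops you straight into an interactive chat:

```shell
# Download the model (several GB, so grab that beverage):
ollama pull llama3

# Start chatting with it right in your terminal:
ollama run llama3
# Type your prompt at the >>> prompt; use /bye to exit.

# Optional: confirm the background server is up
# (Ollama listens on port 11434 by default):
curl http://localhost:11434
```

Swap in `mistral` for `llama3` if you'd rather start with Mistral — the commands are identical, only the model name changes.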

Copyright © Arsturn 2025