8/11/2025

So, you've been hearing all the buzz about running large language models (LLMs) locally, right? It's a pretty exciting space. You get privacy, you can experiment with all sorts of open-source models, & you're not racking up API bills with every single query. One of the easiest ways to get started with this is a tool called Ollama. It's been a game-changer for running models on your own machine.
But then there's this other super powerful tool, Claude Code, from Anthropic. It’s an AI coding assistant that lives in your terminal & can do some seriously impressive stuff, like planning & executing complex coding tasks. The dream, of course, is to get the best of both worlds: the power & agentic capabilities of Claude Code combined with the flexibility of running your own local models with Ollama.
Here's the thing: it's not a straightforward "plug-and-play" situation. Claude Code is designed to work with Anthropic's models. But that DEFINITELY doesn't mean we can't make them work together. It just takes a little bit of clever workflow design.
In this guide, I'm going to walk you through everything you need to know. We'll start with getting Ollama set up on your Windows machine, & then we'll dive into the practical ways you can connect your local Ollama models to your Claude Code workflow. It's going to be fun, so let's get into it.

Part 1: Getting Ollama Up & Running on Windows

Honestly, the Ollama team has made this part incredibly simple. It used to be a much more involved process to get local models running, but now it's just a few clicks.

Step 1: Download & Install Ollama

First, head over to the Ollama website. You'll see download links for different operating systems. Grab the Windows version.
It's a standard .exe installer. Double-click it, go through the prompts, & let it do its thing. Once it's done, you won't see a big program window pop up. Instead, Ollama runs as a background service. You might see a little llama icon in your system tray, which tells you it's running.
That's pretty much it for the installation. Seriously.
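If you want a quick sanity check that everything installed correctly, you can open a terminal & ask Ollama for its version (this is just one easy way to confirm the service is reachable):

    ollama --version

If it prints a version number back at you, the background service is up & you're ready for the next step.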

Step 2: Your First Conversation with a Local LLM

Now for the fun part. Let's download a model & chat with it. You'll need to use a command-line tool for this. You can use the classic Command Prompt, PowerShell, or the Windows Terminal. I'm a fan of the Windows Terminal, but any will do.
Open up your terminal & type this command:

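For example, to grab Meta's Llama 3 (just one popular pick, any model from the Ollama library will work the same way):

    ollama run llama3

The first time you run this, Ollama downloads the model weights (they're a few gigabytes, so give it a minute), & then drops you straight into an interactive chat prompt. Type a message, hit Enter, & you're talking to an LLM running entirely on your own machine. When you're done, type /bye to exit the chat.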