8/26/2024

Setting Up Ollama with Docker Compose

If you've ever dreamed of running powerful AI models locally, look no further! In this post, we will walk you through setting up Ollama using Docker Compose. This way, you'll be able to leverage the power of conversational AI with a user-friendly interface without having to dive deep into the technicalities. 🚀 This guide is packed full of practical tips to help you every step of the way!

What is Ollama?

Before jumping into the setup process, let's clarify what Ollama is. Ollama is an open-source framework designed for deploying Large Language Models (LLMs) effortlessly. It empowers users to run capable AI models locally, with support for text generation, translation, coding assistance, and much more. 🧠 Once you set it up, you can interact with AI models as if you had your own personal assistant.

Prerequisites

To get started with Ollama, you'll need to have a few things in place:
  • Docker: This powerful tool allows you to run applications in containers, making installations cleaner & easier.
  • Docker Compose: A tool for defining and running multi-container Docker applications.

Installing Docker & Docker Compose

If you don't have Docker installed yet, follow these steps:
  1. Install Docker:
    • For Windows & macOS: Download and install Docker Desktop.
    • For Linux: Follow the appropriate installation instructions for your distribution.
  2. Install Docker Compose (if it's not included with your Docker installation). On Debian/Ubuntu-based systems, run:

```bash
sudo apt-get install docker-compose
```

Setting Up Ollama with Docker Compose

Now that you have your prerequisites set up, we can move on to the main event: setting up Ollama with Docker Compose!

Step-by-Step Configuration

Step 1: Clone the Ollama Docker Repository
The first thing you need to do is clone the Ollama Docker setup from GitHub. Open your terminal and run:
```bash
git clone https://github.com/valiantlynx/ollama-docker.git
cd ollama-docker
```
Step 2: Docker Compose Configuration
Inside the cloned repository, you will find a `docker-compose.yml` file, which defines the services required to run Ollama smoothly. Here's a breakdown of what this file looks like:
```yaml
version: '3.8'

services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ./ollama:/root/.ollama
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "8080:8080"
    depends_on:
      - ollama
    environment:
      - OLLAMA_API_BASE_URL=http://localhost:11434/api

networks:
  default:
    external:
      name: bridge
```
In the configuration above, we have two services defined: `ollama` and `open-webui`.
  1. The `ollama` service is your AI model server. It's set to restart automatically unless stopped and exposes port 11434, which applications use to connect to the API (see the Python sketch below).
  2. The `open-webui` service is a web interface that allows you to interact visually with the Ollama models. It depends on the `ollama` service, ensuring that your model server is up and running before you access the UI. It exposes port 8080.

One caveat: inside the `open-webui` container, `localhost` refers to that container itself, so if the UI can't reach the model server, you may need to point `OLLAMA_API_BASE_URL` at the Compose service name instead (e.g., `http://ollama:11434/api`), depending on how your Compose network is set up.
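To give you a feel for what port 11434 exposes, here is a minimal sketch that queries the Ollama HTTP API from the host using only the Python standard library. The model name `llava-phi3` is just an example and must already be pulled (see Step 5 below):

```python
import json
import urllib.request

# Ask the Ollama API (exposed on port 11434) for a single, non-streamed reply.
payload = json.dumps({
    "model": "llava-phi3",        # example model; use one you have pulled
    "prompt": "Why is the sky blue?",
    "stream": False,              # return one JSON object instead of a stream
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```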

Step 3: Start the Ollama Services

Now that our configuration is ready, you can start the services using Docker Compose:
```bash
docker-compose up -d
```
This command will spin up the Ollama service along with the Open Web UI in detached mode. You can monitor the container logs by running:
```bash
docker-compose logs -f
```

Step 4: Accessing Ollama Web UI

Once the services are running, you can access the web interface in your browser! Open http://localhost:8080, and you should see the Open WebUI (the web front end for Ollama) ready for action.

Step 5: Model Installation

To begin using various models, navigate to the settings section in the Web UI and click on “Install Model”. Choose any model you'd like to experiment with (e.g., llava-phi3). This process may take a few minutes, but once installed, you can jump straight into using it just like ChatGPT!
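If you'd rather script this step, the same pull can be done against Ollama's `/api/pull` endpoint. Here is a minimal sketch (again using `llava-phi3` purely as an example) that prints the streamed status lines as the download progresses:

```python
import json
import urllib.request

# Ask the Ollama service to pull a model; the endpoint streams JSON
# status lines (download progress, then "success") as it works.
req = urllib.request.Request(
    "http://localhost:11434/api/pull",
    data=json.dumps({"name": "llava-phi3"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    for line in resp:
        print(json.loads(line).get("status", ""))
```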

Step 6: Exploring Langchain with Ollama

One exciting feature of this setup is the integration with Langchain. A third container named `app` is created for experimentation. Inside this container, you will find examples demonstrating how to leverage Langchain capabilities along with Ollama.
You can modify code snippets here and see how the integration can enhance your interactions with LLMs.
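As a starting point, here is a minimal Langchain-with-Ollama sketch. It assumes the `langchain-community` package is available (install it with `pip install langchain-community` if the `app` container doesn't already ship it) and that a model such as `llava-phi3` has been pulled:

```python
from langchain_community.llms import Ollama

# From inside the app container, the model server is reachable via the
# Compose service name; from the host, use http://localhost:11434 instead.
llm = Ollama(
    base_url="http://ollama:11434",
    model="llava-phi3",   # example model; swap in whichever one you pulled
)

print(llm.invoke("Summarize what Docker Compose does in one sentence."))
```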

Step 7: Virtual Development Environment with Devcontainer

For those who prefer a more interactive coding experience, the app container serves as your development environment. You can easily boot up your coding experiments here. If you have Visual Studio Code installed, it should prompt you to reopen the project in the container environment when you open the project root!

Final Steps: Cleanup

To stop the running containers and clean up unnecessary resources, run:
```bash
docker-compose down
```

Troubleshooting Common Issues

  • Issue: Port Conflicts: Ensure that your specified ports (like 8080 and 11434) aren't already in use by other applications; a quick way to check is sketched below.
  • Issue: Models Not Loading: Double-check your model paths and ensure you've installed your models correctly through the Web UI.
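For the port-conflict case, this small sketch checks whether anything is already listening on the two ports this setup uses (run it before `docker-compose up`):

```python
import socket

# connect_ex returns 0 when something is already listening on the port.
for port in (8080, 11434):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        in_use = s.connect_ex(("127.0.0.1", port)) == 0
    print(f"port {port}: {'already in use' if in_use else 'free'}")
```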

Why Choose Ollama with Docker Compose?

Setting up Ollama via Docker Compose offers simplicity & flexibility:
  • Simplicity: No complex configurations. Docker handles it well.
  • Isolation: Each service runs in its own container, preventing potential conflicts.
  • Portability: Your entire setup can easily be transferred across different environments.
  • Scalability: Need to run more models? Simply scale it by adjusting your compose file!

Boost Your Engagement with Arsturn

Are you ready to take your engagement to the NEXT LEVEL? Check out Arsturn to instantly create custom ChatGPT chatbots for your website. From streamlining operations to providing instant customer service, Arsturn can help you connect with your audience and keep them engaged!
Why wait? Claim your chatbot today; no credit card required. Discover the BEST chatbot solutions trusted by top companies!

Conclusion

In this guide, we covered everything from prerequisites to troubleshooting for your Ollama setup with Docker Compose. Whether you're a developer looking for a local AI solution or someone just curious to try out LLMs, Ollama makes it easy. What are you waiting for? Dive in and unlock the world of AI!
Happy coding! 🐳
