1/28/2025

How to Use DeepSeek Locally on AMD Graphics Cards

In the rapidly evolving world of AI, having the ability to run models locally represents a significant advantage. Whether you're a developer, a researcher, or an AI enthusiast, running advanced models like DeepSeek locally can provide you with greater privacy and customization options. In this guide, we’ll walk you through the process of setting up DeepSeek R1 on AMD graphics cards, ensuring you harness the power of AI while enjoying complete control over your environment.

What is DeepSeek?

DeepSeek is an open-source AI model designed to compete with leading models such as OpenAI’s ChatGPT and Anthropic’s Claude 3.5 Sonnet. It offers a versatile platform for various tasks including math, coding, and reasoning—all while prioritizing your data privacy. The DeepSeek R1 model, in particular, can run locally, giving you more autonomy and the assurance that your data isn’t shared with third-party services.

Why Use AMD Graphics Cards?

AMD GPUs are renowned for their performance in compute-intensive tasks like AI, thanks to their innovative architecture and efficient energy consumption. The integration of AMD’s ROCm software ensures that developers can leverage GPU acceleration for running models smoothly. For users already invested in AMD hardware, running local AI models is both convenient & cost-effective. This guide will emphasize how to take full advantage of your AMD setup!
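If you’re on Linux, you can sanity-check that ROCm actually sees your GPU before going any further. The two commands below are standard ROCm utilities (this assumes the ROCm packages are already installed; exact package names vary by distro):
    # List the compute agents ROCm has detected; your GPU should appear alongside the CPU
    rocminfo | grep -i "marketing name"
    # Show temperature, clock speeds, & VRAM usage for each detected GPU
    rocm-smi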

Prerequisites for Setting Up DeepSeek on AMD

Before diving into the installation process, let’s cover the basic requirements:
  • AMD GPU: Make sure your system has a capable AMD graphics card.
  • Operating System: This guide covers steps for Windows, Linux, & macOS users.
  • Ollama Toolkit: This tool enables you to run local AI models on your machine. Download it from Ollama's official site (ollama.com).

Step-by-Step Installation Guide

With the prerequisites out of the way, let’s jump into the setup!

Step 1: Install the Ollama Toolkit

The Ollama toolkit is essential for running DeepSeek locally. Here’s how to install it:
  1. Go to Ollama's official site and download the version that matches your operating system.
  2. Follow the on-screen instructions to install it on your machine. (Linux users, be sure to check compatibility for your specific distro).
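On Linux, Ollama also offers an official one-line install script, and on any platform you can confirm the install from a terminal:
    # Linux install script (review it first if piping to sh makes you uneasy)
    curl -fsSL https://ollama.com/install.sh | sh
    # Verify the toolkit is on your PATH
    ollama --version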

Step 2: Pull DeepSeek Models

Ollama provides different model sizes which you can select depending on your hardware capabilities. The available DeepSeek model sizes are as follows:
  • 1.5B (smallest):
    ollama run deepseek-r1:1.5b
  • 8B:
    ollama run deepseek-r1:8b
  • 14B:
    ollama run deepseek-r1:14b
  • 32B:
    ollama run deepseek-r1:32b
  • 70B (largest/smartest):
    ollama run deepseek-r1:70b
Note: Start with a smaller model if you’re testing the waters or if your GPU capabilities are limited.
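If you’d rather download a model ahead of time without immediately starting a chat session, ollama pull just fetches it, and ollama list shows what’s already on disk along with each model’s size:
    # Fetch the 8B model without opening an interactive session
    ollama pull deepseek-r1:8b
    # See which models are installed & how much disk space they take
    ollama list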

Step 3: Setting Up Chatbox

Chatbox offers a free & clean interface to interact with your AI model. Here’s how to set it up:
  1. Download Chatbox from chatboxai.app.
  2. Upon installation, navigate to the settings section.
  3. Switch the model provider to Ollama. Since you’re running models locally, ignore any cloud AI options.
  4. Set the Ollama API host to its default setting:
    http://127.0.0.1:11434
  5. Save your settings, and you’re all set to start chatting locally using DeepSeek!
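If Chatbox can’t connect, it’s worth confirming that the Ollama server is actually listening at that address. A quick check from the terminal:
    # The root endpoint should answer with "Ollama is running"
    curl http://127.0.0.1:11434
    # /api/tags returns a JSON list of the models you've pulled
    curl http://127.0.0.1:11434/api/tags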

Step 4: Running DeepSeek Locally

Now that everything is set up, it’s time to try running the model:
  1. Open your terminal.
  2. Choose the model you’d like to run and enter the corresponding command. For example, if you're using the 8B model, run:
    ollama run deepseek-r1:8b
  3. Once the model has been pulled, it runs locally on your machine. You can now interact with DeepSeek in Chatbox!
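Beyond the interactive terminal and Chatbox, you can also script against Ollama’s local REST API. Here’s a minimal sketch using the /api/generate endpoint, with streaming disabled so the full reply comes back as a single JSON object:
    # Send one prompt to the local 8B model & print the JSON response
    curl http://127.0.0.1:11434/api/generate -d '{
      "model": "deepseek-r1:8b",
      "prompt": "Explain how TCP works.",
      "stream": false
    }'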

Step 5: Experimenting with Prompts

Now that DeepSeek is up & running, it’s time to test its capabilities:
  1. Use prompts such as:
    • “Explain how TCP works.”
    • “Create a simple game script based on Pac-Man.”
    • “Solve: if 3x + 7 = 25, what is x?”
These prompts showcase DeepSeek's ability to handle math, coding, and reasoning tasks effectively.
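You can also fire off a one-shot prompt straight from the terminal without opening an interactive session, which is handy for quick tests:
    # Pass the prompt as an argument; Ollama prints the reply & exits
    ollama run deepseek-r1:8b "Explain how TCP works."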

Additional Resources

  • For troubleshooting or advanced setup options, feel free to check out the DeepSeek GitHub repository. There’s a wealth of information to keep your setup running smoothly!
  • Dive into the AMD ROCm documentation for any specific driver requirements for your graphics card.

Why Choose Arsturn?

While setting up powerful local applications like DeepSeek can be an exciting journey, ensuring maximum engagement & conversions for your brand is also crucial. That's where Arsturn steps in!
With Arsturn, you can instantly create custom ChatGPT chatbots tailored to your specific needs. This no-code AI chatbot builder allows you to enhance your digital engagement effortlessly:
  • Customize Your Chatbot: Reflect your brand’s personality and optimize user experience.
  • Seamless Integration: Embed across multiple platforms in just minutes.
  • Insightful Analytics: Gain valuable insights about your audience’s interactions.
Boost engagement & conversions while running your local AI models. Join thousands of companies that are already harnessing the power of conversational AI with Arsturn!

Wrapping It Up

Running DeepSeek locally on AMD GPUs is not just a technically rewarding experience but can significantly enhance your project's capabilities. You gain privacy, control, and access to a powerful AI engine that can adapt and cater to your specific demands.
Plus, with Arsturn, you can easily engage with your audience while enhancing your brand's online presence. Get started today and discover the future of AI!

Copyright © Arsturn 2025