8/27/2024

Integrating Ollama with Django Framework

If you're into the fascinating world of AI development, you might have heard about Ollama. It's making waves by providing developers with the ability to run large language models (LLMs) locally without the need for cloud dependencies. In this blog post, we'll dive into how to seamlessly integrate Ollama with the Django framework, a powerful tool for web development that powers numerous popular applications.

What Is Django?

Django is a high-level Python web framework that encourages rapid development & clean, pragmatic design. It provides a robust, flexible, & scalable foundation for building web applications. Major companies like Instagram, Pinterest, and Mozilla use Django for its efficiency & the rich ecosystem of libraries & tools behind it. The framework lets developers focus on creating innovative solutions without getting bogged down by repetitive tasks.

Why Use Ollama?

Integrating AI capabilities into applications is becoming increasingly essential. This is where Ollama steps in. Here are a few key reasons why Ollama is an excellent choice for AI integrations:
  1. Open-source AI Models: Ollama enables developers to use open-source AI models, giving them full control over their AI systems. This allows customization to meet specific needs while ensuring transparency in model behavior.
  2. Local Operations: One of the standout features of Ollama is its ability to run locally. This means data doesn't have to be sent to external servers, ensuring that sensitive information remains private & secure while also reducing latency. Faster response times can significantly enhance user experience in AI-powered features.
  3. Rapid Testing & Deployment: With Ollama, you can quickly build & test your AI applications locally before pushing them live. This fast iteration loop is crucial for producing high-quality software.
Now that we understand the benefits, let's get our hands dirty & walk through the process of integrating Ollama within a Django application.

Setting Up Your Environment

Before diving into coding, make sure you have Python & Django installed on your machine. You can easily create a virtual environment for Python projects to avoid dependency conflicts. Here’s how:
```shell
# Create a virtual environment
python3 -m venv env
source env/bin/activate
```
Next, let's install Django & Ollama. Use the following commands:
```shell
pip install django ollama
```
With that done, you can start a new Django project:
```shell
django-admin startproject ollamaproject
cd ollamaproject
python manage.py startapp chat
```
You should now have a basic Django structure set up. Next, let's ensure that our new app is recognized by the project. Open the `settings.py` file in the `ollamaproject` directory & add `'chat'` to the `INSTALLED_APPS` list:

```python
INSTALLED_APPS = [
    ...,
    'chat',
]
```

Docker and Ollama's Installation

For Ollama to work with Django, we can run it inside a Docker container. This is incredibly helpful as it keeps everything organized. Here’s how to pull the Ollama Docker image:
```shell
docker pull ollama/ollama
```
After pulling the image, start the Ollama container using the following command:
```shell
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
Once the container is running, execute the following command to run your desired AI model, in our case, the latest Llama 3 model:
```shell
docker exec -it ollama ollama run llama3
```
With this setup, Ollama is live & accessible on your local machine!

Creating the Chatbot Application

To demonstrate how Ollama and Django can work together, let's create a simple chat application where users can interact with the Llama 3 model.

Step 1: Build the Chat View

Edit the `views.py` file in the `chat` directory to create a view that handles user input & generates a response from the AI model:

```python
from django.shortcuts import render
from django.http import StreamingHttpResponse
from django.views.decorators.csrf import csrf_exempt

from .ollama_api import generate_response


@csrf_exempt
def chat_view(request):
    if request.method == "POST":
        user_input = request.POST["user_input"]
        prompt = f"User: {user_input}\nAI:"
        response = generate_response(prompt)
        return StreamingHttpResponse(response, content_type='text/plain')
    return render(request, "chat.html")
```
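A note on why this works: `StreamingHttpResponse` accepts any iterator & writes each yielded chunk to the client as it is produced, so the browser sees the reply appear token by token. A plain-Python sketch of that consumption pattern (the token list here is invented for illustration):

```python
# StreamingHttpResponse iterates a generator and flushes each yielded
# chunk to the client; plain Python shows the same consumption pattern.
def token_stream():
    # Hypothetical tokens standing in for pieces of a model's reply.
    for token in ["The", " answer", " is", " 42."]:
        yield token

received = []
for chunk in token_stream():
    received.append(chunk)  # Django would write each chunk to the socket here

print("".join(received))
```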

Step 2: Define the Ollama API Interaction

In order to connect to the Ollama API, create a new Python file called `ollama_api.py` in the `chat` directory. This script will define how to generate responses from the Llama 3 model:

```python
import ollama


def generate_response(prompt):
    stream = ollama.chat(
        model='llama3',
        messages=[{'role': 'user', 'content': prompt}],
        stream=True,
    )
    for chunk in stream:
        yield chunk['message']['content']
```
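If you want to exercise `generate_response`-style code without a running model, you can substitute a stub that yields dicts in the same shape `ollama.chat` streams: each chunk carries its text under `chunk['message']['content']`. A minimal sketch, with the stub & its text entirely made up:

```python
def fake_chat_stream():
    # Stand-in for ollama.chat(..., stream=True): yields chunks shaped
    # like {'message': {'content': ...}}, the same keys the real
    # streaming API uses.
    for piece in ["Hel", "lo", " there."]:
        yield {'message': {'content': piece}}


def collect(stream):
    # Join the streamed pieces exactly as a consumer of the view would.
    return "".join(chunk['message']['content'] for chunk in stream)


print(collect(fake_chat_stream()))
```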

Step 3: Configure URL Routing

Now, we need to let Django know about our new view. Open `urls.py` in the `ollamaproject` directory & add the following:

```python
from django.contrib import admin
from django.urls import path

from chat.views import chat_view

urlpatterns = [
    path('admin/', admin.site.urls),
    path('chat/', chat_view, name='chat'),
]
```
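If you'd prefer to keep the app's routes inside the app itself, the conventional alternative is a `chat/urls.py` that the project-level `urls.py` pulls in with `include()`. A sketch of that layout, following the standard Django convention (both file paths shown as comments):

```python
# chat/urls.py
from django.urls import path

from .views import chat_view

urlpatterns = [
    path('', chat_view, name='chat'),
]

# ollamaproject/urls.py
from django.urls import include, path

urlpatterns = [
    path('chat/', include('chat.urls')),
]
```

This keeps the project file small as you add more apps, since each app owns its own URL patterns.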

Step 4: Creating the Template

Create an HTML file named `chat.html` inside a new directory called `templates` in the `chat` app's folder. This file will provide a simple frontend for users to interact with.

```html
<!DOCTYPE html>
<html>
<head>
    <title>Django Chat App</title>
    <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
    <script>
        $(document).ready(function () {
            $("#chat-form").submit(function (event) {
                event.preventDefault();
                var userInput = $("#user-input").val();
                $("#user-input").val("");
                $("#chat-history").append("<p><strong>User:</strong></p><p>" + userInput + "</p>");
                var aiResponseElement = $("<p>");
                $("#chat-history").append($("<strong>").text("AI:"));
                $("#chat-history").append(aiResponseElement);
                $.ajax({
                    type: "POST",
                    url: "/chat/",
                    data: { user_input: userInput },
                    xhrFields: {
                        onprogress: function (xhr) {
                            var response = xhr.target.responseText;
                            aiResponseElement.html(response);
                        },
                    },
                });
            });
        });
    </script>
</head>
<body>
    <div id="chat-history"></div>
    <form id="chat-form">
        <input type="text" id="user-input" name="user_input" required />
        <button type="submit">Send</button>
    </form>
</body>
</html>
```
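One subtlety in the script above: `onprogress` hands you the cumulative `responseText` (everything received so far), not just the newest chunk. That's why the handler replaces the AI paragraph's contents on each event instead of appending. A quick Python model of that behavior, with the sample updates invented:

```python
def render_progress(updates):
    # Each update is the full response text so far, mirroring
    # xhr.responseText in the template's onprogress handler.
    shown = ""
    for response_text in updates:
        shown = response_text  # overwrite rather than append
    return shown


print(render_progress(["Hel", "Hello", "Hello there."]))
```

If the handler appended each update instead, the user would see "HelHelloHello there." rather than the final text.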

Step 5: Running the Application

Now that we’ve built our chat functionality, we need to run the Django server to see everything in action:
```shell
python manage.py runserver
```
Navigate to `http://localhost:8000/chat/` in your web browser, & you should see the chat interface. Here, you can send messages & receive responses from the Llama 3 model.

What's Next?

Congratulations! You’ve successfully integrated Ollama with Django for a simple chat application. Here are a few suggestions to enhance your newly created app:
  • Improve Styling: Add CSS to enhance the appearance of your chat interface. Make it more visually appealing & user-friendly.
  • Performance Optimization: Explore techniques like caching or asynchronous processing to improve response times.
  • Expand Features: Integrate additional features provided by Ollama, such as multimodal responses or different AI models for varied functionalities.
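As a starting point for the caching idea above: identical prompts can reuse an earlier answer once a full response has been collected (a streaming generator itself can't be cached usefully, since it's consumed on first use). A minimal sketch using `functools.lru_cache`, with the model round-trip replaced by a made-up stand-in:

```python
import functools

calls = {"count": 0}


def expensive_model_call(prompt):
    # Hypothetical stand-in for a full (non-streaming) model round-trip.
    calls["count"] += 1
    return f"AI reply to: {prompt}"


@functools.lru_cache(maxsize=128)
def cached_response(prompt):
    # Identical prompts hit the cache instead of the model.
    return expensive_model_call(prompt)


print(cached_response("hello"))
print(cached_response("hello"))  # served from cache; no second model call
print(calls["count"])
```

In a real Django app you'd more likely use the framework's cache backend keyed on the prompt, but the principle is the same.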

Unlocking Potential with Arsturn

As you continue your journey into the world of AI development, consider utilizing powerful tools like Arsturn. Arsturn offers a NO-CODE AI chatbot builder that allows you to create conversational AI chatbots effortlessly within minutes. Whether you're enhancing your online brand or looking to boost engagement and conversions, Arsturn empowers users to connect meaningfully with their audience.

Why Choose Arsturn?

  • Effortless Creation: Develop custom chatbots tailored to your business needs without needing technical skills.
  • Comprehensive Analytics: Gain valuable insights into your audience's interests and optimize your strategies accordingly.
  • Full Customization: Reflect your brand identity with customizable chatbots that enhance user experience.
  • Seamless Integration: Easily integrate your chatbots across various platforms to engage your customers effectively.
Arsturn is perfect for influencers, businesses, or anyone looking to make their interactions smarter and more engaging. Check it out today & take the first step in harnessing the power of AI for your projects!

Final Thoughts

Integrating Ollama with Django allows you to create dynamic & responsive applications that harness the power of AI. As technology evolves, tools like Ollama & Arsturn will undoubtedly play a crucial role in democratizing access to advanced AI capabilities, pushing the boundaries of what can be achieved in your projects. So why wait? Dive in, explore, & create AI-driven solutions that can revolutionize your processes.
For more insightful resources or assistance, visit Arsturn today!

Copyright © Arsturn 2024