Unleashing the Power of Ollama and Gemma: A Python Playground


In the realm of large language models (LLMs), Ollama and Gemma stand out as two exceptional tools. Ollama serves as a user-friendly interface for running models locally, while Gemma, a powerful LLM developed by Google, offers strong capabilities in text generation, translation, and creative content creation.

Getting Started with Ollama and Gemma

Once you have installed Ollama, you can download any of Ollama's many supported models directly to your machine. This means you don't need to pay for your AI model usage.

In this sample we are going to use Google's Gemma 2 model.

ollama pull gemma2:27b
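
Once the pull completes, you can confirm that the model is available on your machine:

ollama list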

Accessing your local AI model

Once you have downloaded the model, you can run it and access your local AI model just like any other AI system, such as Gemini or ChatGPT.

ollama run gemma2:27b

You can then exit the model again by simply typing /bye.

/bye

Why use a local AI model

For me this boils down to two main factors.

  1. Using a local AI model is free.
  2. Privacy: my local AI models run on my own machine, so none of my data leaves my machine.

Drawbacks

Depending on your machine, a local AI model will respond more slowly than a standard AI chat model on the web.

Code

You can also access these models programmatically. Ollama provides a pip package that lets us call Ollama models directly from our code. In the following example I have created a Python project which does just that.

.env

MODEL_NAME_LATEST=gemma2:27b

requirements.txt

ollama
python-dotenv
colorama
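
These dependencies can be installed from the requirements file in the usual way:

pip install -r requirements.txt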

Code

import os
import colorama
import ollama
from dotenv import load_dotenv

# Initialize colorama for colored text output
colorama.init()

# Load environment variables from a .env file (optional)
load_dotenv()

# Read the model name from the environment
CHAT_MODEL_NAME = os.getenv("MODEL_NAME_LATEST")


class ConversationTurns:
    """
    This class represents a collection of conversation turns.

    A conversation turn is a single interaction between the user and the model.
    It includes the role (user or assistant) and the content of the message.
    """

    def __init__(self):
        self.messages = []  # List to store conversation turns

    def add_turn(self, role, content):
        """
        Adds a new conversation turn to the list.

        Args:
            role (str): The role of the participant (user or assistant).
            content (str): The content of the message.
        """
        self.messages.append(Turn(role, content))  # Create and append a Turn object

    def to_list(self):
        """
        Converts the conversation turns into a list of dictionaries.

        This is useful for sending the conversation history to the Ollama model.

        Returns:
            list: A list of dictionaries representing conversation turns.
        """
        return [message.to_dict() for message in self.messages]  # Convert Turn objects to dicts


class Turn:
    """
    This class represents a single conversation turn.

    It includes the role of the participant (user or assistant) and the content of the message.
    """

    def __init__(self, role, content):
        self.role = role
        self.content = content

    def to_dict(self):
        """
        Converts the turn into a dictionary.

        This dictionary format is expected by the send_chat function.

        Returns:
            dict: A dictionary representing the conversation turn.
        """
        return {
            "role": self.role,
            "content": self.content
        }


def send_chat(turn_history, content):
    """
    Sends a message to the Ollama model and updates the conversation history.

    Args:
        turn_history (ConversationTurns): An object storing the conversation history.
        content (str): The user's message to be sent to the model.

    Returns:
        tuple: A tuple containing the updated conversation history and the model's response.
    """

    turn_history.add_turn("user", content)  # Add user turn to history

    # Use Ollama's chat function to get a response from the specified model
    model_response = ollama.chat(
        model=CHAT_MODEL_NAME,
        messages=turn_history.to_list(),
    )

    turn_history.add_turn("model", model_response['message']['content'])  # Add model turn to history

    # Return the updated history and the model's response
    return turn_history, model_response['message']['content']


# Initialize an empty conversation history object
history = ConversationTurns()

# Chat loop: keep prompting until the user types "exit"
while True:
    user_input = input(colorama.Fore.GREEN + "User: ")
    if user_input.strip().lower() == "exit":
        break
    history, model_answer = send_chat(history, user_input)
    print(colorama.Fore.BLUE + "Model: " + colorama.Fore.WHITE + model_answer)
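
With a model as large as gemma2:27b, waiting for the full reply can make the program feel unresponsive. As a small sketch of an alternative (not part of the project above), the ollama package can also stream the response chunk by chunk by passing stream=True to ollama.chat:

import ollama

# Stream the reply as it is generated instead of waiting for the full response.
# Assumes the gemma2:27b model has already been pulled locally.
stream = ollama.chat(
    model="gemma2:27b",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries a partial piece of the message; print it as it arrives
    print(chunk['message']['content'], end='', flush=True)
print()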

Conclusion

Having a local AI model for testing is invaluable. It allows for efficient experimentation and troubleshooting without relying on external APIs. Ollama’s compatibility with LangChain further enhances this approach, enabling you to test your applications locally using free models and seamlessly transition to paid options when ready for production.
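
As a rough sketch of that LangChain route, assuming the langchain-ollama integration package is installed, pointing a LangChain chat model at the local Ollama server looks something like this:

from langchain_ollama import ChatOllama

# Chat model backed by the local Ollama server; assumes gemma2:27b has been pulled
llm = ChatOllama(model="gemma2:27b")

response = llm.invoke("Summarize what Ollama does in one sentence.")
print(response.content)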


About Linda Lawton

My name is Linda Lawton. I have more than 20 years of experience working as an application developer and a database expert. I have been working with Google APIs since 2012, and I have been contributing to the Google .NET client library since 2013. In 2013 I became a Google Developer Expert for Google Analytics.
