This folder contains examples demonstrating how to use Ollama models with the Agent Framework.
- Install Ollama: Download and install Ollama from ollama.com
- Start Ollama: Ensure Ollama is running on your local machine
- Pull a model: Run `ollama pull mistral` (or any other model you prefer)
  - For function calling examples, use models that support tool calling, like `mistral` or `qwen2.5`
  - For reasoning examples, use models that support reasoning, like `qwen3:8b`
  - For multimodal examples, use models like `gemma3:4b`
Note: Not all models support all features. Function calling, reasoning, and multimodal capabilities depend on the specific model you're using.
The recommended way to use Ollama with the Agent Framework is via the native `OllamaChatClient` from the `agent-framework-ollama` package. This provides full support for Ollama-specific features like reasoning mode.
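For context on what the native client adds, here is a hedged, standard-library-only sketch of Ollama's own `/api/chat` endpoint with thinking enabled — the kind of request a native client issues under the hood. The field names (`think`, `stream`, the `thinking` reply field) are from Ollama's REST API and worth verifying against its documentation; this is not the `OllamaChatClient` API itself.

```python
import json
import urllib.request

# Hedged sketch of Ollama's native chat endpoint. "think" is the
# Ollama-specific reasoning switch mentioned above; it only has an
# effect with reasoning-capable models such as qwen3.
def native_chat(prompt: str, model: str = "qwen3:8b",
                host: str = "http://localhost:11434") -> dict:
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "think": True,    # ask for a separate reasoning trace
        "stream": False,  # one JSON object instead of a response stream
    }).encode()
    request = urllib.request.Request(
        f"{host}/api/chat", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# With a server running, the reply's message carries both fields:
#   reply = native_chat("Why is the sky blue?")["message"]
#   reply.get("thinking") -> reasoning trace; reply["content"] -> answer
```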
Alternatively, you can use the `OpenAIChatClient` configured to point to your local Ollama server, which may be useful if you're already familiar with the OpenAI client interface.
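Ollama's OpenAI compatibility comes from its `/v1/` endpoint. The sketch below drives that endpoint with only the standard library to show the request shape any OpenAI-style client sends; it is not the `OpenAIChatClient` itself, and the endpoint path and response shape follow the OpenAI chat-completions format.

```python
import json
import urllib.request

# Stdlib sketch of the OpenAI-compatible endpoint that an OpenAI-style
# chat client talks to when pointed at a local Ollama server.
def chat(prompt: str, model: str = "mistral",
         endpoint: str = "http://localhost:11434/v1/") -> str:
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    request = urllib.request.Request(
        endpoint + "chat/completions", data=payload,
        headers={
            "Content-Type": "application/json",
            # OpenAI clients require a token; Ollama ignores its value
            "Authorization": "Bearer ollama",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["choices"][0]["message"]["content"]

# With a server running: print(chat("Why is the sky blue?"))
```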
| File | Description |
|---|---|
| `ollama_agent_basic.py` | Basic Ollama agent with tool calling using the native Ollama Chat Client. Shows both streaming and non-streaming responses. |
| `ollama_agent_reasoning.py` | Ollama agent with reasoning capabilities using the native Ollama Chat Client. Shows how to enable thinking/reasoning mode. |
| `ollama_chat_client.py` | Direct usage of the native Ollama Chat Client with tool calling. |
| `ollama_chat_multimodal.py` | Ollama Chat Client with multimodal (image) input capabilities. |
| `ollama_with_openai_chat_client.py` | Alternative approach using the OpenAI Chat Client configured to use local Ollama models. |
The examples use environment variables for configuration. Set the appropriate variables based on which example you're running:
For examples using the native Ollama Chat Client, set the following environment variables:

- `OLLAMA_HOST`: The base URL for your Ollama server (optional, defaults to `http://localhost:11434`)
  - Example: `export OLLAMA_HOST="http://localhost:11434"`
- `OLLAMA_MODEL_ID`: The model name to use; must be a model you have pulled with Ollama
  - Example: `export OLLAMA_MODEL_ID="qwen2.5:8b"`
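Code following this convention resolves the host with an environment lookup plus the documented default; a minimal stdlib sketch (the fallback model shown here is illustrative only — the examples require `OLLAMA_MODEL_ID` to be set):

```python
import os

# OLLAMA_HOST is optional and falls back to the documented default;
# OLLAMA_MODEL_ID must name a model you have already pulled.
host = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
model_id = os.environ.get("OLLAMA_MODEL_ID", "qwen2.5:8b")  # fallback for illustration
print(f"Using model {model_id!r} on {host}")
```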
For the example using the OpenAI Chat Client (`ollama_with_openai_chat_client.py`), set the following environment variables:

- `OLLAMA_ENDPOINT`: The base URL for your Ollama server with a `/v1/` suffix
  - Example: `export OLLAMA_ENDPOINT="http://localhost:11434/v1/"`
- `OLLAMA_MODEL`: The model name to use; must be a model you have pulled with Ollama
  - Example: `export OLLAMA_MODEL="mistral"`
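Putting the two variables together, a session for `ollama_with_openai_chat_client.py` might look like this (assuming the script is run from this folder with Ollama already running):

```shell
# Point the OpenAI-compatible client at the local Ollama server
export OLLAMA_ENDPOINT="http://localhost:11434/v1/"
export OLLAMA_MODEL="mistral"   # must already be pulled: ollama pull mistral
python ollama_with_openai_chat_client.py
```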