Conversation

@shanto12

Overview

This PR implements a provider-agnostic LLM client interface that enables seamless switching between different LLM providers without code changes, addressing Issue #1.

Changes

1. Provider-Agnostic Interface (src/llm/client.py)

  • Created LLMClient abstract base class with methods:
    • chat() - for chat completions
    • embed() - for embeddings
    • tts() - for text-to-speech (optional)
  • Defined data classes for ChatMessage, ChatResponse, EmbeddingResponse, and TTSResponse (a rough sketch of the interface follows this list)
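
A rough sketch of the interface, assuming simplified signatures (the actual definitions live in src/llm/client.py and may differ; EmbeddingResponse and TTSResponse are omitted here for brevity):

from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import List


@dataclass
class ChatMessage:
    role: str      # "system", "user", or "assistant"
    content: str


@dataclass
class ChatResponse:
    content: str
    model: str
    usage: dict = field(default_factory=dict)


class LLMClient(ABC):
    """Provider-agnostic interface that every adapter implements."""

    @abstractmethod
    def chat(self, messages: List[ChatMessage], **kwargs) -> ChatResponse:
        """Return a chat completion for the given messages."""

    @abstractmethod
    def embed(self, texts: List[str], **kwargs):
        """Return embeddings for the given texts."""

    def tts(self, text: str, **kwargs):
        """Optional text-to-speech; providers without TTS keep this default."""
        raise NotImplementedError("TTS is not supported by this provider")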

2. Provider Adapters (src/llm/adapters.py)

Implemented concrete adapters for three providers (a condensed sketch follows the list):

  • OpenAICompatibleAdapter: Works with the OpenAI API and any OpenAI-compatible service (xAI, DeepSeek, etc.)
    • Supports custom base_url configuration
    • Implements chat, embed, and tts methods
  • GroqAdapter: Fast inference with Groq API
    • Implements chat method
    • Embedding not supported (raises NotImplementedError)
  • OllamaAdapter: Local model inference
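
A condensed sketch of one adapter, assuming the openai Python SDK and the interface sketched above; the real adapters in src/llm/adapters.py may be structured differently, and the default model names below are placeholders:

from typing import List, Optional

from openai import OpenAI

from src.llm.client import ChatMessage, ChatResponse, LLMClient


class OpenAICompatibleAdapter(LLMClient):
    """Targets the OpenAI API or any OpenAI-compatible service via base_url."""

    def __init__(self, api_key: str, base_url: Optional[str] = None,
                 model: Optional[str] = None):
        # base_url is what lets the same adapter point at xAI, DeepSeek, etc.
        self._client = OpenAI(api_key=api_key, base_url=base_url)
        self._model = model or "gpt-4o-mini"  # placeholder default

    def chat(self, messages: List[ChatMessage], **kwargs) -> ChatResponse:
        model = kwargs.pop("model", self._model)
        resp = self._client.chat.completions.create(
            model=model,
            messages=[{"role": m.role, "content": m.content} for m in messages],
            **kwargs,
        )
        return ChatResponse(content=resp.choices[0].message.content, model=resp.model)

    def embed(self, texts: List[str], **kwargs):
        # Simplified to return raw vectors; the real adapter wraps them
        # in EmbeddingResponse.
        model = kwargs.pop("model", "text-embedding-3-small")
        resp = self._client.embeddings.create(model=model, input=texts, **kwargs)
        return [item.embedding for item in resp.data]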

3. Factory Pattern (src/llm/factory.py)

  • LLMClientFactory class for creating provider-specific clients (sketched below)
  • create_client() - Create client with explicit configuration
  • create_from_env() - Create client from environment variables
  • Environment variable support:
    • LLM_PROVIDER - Provider name (openai, groq, ollama)
    • LLM_API_KEY - API key
    • LLM_BASE_URL - Custom base URL (optional)
    • LLM_MODEL - Default model
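
A rough sketch of the factory, assuming the adapters share a uniform constructor and the environment variable names listed above; the real src/llm/factory.py may differ in detail:

import os

from src.llm.adapters import GroqAdapter, OllamaAdapter, OpenAICompatibleAdapter
from src.llm.client import LLMClient


class LLMClientFactory:
    """Builds the right adapter for a given provider name."""

    _ADAPTERS = {
        "openai": OpenAICompatibleAdapter,
        "groq": GroqAdapter,
        "ollama": OllamaAdapter,
    }

    @classmethod
    def create_client(cls, provider, api_key=None, base_url=None, model=None) -> LLMClient:
        try:
            adapter_cls = cls._ADAPTERS[provider.lower()]
        except KeyError:
            raise ValueError(
                f"Unsupported provider: {provider!r}. "
                f"Expected one of {sorted(cls._ADAPTERS)}"
            )
        # Assumes every adapter accepts these keyword arguments.
        return adapter_cls(api_key=api_key, base_url=base_url, model=model)

    @classmethod
    def create_from_env(cls) -> LLMClient:
        # Reads the LLM_* variables documented above.
        return cls.create_client(
            provider=os.environ.get("LLM_PROVIDER", "openai"),
            api_key=os.environ.get("LLM_API_KEY"),
            base_url=os.environ.get("LLM_BASE_URL"),
            model=os.environ.get("LLM_MODEL"),
        )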

4. Tests (tests/test_llm_client.py)

  • Factory tests for all three providers (example tests sketched after this list)
  • Chat completion tests with mocking
  • Error handling tests
  • Provider validation tests
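
A sketch of the kind of tests described above, assuming pytest, unittest.mock, and the factory behaviour sketched earlier (the exact exception type and assertions in tests/test_llm_client.py may differ):

import os
from unittest import mock

import pytest

from src.llm import LLMClientFactory
from src.llm.adapters import GroqAdapter


def test_create_from_env_selects_groq():
    # Patch the environment so the factory picks the Groq adapter.
    env = {"LLM_PROVIDER": "groq", "LLM_API_KEY": "test-key"}
    with mock.patch.dict(os.environ, env, clear=False):
        client = LLMClientFactory.create_from_env()
    assert isinstance(client, GroqAdapter)


def test_unknown_provider_is_rejected():
    # Assumes the factory raises ValueError for unsupported providers.
    with pytest.raises(ValueError):
        LLMClientFactory.create_client(provider="not-a-provider")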

5. Documentation (README.md)

  • Added comprehensive multi-provider LLM support section
  • Configuration examples for all three providers
  • Usage examples showing how to switch providers
  • Clear instructions for environment variable setup

How to Switch Providers Without Code Changes

Simply update environment variables:

# Switch to Groq
export LLM_PROVIDER=groq
export LLM_API_KEY=your_groq_key

# Switch to Ollama (local)
export LLM_PROVIDER=ollama
export LLM_BASE_URL=http://localhost:11434  # Optional

# Switch to OpenAI
export LLM_PROVIDER=openai
export LLM_API_KEY=your_openai_key

Then use the factory method:

from src.llm import LLMClientFactory

client = LLMClientFactory.create_from_env()

The client will automatically use the configured provider!
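
For example, a chat call goes through whichever provider is configured (assuming the chat()/ChatMessage shapes sketched earlier):

from src.llm import LLMClientFactory
from src.llm.client import ChatMessage

client = LLMClientFactory.create_from_env()
response = client.chat([ChatMessage(role="user", content="Say hello in one sentence.")])
print(response.content)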

Testing

Run tests with:

pytest tests/test_llm_client.py

Related Issue

Fixes #1 - Multi-provider LLM support implementation

  • Implemented a factory class (src/llm/factory.py) for creating LLM clients based on provider-specific configuration, with methods for creating clients from environment variables and retrieving the supported providers.
  • Added the initial implementation of the LLM client package with the necessary imports and exports.
  • Added basic tests for the LLM client adapters, covering OpenAI, Groq, and Ollama.
  • Updated the README for clarity and organization, adding new sections and links.

vercel bot commented Oct 26, 2025

@shanto12 is attempting to deploy a commit to the md777 Team on Vercel.

A member of the Team first needs to authorize it.
