feat: Add multi-provider LLM support (Fixes #1) #17
## Overview
This PR implements a provider-agnostic LLM client interface that enables seamless switching between different LLM providers without code changes, addressing Issue #1.
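The interface at the core of the PR could be sketched as follows; method signatures and return types are simplified assumptions for illustration (the real definitions live in `src/llm/client.py`):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


# Illustrative stand-in; the PR defines ChatMessage alongside ChatResponse,
# EmbeddingResponse, and TTSResponse in src/llm/client.py
@dataclass
class ChatMessage:
    role: str
    content: str


class LLMClient(ABC):
    """Provider-agnostic interface; each provider adapter implements these."""

    @abstractmethod
    def chat(self, messages: list[ChatMessage]) -> str: ...

    @abstractmethod
    def embed(self, text: str) -> list[float]: ...

    def tts(self, text: str) -> bytes:
        # Optional capability: adapters without TTS inherit this default
        raise NotImplementedError("This provider does not support text-to-speech")
```

Because `tts()` has a default body, only `chat()` and `embed()` are mandatory for a new adapter.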
## Changes
### 1. Provider-Agnostic Interface (`src/llm/client.py`)

- `LLMClient` abstract base class with methods:
  - `chat()` - for chat completions
  - `embed()` - for embeddings
  - `tts()` - for text-to-speech (optional)
- Data classes: `ChatMessage`, `ChatResponse`, `EmbeddingResponse`, and `TTSResponse`

### 2. Provider Adapters (`src/llm/adapters.py`)

- Implemented concrete adapters for three providers: OpenAI, Groq, and Ollama
- Custom `base_url` configuration

### 3. Factory Pattern (`src/llm/factory.py`)

- `LLMClientFactory` class for creating provider-specific clients:
  - `create_client()` - create a client with explicit configuration
  - `create_from_env()` - create a client from environment variables
- Supported environment variables:
  - `LLM_PROVIDER` - provider name (`openai`, `groq`, `ollama`)
  - `LLM_API_KEY` - API key
  - `LLM_BASE_URL` - custom base URL (optional)
  - `LLM_MODEL` - default model

### 4. Tests (`tests/test_llm_client.py`)

### 5. Documentation (`README.md`)

## How to Switch Providers Without Code Changes
Simply update environment variables:
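For example, to switch the whole application from OpenAI to Groq (the key and model name below are illustrative placeholders):

```shell
export LLM_PROVIDER=groq
export LLM_API_KEY=your-groq-api-key        # replace with a real key
export LLM_MODEL=llama-3.1-8b-instant       # illustrative model name
```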
Then use the factory method:
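The call might look like the sketch below. The stub adapter classes here only stand in for the real ones in `src/llm/adapters.py` so the example is self-contained; the factory's registry-plus-dispatch shape is an assumption consistent with the methods listed above:

```python
import os


# Stubs standing in for the real adapters in src/llm/adapters.py
class OpenAIClient:
    def __init__(self, **cfg): self.cfg = cfg

class GroqClient:
    def __init__(self, **cfg): self.cfg = cfg

class OllamaClient:
    def __init__(self, **cfg): self.cfg = cfg


class LLMClientFactory:
    # Maps LLM_PROVIDER values to adapter classes
    _registry = {"openai": OpenAIClient, "groq": GroqClient, "ollama": OllamaClient}

    @classmethod
    def create_client(cls, provider, **config):
        try:
            return cls._registry[provider](**config)
        except KeyError:
            raise ValueError(f"Unknown provider: {provider!r}")

    @classmethod
    def create_from_env(cls):
        # Reads the LLM_* variables documented above
        return cls.create_client(
            os.environ["LLM_PROVIDER"],
            api_key=os.environ.get("LLM_API_KEY"),
            base_url=os.environ.get("LLM_BASE_URL"),
            model=os.environ.get("LLM_MODEL"),
        )


os.environ["LLM_PROVIDER"] = "ollama"
os.environ["LLM_BASE_URL"] = "http://localhost:11434"
client = LLMClientFactory.create_from_env()
```

Switching providers is then purely a deployment-time change: redeploy with different `LLM_*` variables and the same code path returns a different adapter.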
The client will automatically use the configured provider!
## Testing
Run the test suite (assuming pytest is the runner) with `pytest tests/test_llm_client.py -v`.
## Related Issue
Fixes #1 - Multi-provider LLM support implementation