Merged
Pull request overview
This PR adds support for Anthropic's Claude API alongside the existing OpenAI integration, enabling users to work with either provider based on their API key configuration. The changes introduce a unified LLM client interface that abstracts provider-specific differences while maintaining educational transparency about API variations.
Changes:
- Added Anthropic SDK dependency and multi-provider client factory with automatic detection
- Replaced direct OpenAI API calls with provider-agnostic adapter functions across all examples
- Updated documentation to explain dual-provider support and configuration options
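The "automatic detection" mentioned above presumably keys off which API key is present in the environment. A minimal sketch of that idea is below; the function name, preference order, and error message are assumptions, not the actual contents of `src/llm_client.py` (the PR only states that Anthropic is the default).

```python
import os

def detect_provider(env=None):
    """Pick an LLM provider based on which API key is configured.

    Hypothetical sketch -- the real factory in src/llm_client.py may
    differ. Anthropic is preferred when both keys are set, matching
    the PR's stated default.
    """
    env = os.environ if env is None else env
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    if env.get("OPENAI_API_KEY"):
        return "openai"
    raise RuntimeError(
        "No API key found: set ANTHROPIC_API_KEY or OPENAI_API_KEY "
        "(see .env.example)"
    )
```

Keeping detection in one place like this lets every example script stay provider-agnostic: each one asks the factory which provider is active instead of importing an SDK directly.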
Reviewed changes
Copilot reviewed 13 out of 14 changed files in this pull request and generated no comments.
Summary per file:
| File | Description |
|---|---|
| src/openai_client.py | Removed OpenAI-specific client (replaced by multi-provider client) |
| src/llm_client.py | New unified interface supporting both Anthropic and OpenAI with automatic provider detection |
| pyproject.toml | Added anthropic dependency and migrated dev dependencies to dependency-groups |
| examples/translate_ipa_document.py | Updated to use multi-provider client with provider-specific model selection |
| examples/test_connection.py | Converted to test either provider based on configuration |
| examples/function_calling_call.py | Updated function calling example to work with both providers |
| examples/function_calling_basic.py | Updated basic function calling to support both APIs |
| examples/chat_stream.py | Converted streaming example to multi-provider support |
| examples/chat_history_stream.py | Updated interactive chat with streaming for both providers |
| examples/chat_history.py | Converted chat history example to multi-provider interface |
| examples/chat.py | Updated basic chat example to support both providers |
| docs/session_01_setup.md | Documented multi-provider setup, configuration, and usage patterns |
| .env.example | Added example configuration for both API keys with provider selection guidance |
DavidTorresIPA approved these changes on Jan 20, 2026.
Pull Request Summary 🚀
What does this PR do? 📝
Enables both Anthropic and OpenAI API keys in the LLM basics scripts.
Why is this change needed? 🤔
We'd like the option to use Anthropic API keys in the LLM basics tutorial.
How was this implemented? 🛠️
Set Anthropic as the default provider, with Claude Haiku 4.5 as the default model.
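A per-provider default-model table is one simple way to implement the choice described above. This is an illustrative sketch, not the PR's actual code: the Anthropic entry follows the stated Haiku 4.5 default, while the OpenAI model name here is purely a placeholder assumption.

```python
# Hypothetical default-model table; only the Anthropic default
# (Haiku 4.5) comes from the PR description. The OpenAI entry is
# an illustrative placeholder.
DEFAULT_MODELS = {
    "anthropic": "claude-haiku-4-5",
    "openai": "gpt-4o-mini",
}

def default_model(provider="anthropic"):
    """Return the default model for a provider (Anthropic by default)."""
    return DEFAULT_MODELS[provider]
```

Each example script can then call `default_model(provider)` after provider detection, so model names live in one place instead of being repeated across all nine examples.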
How to test or reproduce ? 🧪
Run `just venv` or `uv sync` to update the Python virtual environment, then run `python examples/test_connection.py`. See `docs/session_01_setup.md` for configuration details.
Screenshots (if applicable) 📷
Checklist ✅
Reviewer Emoji Legend
- :code: :smiley: :+1: :100: ...and I want the author to know it! This is a way to highlight positive parts of a code review.
- :star: :star: :star: ...and I am providing reasons why it needs to be addressed, as well as suggested improvements.
- :star: :star: ...and I am providing suggestions for where it could be improved, either in this PR or later.
- :star: ...and consider this a suggestion, not a requirement.
- :question: This should be a fully formed question with sufficient information and context that requires a response.
- :memo: :pick: This does not require any changes and is often better left unsaid. It may include stylistic, formatting, or organization suggestions, and should likely be prevented/enforced by linting if it really matters.
- :recycle: Should include enough context to be actionable, and not be considered a nitpick.