feat: add LiteLLM provider adapter (v0.2.6) #14
Merged
Adds LiteLLMAdapter with full sync/async patch support for litellm.completion and litellm.acompletion, including streaming. Token extraction follows the OpenAI-compatible format LiteLLM uses (prompt_tokens/completion_tokens). 22 TDD tests cover adapter interface, token extraction, stream wrapping, patch lifecycle, and end-to-end cost recording. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
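Since LiteLLM mirrors the OpenAI response schema, token extraction reduces to reading `usage.prompt_tokens` and `usage.completion_tokens`. A minimal sketch of that logic (the helper name `extract_tokens` and the zero fallback are illustrative, not the adapter's actual code):

```python
from types import SimpleNamespace


def extract_tokens(response) -> tuple[int, int]:
    """Pull (prompt_tokens, completion_tokens) from an OpenAI-compatible response.

    LiteLLM follows the OpenAI schema, so usage carries prompt_tokens and
    completion_tokens; fall back to 0 when usage is missing entirely.
    """
    usage = getattr(response, "usage", None)
    if usage is None:
        return 0, 0
    return (
        getattr(usage, "prompt_tokens", 0) or 0,
        getattr(usage, "completion_tokens", 0) or 0,
    )


# Minimal mock mirroring the shape of a litellm.completion() response.
mock = SimpleNamespace(usage=SimpleNamespace(prompt_tokens=12, completion_tokens=34))
print(extract_tokens(mock))  # → (12, 34)
```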
Codecov Report: ✅ All modified and coverable lines are covered by tests.
Adds test_patch_coverage.py covering all previously uncovered branches: error paths (RuntimeError/AttributeError/Exception swallowing), async stream edge cases (no-usage fallback, broken chunk attrs), and all new LiteLLM sync/async wrappers, including the async streaming path. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
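The exception-swallowing behavior these tests exercise follows a common metering-wrapper pattern: errors raised by the wrapped LLM call propagate to the caller, but failures inside the cost-recording bookkeeping are silently absorbed. A generic sketch (function names are illustrative, not the library's API):

```python
import functools


def tracked(fn, record_cost):
    """Wrap an LLM call so failures in cost recording never break the call.

    Exceptions raised by fn itself still propagate normally; only the
    bookkeeping step (record_cost) is guarded.
    """
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        response = fn(*args, **kwargs)  # user-facing errors still raise here
        try:
            record_cost(response)
        except Exception:
            pass  # metering must never break the completion call
        return response
    return wrapper


def flaky_recorder(response):
    raise RuntimeError("pricing table missing")


call = tracked(lambda prompt: f"echo: {prompt}", flaky_recorder)
print(call("hi"))  # → echo: hi  (the recorder's RuntimeError was swallowed)
```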
Covers the ImportError branch (lines 23-24) hit when litellm is absent, by reloading the module with litellm blocked in sys.modules. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
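The blocking trick relies on documented CPython behavior: when `sys.modules[name]` is `None`, any import of that name raises `ImportError`, so the package does not need to be absent from the environment. A standalone demonstration of the technique (the helper below is illustrative; the real test additionally reloads the adapter module):

```python
import sys


def import_blocked(name: str) -> bool:
    """Return True if importing `name` raises ImportError while blocked."""
    saved = sys.modules.get(name)
    sys.modules[name] = None  # a None entry forces ImportError on import
    try:
        __import__(name)
        return False
    except ImportError:
        return True
    finally:
        # Restore the previous state so later imports behave normally.
        if saved is not None:
            sys.modules[name] = saved
        else:
            sys.modules.pop(name, None)


print(import_blocked("litellm"))  # → True
```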
- docs/integrations/litellm.md: add full guide with basic usage, async, streaming, budget enforcement, fallback, multi-provider, and LangGraph
- mkdocs.yml: add LiteLLM to nav (before LangGraph), reorder integrations
- index.md: add LiteLLM install tab, update feature card, supported models section, and "What's New in v0.2.6" with LiteLLM + budgeted_graph cards
- installation.md: add litellm extra, dependency table row, troubleshooting
- quickstart.md: add LiteLLM example, update frameworks section with budgeted_graph helper
- integrations/langgraph.md: document budgeted_graph() convenience helper
- how-it-works.md: add LiteLLM to patched endpoints and token extraction
- extending.md: note that LiteLLM is now built-in
- changelog.md: document LiteLLM adapter and LangGraph helper under 0.2.6

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…d_graph()
- examples/litellm_basic.py: 6 sections covering basic tracking, multi-provider, streaming, fallback, call-count limit, and async usage with LiteLLM
- examples/langgraph_demo.py: updated to showcase the new budgeted_graph() convenience helper alongside direct budget() usage and fallback model demo

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Examples use narrative import ordering for readability and should not be subject to strict linting rules. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Summary
- `shekel/providers/litellm.py` — `LiteLLMAdapter` implementing the `ProviderAdapter` ABC
- Patches `litellm.completion` and `litellm.acompletion` (module-level functions, not class methods)
- `stream_options={"include_usage": True}` injection for streaming calls
- Registered in `shekel/providers/__init__.py` behind `try/except ImportError`
- `litellm = ["litellm>=1.0.0"]` optional extra and mypy override added to `pyproject.toml`
- `tests/providers/conftest.py` with `MockLiteLLMResponse`, `MockLiteLLMChunk`, `make_litellm_response`, `make_litellm_stream`

Design notes
- Token extraction reuses `_extract_openai_tokens()` in the wrappers
- Model names keep LiteLLM's provider-prefixed form (`openai/gpt-4o`) — passed through as-is to `_pricing`
- `install_patches()`: guarded by `"litellm_sync" not in _patch._originals`

Test plan
- `pytest tests/providers/test_litellm_adapter.py -v` — 22/22 pass
- `pytest tests/ --ignore=tests/integrations/test_groq_integration.py --ignore=tests/integrations/test_gemini_integration.py --ignore=tests/performance/ -q` — 405 passed, 0 failures
- `ruff check` — clean

🤖 Generated with Claude Code
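The `install_patches()` guard described above follows the standard idempotent monkey-patch pattern: stash the original callable under a key, and treat the key's presence as the "already patched" flag. A generic sketch of that pattern under those assumptions (the registry and function names here are illustrative, not shekel's internals):

```python
import types

_originals: dict[str, object] = {}


def install_patch(module, attr: str, key: str, wrapper_factory):
    """Patch module.attr once; repeated calls are no-ops.

    Stashing the original under `key` serves double duty: it enables
    clean uninstall and acts as the idempotency guard.
    """
    if key in _originals:
        return  # already patched; never double-wrap
    original = getattr(module, attr)
    _originals[key] = original
    setattr(module, attr, wrapper_factory(original))


def uninstall_patch(module, attr: str, key: str):
    original = _originals.pop(key, None)
    if original is not None:
        setattr(module, attr, original)


# Demo on a stand-in module object.
fake = types.SimpleNamespace(completion=lambda: "real")
install_patch(fake, "completion", "litellm_sync",
              lambda orig: (lambda: "wrapped:" + orig()))
install_patch(fake, "completion", "litellm_sync",  # second call is a no-op
              lambda orig: (lambda: "double:" + orig()))
print(fake.completion())  # → wrapped:real
uninstall_patch(fake, "completion", "litellm_sync")
print(fake.completion())  # → real
```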