
Conversation

@adityapuranik99

Summary

Add LiteLLM as a new provider so users can connect their LiteLLM proxy server and access 100+ LLM providers through a single OpenAI-compatible API. This is useful for users who want to reach services like GitHub Copilot (request-based pricing), or any other provider LiteLLM routes to, from one endpoint.
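For context, a LiteLLM proxy exposes the standard OpenAI chat-completions API, so any OpenAI-compatible client can reach it directly. A minimal sketch of a request through the proxy (the URL, model name, and response handling below are illustrative, not taken from this PR's code):

```ts
// Hypothetical smoke test against a local LiteLLM proxy.
// The proxy speaks the OpenAI chat-completions API, so a plain fetch works.
const res = await fetch('http://localhost:4000/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    // LiteLLM can run with or without a key; the header follows the OpenAI convention.
    Authorization: `Bearer ${process.env.LITELLM_API_KEY ?? ''}`,
  },
  body: JSON.stringify({
    model: 'gpt-4o', // whichever model the proxy is configured to route
    messages: [{ role: 'user', content: 'ping' }],
  }),
})
const data = await res.json()
console.log(data.choices?.[0]?.message?.content)
```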

Fixes #(issue number if applicable)

Type of Change

  • Bug fix
  • New feature
  • Breaking change
  • Documentation
  • Other: ___________

Testing

  • Set LITELLM_BASE_URL=http://localhost:4000 in .env
  • Verify LiteLLM models appear with litellm/ prefix in Agent block model selector (a scripted version of this check is sketched after this list)
  • Verify "LiteLLM" option appears in Copilot model selector with 🚅 icon
  • Test chat completion through Agent block with a LiteLLM model
  • Verify streaming works
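A scripted version of the model-discovery check above might look like this; the route path comes from the Changes section below, but the app origin and response shape are assumptions:

```ts
// Hypothetical check that the discovery route returns litellm/-prefixed ids.
const res = await fetch('http://localhost:3000/api/providers/litellm/models')
const { models } = (await res.json()) as { models: string[] } // assumed shape
if (!models.every((m) => m.startsWith('litellm/'))) {
  throw new Error('expected every model id to carry the litellm/ prefix')
}
console.log(`discovered ${models.length} LiteLLM models`)
```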

Checklist

  • Code follows project style guidelines
  • Self-reviewed my changes
  • Tests added/updated and passing
  • No new warnings introduced
  • I confirm that I have read and agree to the terms outlined in the Contributor License Agreement (CLA)

Changes

Agent blocks (Provider integration):

  • New provider: apps/sim/providers/litellm/
  • API route for model discovery: /api/providers/litellm/models
  • Environment variables: LITELLM_BASE_URL, LITELLM_API_KEY (optional); a sketch of how they fit together follows this list
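A rough sketch of how the two environment variables could be consumed (the variable names are from this PR; the resolution logic itself is illustrative, not the actual implementation):

```ts
// Illustrative: how the provider could resolve its configuration.
// LITELLM_BASE_URL gates the whole feature; the key is optional because a
// local proxy may run without authentication.
const baseUrl = process.env.LITELLM_BASE_URL // e.g. http://localhost:4000
const apiKey = process.env.LITELLM_API_KEY // optional

export const litellmEnabled = Boolean(baseUrl)

export function litellmHeaders(): Record<string, string> {
  const headers: Record<string, string> = { 'Content-Type': 'application/json' }
  if (apiKey) headers.Authorization = `Bearer ${apiKey}`
  return headers
}
```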

Copilot integration:

  • Added LiteLLM to valid provider IDs
  • Added LiteLLM to model selector options
  • Updated API validation schema to accept the new provider ID (see the sketch after this list)
  • Added 🚅 icon for LiteLLM
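The validation-schema update presumably just widens the set of accepted provider IDs. A hedged sketch, assuming a zod-style enum (the sibling IDs are placeholders, not taken from the diff):

```ts
import { z } from 'zod'

// Hypothetical: the Copilot provider-ID enum after this PR. Only 'litellm'
// comes from the change description; the other IDs are placeholders.
const copilotProviderSchema = z.enum(['openai', 'anthropic', 'litellm'])

copilotProviderSchema.parse('litellm') // passes once the ID is added
```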

@greptile-apps
Contributor

greptile-apps bot commented Jan 17, 2026

Greptile Summary

Added LiteLLM as a new provider to enable users to connect their LiteLLM proxy server for accessing 100+ LLM providers through a unified OpenAI-compatible API.

Key Changes:

  • Provider Implementation (providers/litellm/): Comprehensive provider following the existing OpenAI/vLLM pattern with tool support, streaming, JSON schema responses, and proper error handling
  • API Route (/api/providers/litellm/models): Dynamic model discovery endpoint with blacklist filtering
  • Copilot Integration: Added LiteLLM to valid provider IDs, model selector options, and API validation schema with 🚅 icon
  • Environment Configuration: New LITELLM_BASE_URL and LITELLM_API_KEY environment variables
  • Model Management: Dynamic model updates through the provider store with the litellm/ prefix pattern (roughly as sketched below)
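The prefix pattern from the last bullet, roughly (the response shape follows the OpenAI /v1/models convention; the helper names are illustrative):

```ts
// Illustrative: namespace raw proxy model ids for the provider store.
type LiteLLMModelsResponse = { data: { id: string }[] } // OpenAI-style /v1/models shape

function toStoreIds(raw: LiteLLMModelsResponse): string[] {
  return raw.data.map((m) => `litellm/${m.id}`)
}

// At request time the namespace is stripped again before calling the proxy:
const upstreamModel = 'litellm/gpt-4o'.replace(/^litellm\//, '') // 'gpt-4o'
```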

Implementation Quality:
The implementation demonstrates strong consistency with the existing codebase patterns. The LiteLLM provider closely mirrors the vLLM implementation (both use OpenAI-compatible APIs), reuses shared utilities like createOpenAICompatibleStream, and follows established patterns for tool execution, streaming responses, and error handling. The code includes comprehensive logging, proper initialization checks, and graceful fallbacks when the service is unavailable.
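The graceful-fallback behavior described above likely reduces to failing soft when the proxy is unreachable; a sketch under that assumption (function name and logging are illustrative):

```ts
// Illustrative: fail soft when the proxy is unreachable so startup never breaks.
async function discoverLiteLLMModels(baseUrl: string): Promise<string[]> {
  try {
    const res = await fetch(`${baseUrl}/v1/models`)
    if (!res.ok) throw new Error(`LiteLLM responded ${res.status}`)
    const body = (await res.json()) as { data: { id: string }[] }
    return body.data.map((m) => `litellm/${m.id}`)
  } catch (err) {
    console.warn('LiteLLM unavailable, continuing without dynamic models', err)
    return [] // graceful fallback: empty list instead of a thrown error
  }
}
```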

Confidence Score: 5/5

  • This PR is safe to merge with minimal risk
  • The implementation follows established patterns from similar providers (vLLM, OpenRouter), properly handles errors and edge cases, includes comprehensive logging, uses existing utilities for code reuse, and integrates cleanly into the provider registry without breaking changes. All changes are additive and well-isolated.
  • No files require special attention

Important Files Changed

  • apps/sim/providers/litellm/index.ts: LiteLLM provider implementation with OpenAI-compatible API integration, tool handling, streaming support, and error management
  • apps/sim/app/api/providers/litellm/models/route.ts: API endpoint for dynamic model discovery with error handling, provider blacklist checking, and model filtering
  • apps/sim/providers/registry.ts: Registers the LiteLLM provider with an initialization pattern consistent with other providers
  • apps/sim/lib/copilot/config.ts: Adds litellm to the valid provider IDs array for Copilot configuration validation
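For orientation, a route handler matching the description of models/route.ts might look roughly like this. Only the route path and the idea of blacklist-filtered discovery come from the PR; the blacklist contents and response shape are assumptions:

```ts
// Illustrative Next.js route handler for /api/providers/litellm/models.
import { NextResponse } from 'next/server'

const BLACKLIST = ['embedding', 'whisper'] // placeholder patterns

export async function GET() {
  const baseUrl = process.env.LITELLM_BASE_URL
  if (!baseUrl) return NextResponse.json({ models: [] })

  const res = await fetch(`${baseUrl}/v1/models`)
  const body = (await res.json()) as { data: { id: string }[] }
  const models = body.data
    .filter((m) => !BLACKLIST.some((b) => m.id.includes(b)))
    .map((m) => `litellm/${m.id}`)

  return NextResponse.json({ models })
}
```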

Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant AgentBlock as Agent Block
    participant LiteLLMProvider as LiteLLM Provider
    participant LiteLLMProxy as LiteLLM Proxy Server
    participant LLMBackend as Underlying LLM

    User->>AgentBlock: Execute with litellm/model-name
    AgentBlock->>LiteLLMProvider: executeRequest(request)
    
    alt Initialization (first time)
        LiteLLMProvider->>LiteLLMProxy: GET /v1/models
        LiteLLMProxy-->>LiteLLMProvider: Available models list
        LiteLLMProvider->>LiteLLMProvider: Store models with litellm/ prefix
    end
    
    LiteLLMProvider->>LiteLLMProvider: Strip litellm/ prefix from model
    LiteLLMProvider->>LiteLLMProvider: Build OpenAI-compatible payload
    
    alt Streaming Request
        LiteLLMProvider->>LiteLLMProxy: POST /v1/chat/completions (stream=true)
        LiteLLMProxy->>LLMBackend: Forward to actual provider
        LLMBackend-->>LiteLLMProxy: Stream chunks
        LiteLLMProxy-->>LiteLLMProvider: Stream chunks
        LiteLLMProvider->>LiteLLMProvider: Create ReadableStream
        LiteLLMProvider-->>AgentBlock: StreamingExecution
    else Non-streaming with Tools
        loop Tool Call Iterations
            LiteLLMProvider->>LiteLLMProxy: POST /v1/chat/completions
            LiteLLMProxy->>LLMBackend: Forward request
            LLMBackend-->>LiteLLMProxy: Response with tool_calls
            LiteLLMProxy-->>LiteLLMProvider: Response
            LiteLLMProvider->>LiteLLMProvider: Execute tools locally
            LiteLLMProvider->>LiteLLMProvider: Add tool results to messages
        end
        LiteLLMProvider-->>AgentBlock: ProviderResponse
    end
    
    AgentBlock-->>User: Execution result
```
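The non-streaming tool loop in the diagram, reduced to a TypeScript-level sketch (message and tool types are simplified, and executeToolLocally stands in for the project's actual tool executor):

```ts
// Sketch of the non-streaming tool loop shown in the diagram above.
type ToolCall = { id: string; function: { name: string; arguments: string } }
type ChatMessage = {
  role: 'user' | 'assistant' | 'tool' | 'system'
  content: string | null
  tool_calls?: ToolCall[]
  tool_call_id?: string
}

// Placeholder for the project's real local tool executor.
declare function executeToolLocally(call: ToolCall): Promise<unknown>

async function runWithTools(
  callProxy: (msgs: ChatMessage[]) => Promise<ChatMessage>, // POST /v1/chat/completions
  messages: ChatMessage[]
): Promise<ChatMessage> {
  while (true) {
    const reply = await callProxy(messages)
    messages.push(reply)
    if (!reply.tool_calls?.length) return reply // no tool requests: loop ends
    for (const call of reply.tool_calls) {
      const result = await executeToolLocally(call)
      messages.push({ role: 'tool', tool_call_id: call.id, content: JSON.stringify(result) })
    }
  }
}
```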

@greptile-apps
Contributor

greptile-apps bot commented Jan 17, 2026

Greptile's behavior is changing!

From now on, if a review finishes with no comments, we will not post an additional "statistics" comment to confirm that our review found nothing to comment on. However, you can confirm that we reviewed your changes in the status check section.

This feature can be toggled off in your Code Review Settings by deselecting "Create a status check for each PR".

@vercel

vercel bot commented Jan 17, 2026

Someone is attempting to deploy a commit to the Sim Team on Vercel.

A member of the Team first needs to authorize it.

@adityapuranik99
Author

@waleedlatif1 please help merge, thanks!

