fix: [Security Fix] remove litellm #1239
Conversation
Remove usage of the agentd library and its OpenAI patching/tool decorator. Instantiate AsyncOpenAI directly (HTTP/2 or HTTP/1.1 fallback) and remove imports of agentd.patch and agentd.tool_decorator. Add runtime dependencies for openai, pyyaml, and tiktoken in pyproject.toml to support direct OpenAI client usage.
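The HTTP/2-or-HTTP/1.1 fallback mentioned above hinges on whether httpx's optional HTTP/2 support is available (httpx only speaks HTTP/2 when the `h2` package is installed). A minimal sketch of that capability probe, with illustrative names not taken from the PR:

```python
# Hypothetical probe for the HTTP/2-vs-HTTP/1.1 decision: httpx needs
# the optional 'h2' package to negotiate HTTP/2, so check for it
# before constructing the client.
import importlib.util


def http2_available() -> bool:
    """True if httpx's HTTP/2 dependency (the 'h2' package) is importable."""
    return importlib.util.find_spec("h2") is not None


use_http2 = http2_available()
print("HTTP/2" if use_http2 else "HTTP/1.1 (fallback)")
```

The actual code then passes the result into the client construction (`AsyncOpenAI()` for HTTP/2, or an explicit `httpx.AsyncClient(http2=False, ...)` otherwise), as shown in the diff later in this thread.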
```diff
 if use_http2:
-    self._patched_async_client = patch_openai_with_mcp(AsyncOpenAI())
+    self._patched_async_client = AsyncOpenAI()
```
awaiting the integration tests results
Remove the `litellm` package from both Dockerfile.langflow and Dockerfile.langflow.dev to avoid conflicts/compatibility issues. In Dockerfile.langflow the `pip uninstall -y litellm` was added to the RUN that installs `uv` and prepares /app/langflow-data; in Dockerfile.langflow.dev a `RUN uv pip uninstall litellm` line was added after the dependency sync. This ensures built images do not include `litellm`.
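Only the `Dockerfile.langflow.dev` hunk is quoted later in this thread; for `Dockerfile.langflow`, the description above implies a layer roughly like the following. This fragment is purely illustrative — the actual contents of that RUN are not shown here:

```dockerfile
# Hypothetical Dockerfile.langflow fragment: remove litellm in the same
# layer that installs uv and prepares /app/langflow-data. The -y flag
# keeps the build non-interactive.
RUN pip install uv \
    && mkdir -p /app/langflow-data \
    && pip uninstall -y litellm
```

Doing the uninstall in the same RUN as the install keeps it from surviving in an earlier image layer.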
@phact Tests Passed!
Pull request overview
This PR aims to address a LiteLLM-related security concern by removing the agentd dependency and associated OpenAI patching, ensuring litellm is removed from Langflow Docker images, and making previously-transitive dependencies explicit in pyproject.toml.
Changes:
- Remove `agentd` usage (tool decorator + OpenAI MCP patch hook) and update the OpenAI client initialization accordingly.
- Ensure `litellm` is uninstalled in `Dockerfile.langflow` and `Dockerfile.langflow.dev`.
- Add explicit runtime deps (`openai`, `pyyaml`, `tiktoken`) and refresh `uv.lock` to reflect the dependency graph changes.
Reviewed changes
Copilot reviewed 5 out of 6 changed files in this pull request and generated 2 comments.
Show a summary per file
| File | Description |
|---|---|
| `uv.lock` | Removes agentd/litellm and related transitive packages; bumps openai and refreshes lock resolution. |
| `src/services/search_service.py` | Drops agentd tool decorator/import from `search_tool`. |
| `src/config/settings.py` | Removes agentd MCP patching and initializes a plain AsyncOpenAI client instead. |
| `pyproject.toml` | Removes agentd; adds explicit dependencies for OpenAI + YAML + token counting. |
| `Dockerfile.langflow.dev` | Uninstalls litellm from the Langflow-from-source dev image. |
| `Dockerfile.langflow` | Uninstalls litellm from the pinned Langflow base image. |
```diff
         if use_http2:
-            self._patched_async_client = patch_openai_with_mcp(AsyncOpenAI())
+            self._patched_async_client = AsyncOpenAI()
             logger.info("OpenAI client initialized with HTTP/2")
         else:
             http_client = httpx.AsyncClient(
                 http2=False,
                 timeout=httpx.Timeout(60.0, connect=10.0)
             )
-            self._patched_async_client = patch_openai_with_mcp(
-                AsyncOpenAI(http_client=http_client)
-            )
+            self._patched_async_client = AsyncOpenAI(http_client=http_client)
             logger.info("OpenAI client initialized with HTTP/1.1 (fallback)")
```
After removing patch_openai_with_mcp, self._patched_async_client is now a plain AsyncOpenAI client. Code elsewhere (and even the property docstring) still assumes this client is “patched” to route non-OpenAI providers (e.g., ollama/…, watsonx/…, anthropic/… model prefixes). With a vanilla OpenAI client those prefixed models will be sent to the OpenAI API and fail. Either (a) remove/disable the provider-prefix routing behavior and rename _patched_async_client/aliases accordingly, or (b) replace the removed patch with an explicit provider-aware client/router implementation so non-OpenAI embeddings/LLM calls continue to work.
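Option (b) could look roughly like the following sketch. All names here are illustrative, not taken from the PR:

```python
# Hypothetical prefix-based router for option (b) above: strip a known
# provider prefix from the model name so non-OpenAI models can be
# dispatched to their own client instead of the vanilla AsyncOpenAI one.
PROVIDER_PREFIXES = ("ollama/", "watsonx/", "anthropic/")


def split_provider(model: str) -> tuple[str, str]:
    """Return (provider, bare_model); unprefixed models default to 'openai'."""
    for prefix in PROVIDER_PREFIXES:
        if model.startswith(prefix):
            return prefix.rstrip("/"), model[len(prefix):]
    return "openai", model


print(split_provider("ollama/llama3"))
print(split_provider("gpt-4o"))
```

A caller would then pick the right client (or raise early) based on the returned provider, instead of silently sending prefixed model names to the OpenAI API.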
```diff
 # Return to app directory and install the project
 WORKDIR /app
 RUN uv sync --frozen --no-dev --no-editable --extra postgresql
+RUN uv pip uninstall litellm
```
`uv pip uninstall litellm` is missing a non-interactive confirmation flag. If `litellm` is installed, this can prompt for confirmation and hang/fail the Docker build. Use the equivalent of `-y` (and optionally make it tolerant if the package is already absent) so the build is deterministic.
```diff
-RUN uv pip uninstall litellm
+RUN uv pip uninstall -y litellm || true
```
This pull request removes the dependency on the `agentd` package and its related imports and decorators, eliminates the `litellm` package from the Docker images, and adds several new dependencies to `pyproject.toml`. The changes simplify the codebase by removing unnecessary abstractions and streamline dependency management.

Dependency removal and cleanup:

- Removed the `agentd` dependency from `pyproject.toml` and deleted all related imports and usage, including the `tool` decorator in `src/services/search_service.py` and the `patch_openai_with_mcp` import and usage in `src/config/settings.py`. [1] [2] [3] [4] [5]
- Uninstalled the `litellm` package in both `Dockerfile.langflow` and `Dockerfile.langflow.dev` to ensure it is not included in the environment. [1] [2]

Dependency additions:

- Added `openai`, `pyyaml`, and `tiktoken` as dependencies in `pyproject.toml` to support new or existing features.

These changes improve maintainability by removing unused or unnecessary dependencies and clarifying which packages are required for the project.
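Making the previously-transitive packages explicit amounts to a `pyproject.toml` fragment along these lines (the package names come from the PR; any version specifiers would be project-specific, so none are shown):

```toml
# Illustrative pyproject.toml fragment: dependencies that were only
# pulled in transitively via agentd/litellm are now declared directly.
[project]
dependencies = [
    "openai",
    "pyyaml",
    "tiktoken",
]
```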