
fix: [Security Fix] remove litellm #1239

Merged

edwinjosechittilappilly merged 2 commits into main from fix-litellm-issue on Mar 24, 2026

Conversation

Collaborator

@edwinjosechittilappilly edwinjosechittilappilly commented Mar 24, 2026

This pull request removes the dependency on the agentd package and its related imports and decorators, eliminates the litellm package from the Docker images, and adds several new dependencies to pyproject.toml. The changes simplify the codebase by removing unnecessary abstractions and streamline dependency management.

Dependency removal and cleanup:

  • Removed the agentd dependency from pyproject.toml and deleted all related imports and usage, including the tool decorator in src/services/search_service.py and the patch_openai_with_mcp import and usage in src/config/settings.py.

  • Uninstalled the litellm package in both Dockerfile.langflow and Dockerfile.langflow.dev to ensure it is not included in the environment.

Dependency additions:

  • Added openai, pyyaml, and tiktoken as dependencies in pyproject.toml to support new or existing features.

These changes improve maintainability by removing unused or unnecessary dependencies and clarifying which packages are required for the project.
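
The resulting pyproject.toml additions might look like the fragment below (a sketch only: version bounds are illustrative assumptions, not the PR's actual pins):

```toml
[project]
dependencies = [
    # Previously pulled in transitively via agentd; now declared explicitly.
    "openai",
    "pyyaml",
    "tiktoken",
]
```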

Remove usage of the agentd library and its OpenAI patching/tool decorator. Instantiate AsyncOpenAI directly (HTTP/2 or HTTP/1.1 fallback) and remove imports of agentd.patch and agentd.tool_decorator. Add runtime dependencies for openai, pyyaml, and tiktoken in pyproject.toml to support direct OpenAI client usage.
@github-actions github-actions bot added the backend 🔷 Issues related to backend services (OpenSearch, Langflow, APIs) label Mar 24, 2026

```diff
 if use_http2:
-    self._patched_async_client = patch_openai_with_mcp(AsyncOpenAI())
+    self._patched_async_client = AsyncOpenAI()
```
Collaborator


does this break tests?

Collaborator Author


Awaiting the integration test results.

Remove the `litellm` package from both Dockerfile.langflow and Dockerfile.langflow.dev to avoid conflicts/compatibility issues. In Dockerfile.langflow the `pip uninstall -y litellm` was added to the RUN that installs `uv` and prepares /app/langflow-data; in Dockerfile.langflow.dev a `RUN uv pip uninstall litellm` line was added after the dependency sync. This ensures built images do not include `litellm`.
@github-actions github-actions bot added docker bug 🔴 Something isn't working. labels Mar 24, 2026
@edwinjosechittilappilly
Collaborator Author

@phact Tests Passed!

@edwinjosechittilappilly edwinjosechittilappilly marked this pull request as ready for review March 24, 2026 15:46
Copilot AI review requested due to automatic review settings March 24, 2026 15:46
@edwinjosechittilappilly edwinjosechittilappilly changed the title fix: litellm security fix fix: [Security Fix] remove litellm Mar 24, 2026
@github-actions github-actions bot added bug 🔴 Something isn't working. and removed bug 🔴 Something isn't working. labels Mar 24, 2026
Contributor

Copilot AI left a comment


Pull request overview

This PR aims to address a LiteLLM-related security concern by removing the agentd dependency and associated OpenAI patching, ensuring litellm is removed from Langflow Docker images, and making previously-transitive dependencies explicit in pyproject.toml.

Changes:

  • Remove agentd usage (tool decorator + OpenAI MCP patch hook) and update the OpenAI client initialization accordingly.
  • Ensure litellm is uninstalled in Dockerfile.langflow and Dockerfile.langflow.dev.
  • Add explicit runtime deps (openai, pyyaml, tiktoken) and refresh uv.lock to reflect the dependency graph changes.

Reviewed changes

Copilot reviewed 5 out of 6 changed files in this pull request and generated 2 comments.

Summary per file:

| File | Description |
| --- | --- |
| uv.lock | Removes agentd/litellm and related transitive packages; bumps openai and refreshes lock resolution. |
| src/services/search_service.py | Drops the agentd tool decorator/import from search_tool. |
| src/config/settings.py | Removes agentd MCP patching and initializes a plain AsyncOpenAI client instead. |
| pyproject.toml | Removes agentd; adds explicit dependencies for OpenAI + YAML + token counting. |
| Dockerfile.langflow.dev | Uninstalls litellm from the Langflow-from-source dev image. |
| Dockerfile.langflow | Uninstalls litellm from the pinned Langflow base image. |


Comment on lines 538 to 547

```diff
 if use_http2:
-    self._patched_async_client = patch_openai_with_mcp(AsyncOpenAI())
+    self._patched_async_client = AsyncOpenAI()
     logger.info("OpenAI client initialized with HTTP/2")
 else:
     http_client = httpx.AsyncClient(
         http2=False,
         timeout=httpx.Timeout(60.0, connect=10.0)
     )
-    self._patched_async_client = patch_openai_with_mcp(
-        AsyncOpenAI(http_client=http_client)
-    )
+    self._patched_async_client = AsyncOpenAI(http_client=http_client)
     logger.info("OpenAI client initialized with HTTP/1.1 (fallback)")
```

Copilot AI Mar 24, 2026


After removing patch_openai_with_mcp, self._patched_async_client is now a plain AsyncOpenAI client. Code elsewhere (and even the property docstring) still assumes this client is “patched” to route non-OpenAI providers (e.g., ollama/…, watsonx/…, anthropic/… model prefixes). With a vanilla OpenAI client those prefixed models will be sent to the OpenAI API and fail. Either (a) remove/disable the provider-prefix routing behavior and rename _patched_async_client/aliases accordingly, or (b) replace the removed patch with an explicit provider-aware client/router implementation so non-OpenAI embeddings/LLM calls continue to work.

Copilot uses AI. Check for mistakes.
```dockerfile
# Return to app directory and install the project
WORKDIR /app
RUN uv sync --frozen --no-dev --no-editable --extra postgresql
RUN uv pip uninstall litellm
```

Copilot AI Mar 24, 2026


uv pip uninstall litellm is missing a non-interactive confirmation flag. If litellm is installed, this can prompt for confirmation and hang/fail the Docker build. Use the equivalent of -y (and optionally make it tolerant if already absent) so the build is deterministic.

Suggested change

```diff
-RUN uv pip uninstall litellm
+RUN uv pip uninstall -y litellm || true
```
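The effect of the suggested `|| true` guard can be demonstrated outside Docker with plain pip (the package name below is deliberately nonexistent):

```shell
# The guard keeps the step from aborting the build even if the uninstall
# tool errors or warns when the package is absent, so the RUN instruction
# always completes deterministically.
pip uninstall -y package-that-is-not-installed >/dev/null 2>&1 || true
echo "build step continues"
```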

@github-actions github-actions bot added the lgtm label Mar 24, 2026
@edwinjosechittilappilly edwinjosechittilappilly merged commit 612090e into main Mar 24, 2026
21 of 22 checks passed
@github-actions github-actions bot deleted the fix-litellm-issue branch March 24, 2026 16:09
@edwinjosechittilappilly edwinjosechittilappilly restored the fix-litellm-issue branch March 24, 2026 16:10

Labels

backend 🔷 Issues related to backend services (OpenSearch, Langflow, APIs) bug 🔴 Something isn't working. docker lgtm


3 participants