Merged
1 change: 1 addition & 0 deletions Dockerfile.langflow
@@ -4,6 +4,7 @@ FROM langflowai/langflow:1.8.0
 # Install uv and pre-create the Langflow data directory.
 # The base image already runs as uid=1000 and owns /app, so no root or chown needed.
 RUN pip install uv \
+    && pip uninstall -y litellm \
     && mkdir -p /app/langflow-data
 
 EXPOSE 7860
1 change: 1 addition & 0 deletions Dockerfile.langflow.dev
@@ -48,6 +48,7 @@ RUN mkdir -p /app/src/backend/base/langflow/frontend && \
 # Return to app directory and install the project
 WORKDIR /app
 RUN uv sync --frozen --no-dev --no-editable --extra postgresql
+RUN uv pip uninstall litellm
Copilot AI Mar 24, 2026
`uv pip uninstall litellm` is missing a non-interactive confirmation flag. If `litellm` is installed, this can prompt for confirmation and hang or fail the Docker build. Use the equivalent of `-y` (and optionally make the step tolerant when the package is already absent) so the build is deterministic.

Suggested change:
-RUN uv pip uninstall litellm
+RUN uv pip uninstall -y litellm || true


# Expose ports
EXPOSE 7860
4 changes: 3 additions & 1 deletion pyproject.toml
@@ -21,7 +21,6 @@ classifiers = [
     "Topic :: Software Development :: Libraries :: Python Modules",
 ]
 dependencies = [
-    "agentd>=0.2.2",
     "aiofiles>=24.1.0",
     "cryptography>=45.0.6",
     "google-api-python-client>=2.143.0",
@@ -44,6 +43,9 @@ dependencies = [
     "structlog>=25.4.0",
     "zxcvbn>=4.5.0",
     "ibm-secrets-manager-sdk>=2.1.19",
+    "openai>=1.0.0",
+    "pyyaml>=6.0",
+    "tiktoken>=0.7.0",
 ]

[dependency-groups]
7 changes: 2 additions & 5 deletions src/config/settings.py
@@ -3,7 +3,6 @@
 from utils.env_utils import get_env_int, get_env_float
 
 import httpx
-from agentd.patch import patch_openai_with_mcp
 from dotenv import load_dotenv
 from openai import AsyncOpenAI
 from opensearchpy import AsyncOpenSearch
@@ -537,16 +536,14 @@ def run_probe_in_thread():
             use_http2 = future.result(timeout=15)
 
             if use_http2:
-                self._patched_async_client = patch_openai_with_mcp(AsyncOpenAI())
+                self._patched_async_client = AsyncOpenAI()
Collaborator
does this break tests?

Collaborator Author
Awaiting the integration test results.

                 logger.info("OpenAI client initialized with HTTP/2")
             else:
                 http_client = httpx.AsyncClient(
                     http2=False,
                     timeout=httpx.Timeout(60.0, connect=10.0)
                 )
-                self._patched_async_client = patch_openai_with_mcp(
-                    AsyncOpenAI(http_client=http_client)
-                )
+                self._patched_async_client = AsyncOpenAI(http_client=http_client)
                 logger.info("OpenAI client initialized with HTTP/1.1 (fallback)")
Comment on lines 538 to 547
Copilot AI Mar 24, 2026
After removing `patch_openai_with_mcp`, `self._patched_async_client` is now a plain `AsyncOpenAI` client. Code elsewhere (and even the property docstring) still assumes this client is "patched" to route non-OpenAI providers (e.g., `ollama/…`, `watsonx/…`, `anthropic/…` model prefixes). With a vanilla OpenAI client those prefixed models will be sent to the OpenAI API and fail. Either (a) remove or disable the provider-prefix routing behavior and rename `_patched_async_client` and its aliases accordingly, or (b) replace the removed patch with an explicit provider-aware client or router implementation so non-OpenAI embeddings/LLM calls continue to work.
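Option (b) could start from a thin model-name router in front of OpenAI-compatible endpoints. This is a hypothetical sketch, not code from this PR: the mapping, URL, and names (`PROVIDER_BASE_URLS`, `resolve_model`) are all assumed for illustration.

```python
# Hypothetical provider-aware routing sketch (option b). The prefix map and
# URL below are illustrative placeholders, not values from this repository.
PROVIDER_BASE_URLS = {
    "ollama": "http://localhost:11434/v1",  # assumed local Ollama endpoint
}

def resolve_model(model: str) -> tuple:
    """Map a prefixed model name like 'ollama/llama3' to
    (base_url or None for the default OpenAI API, bare model name)."""
    prefix, sep, bare = model.partition("/")
    if sep and prefix in PROVIDER_BASE_URLS:
        return PROVIDER_BASE_URLS[prefix], bare
    # Unprefixed (or unknown prefix): send to the default API unchanged.
    return None, model

# A caller could then construct AsyncOpenAI(base_url=...) when base_url is
# set, keeping prefixed models off the default OpenAI API.
```

A real implementation would also need per-provider credentials and a decision for prefixes whose backends are not OpenAI-compatible.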

logger.info("Successfully initialized OpenAI client")
except Exception as e:
2 changes: 0 additions & 2 deletions src/services/search_service.py
@@ -1,7 +1,6 @@
 import copy
 import json
 from typing import Any, Dict
-from agentd.tool_decorator import tool
 from config.settings import EMBED_MODEL, clients, get_embedding_model, get_index_name, WATSONX_EMBEDDING_DIMENSIONS
 from auth_context import get_auth_context
 from utils.logging_config import get_logger
@@ -17,7 +16,6 @@ class SearchService:
     def __init__(self, session_manager=None):
         self.session_manager = session_manager
 
-    @tool
     async def search_tool(self, query: str, embedding_model: str = None) -> Dict[str, Any]:
         """
         Use this tool to search for documents relevant to the query.