
Conversation


@edwinokonkwo commented Nov 18, 2025

Summary

This PR adds LangChain provider support to the LaunchDarkly AI Python SDK.
The Python SDK now follows a monorepo structure with packages split across:

  • packages/core/ - core AI SDK with the client, models, tracking, and provider abstractions
  • packages/langchain/ - LangChain-specific provider implementation as a separate installable package

This architecture lets users install just what they need, mirroring our TypeScript SDK's modular design (see the import sketch after the directory trees below).

Before:

python-server-sdk-ai/
└── ldai/
    ├── client.py
    ├── models.py
    ├── providers/
    │   ├── __init__.py
    │   ├── ai_provider.py
    │   └── langchain/          # langchain was bundled in
    └── ...

After:

python-server-sdk-ai/                  
├── packages/
│   ├── core/                          # Core SDK
│   │   ├── ldai/
│   │   ├── tests/
│   │   └── pyproject.toml
│   └── langchain/                     # LangChain provider as a separate package
│       ├── ldai/providers/langchain/
│       ├── tests/
│       └── pyproject.toml
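
For illustration, a rough sketch of what imports could look like once the packages are split; the core client module matches the trees above, while the provider class name is an assumption for the sketch rather than something shown in this PR:

# Core SDK, installed from packages/core
from ldai.client import LDAIClient

# LangChain provider, installed from packages/langchain
# (LangChainProvider is a hypothetical class name for illustration)
from ldai.providers.langchain import LangChainProvider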

@edwinokonkwo requested a review from a team as a code owner November 18, 2025 15:18
@edwinokonkwo marked this pull request as draft November 19, 2025 07:34
@edwinokonkwo changed the title from "[REL-10772] Implement Langchain provider for online evals" to "feat: [REL-10772] Implement Langchain provider for online evals" Nov 19, 2025
@edwinokonkwo marked this pull request as ready for review November 19, 2025 17:54
Comment on lines +405 to +410
self._client.track(
"$ld:ai:agent:function:multiple",
context,
agent_count,
agent_count
)


Tracking the config ids that were pulled here would be a good facet to add imo
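
For illustration, a minimal sketch of one way to attach the pulled config keys to this call, assuming track's third argument is the event data and the fourth the metric value, as in the snippet above; the agent_configs variable and its key attribute are assumptions for the sketch, not taken from this PR:

self._client.track(
    "$ld:ai:agent:function:multiple",
    context,
    # hypothetical data payload: keys of the configs that were pulled, plus the count
    {"configKeys": [cfg.key for cfg in agent_configs], "agentCount": agent_count},
    agent_count,
)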

Comment on lines +459 to +461
all_variables = {}
if variables:
    all_variables.update(variables)

@andrewklatzke Nov 19, 2025


nit/suggestion: all_variables = variables or {}
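
Worth noting about the one-liner: variables or {} reuses the caller's dict when it is non-empty, so a later mutation of all_variables would also change the caller's dict; dict(variables or {}) keeps the copy behavior of the original update-based code. A small sketch:

variables = {"user": "Ada"}                # example input
all_variables = variables or {}            # aliases the caller's dict when it is truthy
all_variables = dict(variables or {})      # always a fresh dict, matching {} + update(variables)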

self._logger.warning('Judge configuration must include messages')
return None

if random.random() > sampling_rate:

@andrewklatzke Nov 19, 2025


I think this should be random.random() <= sampling_rate

A sampling rate of 0.1 should sample 10% of the invocations; since random() returns values between 0 and 1, 90% of the results are greater than 0.1, so we want lte to capture the 10%.

A sample rate of 1 should be 100%, so <= will always return true
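
For illustration, a standalone sketch of the suggested check, assuming sampling_rate is a float in the range 0-1:

import random

def should_sample(sampling_rate: float) -> bool:
    # random.random() yields values in [0.0, 1.0), so this returns True
    # roughly 10% of the time for 0.1 and always for 1.0.
    return random.random() <= sampling_rate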
