Conversation

@jgieringer (Collaborator) commented Jan 12, 2026

Description

This PR adds support for a user scoring their own API endpoint.
Added llm_clients/endpoint_llm.py with an EndpointLLM class that inherits from LLMInterface to act as a provider agent 🚀
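A minimal sketch of what that class might look like; the LLMInterface import path, constructor signature, environment variable names, and response fields below are assumptions for illustration, not the repo's actual API:

```python
# Minimal sketch -- import path, env var names, and payload shape are assumed.
import os
import requests

from llm_clients.llm_interface import LLMInterface  # assumed module path


class EndpointLLM(LLMInterface):
    """Provider agent that proxies a user-supplied API endpoint."""

    def __init__(self, name: str, system_prompt: str, **kwargs):
        super().__init__(name=name, system_prompt=system_prompt, **kwargs)
        self.base_url = os.environ["ENDPOINT_LLM_URL"]     # assumed env var
        self.api_key = os.environ.get("ENDPOINT_LLM_KEY")  # assumed env var
        self.conversation_id = None  # minted by the provider on first reply

    def generate_response(self, message: str) -> str:
        payload = {"message": message}
        if self.conversation_id is not None:
            payload["conversation_id"] = self.conversation_id
        headers = {"Authorization": f"Bearer {self.api_key}"} if self.api_key else {}
        resp = requests.post(self.base_url, json=payload, headers=headers, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        # Remember the id the provider returned so the conversation continues.
        self.conversation_id = body.get("conversation_id")
        return body["response"]
```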

This is definitely a first/second draft.
In terms of customizability, a user who wants to score their own API endpoint will still need to update the following files within llm_clients/ to ensure proper endpoints, environment variables, and loading work as expected (see the factory sketch after this list):

  • endpoint_llm.py
  • llm_factory.py
  • config.py
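For illustration, the llm_factory.py change might be as small as mapping a new provider key to the class; the factory's real structure and the "endpoint" provider key here are assumptions:

```python
# Hypothetical factory wiring -- the real llm_factory.py may differ.
from llm_clients.endpoint_llm import EndpointLLM


def create_llm(provider: str, **kwargs):
    if provider == "endpoint":  # made-up provider key
        return EndpointLLM(**kwargs)
    raise ValueError(f"Unknown provider: {provider}")
```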

Issue

Solves SAF-141

@jgieringer requested a review from sator-labs on January 12, 2026 at 16:37
```python
self,
persona_config: dict,
agent,
agent_config: dict,
```
@emily-vanark (Collaborator) commented Jan 12, 2026


FWIW, our internal solution tracks conversation_id without instantiating separate agents per conversation: a conversation_id generated in conversation_simulator.py is passed to generate_response, and that conversation_id is then prioritized in the internal LLM function...
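A minimal sketch of that flow, with hypothetical names (the persona loop and signatures are illustrative, not the internal code):

```python
# Hypothetical sketch of the internal approach -- names are illustrative.
import uuid


def run_single_conversation(agent, persona_messages):
    # conversation_simulator.py mints the id up front...
    conversation_id = str(uuid.uuid4())
    for message in persona_messages:
        # ...and threads it through every call; the internal LLM function
        # prioritizes this id over any provider-generated one.
        reply = agent.generate_response(message, conversation_id=conversation_id)
        print(reply)
```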

@jgieringer (author) commented Jan 12, 2026


@emily-vanark Are you referring to this interaction with LLMO?

Is creating a conversation_id on the user side before using the provider's API a secure best practice? I've never considered this!

Thinking out loud, here's what I have implemented (sketched in code after the list):

  1. user sends a message to the provider API
  2. provider API receives the message
  3. provider API creates a conversation_id
  4. provider responds with a response message & conversation_id
  5. user can start a new conversation, or continue the conversation by including the conversation_id in future messages
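In raw HTTP terms, those five steps look roughly like this (placeholder URL; the field names follow my endpoint's shape and are illustrative):

```python
import requests

BASE_URL = "https://example.com/chat"  # placeholder endpoint

# steps 1-4: first message; the provider mints the conversation_id
first = requests.post(BASE_URL, json={"message": "hello"}, timeout=30).json()
conversation_id = first["conversation_id"]

# step 5: continue the same conversation by echoing the id back
follow_up = requests.post(
    BASE_URL,
    json={"message": "tell me more", "conversation_id": conversation_id},
    timeout=30,
).json()
```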

vs. what I think the LLMO integration is doing:

  1. user creates a conversation_id
  2. user sends the message & conversation_id
  3. provider API receives the message & conversation_id
    • this assumes the API will accept a rogue conversation_id
  4. provider API runs get_or_create to access the existing chat or create a new conversation with the provided id
    • possible security issue for a production API, as wouldn't it need to make sure the conversation_id isn't someone else's conversation? 🔐 (sketched after this list)
  5. provider responds with a response message & conversation_id
  6. user can start a new conversation, or continue the conversation by including the conversation_id in future messages
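Here's the point-4 risk sketched server-side with a hypothetical in-memory store; the ownership check is the part a production API would need:

```python
# Hypothetical in-memory sketch of the point-4 ownership concern.
from dataclasses import dataclass, field


@dataclass
class Conversation:
    owner: str
    messages: list = field(default_factory=list)


conversations: dict[str, Conversation] = {}


def get_or_create(conversation_id: str, user: str) -> Conversation:
    convo = conversations.get(conversation_id)
    if convo is None:
        convo = conversations.setdefault(conversation_id, Conversation(owner=user))
    elif convo.owner != user:
        # Without this ownership check, a rogue conversation_id could read
        # or extend another user's conversation.
        raise PermissionError("conversation_id belongs to another user")
    return convo
```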

Is that a fair comparison of what's happening? If so, I think my concern at point 4 is valid!

Commenting on the code-breaking topic at the other portion of the code.

cc @sator-labs

```python
name=agent_config.get("name", "Agent"),
system_prompt=self.AGENT_SYSTEM_PROMPT,
**agent_kwargs,
)
```
@jgieringer (author) commented Jan 12, 2026


I don't think this is a breaking code change. If anything (not biased at all here 😂), I like that both the user && provider agents are now instantiated within the same scope.

Before:

  1. the provider-agent was instantiated within the run_conversations scope
  2. it was then passed into the run_single_conversation scope, where the user-agent was instantiated

This meant that while the personas changed, the provider-agent remained the same.

In the main branch implementation, the conversation_id created by my API was never updated because the provider-agent stayed the same and never refreshed. Instantiating it within run_single_conversation ensures each run begins a new conversation, so a new conversation_id is generated by the API. I think this gives each new persona a fresh start without any concern of old conversations being preserved in any way, especially as the LLMInterface could be extended further in the future 🔐.
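Roughly, the shape of the change (create_agent and the loop bodies are illustrative, not the repo's exact code):

```python
def create_agent(agent_config):
    """Stand-in for however agents are constructed in the repo."""
    ...


# Before (main): one provider agent shared across every persona.
def run_conversations(personas, agent_config):
    agent = create_agent(agent_config)           # instantiated once
    for persona in personas:
        run_single_conversation(persona, agent)  # same agent, stale conversation_id


# After (this PR): a fresh agent -- and conversation_id -- per persona.
def run_conversations(personas, agent_config):
    for persona in personas:
        run_single_conversation(persona, agent_config)


def run_single_conversation(persona, agent_config):
    agent = create_agent(agent_config)  # new instance => new conversation_id
    ...
```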

@sator-labs can you elaborate on what breaks with this change? 🙇

@jgieringer (author):

Updated tests to reflect this change as well 🚀

