Support Custom API Endpoint #69
base: main
Conversation
```python
self,
persona_config: dict,
agent,
agent_config: dict,
```
FWIW, our internal solution tracks `conversation_id` without separate agents instantiated per conversation by passing a `conversation_id` generated in `conversation_simulator.py` to `generate_response`, and that `conversation_id` is then prioritized in the internal LLM function...
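For illustration, a minimal sketch of that pattern as I understand it, assuming a `uuid4`-generated id and that `generate_response` accepts an optional `conversation_id` keyword (the signature and loop below are my assumptions, not the internal implementation):

```python
import uuid


def run_single_conversation(agent, persona_config: dict) -> list[dict]:
    # Illustrative only: the simulator generates one id per conversation and
    # threads it through every call, so no per-conversation agent is needed.
    conversation_id = str(uuid.uuid4())  # generated in conversation_simulator.py
    history: list[dict] = []
    for turn in range(persona_config.get("max_turns", 5)):
        user_message = f"persona message for turn {turn}"  # placeholder
        # The LLM function prioritizes this id over any id it would create itself.
        reply = agent.generate_response(user_message, conversation_id=conversation_id)
        history.append({"user": user_message, "agent": reply})
    return history
```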
@emily-vanark Are you referring to this interaction with LLMO?
Is creating a conversation_id on the user side before using the provider's api a secure best practice? I've never considered this!
Thinking out loud, what I have implemented:

1. user sends message to provider api
2. provider api receives message
3. provider api creates `conversation_id`
4. provider responds with response-message & `conversation_id`
5. user has choice to start a new conversation or continue the conversation by including `conversation_id` in future messages

vs. what I think the llmo integration is doing:

1. user creates `conversation_id`
2. user sends message & `conversation_id`
3. provider api receives message & `conversation_id`
   - this assumes the api will accept a rogue `conversation_id`
4. provider api runs `get_or_create` to access the existing chat or create a new convo with the provided id
   - possible security issue for a production api, as wouldn't it need to make sure the `conversation_id` isn't someone else's conversation? 🔐
5. provider responds with response-message & `conversation_id`
6. user has choice to start a new conversation or continue the conversation by including `conversation_id` in future messages
Is that a fair comparison of what's happening? If so, I think my concern at point 4 is valid!
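To make the point-4 concern concrete, here is a hypothetical provider-side sketch of `get_or_create` with the ownership check a production api would presumably need (none of these names or the in-memory storage come from this repo; they are placeholders for illustration):

```python
# Hypothetical provider-side handler; names and in-memory storage are placeholders.
conversations: dict[str, dict] = {}  # conversation_id -> {"owner": ..., "messages": [...]}


def get_or_create_conversation(conversation_id: str, user_id: str) -> dict:
    convo = conversations.get(conversation_id)
    if convo is None:
        # Accepting a client-generated ("rogue") id: create a fresh record for it.
        convo = {"owner": user_id, "messages": []}
        conversations[conversation_id] = convo
    elif convo["owner"] != user_id:
        # Without this check, a caller could attach to someone else's conversation.
        raise PermissionError("conversation_id belongs to another user")
    return convo
```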
Commenting on the code-breaking topic at the other portion of the code.
cc @sator-labs
```python
name=agent_config.get("name", "Agent"),
system_prompt=self.AGENT_SYSTEM_PROMPT,
**agent_kwargs,
)
```
I don't think this is a breaking code change. If anything (not biased at all here 😂), I like that both the user && provider agents are being instantiated within the same scope.
Before:
- The provider-agent was instantiated within the `run_conversations` scope
- then passed into the `run_single_conversation` scope, where the user-agent is then instantiated

This means that while the personas change, the provider-agent remains the same.
In the main branch implementation, the `conversation_id` created by my api was never updated because the provider-agent remained the same and was never refreshed. Instantiating it within `run_single_conversation` ensured a new conversation had begun, and thus a new `conversation_id` would be generated by the api. I think this ensures a fresh start with the new persona without any concern of old conversations being preserved in any way, especially as the `LLMInterface` could be further implemented in the future 🔐.
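Roughly, a before/after sketch of what I mean (the helper names below are hypothetical stand-ins, not the functions in this repo):

```python
def build_agent(agent_config: dict):
    """Hypothetical stand-in for however the provider agent is constructed."""
    raise NotImplementedError


def run_single_conversation(provider_agent, persona_config: dict) -> None:
    """Hypothetical stand-in for the per-persona conversation loop."""
    raise NotImplementedError


def run_conversations(personas: list[dict], agent_config: dict) -> None:
    # Before this PR: provider_agent was built once outside this loop, so every
    # persona reused the same agent instance and therefore the same conversation_id.
    for persona_config in personas:
        provider_agent = build_agent(agent_config)  # now: fresh agent per persona
        run_single_conversation(provider_agent, persona_config)  # api issues a new id
```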
@sator-labs can you elaborate on what breaks with this change? 🙇
Updated tests to reflect this change as well 🚀
Description
This PR adds support for a user scoring their own API endpoint.
Added `llm_clients/endpoint_llm.py` with class `EndpointLLM`, which inherits from `LLMInterface` to act as a provider agent 🚀
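For reviewers skimming, a stripped-down sketch of the rough shape of such a class (the import path, environment variable, request/response payload, and method name below are assumptions for illustration, not the actual implementation in this PR):

```python
import os

import requests  # assumed HTTP client; the real code may use something else

from llm_clients.llm_interface import LLMInterface  # import path assumed


class EndpointLLM(LLMInterface):
    """Forwards messages to a user-supplied API endpoint and tracks its conversation_id."""

    def __init__(self, name: str, system_prompt: str, **kwargs):
        self.name = name
        self.system_prompt = system_prompt
        self.endpoint = kwargs.get("endpoint", os.environ.get("ENDPOINT_URL", ""))
        self.conversation_id = None  # set from the endpoint's first response

    def generate_response(self, message: str) -> str:
        payload = {"message": message, "conversation_id": self.conversation_id}
        resp = requests.post(self.endpoint, json=payload, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        self.conversation_id = data.get("conversation_id", self.conversation_id)
        return data.get("response", "")
```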
This is definitely a first/second draft.
In terms of customizability, if a user wants to score their own API endpoint, they will still need to update the following within `llm_clients/` to ensure proper endpoints, environment variables, and loading work as expected:
Issue
Solves SAF-141