Fix A2A LLM parameter forwarding in InternalInstructor (Issue #3927) #3928
Summary
Fixes issue #3927 where A2A (Agent-to-Agent) delegation was losing LLM configuration parameters when checking if remote agents are relevant.
The bug was in `InternalInstructor.to_pydantic()`, which only passed the model name to the instructor client, dropping critical parameters like `api_key`, `api_base`, and `temperature`. This caused A2A to fail when using LiteLLM proxy configurations.

Changes:

- `to_pydantic()` now builds a params dict and forwards LLM configuration for LiteLLM instances (see the sketch after this list)
- Forwarding is gated on `is_litellm=True` to avoid breaking non-LiteLLM paths
- `max_tokens` is preferred over `max_completion_tokens` when both are present
- Added a `type: ignore[import-untyped]` comment
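A minimal sketch of the fixed method, assuming the attribute names implied by the PR description (`self.llm`, `self._client`, `self.model`, `self.content`, `is_litellm`); the actual crewAI internals may differ:

```python
# Sketch only: attribute names are assumed from the PR description,
# not copied verbatim from the source.
def to_pydantic(self):
    messages = [{"role": "user", "content": self.content}]
    params = {
        "model": self.llm.model,
        "response_model": self.model,
        "messages": messages,
    }
    # Only forward extra configuration for LiteLLM instances, so the
    # non-LiteLLM instructor path keeps its previous behavior.
    if getattr(self.llm, "is_litellm", False):
        for attr in ("api_key", "api_base", "temperature"):
            value = getattr(self.llm, attr, None)
            if value is not None:
                params[attr] = value
        # Prefer max_tokens over max_completion_tokens when both are set.
        max_tokens = getattr(self.llm, "max_tokens", None) or getattr(
            self.llm, "max_completion_tokens", None
        )
        if max_tokens is not None:
            params["max_tokens"] = max_tokens
    return self._client.chat.completions.create(**params)
```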
- Verify A2A delegation with LiteLLM proxy configurations that require `api_key`, `api_base`, and `temperature` settings (this is the core issue from [BUG] A2A looses information about LLM provided when checking if A2A remote agent is relevant #3927)

Recommended Test Plan
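One way to exercise the core scenario manually; the model name, endpoint, and key below are placeholders, and `LLM` is crewAI's public LLM class:

```python
# Manual reproduction sketch: a LiteLLM proxy that is unreachable unless
# api_base/api_key are forwarded. All values are placeholders.
from crewai import LLM

llm = LLM(
    model="openai/my-proxied-model",   # model served via a LiteLLM proxy
    api_base="http://localhost:4000",  # proxy endpoint (placeholder)
    api_key="sk-proxy-key",            # proxy key (placeholder)
    temperature=0.2,
)
# Before the fix, A2A relevance checks called instructor with only
# llm.model, so the proxy settings above were dropped and the request
# failed. After the fix, the same configuration should reach the proxy.
```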
Notes
- The parameter extraction mirrors `LLM._prepare_completion_params()` (lines 588-635 in llm.py)
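For context, a minimal sketch of the instructor-over-LiteLLM wiring that makes the forwarded parameters effective; this illustrates the general pattern rather than `InternalInstructor`'s exact setup, and the model, key, and endpoint are placeholders:

```python
# Sketch: instructor patches litellm.completion, so extra kwargs such as
# api_key/api_base/temperature pass straight through to LiteLLM, which is
# why forwarding them from to_pydantic() fixes the bug.
import instructor
import litellm
from pydantic import BaseModel


class Relevance(BaseModel):
    relevant: bool
    reason: str


client = instructor.from_litellm(litellm.completion)
result = client.chat.completions.create(
    model="gpt-4o-mini",               # placeholder model
    response_model=Relevance,
    messages=[{"role": "user", "content": "Is this agent relevant?"}],
    api_key="sk-proxy-key",            # placeholder
    api_base="http://localhost:4000",  # placeholder proxy endpoint
    temperature=0.2,
)
```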