
Conversation

@devin-ai-integration
Contributor

Summary

Fixes issue #3927, where A2A (Agent-to-Agent) delegation was losing LLM configuration parameters when checking whether remote agents are relevant.

The bug was in InternalInstructor.to_pydantic(), which passed only the model name to the instructor client, dropping critical parameters such as api_key, api_base, and temperature. This caused A2A to fail when using LiteLLM proxy configurations.

Changes:

  • Modified to_pydantic() to build a params dict and forward LLM configuration for LiteLLM instances (a hedged sketch of this logic follows this list)
  • Only forwards parameters when is_litellm=True to avoid breaking non-LiteLLM paths
  • Filters out None values to prevent errors
  • Prefers max_tokens over max_completion_tokens when both are present
  • Removed unused type: ignore[import-untyped] comment
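
The list above maps to roughly the following logic. This is only an illustrative sketch: the helper name build_instructor_params(), the getattr-based extraction, and the exact attribute list are assumptions for the example, not the PR's actual code.

```python
from typing import Any


def build_instructor_params(llm: Any, is_litellm: bool) -> dict[str, Any]:
    """Collect the kwargs to forward to the instructor client (illustrative)."""
    # A bare model string carries no extra configuration; forward the name only.
    params: dict[str, Any] = {"model": getattr(llm, "model", llm)}
    if not is_litellm:
        # Non-LiteLLM paths keep the old behavior untouched.
        return params

    candidates = (
        "api_key", "api_base", "temperature", "top_p",
        "max_tokens", "max_completion_tokens", "timeout",
    )
    for name in candidates:
        value = getattr(llm, name, None)
        if value is not None:  # filter out None values to prevent errors downstream
            params[name] = value

    # When both token limits are present, prefer max_tokens.
    if "max_tokens" in params:
        params.pop("max_completion_tokens", None)

    return params
```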

Review & Testing Checklist for Human

⚠️ IMPORTANT: No automated tests were added for this fix due to mocking complexity. Manual testing is critical.

  • Test A2A delegation with custom LLM parameters - Verify that A2A works correctly when using LiteLLM with custom api_key, api_base, and temperature settings (this is the core issue from [BUG] A2A looses information about LLM provided when checking if A2A remote agent is relevant #3927)
  • Test non-LiteLLM paths - Verify that regular OpenAI/Anthropic/etc. LLM usage still works correctly (no regression)
  • Verify parameter completeness - Check that the forwarded parameter list matches what LiteLLM expects and that no critical parameters are missing
  • Test edge cases - Verify behavior with None values, both max_tokens and max_completion_tokens set, and string LLM values (see the test sketch after this checklist)
  • Review CI test results - Ensure existing A2A tests pass and no regressions were introduced
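
Since the PR ships no automated tests, the edge cases above could be pinned down with a few unit tests along these lines, written against the hypothetical build_instructor_params() sketch from the Changes section rather than the PR's real code:

```python
from types import SimpleNamespace


def test_none_values_are_filtered() -> None:
    llm = SimpleNamespace(model="gpt-4o", temperature=0.2, api_key=None)
    params = build_instructor_params(llm, is_litellm=True)
    assert params == {"model": "gpt-4o", "temperature": 0.2}


def test_max_tokens_preferred_over_max_completion_tokens() -> None:
    llm = SimpleNamespace(model="gpt-4o", max_tokens=512, max_completion_tokens=1024)
    params = build_instructor_params(llm, is_litellm=True)
    assert params["max_tokens"] == 512
    assert "max_completion_tokens" not in params


def test_string_llm_keeps_old_behavior() -> None:
    # A bare model string on a non-LiteLLM path should be passed through as before.
    assert build_instructor_params("gpt-4o", is_litellm=False) == {"model": "gpt-4o"}
```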

Recommended Test Plan

  1. Set up a LiteLLM proxy with custom configuration
  2. Create a crew with A2A enabled using the proxy (custom api_base, api_key; see the configuration example after this plan)
  3. Trigger A2A delegation and verify it works correctly
  4. Test without custom parameters to ensure backward compatibility
  5. Test with non-LiteLLM providers (OpenAI, Anthropic) to ensure no regression
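
For steps 1–3, the setup might look like the following. This assumes crewai's LLM and Agent classes and a LiteLLM proxy listening locally; the URL, key, and exact keyword arguments are placeholders that may differ by version.

```python
from crewai import Agent, LLM

# Point the LLM at a local LiteLLM proxy (placeholder URL and key).
proxy_llm = LLM(
    model="openai/gpt-4o",             # model route exposed by the proxy
    base_url="http://localhost:4000",  # LiteLLM proxy endpoint
    api_key="sk-litellm-placeholder",  # proxy virtual key
    temperature=0.1,
)

# Before this fix, these custom parameters were dropped when A2A checked
# whether a remote agent was relevant, so the delegation call failed.
agent = Agent(
    role="Researcher",
    goal="Answer questions, delegating to remote A2A agents when relevant",
    backstory="Runs behind a LiteLLM proxy with custom credentials",
    llm=proxy_llm,
)
```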

Notes

devin-ai-integration bot and others added 2 commits November 16, 2025 10:38
- Forward LLM configuration parameters (api_key, api_base, temperature, etc.) to instructor client for LiteLLM instances
- Only forward parameters for LiteLLM instances (is_litellm=True) to avoid breaking non-LiteLLM paths
- Filter out None values to prevent errors
- Prefer max_tokens over max_completion_tokens when both are present
- Fixes issue #3927 where A2A delegation lost LLM configuration when checking if remote agents are relevant

Co-Authored-By: João <joao@crewai.com>
@devin-ai-integration
Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring
