## Problem
The SDKs inject prompt text into LLM calls in several places, but this text is hardcoded, invisible to users, and not overridable. As we add more features (tool-calling, schema enforcement, memory injection), the amount of SDK-injected text will grow. Without a principled abstraction, this becomes a debugging and customization nightmare.
## Current Injection Points
### Python SDK (`agent_ai.py`)
- Schema instruction (lines 282-289) — hardcoded instruction block:

  ```python
  schema_instruction = (
      "IMPORTANT: You must exactly adhere to the output schema provided below. "
      "Do not add or omit any fields..."
  )
  ```

- Tool-calling loop (`tool_calling.py`) — no system prompt is injected for tool usage (the LLM gets tools but zero instructions on how to use them)
- Tool limit message — hardcoded string sent to the LLM: `{"error": "Tool call limit reached. Please provide a final response."}`
- Tool error framing — hardcoded error format: `{"error": str(e), "tool": func_name}`
- Tool results — raw `json.dumps(result)` with no framing or metadata
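To make the current behavior concrete, the framing described above amounts to something like the following. This is a rough sketch, not the actual SDK code; `frame_tool_error` and `frame_tool_result` are hypothetical names for logic that lives inline in the tool-calling loop:

```python
import json

def frame_tool_error(exc: Exception, func_name: str) -> str:
    # Sketch of the current hardcoded error framing; the real SDK code may differ.
    return json.dumps({"error": str(exc), "tool": func_name})

def frame_tool_result(result) -> str:
    # Current behavior: raw json.dumps(result), no framing or metadata.
    return json.dumps(result)
```

Because these strings are built inline, there is no single place a user can inspect or change them today.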
### Go SDK (`ai/tool_calling.go`)
Same patterns: hardcoded error strings, no system prompt for tool usage, raw tool results.
## Proposed Solution
### 1. `PromptTemplates` class (Python) / `PromptConfig` struct (Go)
All SDK-injected text lives in one discoverable, overridable location:
```python
@dataclass
class PromptTemplates:
    """All text the SDK injects into LLM calls. Override any field to customize."""
    schema_instruction: str = (
        "IMPORTANT: You must exactly adhere to the output schema provided below..."
    )
    tool_system_prompt: Optional[str] = (
        "You have access to capabilities from an agent network. "
        "Use tools when the user's request requires action."
    )
    tool_limit_message: str = "Tool call limit reached. Please provide a final response."
    tool_error_template: str = '{"error": "{error}", "tool": "{tool_name}"}'
    tool_result_template: Optional[str] = None  # None = raw JSON (current behavior)
```

```python
# Users can inspect
print(app.ai_config.prompt_templates.schema_instruction)

# Users can override
app.ai_config.prompt_templates.tool_system_prompt = "My custom agent instructions..."

# Users can suppress
app.ai_config.prompt_templates.tool_system_prompt = None  # No system prompt injected
```

### 2. Trace-level visibility
The `ToolCallTrace` (and future trace types) should tag which messages were SDK-injected vs. user-provided:
```python
@dataclass
class TracedMessage:
    message: dict       # The actual message
    source: str         # "user" | "sdk.schema_instruction" | "sdk.tool_result" | etc.
    injected_at: float  # timestamp
```

### 3. Parity across SDKs
Both Python and Go SDKs should have the same template fields and override mechanism so behavior is consistent regardless of SDK choice.
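As an illustration of the shared override mechanism, the injection path might consult the templates like this. This is a minimal sketch, not part of either SDK; `build_messages` is a hypothetical helper, and the dataclass is trimmed to one field:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PromptTemplates:
    # Trimmed to one field for illustration.
    tool_system_prompt: Optional[str] = (
        "You have access to capabilities from an agent network. "
        "Use tools when the user's request requires action."
    )

def build_messages(user_prompt: str, templates: PromptTemplates) -> list:
    """Hypothetical injection path: prepend the SDK system prompt unless suppressed."""
    messages = []
    if templates.tool_system_prompt is not None:  # None = user suppressed injection
        messages.append({"role": "system", "content": templates.tool_system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages
```

With the default, `build_messages` yields a two-message list; setting `tool_system_prompt = None` drops the system message entirely. Because the check is a single `is not None` (or `!= nil` in Go), both SDKs can implement identical semantics.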
## Scope
- Create `PromptTemplates` in the Python SDK
- Create `PromptConfig` in the Go SDK
- Migrate the existing hardcoded schema instruction to use templates
- Add a default tool system prompt (with override)
- Add a tool result framing template
- Add message source tagging to traces
- Documentation: "Customizing SDK Prompts" guide
- Tests for override behavior
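The override tests could look roughly like this. A hedged sketch only: it assumes the `PromptTemplates` dataclass proposed above (reduced here to the fields exercised) and plain attribute assignment as the override mechanism:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PromptTemplates:
    # Reduced to the fields exercised below; illustration only.
    tool_system_prompt: Optional[str] = "Default agent instructions."
    tool_limit_message: str = "Tool call limit reached. Please provide a final response."

def test_defaults_are_discoverable():
    templates = PromptTemplates()
    assert templates.tool_limit_message.startswith("Tool call limit reached")

def test_override_and_suppress():
    templates = PromptTemplates()
    templates.tool_system_prompt = "My custom agent instructions..."
    assert templates.tool_system_prompt == "My custom agent instructions..."
    templates.tool_system_prompt = None  # suppression: SDK should inject nothing
    assert templates.tool_system_prompt is None
    # Instances are independent: overriding one must not change the defaults.
    assert PromptTemplates().tool_system_prompt == "Default agent instructions."
```

Field defaults are plain strings (immutable), so per-instance overrides are safe without `default_factory`.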
## Context
Spawned from PR #228 review (tool-calling support). The tool-calling feature makes this more urgent since it introduces a multi-turn loop where SDK-injected text compounds across turns.
Relates to #225