Conversation
…ToolSchemasPrompt in buildJsonToolSystemPrompt
…masPrompt via providerOptions
…chemasPrompt via providerOptions
…rOption for core and web-llm
@mikechao is attempting to deploy a commit to the browser-ai OSS Program Team on Vercel. A member of the Team first needs to authorize it.
Hi @mikechao,
Thank you for the PR!
It looks solid, but do you think we could come up with different names for beforeToolSchemasPrompt and afterToolSchemasPrompt? They're a bit ambiguous.
Perhaps something like toolCallingInstructionsBefore and toolCallingInstructionsAfter would be more fitting? Or, to be even more precise, toolCallingInstructions and toolCallingResponseFormat, etc. I'm open to discussion :)
Also, we need to be careful about letting developers control the tool-call response format. The library's parser expects tool calls inside ```tool_call fences with a specific JSON shape (name + `arguments`). If custom instructions tell the model to respond in a different format, automatic tool-call detection will silently fail. So we might need to document clearly that these options are for customizing the surrounding instructions only (e.g., tone, constraints, behavior) and that the response format must not be changed.
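To make the failure mode concrete, here is a minimal sketch of how a fence-based tool-call parser like the one described above could work. This is not the library's actual parser; the function name `extractToolCalls` and the exact regex are illustrative assumptions. The point is that output in any other format simply produces zero matches, with no error raised.

```typescript
// Hypothetical sketch of tool-call detection: scan the model output for
// ```tool_call fences and parse the JSON body ({"name": ..., "arguments": ...}).
interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

function extractToolCalls(text: string): ToolCall[] {
  const calls: ToolCall[] = [];
  // Match each ```tool_call fence and capture its body lazily up to the closing fence.
  const fence = /```tool_call\s*\n([\s\S]*?)```/g;
  for (const match of text.matchAll(fence)) {
    try {
      const parsed = JSON.parse(match[1]);
      if (typeof parsed.name === "string" && typeof parsed.arguments === "object") {
        calls.push(parsed as ToolCall);
      }
    } catch {
      // Malformed JSON inside the fence is skipped: detection fails silently,
      // which is exactly the risk if custom instructions change the format.
    }
  }
  return calls;
}
```

If custom instructions tell the model to emit, say, XML-style calls or a different JSON shape, this kind of parser returns an empty array and the response is treated as plain text.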
Hey @jakobhoeg, yeah, I wasn't so sure about the names either. I do like the suggestion of toolCallingInstructionsBefore and toolCallingInstructionsAfter. As for the tool_call fences, should we just do something like:
${toolCallingInstructionsBefore}
# Available Tools
${toolsJson}
# Tool Calling Instructions
${parallelInstruction}
To call a tool, output JSON in this exact format inside a \`\`\`tool_call code fence:
\`\`\`tool_call
{"name": "tool_name", "arguments": {"param1": "value1", "param2": "value2"}}
\`\`\`
Tool responses will be provided in \`\`\`tool_result fences. Each line contains JSON like:
\`\`\`tool_result
{"id": "call_123", "name": "tool_name", "result": {...}, "error": false}
\`\`\`
Use the \`result\` payload (and treat \`error\` as a boolean flag) when continuing the conversation.
${toolCallingInstructionsAfter}
And document that?
Sounds good to me!
This PR addresses #143.
Adds beforeToolSchemasPrompt and afterToolSchemasPrompt as per-call provider options to both @browser-ai/core and @browser-ai/web-llm. When either is set, the default tool-use instruction block is replaced with the provided text, while the generated tool schemas JSON is still injected automatically between them.
This is intentionally different from the overrideTooluseInstruction: true approach suggested in the issue — that approach would require callers to manually re-inject the tool schemas, which is error-prone. The before/after wrapper keeps schema injection automatic while giving full control over the surrounding instructions.
If neither property is set, the existing default behavior is preserved.
Example Usage:
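The example itself is missing above, so here is a self-contained sketch of the assembly behavior the PR describes: the tool schemas JSON is always injected automatically, and each option independently falls back to the existing default when unset. The default instruction strings and the exact signature of `buildJsonToolSystemPrompt` are assumptions for illustration, not the library's actual internals.

```typescript
// Per-call provider options added by this PR (names as in the PR description).
interface ToolPromptOptions {
  beforeToolSchemasPrompt?: string;
  afterToolSchemasPrompt?: string;
}

// Placeholder defaults; the real library ships its own instruction text.
const DEFAULT_BEFORE = "You can call tools to help answer the user.";
const DEFAULT_AFTER =
  'To call a tool, output JSON in a ```tool_call code fence: {"name": ..., "arguments": ...}';

function buildJsonToolSystemPrompt(
  toolsJson: string,
  opts: ToolPromptOptions = {}
): string {
  // Each option independently overrides its half of the surrounding
  // instructions; if neither is set, the default behavior is preserved.
  const before = opts.beforeToolSchemasPrompt ?? DEFAULT_BEFORE;
  const after = opts.afterToolSchemasPrompt ?? DEFAULT_AFTER;
  // The schemas JSON is always injected between the two halves,
  // so callers never have to re-inject it manually.
  return `${before}\n# Available Tools\n${toolsJson}\n${after}`;
}
```

A caller would then pass these as providerOptions on a single call, e.g. `{ beforeToolSchemasPrompt: "Only call tools when strictly necessary." }`, without touching the schema block or the response-format instructions.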