🐛 mcp-chat: Chained tool calls are not supported #8443
Workspace
mcp-chat
📜 Description
While using the mcp-chat plugin, I noticed that queries which require multiple tool calls where the output of the first call is needed to construct the second one never complete and the LLM does not return a response.
For example, when asking:
“Show logs for 1st and 2nd pods in the `default` namespace”
the request logically requires the model to first retrieve the list of pods in the default namespace and then use that result to determine the names of the first and second pods before requesting their logs. In other words, the second tool call depends on the result of the first one.
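To make the dependency concrete, the request above roughly implies a conversation like the following (the tool names `listPods` and `getPodLogs` are hypothetical placeholders, not the plugin's actual MCP tools):

```typescript
// Illustrative OpenAI-style transcript; tool names and arguments are made up.
const transcript = [
  { role: 'user', content: 'Show logs for 1st and 2nd pods in the default namespace' },
  // Round 1: the model cannot know the pod names yet, so it lists them first.
  {
    role: 'assistant',
    content: null,
    tool_calls: [
      { id: 'call_1', function: { name: 'listPods', arguments: '{"namespace":"default"}' } },
    ],
  },
  { role: 'tool', tool_call_id: 'call_1', content: '["pod-a","pod-b","pod-c"]' },
  // Round 2: only now can it request logs for the 1st and 2nd pods by name.
  {
    role: 'assistant',
    content: null,
    tool_calls: [
      { id: 'call_2', function: { name: 'getPodLogs', arguments: '{"pod":"pod-a"}' } },
      { id: 'call_3', function: { name: 'getPodLogs', arguments: '{"pod":"pod-b"}' } },
    ],
  },
];
console.log(`rounds of tool calls: ${transcript.filter(m => 'tool_calls' in m).length}`);
```

The current implementation only handles the first assistant turn, so round 2 never happens.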
Another scenario requiring chained tool calls is interaction with the Backstage catalog:
"Show me the available REST API endpoints of component `my-component`"
In this scenario, the LLM appears to get stuck and no response is produced. After some time, the request fails on the backend with a 500 error and a fetch failed exception coming from the OpenAI provider.
👍 Expected behavior
The LLM should be able to perform sequential tool calls when needed. In the example above, it should first retrieve the list of pods in the default namespace, identify the first and second pods from the result, then request their logs and return the final response to the user.
👎 Actual Behavior with Screenshots
mcp-chat error Request failed with status 500 fetch failed type="errorHandler" stack="TypeError: fetch failed\n at node:internal/deps/undici/undici:16416:13\n at async OpenAIProvider.makeRequest (/Users/mattia/Desktop/eoc/playgroung/backstage-community-plugins/workspaces/mcp-chat/plugins/mcp-chat-backend/src/providers/base-provider.ts:77:22)\n at async OpenAIProvider.sendMessage (/Users/mattia/Desktop/eoc/playgroung/backstage-community-plugins/workspaces/mcp-chat/plugins/mcp-chat-backend/src/providers/openai-provider.ts:30:22)\n at async MCPClientServiceImpl.processQuery (/Users/mattia/Desktop/eoc/playgroung/backstage-community-plugins/workspaces/mcp-chat/plugins/mcp-chat-backend/src/services/MCPClientServiceImpl.ts:513:24)\n at async <anonymous> (/Users/mattia/Desktop/eoc/playgroung/backstage-community-plugins/workspaces/mcp-chat/plugins/mcp-chat-backend/src/routes/chatRoutes.ts:85:7)" cause={"errno":-60,"code":"ETIMEDOUT","syscall":"read"}
👟 Reproduction steps
- Set up the `mcp-chat` plugin
- Ask a question like “Show logs for 1st and 2nd pods in the `default` namespace”
📃 Provide the context for the Bug.
I managed to solve the issue locally by modifying the following lines: https://github.com/backstage/community-plugins/blob/main/workspaces/mcp-chat/plugins/mcp-chat-backend/src/services/MCPClientServiceImpl.ts#L449-L519
With this:
```ts
const allToolCalls: any[] = [];
const allToolResponses: any[] = [];
const MAX_TOOL_ITERATIONS = 10;

let response = await this.llmProvider.sendMessage(messages, llmTools);
let replyMessage = response.choices[0].message;

this.logger.info(
  `LLM response received with ${
    replyMessage.tool_calls?.length || 0
  } tool calls`,
);

let iterations = 0;
while (
  replyMessage.tool_calls &&
  replyMessage.tool_calls.length > 0 &&
  iterations < MAX_TOOL_ITERATIONS
) {
  iterations++;
  const toolCalls = replyMessage.tool_calls;

  for (const toolCall of toolCalls) {
    try {
      const toolResponse = await executeToolCall(
        toolCall,
        this.tools,
        this.mcpClients,
      );
      allToolResponses.push(toolResponse);
      messages.push({
        role: 'assistant',
        content: null,
        tool_calls: [toolCall],
      });
      messages.push({
        role: 'tool',
        content: toolResponse.result,
        tool_call_id: toolCall.id,
      });
    } catch (error) {
      const errorMessage = `Error executing tool '${
        toolCall.function.name
      }': ${error instanceof Error ? error.message : error}`;
      this.logger.warn(errorMessage);
      const errorResponse = {
        id: toolCall.id,
        name: toolCall.function.name,
        arguments: JSON.parse(toolCall.function.arguments || '{}'),
        result: errorMessage,
        serverId: 'error',
      };
      allToolResponses.push(errorResponse);
      messages.push({
        role: 'assistant',
        content: null,
        tool_calls: [toolCall],
      });
      messages.push({
        role: 'tool',
        content: errorMessage,
        tool_call_id: toolCall.id,
      });
    }
  }

  allToolCalls.push(...toolCalls);

  // Send tool results back to LLM — it may request more tools
  response = await this.llmProvider.sendMessage(messages, llmTools);
  replyMessage = response.choices[0].message;
  this.logger.info(
    `LLM follow-up response (iteration ${iterations}) with ${
      replyMessage.tool_calls?.length || 0
    } tool calls`,
  );
}

if (iterations >= MAX_TOOL_ITERATIONS) {
  this.logger.warn('Reached maximum tool call iterations, stopping loop');
}

return {
  reply: replyMessage.content || '',
  toolCalls: allToolCalls,
  toolResponses: allToolResponses,
};
```
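For reference, the loop's termination behavior can be checked in isolation with a stubbed provider. This is only a sketch: `stubSendMessage` and the hard-coded tool result stand in for the real `OpenAIProvider` and `executeToolCall`, and the message types are simplified.

```typescript
// Standalone simulation of the iterative tool-call loop with a fake LLM provider.
type ToolCall = { id: string; function: { name: string; arguments: string } };
type Msg = {
  role: 'user' | 'assistant' | 'tool';
  content: string | null;
  tool_calls?: ToolCall[];
  tool_call_id?: string;
};

let providerCalls = 0;
// Fake provider: first reply requests one tool call, second reply is the final answer.
async function stubSendMessage(_messages: Msg[]): Promise<{ choices: { message: Msg }[] }> {
  providerCalls++;
  if (providerCalls === 1) {
    return {
      choices: [
        {
          message: {
            role: 'assistant',
            content: null,
            tool_calls: [{ id: 'c1', function: { name: 'listPods', arguments: '{}' } }],
          },
        },
      ],
    };
  }
  return { choices: [{ message: { role: 'assistant', content: 'done' } }] };
}

async function runLoop(): Promise<{ reply: string; iterations: number }> {
  providerCalls = 0; // reset the stub for repeatable runs
  const messages: Msg[] = [{ role: 'user', content: 'Show logs for the 1st and 2nd pods' }];
  const MAX_TOOL_ITERATIONS = 10;
  let replyMessage = (await stubSendMessage(messages)).choices[0].message;
  let iterations = 0;
  while (
    replyMessage.tool_calls &&
    replyMessage.tool_calls.length > 0 &&
    iterations < MAX_TOOL_ITERATIONS
  ) {
    iterations++;
    for (const toolCall of replyMessage.tool_calls) {
      messages.push({ role: 'assistant', content: null, tool_calls: [toolCall] });
      // Hard-coded tool result, standing in for executeToolCall(...)
      messages.push({ role: 'tool', content: '["pod-a","pod-b"]', tool_call_id: toolCall.id });
    }
    // Feed results back; the model may request further tools or answer.
    replyMessage = (await stubSendMessage(messages)).choices[0].message;
  }
  return { reply: replyMessage.content || '', iterations };
}

runLoop().then(r => console.log(r.reply, r.iterations)); // prints "done 1"
```

With the stub, the loop runs exactly one tool iteration and then terminates with the final answer, which is the behavior the patch above aims for.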
I would like to ask for others' opinions before opening a PR.
👀 Have you spent some time to check if this bug has been raised before?
- I checked and didn't find a similar issue
🏢 Have you read the Code of Conduct?
- I have read the Code of Conduct
Are you willing to submit PR?
Yes I am willing to submit a PR!