Bug: Multi Model Conversations Pipe appends all participant responses as role: assistant, breaking Round 2+ #67
Bug Description
The Multi Model Conversations Pipe appends ALL participant responses as `role: assistant` in the conversation history, creating consecutive assistant messages. LLMs expect strictly alternating `user`/`assistant` turns. This causes garbage output (random tokens, Chinese characters, incoherent text) from Round 2 onward.
Steps to Reproduce
- Set up a conversation with 2 participants (e.g., deepseek-v3.2 and gemini-3-pro-preview)
- Run Round 1 — both models respond correctly
- Run Round 2 — the second model (and sometimes the first) produces garbage output
Root Cause
Around lines 479-481 of the pipe, all participant responses are appended with `role: assistant` regardless of which participant generated them. When building the context for Participant B, Participant A's response appears as `role: assistant`, followed by Participant B's own prior response also as `role: assistant` — consecutive assistant messages that violate the expected turn structure.
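A minimal sketch of the failure mode (the function and message shapes are illustrative, not the pipe's actual code):

```python
# Hypothetical reduction of the buggy history construction: every
# participant's reply is appended with role "assistant", no matter
# which participant produced it.
history = [{"role": "user", "content": "Discuss topic X."}]

def append_response_buggy(history, participant_id, text):
    # Bug: role is hard-coded to "assistant" for every participant.
    history.append({"role": "assistant", "content": text})

append_response_buggy(history, "A", "Round 1 reply from participant A")
append_response_buggy(history, "B", "Round 1 reply from participant B")

# The Round 2 context now contains two consecutive assistant
# messages, violating user/assistant alternation:
print([m["role"] for m in history])  # ['user', 'assistant', 'assistant']
```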
Expected Behavior
When building the conversation history for each participant, the OTHER participants' responses should be set to `role: user` (not `assistant`). Each model should then see only its own responses as `assistant` and everything else as `user`, ensuring properly alternating turns.
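One way to implement this is to store a speaker-tagged transcript and re-map roles per participant when building each request. This is a sketch of the approach, not the pipe's API; `speaker`, `build_history_for`, and the transcript shape are assumed names:

```python
def build_history_for(participant_id, transcript):
    """Re-map roles so the target model sees only its own turns as
    'assistant' and every other speaker's turns as 'user'."""
    history = []
    for entry in transcript:
        role = "assistant" if entry["speaker"] == participant_id else "user"
        history.append({"role": role, "content": entry["content"]})
    return history

transcript = [
    {"speaker": "moderator", "content": "Discuss topic X."},
    {"speaker": "A", "content": "A's round 1 reply"},
    {"speaker": "B", "content": "B's round 1 reply"},
]
print([m["role"] for m in build_history_for("B", transcript)])
# ['user', 'user', 'assistant']
```

Note that with three or more participants this can still produce consecutive `user` messages; some providers require merging adjacent same-role messages into one, which would be a small extra pass over the list.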
Test Results
| Test | Result |
|---|---|
| deepseek-v3.2 direct API call | PASS - English, coherent |
| gemini-3-pro-preview direct API call | PASS - English, coherent |
| Pipe Round 1 (first exchange) | PASS - proper output |
| Pipe Round 2+ (subsequent exchanges) | FAIL - garbage/Chinese/tokens |
Environment
- Open WebUI (latest at time of testing, March 2026)
- Models tested: deepseek-v3.2, gemini-3-pro-preview (via LiteLLM proxy)
- Both models work perfectly via direct API calls — issue is exclusively in the pipe's history construction
Workaround
We built a standalone orchestration script that manages conversation history externally with proper role alternation, bypassing the pipe entirely.
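The core of such a script can be quite small. This is an illustrative skeleton, not our actual workaround; `run_round` and the stubbed `call_model` are hypothetical, and a real script would call the LiteLLM proxy where the stub is:

```python
def run_round(participants, transcript, call_model):
    """Run one conversation round: each participant replies in turn,
    seeing a history whose roles are re-mapped from its own viewpoint."""
    for pid, model in participants:
        history = [
            {"role": "assistant" if e["speaker"] == pid else "user",
             "content": e["content"]}
            for e in transcript
        ]
        reply = call_model(model, history)
        transcript.append({"speaker": pid, "content": reply})
    return transcript

# Stub model call for demonstration only.
def fake_call(model, history):
    return f"{model} saw {len(history)} messages"

transcript = [{"speaker": "moderator", "content": "Discuss topic X."}]
participants = [("A", "deepseek-v3.2"), ("B", "gemini-3-pro-preview")]
run_round(participants, transcript, fake_call)  # Round 1
run_round(participants, transcript, fake_call)  # Round 2
```

Because each participant's history is rebuilt per call, Round 2+ contexts always alternate correctly from that model's perspective.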