Conversation
```ts
const target =
  assistantMessages[Math.max(0, assistantMessages.length - Math.max(1, numTurns))] ??
  null;
```
🟢 Low Layers/OpenCodeAdapter.ts:1123
The rollbackThread calculation uses assistantMessages.length - Math.max(1, numTurns) which targets the wrong message index. With 5 messages and numTurns=1, this computes index 4 (the last message), so session.revert keeps all messages instead of removing the last turn. The formula should be assistantMessages.length - numTurns - 1 to target the message immediately before the turns being rolled back.
Suggested fix:

```diff
  const target =
-   assistantMessages[Math.max(0, assistantMessages.length - Math.max(1, numTurns))] ??
+   assistantMessages[Math.max(0, assistantMessages.length - numTurns - 1)] ??
    null;
```

🤖 Copy this AI Prompt to have your agent fix this:
In file apps/server/src/provider/Layers/OpenCodeAdapter.ts around lines 1123-1125:
The `rollbackThread` calculation uses `assistantMessages.length - Math.max(1, numTurns)` which targets the wrong message index. With 5 messages and `numTurns=1`, this computes index 4 (the last message), so `session.revert` keeps all messages instead of removing the last turn. The formula should be `assistantMessages.length - numTurns - 1` to target the message immediately before the turns being rolled back.
Evidence trail:
1. apps/server/src/provider/Layers/OpenCodeAdapter.ts lines 1120-1145 (REVIEWED_COMMIT) - shows the formula `assistantMessages[Math.max(0, assistantMessages.length - Math.max(1, numTurns))]`
2. GitHub issue anomalyco/opencode#5911 - demonstrates `session.revert` semantics: the script selects first user message ID and calls revert expecting everything AFTER that message to be removed, confirming "revert TO" semantics
3. Math verification: With length=5 and numTurns=1: formula gives 5-1=4 (last message); reverting TO last message removes nothing after it
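The index arithmetic is easy to check in isolation. A hypothetical standalone sketch of the two formulas (not the adapter's actual code; `buggyIndex`/`fixedIndex` are invented names):

```typescript
// Hypothetical standalone sketch of the two index formulas discussed above.
// With 5 assistant messages and numTurns = 1, reverting "to" the target
// message keeps it and everything before it, so we must target index 3.

const buggyIndex = (length: number, numTurns: number): number =>
  Math.max(0, length - Math.max(1, numTurns));

const fixedIndex = (length: number, numTurns: number): number =>
  Math.max(0, length - numTurns - 1);

// length = 5, numTurns = 1:
// buggyIndex targets the last message, so nothing after it is removed;
// fixedIndex targets the message immediately before the rolled-back turn.
```

The `Math.max(0, …)` clamp also keeps the fixed formula safe when rolling back more turns than exist.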
```ts
    },
  });
}
}
```
Duplicate text streaming logic across event handlers
Medium Severity
The message.updated handler and the message.part.updated handler contain nearly identical ~50-line blocks for emitting content.delta events and item.completed events for assistant text parts. The logic for tracking emittedTextLengthByPartId, slicing deltas, checking completedAssistantPartIds, and building the event payloads is copy-pasted between the two handlers. Extracting a shared helper (e.g. emitAssistantTextDelta) would reduce the maintenance burden and risk of inconsistent bug fixes between the two code paths.
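A shared helper might look like the following sketch. The names `emittedTextLengthByPartId` and `completedAssistantPartIds` come from the review above; everything else (the event shapes, the `emit` callback) is hypothetical:

```typescript
type ProviderEvent =
  | { type: "content.delta"; partId: string; delta: string }
  | { type: "item.completed"; partId: string; text: string };

interface TextStreamState {
  emittedTextLengthByPartId: Map<string, number>;
  completedAssistantPartIds: Set<string>;
}

// Hypothetical shared helper: both the message.updated and the
// message.part.updated handlers would call this instead of duplicating
// the delta-slicing and completion-tracking logic.
function emitAssistantTextDelta(
  state: TextStreamState,
  partId: string,
  fullText: string,
  completed: boolean,
  emit: (event: ProviderEvent) => void,
): void {
  const emitted = state.emittedTextLengthByPartId.get(partId) ?? 0;
  if (fullText.length > emitted) {
    // Only emit the portion of the text not yet sent to the client.
    emit({ type: "content.delta", partId, delta: fullText.slice(emitted) });
    state.emittedTextLengthByPartId.set(partId, fullText.length);
  }
  if (completed && !state.completedAssistantPartIds.has(partId)) {
    state.completedAssistantPartIds.add(partId);
    emit({ type: "item.completed", partId, text: fullText });
  }
}
```

With a helper like this, a bug fix in the slicing logic lands in one place instead of two.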
Additional Locations (1)
Reviewed by Cursor Bugbot for commit 2811ba6.
Approvability verdict: Needs human review. 1 blocking correctness issue found. This PR introduces a major new provider integration (OpenCode) with ~3,650 lines of new code spanning server adapters, provider registry, text generation, settings, and UI components. Additionally, there are unresolved review comments identifying high-severity bugs (a model parameter type mismatch causing runtime failures) and medium-severity issues (permission rule logic, state management). Human review is needed to evaluate the new integration and address the identified bugs. You can customize Macroscope's approvability policy.
```ts
try: async () => {
  await context.client.session.promptAsync({
    sessionID: context.openCodeSessionId,
    model: parsedModel,
```
Passing parsed object instead of string for model parameter
High Severity
parseOpenCodeModelSlug returns a ParsedOpenCodeModelSlug object ({ providerID, modelID }), but it's passed directly as the model parameter to session.promptAsync and session.prompt. The SDK expects a string like "openai/gpt-5", not an object. This will cause every OpenCode prompt call to fail at runtime. The original slug string or toOpenCodeModelSlug(parsedModel.providerID, parsedModel.modelID) is likely what was intended.
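The fix amounts to serializing the parsed pair back into a slug before calling the SDK. A minimal sketch, assuming `toOpenCodeModelSlug` simply joins the two IDs with a slash (its real implementation may differ):

```typescript
interface ParsedOpenCodeModelSlug {
  providerID: string;
  modelID: string;
}

// Assumption: the SDK expects "provider/model" as a single string.
const toOpenCodeModelSlug = (providerID: string, modelID: string): string =>
  `${providerID}/${modelID}`;

const parsedModel: ParsedOpenCodeModelSlug = {
  providerID: "openai",
  modelID: "gpt-5",
};

// Instead of `model: parsedModel`, pass the re-serialized slug string:
const model = toOpenCodeModelSlug(parsedModel.providerID, parsedModel.modelID);
```

Alternatively, the original slug string could be threaded through alongside the parsed form so no re-serialization is needed.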
Additional Locations (1)
Reviewed by Cursor Bugbot for commit 1e1f31c.
```ts
  { permission: "external_directory", pattern: "*", action: "ask" },
  { permission: "doom_loop", pattern: "*", action: "ask" },
  { permission: "question", pattern: "*", action: "allow" },
];
```
Wildcard allow rule negates approval-required permission rules
Medium Severity
In buildOpenCodePermissionRules, the non-full-access branch starts with { permission: "*", pattern: "*", action: "allow" } — the exact same rule used for full-access mode. This wildcard allow-all rule likely makes the subsequent "ask" rules for bash, edit, etc. ineffective, causing approval-required mode to behave identically to full-access. The first rule probably needs to be a default "ask" or "deny" rather than "allow".
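Assuming first-match-wins evaluation (which the comment implies; OpenCode's real precedence rules may differ), the shadowing effect is easy to demonstrate with a toy resolver. The rule shape comes from the snippet above; the resolver and both rule lists are hypothetical:

```typescript
interface PermissionRule {
  permission: string;
  pattern: string;
  action: "allow" | "ask" | "deny";
}

// Toy first-match resolver (an assumption, not OpenCode's documented semantics).
const resolve = (rules: PermissionRule[], permission: string): string =>
  rules.find((r) => r.permission === "*" || r.permission === permission)
    ?.action ?? "deny";

// As reviewed: the leading wildcard allow shadows every rule after it.
const buggyRules: PermissionRule[] = [
  { permission: "*", pattern: "*", action: "allow" },
  { permission: "bash", pattern: "*", action: "ask" },
];

// One possible fix under first-match semantics: specific allows first,
// then a wildcard default of "ask" as the final catch-all.
const fixedRules: PermissionRule[] = [
  { permission: "question", pattern: "*", action: "allow" },
  { permission: "*", pattern: "*", action: "ask" },
];
```

If OpenCode instead uses last-match or specificity-based precedence, the fix would be to change the wildcard rule's action rather than its position, as the review suggests.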
Reviewed by Cursor Bugbot for commit 1e1f31c.
- preserve OpenCode variant and agent selections
- hide unsupported interaction mode controls
- add provider snapshot and UI coverage
```ts
return Effect.gen(function* () {
  const versionExit = yield* Effect.exit(
    Effect.tryPromise(() =>
      runOpenCodeCommand({
        binaryPath: input.settings.binaryPath,
        args: ["--version"],
      }),
    ),
  );
```
🟢 Low Layers/OpenCodeProvider.ts:59
When the OpenCode binary is not found, Cause.squash returns an UnknownError whose message property does not contain the original "ENOENT" or "not found" text. isCommandMissingCause then fails to recognize the error as a missing command, causing the function to return installed: true with message "Failed to probe OpenCode CLI." instead of installed: false with "OpenCode CLI not found on PATH.". Consider either adding a catch function to Effect.tryPromise to convert the error to a known type, or checking UnknownError.cause for the original error message.
Suggested fix:

```diff
+ return Effect.gen(function* () {
+   const versionExit = yield* Effect.exit(
+     Effect.tryPromise({
+       try: () =>
+         runOpenCodeCommand({
+           binaryPath: input.settings.binaryPath,
+           args: ["--version"],
+         }),
+       catch: (error) => error,
+     }),
+   );
```

🤖 Copy this AI Prompt to have your agent fix this:
In file apps/server/src/provider/Layers/OpenCodeProvider.ts around lines 59-67:
When the OpenCode binary is not found, `Cause.squash` returns an `UnknownError` whose `message` property does not contain the original "ENOENT" or "not found" text. `isCommandMissingCause` then fails to recognize the error as a missing command, causing the function to return `installed: true` with message "Failed to probe OpenCode CLI." instead of `installed: false` with "OpenCode CLI not found on PATH.". Consider either adding a `catch` function to `Effect.tryPromise` to convert the error to a known type, or checking `UnknownError.cause` for the original error message.
Evidence trail:
1. Effect-TS core.ts (https://github.com/Effect-TS/effect) lines 2375-2381: `UnknownException` constructor uses `super(message ?? "An unknown error occurred", { cause })` and stores original in `this.error`
2. OpenCodeProvider.ts (REVIEWED_COMMIT) line 54-57: `Effect.tryPromise(() => runOpenCodeCommand(...))` without catch handler
3. providerSnapshot.ts (REVIEWED_COMMIT) lines 35-39: `isCommandMissingCause` checks `error.message.toLowerCase()` for "enoent" or "notfound"
4. opencodeRuntime.ts (REVIEWED_COMMIT) lines 401-404: `child.once("error", reject)` passes ENOENT error to Promise rejection
5. OpenCodeProvider.ts line 59: `return fallback(Cause.squash(versionExit.cause))` passes the squashed UnknownException to fallback
```ts
  }),
}),
(server) => Effect.sync(() => server.close()),
);
```
Each text generation spawns and tears down server process
Medium Severity
Every call to runOpenCodeJson (commit messages, PR content, branch names, thread titles) spawns a brand new OpenCode server child process via startOpenCodeServerProcess, waits for it to become ready, creates a session, sends a single prompt, then immediately shuts down the server. This involves finding a free port, spawning a process, waiting up to 5 seconds for startup, and then killing it — all for a single text generation request. The adapter (OpenCodeAdapter) maintains a long-lived server per session, but the text generation layer does not reuse any server instance.
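Generically, the reuse pattern is a cached handle plus an idle timer that is reset on every use. This sketch is illustrative only; the `WarmHandle` wrapper, its spawn/close callbacks, and the TTL value are stand-ins, not the repo's actual API:

```typescript
// Illustrative warm-handle cache: spawn once, reuse across back-to-back
// calls, tear down only after the handle has sat idle for `ttlMs`.
class WarmHandle<T> {
  private instance: T | null = null;
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private readonly spawn: () => T,
    private readonly close: (instance: T) => void,
    private readonly ttlMs: number,
  ) {}

  acquire(): T {
    if (this.timer) clearTimeout(this.timer);
    if (this.instance === null) this.instance = this.spawn();
    // Every use resets the idle clock.
    this.timer = setTimeout(() => this.closeNow(), this.ttlMs);
    return this.instance;
  }

  closeNow(): void {
    if (this.timer) clearTimeout(this.timer);
    this.timer = null;
    if (this.instance !== null) {
      this.close(this.instance);
      this.instance = null;
    }
  }
}
```

A burst of commit-message and PR-content generations would then pay the spawn-and-ready cost once instead of once per request.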
Reviewed by Cursor Bugbot for commit 5ebbf95.
- Keep a warm OpenCode process alive across back-to-back requests
- Close the shared server after the idle TTL and add coverage for reuse
- Reuse an existing OpenCode server for provider sessions and text generation
- Add `serverUrl` to settings, UI, and runtime connection handling
- Update tests for configured-server behavior
```ts
}): Effect.Effect<ServerProvider> {
  const checkedAt = new Date().toISOString();
  const customModels = input.settings.customModels;
  const isExternalServer = input.settings.serverUrl.length > 0;
```
🟢 Low Layers/OpenCodeProvider.ts:31
When input.settings.serverUrl contains only whitespace, isExternalServer becomes true but connectToOpenCodeServer treats the trimmed empty string as falsy and spawns a local process instead. This produces misleading error messages that reference "the configured OpenCode server" when the actual failure was spawning the local binary. Consider trimming the value before the length check to align the two locations.
Suggested fix:

```diff
- const isExternalServer = input.settings.serverUrl.length > 0;
+ const isExternalServer = input.settings.serverUrl.trim().length > 0;
```

🤖 Copy this AI Prompt to have your agent fix this:
In file apps/server/src/provider/Layers/OpenCodeProvider.ts around line 31:
When `input.settings.serverUrl` contains only whitespace, `isExternalServer` becomes `true` but `connectToOpenCodeServer` treats the trimmed empty string as falsy and spawns a local process instead. This produces misleading error messages that reference "the configured OpenCode server" when the actual failure was spawning the local binary. Consider trimming the value before the length check to align the two locations.
Evidence trail:
apps/server/src/provider/Layers/OpenCodeProvider.ts line 31: `const isExternalServer = input.settings.serverUrl.length > 0;`
apps/server/src/provider/Layers/OpenCodeProvider.ts lines 47-52: fallback error message uses `isExternalServer` to choose between "Failed to connect to the configured OpenCode server" vs "Failed to probe OpenCode CLI"
apps/server/src/provider/opencodeRuntime.ts lines 321-339: `connectToOpenCodeServer` function trims serverUrl before checking truthiness: `const serverUrl = input.serverUrl?.trim(); if (serverUrl) { ... }` - whitespace-only becomes empty string (falsy) and falls through to `startOpenCodeServerProcess`
```ts
  },
});

// …

yield* Effect.tryPromise({
```
🟡 Medium Layers/OpenCodeAdapter.ts:936
In sendTurn, if client.session.promptAsync fails, the session state remains status: "running" with activeTurnId set, and the client has received turn.started, but no turn actually exists on the OpenCode server. No turn.completed or turn.aborted event is emitted to correct this, leaving the system in an inconsistent state. Consider rolling back the state mutations and emitting a turn.aborted event when promptAsync throws.
🤖 Copy this AI Prompt to have your agent fix this:
In file apps/server/src/provider/Layers/OpenCodeAdapter.ts around line 936:
In `sendTurn`, if `client.session.promptAsync` fails, the session state remains `status: "running"` with `activeTurnId` set, and the client has received `turn.started`, but no turn actually exists on the OpenCode server. No `turn.completed` or `turn.aborted` event is emitted to correct this, leaving the system in an inconsistent state. Consider rolling back the state mutations and emitting a `turn.aborted` event when `promptAsync` throws.
Evidence trail:
apps/server/src/provider/Layers/OpenCodeAdapter.ts lines 912-952 (sendTurn function - state mutation at 912-923, turn.started emission at 925-933, promptAsync call at 935-952 with no error cleanup); line 977 (only turn.aborted emission in interruptTurn); lines 694, 722 (turn.completed emissions in event pump, not in sendTurn error path). Verified at REVIEWED_COMMIT.
```diff
- const promptEffort = resolveEffort(caps, rawEffort) ?? null;
+ const promptEffort =
+   provider === "opencode" ? (rawEffort ?? null) : (resolveEffort(caps, rawEffort) ?? null);
```
OpenCode promptEffort bypasses capability-based validation
Low Severity
For OpenCode, promptEffort is set to the raw variant value (rawEffort ?? null) without validating it against the model's available variantOptions. In contrast, TraitsPicker's getSelectedTraits correctly uses resolveNamedOption(effortLevels, rawEffort) for OpenCode. If a user has a saved variant that's removed from a model's capabilities, promptEffort would reflect a stale/invalid value even though modelOptionsForDispatch is properly normalized. This inconsistency could surface an invalid label in any UI that depends on promptEffort.
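One consistent approach, mirroring what the comment says `TraitsPicker` already does, is to validate the raw value against the currently available options. Sketch with invented types; `resolveNamedOption` here is a stand-in for the real helper:

```typescript
interface NamedOption {
  value: string;
  label: string;
}

// Stand-in for the real resolveNamedOption helper: accept the raw value
// only if it still exists among the model's current variant options.
const resolveNamedOption = (
  options: NamedOption[],
  raw: string | null | undefined,
): string | null => options.find((o) => o.value === raw)?.value ?? null;

const variantOptions: NamedOption[] = [
  { value: "fast", label: "Fast" },
  { value: "thorough", label: "Thorough" },
];

// A stale saved variant that the model no longer offers resolves to null
// instead of leaking an invalid label into the UI.
const promptEffort = resolveNamedOption(variantOptions, "legacy-variant");
```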
Reviewed by Cursor Bugbot for commit 4d1cf0b.
- Plumb OpenCode server passwords through settings and contracts
- Send basic auth to external OpenCode servers when configured
- Surface the password field in the web settings UI
```ts
  },
  parts: [{ type: "text", text: input.prompt }, ...fileParts],
});
const structured = result.data?.info.structured;
```
🟢 Low Layers/OpenCodeTextGeneration.ts:236
At line 236, result.data?.info.structured only guards data with optional chaining, not info. When result.data exists but result.data.info is undefined, the expression throws TypeError: Cannot read property 'structured' of undefined. This causes the outer catch to convert it to a TextGenerationError with a confusing message instead of the intended "OpenCode returned no structured output." error. Consider using result.data?.info?.structured to safely handle both cases.
Suggested fix:

```diff
- const structured = result.data?.info.structured;
+ const structured = result.data?.info?.structured;
```
🤖 Copy this AI Prompt to have your agent fix this:
In file apps/server/src/git/Layers/OpenCodeTextGeneration.ts around line 236:
At line 236, `result.data?.info.structured` only guards `data` with optional chaining, not `info`. When `result.data` exists but `result.data.info` is `undefined`, the expression throws `TypeError: Cannot read property 'structured' of undefined`. This causes the outer `catch` to convert it to a `TextGenerationError` with a confusing message instead of the intended "OpenCode returned no structured output." error. Consider using `result.data?.info?.structured` to safely handle both cases.
Evidence trail:
apps/server/src/git/Layers/OpenCodeTextGeneration.ts:236 shows `result.data?.info.structured` - optional chaining on `data` but not `info`. Lines 237-239 show the intended error message for undefined structured output. Lines 241-247 show the catch block that wraps exceptions. GitHub issue https://github.com/anomalyco/opencode/issues/15756 demonstrates the SDK can return `{ data: {} }` (empty object) making `data.info` undefined. apps/server/package.json shows `@opencode-ai/sdk: ^1.3.15`.
```ts
const existing = sessions.get(input.threadId);
if (existing) {
  yield* Effect.tryPromise({
    try: () => stopOpenCodeContext(existing),
    catch: (cause) =>
      new ProviderAdapterProcessError({
        provider: PROVIDER,
        threadId: input.threadId,
        detail: "Failed to stop existing OpenCode session.",
        cause,
      }),
  });
  sessions.delete(input.threadId);
}
```
🟢 Low Layers/OpenCodeAdapter.ts:784
Between checking sessions.get(input.threadId) at line 784 and inserting at line 857, startSession yields via multiple Effect.tryPromise calls. If two concurrent calls for the same threadId interleave, both see no existing session, both create new sessions, and the second sessions.set overwrites the first. The first session becomes orphaned: its background startEventPump task continues, the process exit handler stays attached, and stopSession/stopAll cannot clean it up since it's absent from the map.
```diff
- const existing = sessions.get(input.threadId);
- if (existing) {
-   yield* Effect.tryPromise({
-     try: () => stopOpenCodeContext(existing),
-     catch: (cause) =>
-       new ProviderAdapterProcessError({
-         provider: PROVIDER,
-         threadId: input.threadId,
-         detail: "Failed to stop existing OpenCode session.",
-         cause,
-       }),
-   });
-   sessions.delete(input.threadId);
- }
+ // Atomic check-and-set: if another fiber already started this session,
+ // we stop our in-progress work and return the existing session.
+ if (sessions.has(input.threadId)) {
+   return sessions.get(input.threadId)!.session;
+ }
+ sessions.set(input.threadId, null as any); // Reserve slot to prevent concurrent creation
```

🤖 Copy this AI Prompt to have your agent fix this:
In file apps/server/src/provider/Layers/OpenCodeAdapter.ts around lines 784-797:
Between checking `sessions.get(input.threadId)` at line 784 and inserting at line 857, `startSession` yields via multiple `Effect.tryPromise` calls. If two concurrent calls for the same `threadId` interleave, both see no existing session, both create new sessions, and the second `sessions.set` overwrites the first. The first session becomes orphaned: its background `startEventPump` task continues, the process exit handler stays attached, and `stopSession`/`stopAll` cannot clean it up since it's absent from the map.
Evidence trail:
apps/server/src/provider/Layers/OpenCodeAdapter.ts lines 767-858 (startSession function with check at 784, yields at 770-777, 786-795, 799-823, and set at 857); line 325 (`const sessions = new Map<ThreadId, OpenCodeSessionContext>()`); lines 357-358 (startEventPump with `void (async () => {...})` background task); line 756 (exit handler attachment `context.server.process?.once("exit", ...)`); lines 1051-1064 (stopSession uses `ensureSessionContext(sessions, threadId)`); lines 1154-1160 (stopAll iterates `sessions.values()` only)
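An alternative to reserving a placeholder slot (which the suggested diff does with a `null as any` hack) is single-flight creation: store the in-progress promise in the map so concurrent callers await the same creation instead of racing. A generic sketch, not the adapter's actual types:

```typescript
// Generic single-flight creator: concurrent calls for the same key share
// one in-progress promise, so only one session is ever created per key.
function singleFlight<K, V>(
  pending: Map<K, Promise<V>>,
  key: K,
  create: () => Promise<V>,
): Promise<V> {
  const existing = pending.get(key);
  if (existing) return existing;
  const created = create().catch((error) => {
    // Drop the entry on failure so a retry can create a fresh session.
    pending.delete(key);
    throw error;
  });
  pending.set(key, created);
  return created;
}
```

Because the map entry is installed before control returns to the event loop, an interleaving second call for the same `threadId` always finds and reuses the pending promise, so no orphaned session with a live event pump can be created.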
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 6 total unresolved issues (including 5 from previous reviews).
Reviewed by Cursor Bugbot for commit ea16d83.
```diff
  >
    {option.label}
-   {option.value === defaultEffort ? " (default)" : ""}
+   {provider !== "opencode" && option.value === defaultEffort ? " (default)" : ""}
```
Variant options never show "(default)" marker in menu
Low Severity
For OpenCode variant options, the "(default)" label is never shown because defaultEffort is derived from getDefaultEffort(caps), which reads from caps.reasoningEffortLevels — an empty array for OpenCode. The guard provider !== "opencode" short-circuits to false, so even though variant options carry isDefault: true, the marker is suppressed. Meanwhile, agent options at line 391 correctly use option.isDefault to show "(default)". This inconsistency means the user sees no default indicator for variants despite one existing in the data.
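Deriving the marker from the option's own flag sidesteps the capability lookup entirely. A hypothetical label helper (the `VariantOption` shape is invented; only `isDefault` comes from the review):

```typescript
interface VariantOption {
  value: string;
  label: string;
  isDefault?: boolean;
}

// Hypothetical: build the menu label from the option's own isDefault flag,
// as the review says the agent options already do.
const menuLabel = (option: VariantOption): string =>
  `${option.label}${option.isDefault ? " (default)" : ""}`;
```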
Additional Locations (1)
Reviewed by Cursor Bugbot for commit ea16d83.
Note
Medium Risk: Adds a new end-to-end provider with server process spawning/connection, streaming session/event handling, and new settings/UI paths; failures could impact provider routing and runtime stability, but changes are largely additive and covered by targeted tests.
Overview
Adds OpenCode as a first-class provider across server and web, including settings persistence/order, registries, and provider/model pickers.
On the server, introduces an `opencodeRuntime` + `OpenCodeAdapter` to connect to or spawn an OpenCode server, create/abort sessions, stream events, and handle approvals/questions; and adds `OpenCodeTextGenerationLive` routed via `RoutingTextGenerationLive`, including warm server reuse with an idle TTL and support for commit/PR/branch/thread-title generation.
On the web, enables OpenCode in the provider picker and settings (server URL/password fields), adds OpenCode trait controls for variant + agent, refactors model selection creation via `createModelSelection`, and hides the interaction-mode toggle for OpenCode via `getComposerProviderControls`, with new/updated tests validating these behaviors.
Reviewed by Cursor Bugbot for commit ea16d83.
Note
Add OpenCode as a supported AI provider across server and web
- `opencode` provider end-to-end: contracts (`ProviderKind`, `ModelSelection`, `ServerSettings`), server adapter/registry layers, routing, and web UI (provider picker, traits menu, settings panel).
- `OpenCodeAdapter` and `OpenCodeProvider` server layers; the adapter spawns a local OpenCode server process on demand, reuses it across requests, and shuts it down after 30s idle, or connects to a pre-configured external server URL.
- `OpenCodeTextGeneration` for commit messages, PR content, branch names, and thread titles via structured JSON calls to the OpenCode SDK.
- `variant` and `agent` options, and hides the interaction mode toggle for the `opencode` provider.
- `serverUrl` and `serverPassword` fields in the settings UI provider card for OpenCode configuration.
- (`binaryPath`); misconfiguration will surface as provider errors at runtime.
Macroscope summarized ea16d83.
opencodeprovider end-to-end: contracts (ProviderKind,ModelSelection,ServerSettings), server adapter/registry layers, routing, and web UI (provider picker, traits menu, settings panel).OpenCodeAdapterandOpenCodeProviderserver layers; the adapter spawns a local OpenCode server process on demand, reuses it across requests, and shuts it down after 30s idle, or connects to a pre-configured external server URL.OpenCodeTextGenerationfor commit messages, PR content, branch names, and thread titles via structured JSON calls to the OpenCode SDK.variantandagentoptions, and hides the interaction mode toggle for theopencodeprovider.serverUrlandserverPasswordfields to the settings UI provider card for OpenCode configuration.binaryPath); misconfiguration will surface as provider errors at runtime.Macroscope summarized ea16d83.