Conversation
⚠️ Caution: Review failed — the pull request is closed.
ℹ️ Recent review info — Configuration used: defaults; Review profile: CHILL; Plan: Pro
⛔ Files ignored due to path filters (3)
📒 Files selected for processing (10)
📝 Walkthrough
This pull request adds Prettier configuration, refines UI styling for placeholder nodes (reducing width, updating colors and typography), adjusts the Background component with an explicit color, updates placeholder text, and introduces a new LLM agent execution framework with client integration, type definitions, and async task execution logic.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Agent as AgentExecution
    participant LLM as LLMClient
    participant API as Gemini API
    Agent->>Agent: Execute(ExecuteParams)
    Agent->>Agent: Validate input & build context
    Agent->>Agent: Initialize system & user messages
    loop maxIterations
        Agent->>LLM: call(messages, options)
        LLM->>LLM: Build request payload
        LLM->>API: POST /generateContent
        API-->>LLM: Response with content & tokens
        LLM->>LLM: Parse assistant message
        LLM-->>Agent: {text, inputTokens, outputTokens}
        Agent->>Agent: Add to message history
        Agent->>Agent: Evaluate stop condition
    end
    Agent-->>Agent: Return ExecuteResult (undefined)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pull request overview
This PR introduces a new packages/LLM module intended to call the Gemini API and adds an initial “agent executor” wrapper around it, plus some workflow-canvas UI styling tweaks and a new root Prettier configuration.
Changes:
- Add `LLMClient` (Gemini API HTTP client) and accompanying `.d.ts`/source maps.
- Add an `AgentExecution` executor scaffold and agent message/type declarations.
- Update workflow UI styling (ReactFlow background + node visuals) and add `.prettierrc`.
Reviewed changes
Copilot reviewed 7 out of 13 changed files in this pull request and generated 12 comments.
Show a summary per file
| File | Description |
|---|---|
| packages/LLM/src/LlmClient.js | Implements Gemini API call logic and token accounting. |
| packages/LLM/src/LlmClient.d.ts | Declares TS surface for LLMClient. |
| packages/LLM/src/LlmClient.d.ts.map | Source map for typings. |
| packages/LLM/src/Agent/executor.js | Adds an agent execution loop scaffold calling LLMClient. |
| packages/LLM/src/Agent/executor.d.ts | Declares TS surface for agent executor. |
| packages/LLM/src/Agent/executor.d.ts.map | Source map for typings. |
| packages/LLM/src/Agent/types.d.ts | Declares agent message/types used by executor. |
| packages/LLM/src/Agent/types.d.ts.map | Source map for typings. |
| packages/LLM/src/Agent/types.js | Runtime stub module for types. |
| apps/web/app/workflows/[id]/page.tsx | Sets a custom ReactFlow Background color. |
| apps/web/app/workflows/[id]/components/nodes/PlaceholderNode.tsx | Updates placeholder node label text. |
| apps/web/app/components/nodes/BaseNode.tsx | Adjusts placeholder node sizing/colors and hides helper text. |
| .prettierrc | Adds Prettier configuration. |
```js
// if(!stopReason.){
//     return {
//         success: true,
//         result: result.text,
//         iterations: context.iterationCount + 1,
//         tokensUsed: result.inputTokens + result.outputTokens,
//         stopReason: "completed"
//     }
// }
}
catch (e) {
    console.warn("Internal Server Error");
}
}
}
catch (e) {
    console.warn("Internal Server Error");
```
The while (context.iterationCount < context.maxIterations) loop never increments iterationCount, never mutates context.messages, and never returns/breaks on success or failure. As written, this can become an infinite loop (or a tight retry loop if maxIterations is set), causing the worker to hang/consume CPU. Please increment the counter and implement a clear stop/return path (including a stop condition on success and a backoff/abort on repeated errors).
Suggested change:

```js
// increment iteration count on successful call
context.iterationCount++;
// on success, return a structured response
return {
    success: true,
    result: result && typeof result.text !== "undefined" ? result.text : result,
    iterations: context.iterationCount,
    tokensUsed: (result && typeof result.inputTokens === "number" ? result.inputTokens : 0) +
        (result && typeof result.outputTokens === "number" ? result.outputTokens : 0),
    stopReason: "completed"
};
}
catch (e) {
    // increment iteration count on error
    context.iterationCount++;
    console.warn("Internal Server Error", e);
    // if we've exhausted the maximum number of iterations, abort and return failure
    if (context.iterationCount >= context.maxIterations) {
        return {
            success: false,
            error: e && e.message ? e.message : "Internal Server Error",
            iterations: context.iterationCount,
            stopReason: "error"
        };
    }
    // simple backoff before retrying to avoid a tight loop on repeated errors
    const backoffMs = 100 * context.iterationCount;
    await new Promise(resolve => setTimeout(resolve, backoffMs));
}
}
}
catch (e) {
    console.warn("Internal Server Error", e);
```
```js
const input = context.messages;
const result = await this.llmClinet.call(input);
// if(!stopReason.){
//     return {
//         success: true,
//         result: result.text,
//         iterations: context.iterationCount + 1,
//         tokensUsed: result.inputTokens + result.outputTokens,
//         stopReason: "completed"
//     }
// }
}
catch (e) {
    console.warn("Internal Server Error");
}
}
}
catch (e) {
    console.warn("Internal Server Error");
}
return;
}
```
Execute is declared (via typings) to return an ExecuteResult | undefined, but the current implementation always returns undefined and ignores result from the LLM call. Please either implement the ExecuteResult return path (success/error/stopReason, token accounting, etc.) or change the method signature to match the actual behavior.
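To make the return-path concrete, here is a minimal sketch of an execute loop that always resolves to a structured result. The `ExecuteResult` field names (`success`, `result`, `iterations`, `tokensUsed`, `stopReason`) are assumed from the typings described above; the `llmClient` parameter stands in for the real `LLMClient` instance.

```javascript
// Sketch: an execute loop that increments the iteration counter, returns a
// structured result on success, and aborts with a failure result once
// maxIterations is exhausted. Field names are assumptions from the typings.
async function execute(context, llmClient) {
  while (context.iterationCount < context.maxIterations) {
    try {
      const result = await llmClient.call(context.messages);
      context.iterationCount++;
      return {
        success: true,
        result: result.text,
        iterations: context.iterationCount,
        tokensUsed: (result.inputTokens ?? 0) + (result.outputTokens ?? 0),
        stopReason: "completed",
      };
    } catch (e) {
      context.iterationCount++;
      if (context.iterationCount >= context.maxIterations) {
        return {
          success: false,
          error: e instanceof Error ? e.message : String(e),
          iterations: context.iterationCount,
          stopReason: "error",
        };
      }
      // simple linear backoff so repeated failures do not spin in a tight loop
      await new Promise((resolve) => setTimeout(resolve, 50 * context.iterationCount));
    }
  }
  return {
    success: false,
    error: "max iterations reached",
    iterations: context.iterationCount,
    stopReason: "max_iterations",
  };
}
```

Every code path out of the loop returns a result object, so callers never receive a bare `undefined` from a completed run.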
```
bg-white
text-2xl
group-hover:bg-blue-500/20
group-hover:border-blue-500
```
In the placeholder icon container, group-hover:border-blue-500 is present but there is no border/border-* class on this element anymore, so the hover border color change will have no visible effect. Either re-add an explicit border width/style or remove the hover border class to avoid dead styling.
Suggested change:

```diff
- group-hover:border-blue-500
```
```diff
@@ -0,0 +1,5 @@
+{
+  "semi": true,
+  "singleQuote": true
```
This repo currently uses double quotes broadly (e.g., apps/web/app/components/nodes/BaseNode.tsx:1, packages/LLM/src/LlmClient.js:1). Setting Prettier singleQuote: true will either force widespread reformatting or leave the codebase inconsistent. Please align the Prettier config with the existing quote style (or reformat the repo consistently in the same PR).
Suggested change:

```diff
- "singleQuote": true
+ "singleQuote": false
```
```js
throw new Error("GEMINI API KEY not specified (why env is not working?)");
if (!GEMINI_URL)
    throw new Error("GEMINI URL not specified (why env is not working?)");
```
The thrown env errors include a rhetorical/unhelpful phrase ("why env is not working?") and the API key message doesn't match the actual env var name (GEMINI_API_KEY). Please replace these with actionable messages (e.g., which env var is missing and where it should be set) without editorial commentary.
Suggested change:

```diff
- throw new Error("GEMINI API KEY not specified (why env is not working?)");
- if (!GEMINI_URL)
-     throw new Error("GEMINI URL not specified (why env is not working?)");
+ throw new Error("Environment variable GEMINI_API_KEY is not set. Please configure GEMINI_API_KEY in your environment or .env file.");
+ if (!GEMINI_URL)
+     throw new Error("Environment variable GEMINI_URL is not set. Please configure GEMINI_URL in your environment or .env file.");
```
```js
export class AgentExecution {
    constructor() {
        this.llmClinet = new LLMClient();
    }
    async Execute(paramas) {
        if (!paramas)
            return;
```
There are multiple typos in names (llmClinet, paramas, Execute PascalCase) which makes the API harder to use and search for. Please rename these to conventional/correct spellings (e.g., llmClient, params, execute) and keep method naming consistent with the rest of the codebase.
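For illustration, a sketch of the same class with conventional naming. The `LLMClient` stub here is only to keep the example self-contained; the real client wraps the Gemini API.

```javascript
// Sketch of the corrected names: llmClient, execute, params.
class LLMClient {
  // minimal stub standing in for the real Gemini HTTP client
  async call(messages) {
    return { text: "", inputTokens: 0, outputTokens: 0 };
  }
}

class AgentExecution {
  constructor() {
    this.llmClient = new LLMClient(); // was: this.llmClinet
  }
  async execute(params) {             // was: async Execute(paramas)
    if (!params) return undefined;
    return this.llmClient.call(params.messages ?? []);
  }
}
```

Camel-cased `execute` also matches how the rest of the codebase names instance methods, which keeps the API greppable.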
```diff
@@ -0,0 +1,2 @@
+// apps/worker/src/agent/types.ts
```
This file comment references a different path (apps/worker/src/agent/types.ts), which is misleading now that the file lives under packages/LLM/src/Agent/. Please update/remove the comment so it reflects the current source location (or omit it if this file is generated).
Suggested change:

```diff
- // apps/worker/src/agent/types.ts
```
```js
generationConfig: {
    // stopSequencies: [
    //     "Title"
    // ],
    temperature: options.temperature,
    maxOutputTokens: options.maxOutputTokens
}
```
options is treated as required (options.temperature, options.maxOutputTokens), but the only call site (AgentExecution) calls call(input) without passing options. This will throw a TypeError at runtime. Please make options truly optional by supplying defaults (or using optional chaining) and/or updating the call site to always pass a complete options object.
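One way to make `options` safely optional is a default parameter plus nullish-coalesced fields. The default values (0.7, 1024) below are illustrative assumptions, not values from the source:

```javascript
// Sketch: build the generationConfig with defaults so call(input) without
// an options argument does not throw. Defaults are illustrative.
function buildGenerationConfig(options = {}) {
  return {
    temperature: options.temperature ?? 0.7,
    maxOutputTokens: options.maxOutputTokens ?? 1024,
  };
}
```

Using `??` rather than `||` keeps explicit falsy values such as `temperature: 0` intact.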
```js
console.log("LLM Response:", response.data);
const actuaResponse = response.data.candidates[0].content.parts[0];
// console.log("THe outpusssst is ", actuaResponse.text);
```
console.log("LLM Response:", response.data) logs the full model response (and likely includes user prompt/PII and token usage). This can leak sensitive data into logs in production. Consider removing this log or gating it behind a debug flag/logger with redaction.
Suggested change:

```js
if (process.env.LLM_DEBUG === "true") {
    console.log("LLM response metadata:", {
        candidateCount: response.data?.candidates?.length,
        usageMetadata: response.data?.usageMetadata,
    });
}
const actuaResponse = response.data.candidates[0].content.parts[0];
// Use actuaResponse.text as needed in callers.
```
```js
contents: [
    {
        parts: [
            {
                text: prompt,
            },
        ],
```
call(prompt, ...) builds the Gemini payload with text: prompt, but the only call site passes an array of message objects (context.messages). That will serialize incorrectly (e.g., [object Object]) and won't produce the intended prompt. Either change LLMClient.call to accept a string, or convert the messages array into Gemini contents format before sending.
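If the second option is taken, a small adapter can translate chat-style messages into the Gemini `contents` shape. The `"assistant"` → `"model"` role mapping reflects the roles Gemini's `generateContent` API expects; the message shape (`{ role, content }`) is an assumption about the agent's internal message type:

```javascript
// Sketch: convert an array of { role, content } chat messages into the
// Gemini "contents" format ({ role, parts: [{ text }] }).
// Gemini uses "user" and "model" roles, so "assistant" maps to "model".
function messagesToContents(messages) {
  return messages.map((m) => ({
    role: m.role === "assistant" ? "model" : "user",
    parts: [{ text: m.content }],
  }));
}
```

`LLMClient.call` could then build its payload as `{ contents: messagesToContents(input), generationConfig: ... }` instead of wrapping `prompt` in a single `parts` entry.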