
Refracting #72

Merged
Vamsi-o merged 1 commit into main from fix/bg on Mar 1, 2026

Conversation

Contributor

Vamsi-o commented Mar 1, 2026

Summary by CodeRabbit

  • New Features

    • Added LLM agent execution framework with support for multi-turn interactions and token management.
    • Integrated LLM client for API-based model inference.
  • Style

    • Redesigned placeholder nodes with reduced width, streamlined icons, and updated typography.
    • Updated workflow background color for improved visual consistency.
  • Chores

    • Added code formatting configuration.

Copilot AI review requested due to automatic review settings March 1, 2026 13:19

coderabbitai bot commented Mar 1, 2026

Caution

Review failed

The pull request is closed.

ℹ️ Recent review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f5056fd and d97071d.

⛔ Files ignored due to path filters (3)
  • packages/LLM/src/Agent/executor.d.ts.map is excluded by !**/*.map
  • packages/LLM/src/Agent/types.d.ts.map is excluded by !**/*.map
  • packages/LLM/src/LlmClient.d.ts.map is excluded by !**/*.map
📒 Files selected for processing (10)
  • .prettierrc
  • apps/web/app/components/nodes/BaseNode.tsx
  • apps/web/app/workflows/[id]/components/nodes/PlaceholderNode.tsx
  • apps/web/app/workflows/[id]/page.tsx
  • packages/LLM/src/Agent/executor.d.ts
  • packages/LLM/src/Agent/executor.js
  • packages/LLM/src/Agent/types.d.ts
  • packages/LLM/src/Agent/types.js
  • packages/LLM/src/LlmClient.d.ts
  • packages/LLM/src/LlmClient.js

📝 Walkthrough

Walkthrough

This pull request adds Prettier configuration, refines UI styling for placeholder nodes (reducing width, updating colors and typography), adjusts the Background component with explicit color, updates placeholder text, and introduces a new LLM agent execution framework with client integration, type definitions, and async task execution logic.

Changes

Cohort / File(s) / Summary

  • Configuration (.prettierrc): Adds Prettier formatting rules enforcing semicolons and single quotes.
  • UI Styling Updates (apps/web/app/components/nodes/BaseNode.tsx, apps/web/app/workflows/[id]/components/nodes/PlaceholderNode.tsx, apps/web/app/workflows/[id]/page.tsx): Reduces node width from 240px to 140px, changes the background from translucent gray to white, shrinks the icon container, updates the label to bold black text, removes the "Click to configure" subtext, updates placeholder text, and adds an explicit background color to the Background component.
  • LLM Agent Types (packages/LLM/src/Agent/types.d.ts, packages/LLM/src/Agent/types.js): Introduces TypeScript interfaces for message structures (SystemMessage, UserMessage, AssistantMessage, ToolMessage), execution parameters (ExecuteParams, ExecuteResult), agent context, and stop conditions; includes an empty module placeholder.
  • LLM Agent Executor (packages/LLM/src/Agent/executor.d.ts, packages/LLM/src/Agent/executor.js): Adds an AgentExecution class with an Execute method that validates input, builds context, manages iterative LLM calls with message history, includes error handling via try/catch, and returns undefined (incomplete result assembly).
  • LLM Client (packages/LLM/src/LlmClient.d.ts, packages/LLM/src/LlmClient.js): Implements an LLMClient class with an async call method that interfaces with the Gemini API, constructs POST requests with temperature/maxOutputTokens options, parses responses, and returns text with token counts; includes environment variable configuration and error handling.
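To illustrate the parsing-and-token-accounting step described above, here is a minimal sketch of a helper such a client might use. The response field names (candidates, parts, usageMetadata) follow the public Gemini REST API, but this is an assumption about shape, not the PR's actual code:

```javascript
// Hypothetical helper: extract text and token counts from a Gemini-style
// generateContent response. Field names follow the public Gemini REST API
// (candidates[].content.parts[].text, usageMetadata.*); the PR's actual
// parsing logic may differ.
function parseGeminiResponse(data) {
  const part = data?.candidates?.[0]?.content?.parts?.[0];
  if (!part || typeof part.text !== 'string') {
    throw new Error('Unexpected Gemini response shape: missing candidates[0].content.parts[0].text');
  }
  return {
    text: part.text,
    inputTokens: data.usageMetadata?.promptTokenCount ?? 0,
    outputTokens: data.usageMetadata?.candidatesTokenCount ?? 0,
  };
}
```

Defensive checks like these also guard against the unguarded candidates[0] access flagged in the review below.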

Sequence Diagram

sequenceDiagram
    participant Agent as AgentExecution
    participant LLM as LLMClient
    participant API as Gemini API
    
    Agent->>Agent: Execute(ExecuteParams)
    Agent->>Agent: Validate input & build context
    Agent->>Agent: Initialize system & user messages
    
    loop maxIterations
        Agent->>LLM: call(messages, options)
        LLM->>LLM: Build request payload
        LLM->>API: POST /generateContent
        API-->>LLM: Response with content & tokens
        LLM->>LLM: Parse assistant message
        LLM-->>Agent: {text, inputTokens, outputTokens}
        Agent->>Agent: Add to message history
        Agent->>Agent: Evaluate stop condition
    end
    
    Agent-->>Agent: Return ExecuteResult (undefined)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • Fix/config #56: Modifies BaseNode.tsx styling and node configuration display logic, directly overlapping with placeholder node visual refinements in this PR.

Poem

🐰 A client now chats with Gemini's mind,
Agent executes tasks, responses aligned,
Messages flow in loops of iteration,
While nodes shrink and polish with bright presentation,
LLM logic hops forward with precision! ✨



Vamsi-o merged commit db6c941 into main on Mar 1, 2026
2 of 3 checks passed

Copilot AI left a comment


Pull request overview

This PR introduces a new packages/LLM module intended to call the Gemini API and adds an initial “agent executor” wrapper around it, plus some workflow-canvas UI styling tweaks and a new root Prettier configuration.

Changes:

  • Add LLMClient (Gemini API HTTP client) and accompanying .d.ts/source maps.
  • Add an AgentExecution executor scaffold and agent message/type declarations.
  • Update workflow UI styling (ReactFlow background + node visuals) and add .prettierrc.

Reviewed changes

Copilot reviewed 7 out of 13 changed files in this pull request and generated 12 comments.

Show a summary per file

  • packages/LLM/src/LlmClient.js: Implements Gemini API call logic and token accounting.
  • packages/LLM/src/LlmClient.d.ts: Declares the TS surface for LLMClient.
  • packages/LLM/src/LlmClient.d.ts.map: Source map for typings.
  • packages/LLM/src/Agent/executor.js: Adds an agent execution loop scaffold calling LLMClient.
  • packages/LLM/src/Agent/executor.d.ts: Declares the TS surface for the agent executor.
  • packages/LLM/src/Agent/executor.d.ts.map: Source map for typings.
  • packages/LLM/src/Agent/types.d.ts: Declares agent message types used by the executor.
  • packages/LLM/src/Agent/types.d.ts.map: Source map for typings.
  • packages/LLM/src/Agent/types.js: Runtime stub module for types.
  • apps/web/app/workflows/[id]/page.tsx: Sets a custom ReactFlow Background color.
  • apps/web/app/workflows/[id]/components/nodes/PlaceholderNode.tsx: Updates placeholder node label text.
  • apps/web/app/components/nodes/BaseNode.tsx: Adjusts placeholder node sizing/colors and hides helper text.
  • .prettierrc: Adds Prettier configuration.

Comment on lines +31 to +47
// if(!stopReason.){
// return {
// success: true,
// result: result.text,
// iterations: context.iterationCount + 1,
// tokensUsed: result.inputTokens + result.outputTokens,
// stopReason: "completed"
// }
// }
}
catch (e) {
console.warn("Internal Server Error");
}
}
}
catch (e) {
console.warn("Internal Server Error");

Copilot AI Mar 1, 2026


The while (context.iterationCount < context.maxIterations) loop never increments iterationCount, never mutates context.messages, and never returns/breaks on success or failure. As written, this can become an infinite loop (or a tight retry loop if maxIterations is set), causing the worker to hang/consume CPU. Please increment the counter and implement a clear stop/return path (including a stop condition on success and a backoff/abort on repeated errors).

Suggested change
// if(!stopReason.){
// return {
// success: true,
// result: result.text,
// iterations: context.iterationCount + 1,
// tokensUsed: result.inputTokens + result.outputTokens,
// stopReason: "completed"
// }
// }
}
catch (e) {
console.warn("Internal Server Error");
}
}
}
catch (e) {
console.warn("Internal Server Error");
// increment iteration count on successful call
context.iterationCount++;
// on success, return a structured response
return {
success: true,
result: result && typeof result.text !== "undefined" ? result.text : result,
iterations: context.iterationCount,
tokensUsed: (result && typeof result.inputTokens === "number" ? result.inputTokens : 0) +
(result && typeof result.outputTokens === "number" ? result.outputTokens : 0),
stopReason: "completed"
};
}
catch (e) {
// increment iteration count on error
context.iterationCount++;
console.warn("Internal Server Error", e);
// if we've exhausted the maximum number of iterations, abort and return failure
if (context.iterationCount >= context.maxIterations) {
return {
success: false,
error: e && e.message ? e.message : "Internal Server Error",
iterations: context.iterationCount,
stopReason: "error"
};
}
// simple backoff before retrying to avoid a tight loop on repeated errors
const backoffMs = 100 * context.iterationCount;
await new Promise(resolve => setTimeout(resolve, backoffMs));
}
}
}
catch (e) {
console.warn("Internal Server Error", e);

Comment on lines +29 to +50
const input = context.messages;
const result = await this.llmClinet.call(input);
// if(!stopReason.){
// return {
// success: true,
// result: result.text,
// iterations: context.iterationCount + 1,
// tokensUsed: result.inputTokens + result.outputTokens,
// stopReason: "completed"
// }
// }
}
catch (e) {
console.warn("Internal Server Error");
}
}
}
catch (e) {
console.warn("Internal Server Error");
}
return;
}

Copilot AI Mar 1, 2026


Execute is declared (via typings) to return an ExecuteResult | undefined, but the current implementation always returns undefined and ignores result from the LLM call. Please either implement the ExecuteResult return path (success/error/stopReason, token accounting, etc.) or change the method signature to match the actual behavior.
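For illustration, a sketch of what a completed return path might look like. The ExecuteResult field names (success, result, iterations, tokensUsed, stopReason) mirror the commented-out block in the diff; the injected client, single-turn stop behavior, and retry handling are assumptions, not the PR's implementation:

```javascript
// Sketch: an execute path that actually assembles an ExecuteResult.
// The client is injected so the loop can be exercised with a stub.
class AgentExecution {
  constructor(llmClient) {
    this.llmClient = llmClient;
  }

  async execute(params) {
    if (!params || !params.prompt) {
      return { success: false, error: 'missing params', iterations: 0, stopReason: 'invalid-input' };
    }
    const context = {
      messages: [{ role: 'user', content: params.prompt }],
      iterationCount: 0,
      maxIterations: params.maxIterations ?? 5,
    };
    let tokensUsed = 0;
    while (context.iterationCount < context.maxIterations) {
      context.iterationCount++; // the counter the review says is never incremented
      try {
        const result = await this.llmClient.call(context.messages);
        tokensUsed += (result.inputTokens ?? 0) + (result.outputTokens ?? 0);
        context.messages.push({ role: 'assistant', content: result.text });
        // With no tool calls to dispatch, a successful turn is a stop condition.
        return {
          success: true,
          result: result.text,
          iterations: context.iterationCount,
          tokensUsed,
          stopReason: 'completed',
        };
      } catch (e) {
        if (context.iterationCount >= context.maxIterations) {
          return { success: false, error: String(e), iterations: context.iterationCount, stopReason: 'error' };
        }
        // otherwise fall through and retry on the next iteration
      }
    }
    return { success: false, error: 'max iterations reached', iterations: context.iterationCount, stopReason: 'max-iterations' };
  }
}
```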

bg-white
text-2xl
group-hover:bg-blue-500/20
group-hover:border-blue-500

Copilot AI Mar 1, 2026


In the placeholder icon container, group-hover:border-blue-500 is present but there is no border/border-* class on this element anymore, so the hover border color change will have no visible effect. Either re-add an explicit border width/style or remove the hover border class to avoid dead styling.

Suggested change
group-hover:border-blue-500

@@ -0,0 +1,5 @@
{
"semi": true,
"singleQuote": true

Copilot AI Mar 1, 2026


This repo currently uses double quotes broadly (e.g., apps/web/app/components/nodes/BaseNode.tsx:1, packages/LLM/src/LlmClient.js:1). Setting Prettier singleQuote: true will either force widespread reformatting or leave the codebase inconsistent. Please align the Prettier config with the existing quote style (or reformat the repo consistently in the same PR).

Suggested change
"singleQuote": true
"singleQuote": false

Comment on lines +10 to +12
throw new Error("GEMINI API KEY not specified (why env is not working?)");
if (!GEMINI_URL)
throw new Error("GEMINI URL not specified (why env is not working?)");

Copilot AI Mar 1, 2026


The thrown env errors include a rhetorical/unhelpful phrase ("why env is not working?") and the API key message doesn't match the actual env var name (GEMINI_API_KEY). Please replace these with actionable messages (e.g., which env var is missing and where it should be set) without editorial commentary.

Suggested change
throw new Error("GEMINI API KEY not specified (why env is not working?)");
if (!GEMINI_URL)
throw new Error("GEMINI URL not specified (why env is not working?)");
throw new Error("Environment variable GEMINI_API_KEY is not set. Please configure GEMINI_API_KEY in your environment or .env file.");
if (!GEMINI_URL)
throw new Error("Environment variable GEMINI_URL is not set. Please configure GEMINI_URL in your environment or .env file.");

Comment on lines +2 to +8
export class AgentExecution {
constructor() {
this.llmClinet = new LLMClient();
}
async Execute(paramas) {
if (!paramas)
return;

Copilot AI Mar 1, 2026


There are multiple typos in names (llmClinet, paramas, Execute PascalCase) which makes the API harder to use and search for. Please rename these to conventional/correct spellings (e.g., llmClient, params, execute) and keep method naming consistent with the rest of the codebase.

@@ -0,0 +1,2 @@
// apps/worker/src/agent/types.ts

Copilot AI Mar 1, 2026


This file comment references a different path (apps/worker/src/agent/types.ts), which is misleading now that the file lives under packages/LLM/src/Agent/. Please update/remove the comment so it reflects the current source location (or omit it if this file is generated).

Suggested change
// apps/worker/src/agent/types.ts

Comment on lines +24 to +30
generationConfig: {
// stopSequencies: [
// "Title"
// ],
temperature: options.temperature,
maxOutputTokens: options.maxOutputTokens
}

Copilot AI Mar 1, 2026


options is treated as required (options.temperature, options.maxOutputTokens), but the only call site (AgentExecution) calls call(input) without passing options. This will throw a TypeError at runtime. Please make options truly optional by supplying defaults (or using optional chaining) and/or updating the call site to always pass a complete options object.
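One way to make options genuinely optional is to merge it with defaults before building the payload. A sketch; the default values here are illustrative, not taken from the PR:

```javascript
// Hypothetical defaults; pick values that match your model's limits.
const DEFAULT_OPTIONS = { temperature: 0.7, maxOutputTokens: 1024 };

// Merging with spread means callers can omit options entirely, or override
// only one field, without triggering a TypeError on property access.
function buildGenerationConfig(options = {}) {
  const { temperature, maxOutputTokens } = { ...DEFAULT_OPTIONS, ...options };
  return { temperature, maxOutputTokens };
}
```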

Comment on lines +38 to +40
console.log("LLM Response:", response.data);
const actuaResponse = response.data.candidates[0].content.parts[0];
// console.log("THe outpusssst is ", actuaResponse.text);

Copilot AI Mar 1, 2026


console.log("LLM Response:", response.data) logs the full model response (and likely includes user prompt/PII and token usage). This can leak sensitive data into logs in production. Consider removing this log or gating it behind a debug flag/logger with redaction.

Suggested change
console.log("LLM Response:", response.data);
const actuaResponse = response.data.candidates[0].content.parts[0];
// console.log("THe outpusssst is ", actuaResponse.text);
if (process.env.LLM_DEBUG === "true") {
console.log("LLM response metadata:", {
candidateCount: response.data?.candidates?.length,
usageMetadata: response.data?.usageMetadata,
});
}
const actuaResponse = response.data.candidates[0].content.parts[0];
// Use actuaResponse.text as needed in callers.

Comment on lines +15 to +21
contents: [
{
parts: [
{
text: prompt,
},
],

Copilot AI Mar 1, 2026


call(prompt, ...) builds the Gemini payload with text: prompt, but the only call site passes an array of message objects (context.messages). That will serialize incorrectly (e.g., [object Object]) and won't produce the intended prompt. Either change LLMClient.call to accept a string, or convert the messages array into Gemini contents format before sending.
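A sketch of the second option the comment describes: converting role-based agent messages into Gemini's contents format. The role names and the folding of system messages into the first user turn are assumptions about the agent's message shape, not the PR's code:

```javascript
// Convert agent-style messages ({ role, content }) into the Gemini REST
// contents array ({ role, parts: [{ text }] }). Gemini uses 'model' for
// assistant turns and has no 'system' role inside contents, so system
// messages are folded into the next user turn here (one possible convention).
function toGeminiContents(messages) {
  const contents = [];
  let systemText = '';
  for (const m of messages) {
    if (m.role === 'system') {
      systemText += m.content + '\n';
      continue;
    }
    const role = m.role === 'assistant' ? 'model' : 'user';
    const text = m.role === 'user' && systemText ? systemText + m.content : m.content;
    if (m.role === 'user') systemText = '';
    contents.push({ role, parts: [{ text }] });
  }
  return contents;
}
```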
