Merged
5 changes: 5 additions & 0 deletions .prettierrc
@@ -0,0 +1,5 @@
{
"semi": true,
"singleQuote": true
Copilot AI Mar 1, 2026

This repo currently uses double quotes broadly (e.g., apps/web/app/components/nodes/BaseNode.tsx:1, packages/LLM/src/LlmClient.js:1). Setting Prettier singleQuote: true will either force widespread reformatting or leave the codebase inconsistent. Please align the Prettier config with the existing quote style (or reformat the repo consistently in the same PR).

Suggested change
"singleQuote": true
"singleQuote": false

}

18 changes: 7 additions & 11 deletions apps/web/app/components/nodes/BaseNode.tsx
@@ -40,29 +40,25 @@ export default function BaseNode({ id, type, data }: BaseNodeProps) {
onClick={onConfigure}
className="
group
w-[240px]
w-[140px]
px-4 py-6
bg-gray-800/40
bg-white
border-2 border-dashed border-gray-600
rounded-lg
cursor-pointer
transition-all duration-200
hover:border-blue-500
hover:bg-gray-800/60
hover:bg-white
hover:shadow-lg hover:shadow-blue-500/20
flex flex-col items-center gap-3
"
>
{/* Icon */}
<div
className="
w-12 h-12
rounded-full
bg-gray-700/50
border border-gray-600
flex items-center justify-center
w-7 h-7 flex items-center justify-center
bg-white
text-2xl
group-hover:bg-blue-500/20
group-hover:border-blue-500
Copilot AI Mar 1, 2026

In the placeholder icon container, group-hover:border-blue-500 is present but there is no border/border-* class on this element anymore, so the hover border color change will have no visible effect. Either re-add an explicit border width/style or remove the hover border class to avoid dead styling.

Suggested change
group-hover:border-blue-500

transition-all duration-200
"
@@ -71,10 +67,10 @@ export default function BaseNode({ id, type, data }: BaseNodeProps) {
</div>
{/* Label */}
<div className="text-center">
<p className="text-gray-300 font-medium text-sm group-hover:text-blue-400 transition-colors">
<p className="text-black font-bold text-sm group-hover:text-blue-400 transition-colors">
{label}
</p>
<p className="text-gray-500 text-xs mt-1">Click to configure</p>
{/* <p className="text-gray-500 text-xs mt-1">Click to configure</p> */}
</div>

{/* Handles */}
@@ -20,7 +20,7 @@ export function PlaceholderNode({ data }: PlaceholderNodeProps) {
<div className="w-12 h-12 rounded-full bg-blue-100 flex items-center justify-center mb-3">
<span className="text-2xl">➕</span>
</div>
<p className="text-sm font-medium text-gray-600">Add Action</p>
<p className="text-sm font-medium text-gray-600">Add what the hell is Action</p>
</div>
</div>
);
2 changes: 1 addition & 1 deletion apps/web/app/workflows/[id]/page.tsx
@@ -551,7 +551,7 @@ export default function WorkflowCanvas() {
nodeTypes={nodeTypes}
fitView
>
<Background />
<Background bgColor="#fdfdfd" />
<Controls />


7 changes: 7 additions & 0 deletions packages/LLM/src/Agent/executor.d.ts
@@ -0,0 +1,7 @@
import { ExecuteParams, ExecuteResult } from "./types.js";
export declare class AgentExecution {
private llmClinet;
constructor();
Execute(paramas: ExecuteParams): Promise<ExecuteResult | undefined>;
}
//# sourceMappingURL=executor.d.ts.map
1 change: 1 addition & 0 deletions packages/LLM/src/Agent/executor.d.ts.map

Some generated files are not rendered by default. Learn more about how customized files appear on GitHub.

51 changes: 51 additions & 0 deletions packages/LLM/src/Agent/executor.js
@@ -0,0 +1,51 @@
import LLMClient from "../LlmClient.js";
export class AgentExecution {
constructor() {
this.llmClinet = new LLMClient();
}
async Execute(paramas) {
if (!paramas)
return;
Comment on lines +2 to +8
Copilot AI Mar 1, 2026

There are multiple typos in names (llmClinet, paramas, Execute PascalCase) which makes the API harder to use and search for. Please rename these to conventional/correct spellings (e.g., llmClient, params, execute) and keep method naming consistent with the rest of the codebase.

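A minimal sketch of what the conventionally spelled surface could look like. The `LLMClientLike` stub and constructor injection below are assumptions for illustration, not the repo's actual wiring:

```typescript
// Sketch only: conventional names (llmClient, params, execute) in place of
// llmClinet/paramas/Execute. LLMClientLike is a hypothetical stand-in for
// the real LLMClient from ../LlmClient.js.
interface CallResult {
  text: string;
  inputTokens: number;
  outputTokens: number;
}

interface LLMClientLike {
  call(input: Array<{ role: string; content: string }>): Promise<CallResult>;
}

class AgentExecutor {
  private llmClient: LLMClientLike; // was: llmClinet

  constructor(llmClient: LLMClientLike) {
    this.llmClient = llmClient;
  }

  // was: Execute(paramas)
  async execute(params: { task: string }): Promise<string | undefined> {
    if (!params?.task) return undefined;
    const result = await this.llmClient.call([
      { role: "user", content: params.task },
    ]);
    return result.text;
  }
}
```

Renaming also keeps the class searchable alongside the lowercase `call` method already exposed by `LLMClient`.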
const prompt = paramas.task;
try {
const context = {
task: prompt,
model: paramas.model,
maxIterations: paramas.maxIterations,
systemPrompt: paramas.systemPrompt,
messages: [],
iterationCount: 0,
totalTokens: 0
};
context.messages.push({
role: "system",
content: context.systemPrompt
}, {
role: "user",
content: prompt
});
while (context.iterationCount < context.maxIterations) {
Comment on lines +11 to +27
Copilot AI Mar 1, 2026

ExecuteParams allows systemPrompt and maxIterations to be omitted, but this implementation uses them as required: content: context.systemPrompt can become undefined, and while (0 < undefined) will skip the loop entirely. Please provide defaults (e.g., a default system prompt and a default maxIterations) and validate these values up front.

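One way to address this, sketched under the assumption that a default prompt and iteration cap are acceptable (both fallback values below are illustrative, not taken from the repo):

```typescript
// Illustrative defaults; actual values are a maintainer decision.
const DEFAULT_SYSTEM_PROMPT = "You are a helpful agent.";
const DEFAULT_MAX_ITERATIONS = 5;

interface ExecuteParamsLike {
  task: string;
  systemPrompt?: string;
  maxIterations?: number;
}

// Normalize params up front so the loop condition and the system message
// never see undefined.
function withDefaults(params: ExecuteParamsLike): Required<ExecuteParamsLike> {
  return {
    task: params.task,
    systemPrompt: params.systemPrompt ?? DEFAULT_SYSTEM_PROMPT,
    maxIterations: params.maxIterations ?? DEFAULT_MAX_ITERATIONS,
  };
}
```

With this in place, `while (context.iterationCount < context.maxIterations)` always compares against a number, and the system message always has string content.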
try {
const input = context.messages;
const result = await this.llmClinet.call(input);
// if(!stopReason.){
// return {
// success: true,
// result: result.text,
// iterations: context.iterationCount + 1,
// tokensUsed: result.inputTokens + result.outputTokens,
// stopReason: "completed"
// }
// }
}
catch (e) {
console.warn("Internal Server Error");
}
}
}
catch (e) {
console.warn("Internal Server Error");
Comment on lines +31 to +47
Copilot AI Mar 1, 2026

The while (context.iterationCount < context.maxIterations) loop never increments iterationCount, never mutates context.messages, and never returns/breaks on success or failure. As written, this can become an infinite loop (or a tight retry loop if maxIterations is set), causing the worker to hang/consume CPU. Please increment the counter and implement a clear stop/return path (including a stop condition on success and a backoff/abort on repeated errors).

Suggested change
// if(!stopReason.){
// return {
// success: true,
// result: result.text,
// iterations: context.iterationCount + 1,
// tokensUsed: result.inputTokens + result.outputTokens,
// stopReason: "completed"
// }
// }
}
catch (e) {
console.warn("Internal Server Error");
}
}
}
catch (e) {
console.warn("Internal Server Error");
// increment iteration count on successful call
context.iterationCount++;
// on success, return a structured response
return {
success: true,
result: result && typeof result.text !== "undefined" ? result.text : result,
iterations: context.iterationCount,
tokensUsed: (result && typeof result.inputTokens === "number" ? result.inputTokens : 0) +
(result && typeof result.outputTokens === "number" ? result.outputTokens : 0),
stopReason: "completed"
};
}
catch (e) {
// increment iteration count on error
context.iterationCount++;
console.warn("Internal Server Error", e);
// if we've exhausted the maximum number of iterations, abort and return failure
if (context.iterationCount >= context.maxIterations) {
return {
success: false,
error: e && e.message ? e.message : "Internal Server Error",
iterations: context.iterationCount,
stopReason: "error"
};
}
// simple backoff before retrying to avoid a tight loop on repeated errors
const backoffMs = 100 * context.iterationCount;
await new Promise(resolve => setTimeout(resolve, backoffMs));
}
}
}
catch (e) {
console.warn("Internal Server Error", e);

}
return;
}
Comment on lines +29 to +50
Copilot AI Mar 1, 2026

Execute is declared (via typings) to return an ExecuteResult | undefined, but the current implementation always returns undefined and ignores result from the LLM call. Please either implement the ExecuteResult return path (success/error/stopReason, token accounting, etc.) or change the method signature to match the actual behavior.

}
55 changes: 55 additions & 0 deletions packages/LLM/src/Agent/types.d.ts
@@ -0,0 +1,55 @@
export interface SystemMessage {
role: "system";
content: string;
}
export interface UserMessage {
role: "user";
content: string;
}
export interface ToolCall {
id: string;
type: "function";
function: {
name: string;
arguments: string;
};
}
export interface AssistantMessage {
role: "assistant";
content: string | null;
tool_calls?: ToolCall[];
}
export interface ToolMessage {
role: "tool";
tool_call_id: string;
content: string;
}
export type Message = SystemMessage | UserMessage | AssistantMessage | ToolMessage;
export interface ExecuteParams {
task: string;
toolNames: string[];
model: string;
systemPrompt?: string;
maxIterations?: number;
}
export interface ExecuteResult {
success: boolean;
result: string;
iterations: number;
tokensUsed: number;
stopReason: "completed" | "max_iterations" | "error";
}
export interface AgentContext {
task: string;
model: string;
maxIterations: number;
systemPrompt: string;
messages: Message[];
iterationCount: number;
totalTokens: number;
}
export interface StopCheck {
shouldStop: boolean;
reason: "completed" | "max_iterations" | "error" | "continue";
}
//# sourceMappingURL=types.d.ts.map
1 change: 1 addition & 0 deletions packages/LLM/src/Agent/types.d.ts.map


2 changes: 2 additions & 0 deletions packages/LLM/src/Agent/types.js
@@ -0,0 +1,2 @@
// apps/worker/src/agent/types.ts
Copilot AI Mar 1, 2026

This file comment references a different path (apps/worker/src/agent/types.ts), which is misleading now that the file lives under packages/LLM/src/Agent/. Please update/remove the comment so it reflects the current source location (or omit it if this file is generated).

Suggested change
// apps/worker/src/agent/types.ts

export {};
13 changes: 13 additions & 0 deletions packages/LLM/src/LlmClient.d.ts
@@ -0,0 +1,13 @@
declare class LLMClient {
call(prompt: any[], options?: {
temperature: number;
maxOutputTokens: number;
}): Promise<{
text: string;
inputTokens: number;
outputTokens: number;
totalCount: number;
}>;
}
export default LLMClient;
//# sourceMappingURL=LlmClient.d.ts.map
1 change: 1 addition & 0 deletions packages/LLM/src/LlmClient.d.ts.map


64 changes: 64 additions & 0 deletions packages/LLM/src/LlmClient.js
@@ -0,0 +1,64 @@
import axios from "axios";
import dotenv from "dotenv";
dotenv.config();
class LLMClient {
async call(prompt, options) {
const GEMINI_URL = process.env.GEMINI_URL ||
"https://generativelanguage.googleapis.com/v1beta/models/gemini-3-flash-preview:generateContent";
const GEMINI_API_KEY = process.env.GEMINI_API_KEY;
if (!GEMINI_API_KEY)
throw new Error("GEMINI API KEY not specified (why env is not working?)");
if (!GEMINI_URL)
throw new Error("GEMINI URL not specified (why env is not working?)");
Comment on lines +10 to +12
Copilot AI Mar 1, 2026

The thrown env errors include a rhetorical/unhelpful phrase ("why env is not working?") and the API key message doesn't match the actual env var name (GEMINI_API_KEY). Please replace these with actionable messages (e.g., which env var is missing and where it should be set) without editorial commentary.

Suggested change
throw new Error("GEMINI API KEY not specified (why env is not working?)");
if (!GEMINI_URL)
throw new Error("GEMINI URL not specified (why env is not working?)");
throw new Error("Environment variable GEMINI_API_KEY is not set. Please configure GEMINI_API_KEY in your environment or .env file.");
if (!GEMINI_URL)
throw new Error("Environment variable GEMINI_URL is not set. Please configure GEMINI_URL in your environment or .env file.");

console.log("making the gemini call");
const payload = {
contents: [
{
parts: [
{
text: prompt,
},
],
Comment on lines +15 to +21
Copilot AI Mar 1, 2026

call(prompt, ...) builds the Gemini payload with text: prompt, but the only call site passes an array of message objects (context.messages). That will serialize incorrectly (e.g., [object Object]) and won't produce the intended prompt. Either change LLMClient.call to accept a string, or convert the messages array into Gemini contents format before sending.

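A sketch of one possible conversion, assuming the standard Gemini `contents` shape (roles `user`/`model`, `parts[].text`). System and tool messages are filtered out here for brevity; system text would normally travel in `systemInstruction` instead:

```typescript
type ChatMessage = {
  role: "system" | "user" | "assistant" | "tool";
  content: string | null;
};

// Map chat-style messages to Gemini "contents" entries.
// Gemini accepts roles "user" and "model"; system/tool messages are
// dropped here (system text belongs in systemInstruction).
function toGeminiContents(messages: ChatMessage[]) {
  return messages
    .filter((m) => m.role === "user" || m.role === "assistant")
    .map((m) => ({
      role: m.role === "assistant" ? "model" : "user",
      parts: [{ text: m.content ?? "" }],
    }));
}
```

The call site could then pass `toGeminiContents(context.messages)` instead of the raw array, avoiding the `[object Object]` serialization.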
},
],
generationConfig: {
// stopSequencies: [
// "Title"
// ],
temperature: options.temperature,
maxOutputTokens: options.maxOutputTokens
}
Comment on lines +24 to +30
Copilot AI Mar 1, 2026

options is treated as required (options.temperature, options.maxOutputTokens), but the only call site (AgentExecution) calls call(input) without passing options. This will throw a TypeError at runtime. Please make options truly optional by supplying defaults (or using optional chaining) and/or updating the call site to always pass a complete options object.

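A minimal sketch of defaulted options via destructuring; the fallback values 0.7 and 1024 are placeholders, not values taken from the repo:

```typescript
interface CallOptions {
  temperature?: number;
  maxOutputTokens?: number;
}

// Resolve options so that call(input) with no second argument cannot
// throw a TypeError when building generationConfig.
function resolveCallOptions(options?: CallOptions) {
  const { temperature = 0.7, maxOutputTokens = 1024 } = options ?? {};
  return { temperature, maxOutputTokens };
}
```

Note that destructuring defaults only kick in for `undefined`, so an explicit `temperature: 0` is preserved rather than replaced.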
};
try {
const response = await axios.post(`${GEMINI_URL}?key=${GEMINI_API_KEY}`, payload, {
headers: {
"Content-Type": "application/json",
},
});
console.log("LLM Response:", response.data);
const actuaResponse = response.data.candidates[0].content.parts[0];
// console.log("THe outpusssst is ", actuaResponse.text);
Comment on lines +38 to +40
Copilot AI Mar 1, 2026

console.log("LLM Response:", response.data) logs the full model response (and likely includes user prompt/PII and token usage). This can leak sensitive data into logs in production. Consider removing this log or gating it behind a debug flag/logger with redaction.

Suggested change
console.log("LLM Response:", response.data);
const actuaResponse = response.data.candidates[0].content.parts[0];
// console.log("THe outpusssst is ", actuaResponse.text);
if (process.env.LLM_DEBUG === "true") {
console.log("LLM response metadata:", {
candidateCount: response.data?.candidates?.length,
usageMetadata: response.data?.usageMetadata,
});
}
const actuaResponse = response.data.candidates[0].content.parts[0];
// Use actuaResponse.text as needed in callers.

const inputToknes = response.data.usageMetadata.promptTokenCount;
const outputTOkens = response.data.usageMetadata.candidatesTokenCount;
const totalTokenCount = response.data.usageMetadata.totalTokenCount;
return {
text: actuaResponse.text,
inputTokens: inputToknes,
outputTokens: outputTOkens,
Comment on lines +39 to +47
Copilot AI Mar 1, 2026

Several identifiers here look misspelled (actuaResponse, inputToknes, outputTOkens), which makes the code harder to read and maintain. Please rename them to correctly spelled names to avoid confusion and reduce the chance of propagating typos into the API.

Suggested change
const actuaResponse = response.data.candidates[0].content.parts[0];
// console.log("THe outpusssst is ", actuaResponse.text);
const inputToknes = response.data.usageMetadata.promptTokenCount;
const outputTOkens = response.data.usageMetadata.candidatesTokenCount;
const totalTokenCount = response.data.usageMetadata.totalTokenCount;
return {
text: actuaResponse.text,
inputTokens: inputToknes,
outputTokens: outputTOkens,
const actualResponse = response.data.candidates[0].content.parts[0];
// console.log("THe outpusssst is ", actualResponse.text);
const inputTokens = response.data.usageMetadata.promptTokenCount;
const outputTokens = response.data.usageMetadata.candidatesTokenCount;
const totalTokenCount = response.data.usageMetadata.totalTokenCount;
return {
text: actualResponse.text,
inputTokens: inputTokens,
outputTokens: outputTokens,

totalCount: totalTokenCount
};
}
catch (e) {
if (e?.response) {
console.error(`LLM call failed with status ${e.response.status}:`);
console.log(e.response.data.error.message);
}
else {
console.error("Error in calling the LLM:", e);
}
throw new Error("Failed to fetch response from Gemini API: " +
(e?.message || "Unknown error"));
}
}
}
export default LLMClient;