

@matt-aitken matt-aitken commented Jan 7, 2026

TRQL (pronounced "Treacle", like the dark British syrup) is the TRiggerQueryLanguage. It lets users safely write queries against their data: each query is compiled into a ClickHouse query that is tenant-safe and not vulnerable to SQL injection.

query-alpha-demo-100mb.mp4

This started out as a translation of HogQL by PostHog from Python to TypeScript.

Features

  • Tenant safe queries.
  • Many underlying ClickHouse features including functions and aggregations.
  • Virtual columns, which are exposed to users as real columns but are actually expressions.
  • Transformations of data types and where clauses.
  • Simple JSON path querying.
  • Limits on execution time.
  • Reporting of query statistics.

Query page

There’s a new Query page (currently behind a feature flag) where you can write TRQL queries and execute them against your environment, project or organization.

Features

  • Executing TRQL queries
  • Syntax highlighting and errors
  • Autocomplete
  • AI generation/editing of queries
  • Help and examples
  • Table with auto-inferred data types from the table schema
  • Table cell renderers for our special types such as Run IDs, environments, machines, tasks, and queues
  • Copy/export as CSV/JSON
  • Line and bar graphs with grouping and stacking
  • History of queries

changeset-bot bot commented Jan 7, 2026

⚠️ No Changeset found

Latest commit: 7e5a0cb

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types

Click here to learn what changesets are, and how to add one.

Click here if you're a maintainer who wants to add a changeset to this PR


coderabbitai bot commented Jan 7, 2026

Walkthrough

Adds a large TSQL subsystem as an internal package (@internal/tsql) with ANTLR grammars, generated lexer/parser, AST, parser/converter, compiler to ClickHouse SQL, validators, escape/serialization utilities, function metadata, schema/models, executors, and extensive tests. Extends ClickHouse client with queryWithStats and typed TSQL execution helpers. Integrates TSQL and query features across backend and frontend: AI query service and streaming route, query execution service and history persistence, DB migration and Prisma model additions, many new frontend components (TSQLEditor, AIQueryInput, charts, results table, CopyableTableCell, AlphaBadge, QueueName), hooks, export utilities, styles, and dependency updates.

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~180 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (2 warnings)
  • Description check (⚠️ Warning): The pull request description provides a comprehensive overview of TRQL features, the Query page functionality, and implementation details, but it does not follow the template structure: the required Checklist, Testing, and Changelog sections are missing or incomplete. Resolution: complete the PR description by adding the missing sections (Testing steps, a Changelog summary, and Screenshots) and confirm all checklist items.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 49.57%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The pull request title "TRQL and the Query page" is concise and clearly summarizes the main additions: a new query language (TRQL) and a new Query page UI.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@vibe-kanban-cloud

Review Complete

Your review story is ready!

View Story

Comment !reviewfast on this PR to re-generate the story.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 19

🤖 Fix all issues with AI agents
In @apps/webapp/app/components/code/AIQueryInput.tsx:
- Around line 163-164: submitQuery currently calls processStreamEvent but
processStreamEvent is not included in submitQuery's dependency array, causing
stale closures; fix by either moving the processStreamEvent function declaration
above the submitQuery definition so it can be referenced in the dependency
array, or wrap processStreamEvent in a stable ref (useRef) and reference
ref.current inside submitQuery; then add processStreamEvent (or the ref) to the
submitQuery useCallback dependency list and ensure onQueryGenerated remains
correctly referenced to avoid circular dependencies.

In @apps/webapp/app/components/code/QueryResultsChart.tsx:
- Around line 292-332: The sortData function currently always uses __rawDate
when present, so add an xAxisColumn parameter to sortData and only perform the
date comparison when sortByColumn === xAxisColumn and both a.__rawDate and
b.__rawDate exist; otherwise fall through to the numeric/string comparison.
Update calls to sortData (e.g., where transformDataForChart or the component
sorts the results) to pass the current xAxisColumn. Ensure the function
signature is updated (sortData(data, sortByColumn, sortDirection, xAxisColumn))
and the date branch becomes: if (sortByColumn === xAxisColumn && aDate && bDate)
{ ... } so sorting by other columns uses the numeric/string logic.
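
A minimal, self-contained sketch of the suggested signature; the Row shape and the __rawDate field are assumptions taken from the comment, not the component's actual types:

```typescript
type Row = Record<string, unknown> & { __rawDate?: Date };

// Only use the precomputed __rawDate values when sorting by the chart's
// x-axis column; every other column falls through to numeric/string compare.
function sortData(
  data: Row[],
  sortByColumn: string,
  sortDirection: "asc" | "desc",
  xAxisColumn: string
): Row[] {
  const dir = sortDirection === "asc" ? 1 : -1;
  return [...data].sort((a, b) => {
    if (sortByColumn === xAxisColumn && a.__rawDate && b.__rawDate) {
      return dir * (a.__rawDate.getTime() - b.__rawDate.getTime());
    }
    const av = a[sortByColumn];
    const bv = b[sortByColumn];
    if (typeof av === "number" && typeof bv === "number") {
      return dir * (av - bv);
    }
    return dir * String(av).localeCompare(String(bv));
  });
}
```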

In @apps/webapp/app/components/code/TSQLEditor.tsx:
- Around line 208-211: The onBlur handler in TSQLEditor reads
editor.current?.textContent (DOM) which can differ from the editor's document;
instead, call into the editor view/state to get the canonical document text
(e.g., use the editor view/state on the editor ref such as
editor.current?.view.state.doc.toString() or
editor.current?.state.doc.toString()) and pass that string to the onBlur prop;
update the onBlur arrow function to guard for onBlur and editor.current then
extract the document via the editor's state rather than textContent.

In @apps/webapp/app/components/primitives/Table.tsx:
- Around line 286-293: Replace the non-semantic <span> used for the copy action
with a <button> element (keep the same className including "absolute -right-2
top-1/2 z-10 hidden -translate-y-1/2 cursor-pointer
group-hover/copyable-cell:flex"), add type="button", preserve the existing
onClick handler logic (e.stopPropagation(); e.preventDefault(); copy();) and add
an accessible label via aria-label (e.g., aria-label="Copy cell"). This ensures
the interactive element is semantic and keyboard-accessible while keeping the
same styling and behavior.

In @apps/webapp/app/components/runs/v3/TaskRunStatus.tsx:
- Around line 243-249: The reverse lookup in runStatusFromFriendlyTitle fails
because runStatusTitleFromStatus maps both PENDING_VERSION and
WAITING_FOR_DEPLOY to the same friendly string ("Pending version"), so
titlesStatusesArray can’t distinguish them; fix by choosing one of: (A) give
each status a unique friendly title in runStatusTitleFromStatus so
runStatusFromFriendlyTitle works unchanged, or (B) change
runStatusFromFriendlyTitle to use an explicit reverse map (e.g., build a
Map<string, TaskRunStatus | TaskRunStatus[]> from titlesStatusesArray) and
either return an array of matching statuses, return the first by documented
priority, or throw an explicit error on ambiguous titles (mentioning
PENDING_VERSION and WAITING_FOR_DEPLOY) to make the collision handling
deterministic and obvious.
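
Option (B) can be sketched as follows. The colliding status names come from the comment; the map contents and function name are illustrative, and the reverse lookup returns all matches so the collision is explicit rather than silent:

```typescript
type TaskRunStatus = "PENDING_VERSION" | "WAITING_FOR_DEPLOY" | "COMPLETED";

// Status-to-title pairs; two statuses intentionally share a friendly title.
const titlesStatusesArray: Array<[TaskRunStatus, string]> = [
  ["PENDING_VERSION", "Pending version"],
  ["WAITING_FOR_DEPLOY", "Pending version"],
  ["COMPLETED", "Completed"],
];

// Build an explicit reverse map that accumulates every status per title.
const reverseMap = new Map<string, TaskRunStatus[]>();
for (const [status, title] of titlesStatusesArray) {
  const existing = reverseMap.get(title) ?? [];
  reverseMap.set(title, [...existing, status]);
}

// Callers see all matching statuses and can decide how to disambiguate.
function runStatusesFromFriendlyTitle(title: string): TaskRunStatus[] {
  return reverseMap.get(title) ?? [];
}
```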

In
@apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx:
- Around line 132-159: The switch case labeled "finish" declares const finalText
and const query which leak into other cases; wrap the entire case body in its
own block (add { ... } immediately after case "finish":) so finalText and query
are block-scoped, keep the sendEvent logic intact and ensure the break remains
inside that block; locate the "finish" case in the switch handling result events
to apply this change.

In @apps/webapp/app/services/queryService.server.ts:
- Around line 97-98: The code parses stats.byte_seconds into byteSeconds and
multiplies to compute costInCents but does not guard against parseFloat
returning NaN; update the logic in queryService.server.ts around the
byteSeconds/costInCents calculation to validate the parsed value (e.g., use
Number.isFinite or !isNaN on byteSeconds), and if invalid set a safe default
(such as 0) or handle the error path (log/warn and skip cost computation) before
multiplying with env.CENTS_PER_QUERY_BYTE_SECOND so costInCents never becomes
NaN.
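
The guard might look like this minimal sketch; the rate constant and stats shape are placeholders, not the real env values:

```typescript
// Illustrative rate, standing in for env.CENTS_PER_QUERY_BYTE_SECOND.
const CENTS_PER_QUERY_BYTE_SECOND = 0.5;

function costInCentsFromStats(stats: { byte_seconds: string }): number {
  const byteSeconds = parseFloat(stats.byte_seconds);
  if (!Number.isFinite(byteSeconds)) {
    // Unparseable or non-finite stats value: fall back to a safe default
    // instead of letting NaN propagate into the stored cost.
    return 0;
  }
  return byteSeconds * CENTS_PER_QUERY_BYTE_SECOND;
}
```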

In @apps/webapp/test/components/code/tsql/tsqlCompletion.test.ts:
- Around line 72-173: The test file is in the wrong location per repo
guidelines—move the test from
apps/webapp/test/components/code/tsql/tsqlCompletion.test.ts to sit beside the
implementation at apps/webapp/app/components/code/tsql/tsqlCompletion.test.ts;
update any relative imports in the test if paths change and ensure your test
runner/tsconfig includes the new folder (adjust include/glob patterns if
necessary) so createTSQLCompletion tests (the describe block for
createTSQLCompletion) keep running from the new location.

In @apps/webapp/test/components/code/tsql/tsqlLinter.test.ts:
- Around line 49-78: The test file for getTSQLError is placed in the wrong
directory; move the test to sit beside the implementation (same directory as the
tsql linter module), update any relative imports to reference getTSQLError from
its local implementation (ensure the test still imports getTSQLError correctly),
and run the test suite to confirm paths and test discovery are unchanged; keep
the test content (describe/getTSQLError cases) intact during the move.

In @internal-packages/clickhouse/src/client/client.ts:
- Around line 363-366: parsedSummary can yield elapsed_ns = 0 causing
elapsedSeconds to be 0 and byteSeconds to become Infinity; guard the computation
in the block that calculates readBytes/elapsedSeconds (variables parsedSummary,
readBytes, elapsedNs, elapsedSeconds, byteSeconds) by checking if elapsedSeconds
is <= 0 (or extremely small) and in that case set byteSeconds to 0 (or null)
before creating the stats object so you never produce "Infinity" strings for
downstream consumers.
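
A sketch of that guard, assuming byte_seconds is read_bytes multiplied by elapsed seconds (the field semantics are inferred from the names, not confirmed by the source):

```typescript
// Compute byte-seconds only when a positive elapsed time is available, so
// downstream consumers never receive Infinity (or NaN) from a zero or
// malformed elapsed_ns value.
function computeByteSeconds(readBytes: number, elapsedNs: number): number {
  const elapsedSeconds = elapsedNs / 1e9;
  if (!(elapsedSeconds > 0)) {
    return 0; // zero, negative, or NaN elapsed time: report 0
  }
  return readBytes * elapsedSeconds;
}
```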
- Line 362: The debug logging in the method is inconsistent: replace the call to
this.logger.log("parsedSummary", parsedSummary) with the same debug-level logger
used elsewhere (use this.logger.debug) so the parsedSummary output follows the
existing debug logging conventions in the method; locate the occurrence of
"parsedSummary" in client.ts and change its logging method to debug to match the
other debug entries (e.g., the lines using this.logger.debug earlier in the
function).

In @internal-packages/clickhouse/tsconfig.build.json:
- Line 19: The tsconfig override explicitly disables noImplicitAny which weakens
type safety despite strict: true; remove the "noImplicitAny": false entry (or
set it to true) from tsconfig.build.json so the compiler inherits noImplicitAny
from strict mode, then run the build/type-check and fix any resulting
implicit-any errors in the code referenced by this package.

In @internal-packages/tsql/src/grammar/parser.test.ts:
- Around line 1-4: The test file TSQL parser uses the global test helpers but
never imports them; add an import for vitest test helpers (e.g., import {
describe, it, expect } from "vitest";) to
internal-packages/tsql/src/grammar/parser.test.ts so calls to describe, it, and
expect used in the file resolve at runtime; place the import alongside the other
top-level imports near the top of the file before tests run.

In @internal-packages/tsql/src/query/ast.ts:
- Around line 185-187: IntervalType currently sets data_type to the wrong
literal ("unknown"); update the IntervalType definition so its data_type uses a
distinct literal (e.g., "interval") instead of "unknown" to mirror how
DecimalType defines its own data_type; modify the IntervalType interface (which
extends ConstantType) to declare data_type: "interval" (or the project's
established interval literal) so type discrimination works correctly across the
AST.
- Around line 654-671: The returned object from createSelectSetQueryFromQueries
is missing the required expression_type field for a SelectSetQuery; update the
returned object literal in createSelectSetQueryFromQueries to include
expression_type: "select_set_query" (alongside initial_select_query and
subsequent_select_queries) so the object conforms to the SelectSetQuery type
(and keep subsequent_select_queries mapping to SelectSetNode as before).
- Around line 157-159: DecimalType incorrectly sets data_type to "unknown";
change DecimalType (which extends ConstantType) to use data_type: "decimal" and
add "decimal" to the ConstantDataType union in constants.ts so the type is
recognized project-wide; update any related type guards or switch handling that
pattern-matches ConstantDataType (search for usages of DecimalType and
ConstantDataType) to handle the new "decimal" value.

In @internal-packages/tsql/src/query/escape.ts:
- Around line 188-206: The TSQL branch in SQLValueEscaper.visitDateTime builds a
datetime string but ignores this.timezone; update the this.dialect === "tsql"
branch in visitDateTime to include the timezone as the second argument (e.g.
return `toDateTime(${this.visitString(datetimeString)},
${this.visitString(this.timezone)})`) so it mirrors the ClickHouse branch's use
of visitString(this.timezone); ensure visitString is used for the timezone value
to maintain proper escaping/quoting.

In @internal-packages/tsql/src/query/parser.test.ts:
- Line 7: The imported local name `SyntaxError` shadows the global Error
constructor; rename the import (e.g., `import { SyntaxError as TsqlSyntaxError }
from "./errors.js";`) and update all references in this test file that intend
the tsql module's error to use the new name (replace usages of `SyntaxError`
such as the test assertion that expects the parser to throw the module error
with `TsqlSyntaxError`).
🧹 Nitpick comments (37)
internal-packages/clickhouse/src/taskRuns.ts (1)

48-48: LGTM! New field correctly integrated into TaskRunV2 schema.

The addition of max_duration_in_seconds aligns with the database schema migration and retrieved learnings about task duration management. The use of .nullish() is appropriate for this optional field.

Optional: Consider adding range validation

For extra safety, you could add .min(0) to match the ClickHouse UInt32 unsigned constraint:

-  max_duration_in_seconds: z.number().int().nullish(),
+  max_duration_in_seconds: z.number().int().min(0).nullish(),

However, this is optional since the pattern is consistent with other duration fields in the schema and validation may occur elsewhere.

apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.queues/route.tsx (1)

520-520: Consider explicit prop passing for clarity.

The spread operator works correctly, but explicitly passing props makes the data flow clearer and avoids passing unnecessary properties:

<QueueName name={queue.name} type={queue.type} paused={queue.paused} />

This is a minor style preference—the current implementation is functional.

♻️ Optional refactor
-                              <QueueName {...queue} />
+                              <QueueName name={queue.name} type={queue.type} paused={queue.paused} />
internal-packages/tsql/package.json (2)

16-16: Grammar build script may not work on Windows.

The grammar:build:typescript script uses Unix shell commands (cat, tail, sed) which won't work on Windows without WSL or similar environment. Consider adding documentation about platform requirements or using a cross-platform build tool like Node.js scripts.

Alternative: Cross-platform Node.js script

Create a scripts/build-grammar.js file:

import { readFileSync, writeFileSync, unlinkSync } from 'fs';
import { execSync } from 'child_process';

const lexerTS = readFileSync('src/grammar/TSQLLexer.typescript.g4', 'utf8');
const lexerCommon = readFileSync('src/grammar/TSQLLexer.common.g4', 'utf8');
const lexerCommonLines = lexerCommon.split('\n').slice(1);
const modifiedLines = lexerCommonLines.map(line => 
  line.replace(/isOpeningTag/g, 'self.isOpeningTag')
);
const combined = lexerTS + '\n' + modifiedLines.join('\n');
writeFileSync('src/grammar/TSQLLexer.g4', combined);

execSync('antlr4ts src/grammar/TSQLLexer.g4');
unlinkSync('src/grammar/TSQLLexer.g4');
execSync('antlr4ts -visitor -no-listener -Dlanguage=TypeScript src/grammar/TSQLParser.g4');

Then update package.json:

-    "grammar:build:typescript": "cat src/grammar/TSQLLexer.typescript.g4 > src/grammar/TSQLLexer.g4 && tail -n +2 src/grammar/TSQLLexer.common.g4 |sed s/isOpeningTag/self.isOpeningTag/ >> src/grammar/TSQLLexer.g4 && antlr4ts src/grammar/TSQLLexer.g4 && rm src/grammar/TSQLLexer.g4 && antlr4ts -visitor -no-listener -Dlanguage=TypeScript src/grammar/TSQLParser.g4",
+    "grammar:build:typescript": "node scripts/build-grammar.js",

10-11: Consider evaluating zod upgrade to 4.x series if breaking changes are manageable.

The package uses antlr4ts@0.5.0-alpha.4, which is the latest (and only) available version for this package. Using an alpha version is unavoidable here. However, zod@3.25.76 is significantly outdated—zod 4.3.5 is the current stable release. If the codebase can accommodate zod 4's breaking changes, upgrading would align with the latest stable version.

internal-packages/tsql/src/grammar/TSQLLexer.typescript.g4 (1)

9-16: Consider using String.fromCodePoint for full Unicode support.

String.fromCharCode only handles code points in the BMP (0-0xFFFF). For characters above U+FFFF, fromCodePoint is needed. Since the check includes c > 0x10FFFF, it seems full Unicode was intended.

♻️ Suggested improvement
 private _peekChar(k: number): string {
     // Return the k-th look-ahead as a *single-char string* or '\0' at EOF.
     const c = this._input.LA(k);          // int code point or IntStream.EOF (-1)
     if (c < 0 || c > 0x10FFFF) {         // EOF or out-of-range → sentinel
         return '\0';
     }
-    return String.fromCharCode(c);
+    return String.fromCodePoint(c);
 }
internal-packages/tsql/src/query/parse_string.ts (1)

4-4: Consider renaming the import to avoid shadowing global SyntaxError.

The static analysis tool correctly flags that importing SyntaxError shadows the global. This can cause confusion when debugging. Consider using an alias like TSQLSyntaxError.

♻️ Suggested fix
-import { SyntaxError } from './errors';
+import { SyntaxError as TSQLSyntaxError } from './errors';

Then update line 43:

-        throw new SyntaxError(`Invalid string literal, must start and end with the same quote type: ${text}`);
+        throw new TSQLSyntaxError(`Invalid string literal, must start and end with the same quote type: ${text}`);
apps/webapp/evals/aiQuery.eval.ts (1)

45-52: Add error handling for JSON parsing.

If the AI model returns malformed JSON, JSON.parse will throw an unhandled exception. Consider wrapping in try/catch and returning a score of 0 for parse failures.

♻️ Suggested fix
     // Parse the output to extract the query
-    const outputParsed = JSON.parse(output) as ParsedQueryResult;
-    const expectedParsed = JSON.parse(expected) as ParsedQueryResult;
+    let outputParsed: ParsedQueryResult;
+    let expectedParsed: ParsedQueryResult;
+    try {
+      outputParsed = JSON.parse(output) as ParsedQueryResult;
+      expectedParsed = JSON.parse(expected) as ParsedQueryResult;
+    } catch {
+      // Malformed JSON should score 0
+      return 0;
+    }
internal-packages/database/prisma/schema.prisma (1)

2402-2439: Well-structured audit model with appropriate indexes.

The CustomerQuery model follows a sensible audit-log pattern (no updatedAt since records are immutable). The cascade behaviors are appropriate: Cascade for org/project/environment and SetNull for user.

Consider adding indexes for project/environment scoped queries.

If query history will be displayed at project or environment scope (not just organization), consider adding indexes:

@@index([projectId, createdAt(sort: Desc)])
@@index([environmentId, createdAt(sort: Desc)])

This would optimize history lookups when scoped to a specific project or environment. If queries are always fetched at org level and filtered client-side, the current indexes are sufficient.

internal-packages/tsql/src/query/constants.ts (2)

36-44: Prefer string union or const object over enum.

Per coding guidelines, enums should be avoided in favor of string unions or const objects.

♻️ Suggested refactor using const object
-export enum LimitContext {
-  QUERY = "query",
-  QUERY_ASYNC = "query_async",
-  EXPORT = "export",
-  COHORT_CALCULATION = "cohort_calculation",
-  HEATMAPS = "heatmaps",
-  SAVED_QUERY = "saved_query",
-  RETENTION = "retention",
-}
+export const LimitContext = {
+  QUERY: "query",
+  QUERY_ASYNC: "query_async",
+  EXPORT: "export",
+  COHORT_CALCULATION: "cohort_calculation",
+  HEATMAPS: "heatmaps",
+  SAVED_QUERY: "saved_query",
+  RETENTION: "retention",
+} as const;
+
+export type LimitContext = (typeof LimitContext)[keyof typeof LimitContext];

Based on coding guidelines, enums should be avoided.


47-71: Prefer types over interfaces.

Per coding guidelines, types should be used instead of interfaces.

♻️ Suggested refactor using types
-// Settings applied at the SELECT level
-export interface TSQLQuerySettings {
-  optimize_aggregation_in_order?: boolean;
-  date_time_output_format?: string;
-  date_time_input_format?: string;
-  join_algorithm?: string;
-}
-
-// Settings applied on top of all TSQL queries
-export interface TSQLGlobalSettings extends TSQLQuerySettings {
+// Settings applied at the SELECT level
+export type TSQLQuerySettings = {
+  optimize_aggregation_in_order?: boolean;
+  date_time_output_format?: string;
+  date_time_input_format?: string;
+  join_algorithm?: string;
+};
+
+// Settings applied on top of all TSQL queries
+export type TSQLGlobalSettings = TSQLQuerySettings & {
   readonly?: number;
   max_execution_time?: number;
   // ... rest of fields
-}
+};

Based on coding guidelines, types should be used over interfaces.

internal-packages/tsql/src/query/models.ts (2)

6-66: Prefer types over interfaces.

Per coding guidelines, types should be used instead of interfaces. While interfaces work here, consistency with the codebase guidelines is preferred.

Example transformation for a few interfaces:

export type FieldOrTable = {
  hidden?: boolean;
};

export type DatabaseField = FieldOrTable & {
  name: string;
  array?: boolean;
  nullable?: boolean;
  is_nullable?(): boolean;
  get_constant_type?(): ConstantType;
  default_value?(): any;
};

Based on coding guidelines, types should be used over interfaces.


247-253: Avoid any type - import the actual type.

The comment indicates LazyJoinType exists in ast.ts. Import and use the actual type for better type safety.

♻️ Suggested fix
-import type { Expr, ConstantType } from "./ast";
+import type { Expr, ConstantType, LazyJoinType } from "./ast";

// ... at line 251
-  lazy_join_type: any; // LazyJoinType from ast.ts
+  lazy_join_type: LazyJoinType;
apps/webapp/app/components/navigation/SideMenu.tsx (1)

8-8: Remove unused icon imports.

CircleStackIcon and MagnifyingGlassCircleIcon are imported but not used in this file. Only TableCellsIcon is used for the Query menu item.

♻️ Suggested fix
 import {
   ArrowPathRoundedSquareIcon,
   ArrowRightOnRectangleIcon,
   BeakerIcon,
   BellAlertIcon,
   ChartBarIcon,
   ChevronRightIcon,
-  CircleStackIcon,
   ClockIcon,
   Cog8ToothIcon,
   CogIcon,
   FolderIcon,
   FolderOpenIcon,
   GlobeAmericasIcon,
   IdentificationIcon,
   KeyIcon,
-  MagnifyingGlassCircleIcon,
   PencilSquareIcon,
   PlusIcon,
   RectangleStackIcon,
   ServerStackIcon,
   Squares2X2Icon,
   TableCellsIcon,
   UsersIcon,
 } from "@heroicons/react/20/solid";

Also applies to: 17-17

apps/webapp/app/components/code/tsql/tsqlLinter.test.ts (1)

66-71: Consider making the error format assertion more resilient.

The test assumes error messages contain the word "line" for position information. If the error format from the parser changes, this test could become brittle.

♻️ Alternative approach

Consider either:

  1. Testing for a more specific pattern (e.g., position/column info)
  2. Or simply verifying that a non-null, non-empty error is returned without asserting internal format:
     it("should include position information in error", () => {
       const error = getTSQLError("SELECT * FORM users");
       expect(error).not.toBeNull();
-      // Error message should contain line/column info
-      expect(error).toContain("line");
+      // Verify error contains useful diagnostic information
+      expect(error!.length).toBeGreaterThan(0);
     });
apps/webapp/app/components/AlphaBadge.tsx (1)

25-31: Consider adding flex styling for proper alignment.

AlphaTitle renders a span and AlphaBadge as siblings within a fragment. Depending on usage context, this may result in misaligned elements. Consider wrapping in a flex container for consistent alignment.

♻️ Suggested improvement
 export function AlphaTitle({ children }: { children: React.ReactNode }) {
   return (
-    <>
-      <span>{children}</span>
+    <span className="inline-flex items-center gap-1">
+      <span>{children}</span>
       <AlphaBadge />
-    </>
+    </span>
   );
 }
apps/webapp/app/components/code/tsql/tsqlCompletion.test.ts (1)

5-28: Consider using a more type-safe mock instead of as any.

The as any cast at line 27 bypasses TypeScript's type checking. Consider defining a proper type for the mock context or using a partial type assertion.

♻️ More type-safe alternative
+import type { CompletionContext } from "@codemirror/autocomplete";
+
 // Helper to create a mock completion context
 function createMockContext(doc: string, pos: number, explicit = false) {
   return {
     state: {
       doc: {
         toString: () => doc,
       },
     },
     pos,
     explicit,
     matchBefore: (regex: RegExp) => {
       const beforePos = doc.slice(0, pos);
       const match = beforePos.match(new RegExp(regex.source + "$"));
       if (match) {
         return {
           from: pos - match[0].length,
           to: pos,
           text: match[0],
         };
       }
       return null;
     },
-  } as any;
+  } as Partial<CompletionContext> as CompletionContext;
 }
internal-packages/clickhouse/src/client/client.ts (1)

234-304: Consider extracting common logic to reduce duplication.

The queryWithStats method shares significant code with the query method (lines 85-232). Consider extracting common logic for parameter validation, span setup, and error handling into shared helper functions.

This would improve maintainability and reduce the risk of divergence between the two methods when future changes are made.

apps/webapp/app/components/code/tsql/tsqlLinter.ts (1)

4-4: Consider renaming the imported SyntaxError to avoid shadowing the global.

The static analysis tool flags that importing SyntaxError shadows the global SyntaxError class. While this works correctly since the local import takes precedence, it could cause confusion during debugging or if someone inadvertently expects the global behavior.

♻️ Suggested fix
-import { parseTSQLSelect, SyntaxError, QueryError, validateQuery } from "@internal/tsql";
+import { parseTSQLSelect, SyntaxError as TSQLSyntaxError, QueryError, validateQuery } from "@internal/tsql";

Then update the usage on line 120:

-    if (error instanceof SyntaxError) {
+    if (error instanceof TSQLSyntaxError) {
internal-packages/tsql/src/index.ts (1)

210-233: Prefer type over interface for CompileTSQLOptions.

As per coding guidelines, use types over interfaces for TypeScript definitions.

Suggested change
-export interface CompileTSQLOptions {
+export type CompileTSQLOptions = {
   /** The organization ID for tenant isolation (required) */
   organizationId: string;
   /** The project ID for tenant isolation (optional - omit to query across all projects) */
   projectId?: string;
   /** The environment ID for tenant isolation (optional - omit to query across all environments) */
   environmentId?: string;
   /** Schema definitions for allowed tables and columns */
   tableSchema: TableSchema[];
   /** Optional query settings */
   settings?: Partial<QuerySettings>;
   /**
    * Runtime field mappings for dynamic value translation.
    * Maps internal ClickHouse values to external user-facing values.
    *
    * @example
    * ```typescript
    * {
    *   project: { "cm12345": "my-project-ref" },
    * }
    * ```
    */
   fieldMappings?: FieldMappings;
-}
+};
apps/webapp/app/services/queryService.server.ts (1)

100-112: Consider handling potential errors from history recording.

The prisma.customerQuery.create call is awaited but errors are not caught. If this fails (e.g., database issue), the entire executeQuery function will throw even though the actual query succeeded. Consider wrapping this in a try-catch or making it fire-and-forget.

Suggested approach
-    await prisma.customerQuery.create({
-      data: {
-        query: options.query,
-        scope: scopeToEnum[scope],
-        stats: { ...stats },
-        costInCents,
-        source: history.source,
-        organizationId,
-        projectId: scope === "project" || scope === "environment" ? projectId : null,
-        environmentId: scope === "environment" ? environmentId : null,
-        userId: history.userId ?? null,
-      },
-    });
+    // Record history but don't fail the query if recording fails
+    try {
+      await prisma.customerQuery.create({
+        data: {
+          query: options.query,
+          scope: scopeToEnum[scope],
+          stats: { ...stats },
+          costInCents,
+          source: history.source,
+          organizationId,
+          projectId: scope === "project" || scope === "environment" ? projectId : null,
+          environmentId: scope === "environment" ? environmentId : null,
+          userId: history.userId ?? null,
+        },
+      });
+    } catch (historyError) {
+      console.error("Failed to record query history:", historyError);
+    }
internal-packages/clickhouse/src/client/types.ts (1)

19-37: Consider using type instead of interface for consistency with coding guidelines.

The coding guidelines prefer types over interfaces for TypeScript definitions.

Suggested change
-export interface QueryStats {
+export type QueryStats = {
   read_rows: string;
   read_bytes: string;
   written_rows: string;
   written_bytes: string;
   total_rows_to_read: string;
   result_rows: string;
   result_bytes: string;
   elapsed_ns: string;
   byte_seconds: string;
-}
+};

-export interface QueryResultWithStats<TOutput> {
+export type QueryResultWithStats<TOutput> = {
   rows: TOutput[];
   stats: QueryStats;
-}
+};
internal-packages/tsql/src/query/errors.ts (1)

39-41: Consider renaming SyntaxError to avoid shadowing the global.

The SyntaxError class shadows the global SyntaxError. While the import is explicit where used, this can cause confusion. Consider renaming to TSQLSyntaxError for clarity.

This is a style suggestion. If the team prefers the shorter name, it can be safely ignored; the current approach works correctly since the import is explicit wherever the class is used.

internal-packages/clickhouse/src/client/tsql.ts (2)

31-69: Consider using type instead of interface per coding guidelines.

The coding guidelines specify using types over interfaces for TypeScript definitions in this codebase.

♻️ Suggested refactor
-export interface ExecuteTSQLOptions<TOut extends z.ZodSchema> {
+export type ExecuteTSQLOptions<TOut extends z.ZodSchema> = {
   /** The name of the operation (for logging/tracing) */
   name: string;
   // ... rest of properties
-}
+};

74-78: Consider using type instead of interface for consistency.

♻️ Suggested refactor
-export interface TSQLQuerySuccess<T> {
+export type TSQLQuerySuccess<T> = {
   rows: T[];
   columns: OutputColumnMetadata[];
   stats: QueryStats;
-}
+};
internal-packages/tsql/src/query/functions.ts (2)

9-28: Consider using type instead of interface per coding guidelines.

♻️ Suggested refactor
-export interface TSQLFunctionMeta {
+export type TSQLFunctionMeta = {
   /** The ClickHouse function name to use */
   clickhouseName: string;
   // ... rest of properties
-}
+};

564-589: Case-insensitive lookup logic has a subtle edge case.

The findFunction logic handles case sensitivity correctly: when the exact-case lookup fails and the lowercase lookup hits a case-sensitive function, returning undefined is the right behavior. Because this is non-obvious, consider adding a comment explaining it for maintainability.

📝 Add clarifying comment
 function findFunction(
   name: string,
   functions: Record<string, TSQLFunctionMeta>
 ): TSQLFunctionMeta | undefined {
+  // First try exact case match
   const func = functions[name];
   if (func !== undefined) {
     return func;
   }

+  // Try lowercase lookup for case-insensitive functions
   const lowerFunc = functions[name.toLowerCase()];
   if (lowerFunc === undefined) {
     return undefined;
   }

-  // If we haven't found a function with the case preserved, but we have found it in lowercase,
-  // then the function names are different case-wise only.
+  // If the function requires exact case matching (caseSensitive: true),
+  // reject the lowercase match since the original name had different casing
   if (lowerFunc.caseSensitive) {
     return undefined;
   }

   return lowerFunc;
 }
apps/webapp/app/v3/services/aiQueryService.server.ts (3)

29-32: Consider using type instead of interface per coding guidelines.

♻️ Suggested refactor
-export interface AIQueryOptions {
+export type AIQueryOptions = {
   mode?: "new" | "edit";
   currentQuery?: string;
-}
+};

37-41: Consider using type instead of interface for internal types.


71-104: Duplicate tool definitions between streamQuery and call methods.

The validateTSQLQuery and getTableSchema tools are defined identically in both methods. Extract these to a private method or property to reduce duplication and ensure consistency.

♻️ Proposed refactor to DRY up tool definitions
+  private getTools() {
+    return {
+      validateTSQLQuery: tool({
+        description:
+          "Validate a TSQL query for syntax errors and schema compliance. Always use this tool to verify your query before returning it to the user.",
+        parameters: z.object({
+          query: z.string().describe("The TSQL query to validate"),
+        }),
+        execute: async ({ query }) => {
+          return this.validateQuery(query);
+        },
+      }),
+      getTableSchema: tool({
+        description:
+          "Get detailed schema information about available tables and columns. Use this to understand what data is available and how to query it.",
+        parameters: z.object({
+          tableName: z
+            .string()
+            .optional()
+            .describe("Optional: specific table name to get details for"),
+        }),
+        execute: async ({ tableName }) => {
+          return this.getSchemaInfo(tableName);
+        },
+      }),
+    };
+  }

   streamQuery(prompt: string, options: AIQueryOptions = {}) {
     // ...
     return streamText({
       model: this.model,
       system: systemPrompt,
       prompt: userPrompt,
-      tools: {
-        validateTSQLQuery: tool({ ... }),
-        getTableSchema: tool({ ... }),
-      },
+      tools: this.getTools(),
       maxSteps: 5,
       // ...
     });
   }

Also applies to: 126-159

internal-packages/tsql/src/query/ast.ts (2)

248-254: Consider using const objects instead of enums per coding guidelines.

The coding guidelines suggest avoiding enums in favor of string unions or const objects.

♻️ Suggested refactor using const object
-export enum ArithmeticOperationOp {
-  Add = "+",
-  Sub = "-",
-  Mult = "*",
-  Div = "/",
-  Mod = "%",
-}
+export const ArithmeticOperationOp = {
+  Add: "+",
+  Sub: "-",
+  Mult: "*",
+  Div: "/",
+  Mod: "%",
+} as const;
+
+export type ArithmeticOperationOp = (typeof ArithmeticOperationOp)[keyof typeof ArithmeticOperationOp];

256-277: Consider using const objects instead of enums per coding guidelines.

internal-packages/tsql/src/query/database.ts (2)

102-120: Consider using const object instead of enum per coding guidelines.

♻️ Suggested refactor
-export enum DatabaseSerializedFieldType {
-  STRING = "string",
-  INTEGER = "integer",
-  // ... etc
-}
+export const DatabaseSerializedFieldType = {
+  STRING: "string",
+  INTEGER: "integer",
+  FLOAT: "float",
+  DECIMAL: "decimal",
+  BOOLEAN: "boolean",
+  DATE: "date",
+  DATETIME: "datetime",
+  UUID: "uuid",
+  ARRAY: "array",
+  JSON: "json",
+  TUPLE: "tuple",
+  UNKNOWN: "unknown",
+  EXPRESSION: "expression",
+  VIEW: "view",
+  LAZY_TABLE: "lazy_table",
+  VIRTUAL_TABLE: "virtual_table",
+  FIELD_TRAVERSER: "field_traverser",
+} as const;
+
+export type DatabaseSerializedFieldType = (typeof DatabaseSerializedFieldType)[keyof typeof DatabaseSerializedFieldType];

25-130: Consider using type instead of interface for schema definitions.

Multiple interfaces in this range could be converted to types per coding guidelines.

apps/webapp/app/v3/querySchemas.ts (1)

400-409: Avoid logging inside whereTransform to reduce noise and potential PII leakage

bulk_action_group_ids.whereTransform currently logs every value it transforms. This will fire on each query that filters on this column, can generate a lot of log noise, and leaks raw bulk IDs into logs without strong justification.

Consider removing the log or gating it behind a dedicated debug flag.
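One way to gate the log behind a debug flag (a minimal sketch; DEBUG_QUERY_TRANSFORMS and the function name are illustrative, not part of the PR, and the real transform logic is elided):

```typescript
// Assumed env flag so production queries don't emit one log line per filtered value.
const DEBUG_QUERY_TRANSFORMS = process.env.DEBUG_QUERY_TRANSFORMS === "1";

function transformBulkActionGroupId(value: string): string {
  if (DEBUG_QUERY_TRANSFORMS) {
    // Only emitted when explicitly debugging; silent (and PII-free) otherwise.
    console.debug("whereTransform(bulk_action_group_ids):", value);
  }
  return value; // the actual transform is elided in this sketch
}
```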

apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx (1)

2003-2021: Clarify units for byte_seconds in formatQueryStats

formatQueryStats passes byte_seconds through formatBytes and then appends "s", yielding values like "12.3 KBs". Since byte_seconds is a composite metric (bytes × seconds), not a plain byte count, this label can be misleading.

Consider either:

  • Using a dedicated formatter that reflects the intended unit (e.g., "12.3 MiB·s"), or
  • Renaming the label to something more generic (e.g., "12.3 MB-equivalent"), or omitting it if it isn't meaningful to users.
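A dedicated formatter along the first option could look like this (a sketch with an assumed function name, using binary units):

```typescript
// Render a byte_seconds value as "N.n XiB·s" instead of piping it through
// a plain byte formatter and appending "s".
function formatByteSeconds(byteSeconds: number): string {
  const units = ["B·s", "KiB·s", "MiB·s", "GiB·s", "TiB·s"];
  let value = byteSeconds;
  let i = 0;
  while (value >= 1024 && i < units.length - 1) {
    value /= 1024;
    i++;
  }
  return `${value.toFixed(1)} ${units[i]}`;
}
```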
apps/webapp/app/components/code/tsql/tsqlCompletion.ts (1)

326-348: Escape enum values before wrapping them in quotes for completion labels

createEnumValueCompletions currently builds labels like:

label: `'${userFriendlyValue}'`,

or:

label: `'${value}'`,

If any user-friendly or allowed value ever contains a single quote, the inserted completion will produce invalid SQL (or at least require manual fixing). Today your enums (statuses, environment types) are probably simple strings, but this is an easy future footgun.

Consider adding a small helper to escape single quotes (e.g., replacing ' with '') before wrapping, so the completion always inserts syntactically valid string literals.
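A minimal version of such a helper (the name is illustrative), using the SQL convention of doubling single quotes:

```typescript
// Escape embedded single quotes, then wrap the value so the completion
// always inserts a syntactically valid string literal.
function sqlStringLiteral(value: string): string {
  return `'${value.replace(/'/g, "''")}'`;
}
```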

apps/webapp/app/components/code/ChartConfigPanel.tsx (1)

14-32: Align ChartConfiguration shape with current UI semantics and TS style guidelines

Two minor points here:

  1. Type vs interface
    The repo guidelines prefer type aliases over interfaces for TS. ChartConfiguration can be trivially expressed as a type without behavior change:

    export type ChartConfiguration = {
      chartType: ChartType;
      xAxisColumn: string | null;
      yAxisColumns: string[];
      groupByColumn: string | null;
      stacked: boolean;
      sortByColumn: string | null;
      sortDirection: SortDirection;
    };
  2. Single vs multiple Y‑axis columns
    yAxisColumns is typed as string[], but the panel only reads/writes yAxisColumns[0]. If multi-series Y support isn’t planned yet, consider simplifying this to a single yAxisColumn: string | null to better reflect current behavior (or extend the UI to genuinely support multiple Y selections).

Also applies to: 276-283

Comment on lines +208 to +211
onBlur={() => {
if (!onBlur) return;
onBlur(editor.current?.textContent ?? "");
}}

⚠️ Potential issue | 🟠 Major

Use editor view state instead of DOM textContent for onBlur.

The onBlur handler reads textContent from the DOM element, which may not accurately reflect the editor's actual state. The DOM's textContent can include extra whitespace or differ from the editor's document content.

Proposed fix
         ref={editor}
         onBlur={() => {
           if (!onBlur) return;
-          onBlur(editor.current?.textContent ?? "");
+          if (!view) return;
+          onBlur(view.state.doc.toString());
         }}
       />

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In @apps/webapp/app/components/code/TSQLEditor.tsx around lines 208 - 211, The
onBlur handler in TSQLEditor reads editor.current?.textContent (DOM) which can
differ from the editor's document; instead, call into the editor view/state to
get the canonical document text (e.g., use the editor view/state on the editor
ref such as editor.current?.view.state.doc.toString() or
editor.current?.state.doc.toString()) and pass that string to the onBlur prop;
update the onBlur arrow function to guard for onBlur and editor.current then
extract the document via the editor's state rather than textContent.

Comment on lines 286 to 301
<span
onClick={(e) => {
e.stopPropagation();
e.preventDefault();
copy();
}}
className="absolute -right-2 top-1/2 z-10 hidden -translate-y-1/2 cursor-pointer group-hover/copyable-cell:flex"
>

⚠️ Potential issue | 🟠 Major

Use a button element instead of a span for the copy action.

The copy action uses a <span> with an onClick handler, which lacks semantic meaning and is not keyboard-accessible by default. Interactive elements should use appropriate semantic HTML elements.

Proposed fix
-          <span
+          <button
+            type="button"
             onClick={(e) => {
               e.stopPropagation();
               e.preventDefault();
               copy();
             }}
             className="absolute -right-2 top-1/2 z-10 hidden -translate-y-1/2 cursor-pointer group-hover/copyable-cell:flex"
           >
             <SimpleTooltip
               button={
                 <span
                   className={cn(
                     "flex size-6 items-center justify-center rounded border border-charcoal-650 bg-charcoal-750",
                     copied
                       ? "text-green-500"
                       : "text-text-dimmed hover:border-charcoal-600 hover:bg-charcoal-700 hover:text-text-bright"
                   )}
                 >
                   {copied ? (
                     <ClipboardCheckIcon className="size-3.5" />
                   ) : (
                     <ClipboardIcon className="size-3.5" />
                   )}
                 </span>
               }
               content={copied ? "Copied!" : "Copy"}
               disableHoverableContent
             />
-          </span>
+          </button>
🤖 Prompt for AI Agents
In @apps/webapp/app/components/primitives/Table.tsx around lines 286 - 293,
Replace the non-semantic <span> used for the copy action with a <button> element
(keep the same className including "absolute -right-2 top-1/2 z-10 hidden
-translate-y-1/2 cursor-pointer group-hover/copyable-cell:flex"), add
type="button", preserve the existing onClick handler logic (e.stopPropagation();
e.preventDefault(); copy();) and add an accessible label via aria-label (e.g.,
aria-label="Copy cell"). This ensures the interactive element is semantic and
keyboard-accessible while keeping the same styling and behavior.

Comment on lines +97 to +98
const byteSeconds = parseFloat(stats.byte_seconds);
const costInCents = byteSeconds * env.CENTS_PER_QUERY_BYTE_SECOND;

⚠️ Potential issue | 🟡 Minor

Guard against invalid byte_seconds value.

parseFloat will return NaN if stats.byte_seconds is empty or invalid, which would result in NaN being stored as costInCents. Consider adding validation.

Suggested fix
-    const byteSeconds = parseFloat(stats.byte_seconds);
-    const costInCents = byteSeconds * env.CENTS_PER_QUERY_BYTE_SECOND;
+    const byteSeconds = parseFloat(stats.byte_seconds) || 0;
+    const costInCents = byteSeconds * env.CENTS_PER_QUERY_BYTE_SECOND;
🤖 Prompt for AI Agents
In @apps/webapp/app/services/queryService.server.ts around lines 97 - 98, The
code parses stats.byte_seconds into byteSeconds and multiplies to compute
costInCents but does not guard against parseFloat returning NaN; update the
logic in queryService.server.ts around the byteSeconds/costInCents calculation
to validate the parsed value (e.g., use Number.isFinite or !isNaN on
byteSeconds), and if invalid set a safe default (such as 0) or handle the error
path (log/warn and skip cost computation) before multiplying with
env.CENTS_PER_QUERY_BYTE_SECOND so costInCents never becomes NaN.
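The stricter Number.isFinite variant from the prompt could be sketched as a small helper (name and signature are assumptions for illustration):

```typescript
// Treat any non-finite parse result as zero cost rather than storing NaN.
function computeCostInCents(byteSecondsRaw: string, centsPerByteSecond: number): number {
  const byteSeconds = parseFloat(byteSecondsRaw);
  if (!Number.isFinite(byteSeconds)) return 0;
  return byteSeconds * centsPerByteSecond;
}
```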

Comment on lines +1 to +4
import { CharStreams, CommonTokenStream } from "antlr4ts";
import { TSQLLexer } from "./TSQLLexer.js";
import { TSQLParser } from "./TSQLParser.js";


⚠️ Potential issue | 🔴 Critical

Missing vitest imports will cause runtime failures.

The test file uses describe, it, and expect starting at line 5, but these functions are not imported from vitest. This will cause the tests to fail at runtime.

🐛 Add missing vitest imports
+import { describe, it, expect } from "vitest";
 import { CharStreams, CommonTokenStream } from "antlr4ts";
 import { TSQLLexer } from "./TSQLLexer.js";
 import { TSQLParser } from "./TSQLParser.js";
🤖 Prompt for AI Agents
In @internal-packages/tsql/src/grammar/parser.test.ts around lines 1 - 4, The
test file TSQL parser uses the global test helpers but never imports them; add
an import for vitest test helpers (e.g., import { describe, it, expect } from
"vitest";) to internal-packages/tsql/src/grammar/parser.test.ts so calls to
describe, it, and expect used in the file resolve at runtime; place the import
alongside the other top-level imports near the top of the file before tests run.

Comment on lines +157 to +159
export interface DecimalType extends ConstantType {
data_type: "unknown";
}

⚠️ Potential issue | 🟡 Minor

DecimalType has incorrect data_type value.

DecimalType extends ConstantType with data_type: "unknown", but it should likely have its own distinct data type like "decimal". This appears to be a bug or incomplete implementation from the port.

🐛 Suggested fix
 export interface DecimalType extends ConstantType {
-  data_type: "unknown";
+  data_type: "decimal";
 }

Note: This requires adding "decimal" to the ConstantDataType union in constants.ts.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In @internal-packages/tsql/src/query/ast.ts around lines 157 - 159, DecimalType
incorrectly sets data_type to "unknown"; change DecimalType (which extends
ConstantType) to use data_type: "decimal" and add "decimal" to the
ConstantDataType union in constants.ts so the type is recognized project-wide;
update any related type guards or switch handling that pattern-matches
ConstantDataType (search for usages of DecimalType and ConstantDataType) to
handle the new "decimal" value.

Comment on lines +185 to +187
export interface IntervalType extends ConstantType {
data_type: "unknown";
}

⚠️ Potential issue | 🟡 Minor

IntervalType has incorrect data_type value.

Similar to DecimalType, IntervalType should have its own data type rather than "unknown".

🤖 Prompt for AI Agents
In @internal-packages/tsql/src/query/ast.ts around lines 185 - 187, IntervalType
currently sets data_type to the wrong literal ("unknown"); update the
IntervalType definition so its data_type uses a distinct literal (e.g.,
"interval") instead of "unknown" to mirror how DecimalType defines its own
data_type; modify the IntervalType interface (which extends ConstantType) to
declare data_type: "interval" (or the project's established interval literal) so
type discrimination works correctly across the AST.

Comment on lines +188 to +206
private visitDateTime(value: Date): string {
// Format: YYYY-MM-DD HH:MM:SS.ffffff
const pad = (n: number, len: number = 2) => String(n).padStart(len, "0");

const year = value.getUTCFullYear();
const month = pad(value.getUTCMonth() + 1);
const day = pad(value.getUTCDate());
const hours = pad(value.getUTCHours());
const minutes = pad(value.getUTCMinutes());
const seconds = pad(value.getUTCSeconds());
const ms = pad(value.getUTCMilliseconds(), 3);

const datetimeString = `${year}-${month}-${day} ${hours}:${minutes}:${seconds}.${ms}000`;

if (this.dialect === "tsql") {
return `toDateTime(${this.visitString(datetimeString)})`;
}
return `toDateTime64(${this.visitString(datetimeString)}, 6, ${this.visitString(this.timezone)})`;
}

⚠️ Potential issue | 🟡 Minor


TSQL date escaping does not use the timezone parameter despite accepting it.

The escapeTSQLString function accepts a timezone parameter that is passed to SQLValueEscaper, but in the visitDateTime method, timezone is only applied to ClickHouse dialect (line 205 via toDateTime64). For TSQL dialect (line 203), toDateTime is called with only the datetime string, ignoring the timezone. Since toDateTime supports up to 2 arguments, consider either passing the timezone as the second argument or clarifying why TSQL dialect intentionally omits it.

🤖 Prompt for AI Agents
In @internal-packages/tsql/src/query/escape.ts around lines 188 - 206, The TSQL
branch in SQLValueEscaper.visitDateTime builds a datetime string but ignores
this.timezone; update the this.dialect === "tsql" branch in visitDateTime to
include the timezone as the second argument (e.g. return
`toDateTime(${this.visitString(datetimeString)},
${this.visitString(this.timezone)})`) so it mirrors the ClickHouse branch's use
of visitString(this.timezone); ensure visitString is used for the timezone value
to maintain proper escaping/quoting.

import { TSQLParser } from "../grammar/TSQLParser.js";
import { TSQLParseTreeConverter } from "./parser.js";
import { ArithmeticOperationOp, CompareOperationOp } from "./ast.js";
import { SyntaxError } from "./errors.js";

⚠️ Potential issue | 🟡 Minor

Rename the imported SyntaxError to avoid shadowing the global.

The import shadows the global SyntaxError property, which can lead to confusion about the origin of the error type being referenced.

Proposed fix
-import { SyntaxError } from "./errors.js";
+import { SyntaxError as TSQLSyntaxError } from "./errors.js";

And update the usage on line 543:

-      if (error instanceof SyntaxError) {
+      if (error instanceof TSQLSyntaxError) {
         // Error should have position information if available
-        expect(error).toBeInstanceOf(SyntaxError);
+        expect(error).toBeInstanceOf(TSQLSyntaxError);
       }

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Biome (2.1.2)

[error] 7-7: Do not shadow the global "SyntaxError" property.

Consider renaming this variable. It's easy to confuse the origin of variables when they're named after a known global.

(lint/suspicious/noShadowRestrictedNames)

🤖 Prompt for AI Agents
In @internal-packages/tsql/src/query/parser.test.ts at line 7, The imported
local name `SyntaxError` shadows the global Error constructor; rename the import
(e.g., `import { SyntaxError as TsqlSyntaxError } from "./errors.js";`) and
update all references in this test file that intend the tsql module's error to
use the new name (replace usages of `SyntaxError` such as the test assertion
that expects the parser to throw the module error with `TsqlSyntaxError`).

@claude
Copy link

claude bot commented Jan 8, 2026

PR Review: TRQL and the Query Page

This is an impressive and substantial PR that introduces TRQL (TRigger Query Language) - a domain-specific query language for safely querying ClickHouse data. I've conducted a thorough review focusing on security, code quality, performance, and test coverage.


Overall Assessment: ✅ APPROVE (with minor suggestions)

The implementation demonstrates excellent security engineering with comprehensive SQL injection prevention and robust multi-tenant isolation. The codebase is well-structured with extensive test coverage.


Security Analysis

SQL Injection Prevention: Excellent

  • Parameterized queries throughout: All user-provided values are passed as parameters using ClickHouse's {paramName: Type} syntax, not inlined in SQL
  • ANTLR-based parser: Rejects malicious SQL at parse time, preventing stacked queries, comment injection, and quote escape attacks
  • Identifier validation: Rejects % characters in identifiers to prevent format string attacks
  • Comprehensive escaping: Special character handling for backslashes, quotes, null bytes, and Unicode

Tenant Isolation: Excellent

  • Automatic guard injection: Organization/project/environment filters are ALWAYS injected, even with user WHERE clauses
  • AND-based wrapping: User WHERE clauses are wrapped in AND with tenant guards, preventing OR-based bypass attempts (e.g., WHERE x OR 1=1)
  • All query types protected: SELECT, JOIN, UNION, and subqueries all receive tenant guards
  • Table allowlisting: Only schema-defined tables are accessible; system tables are rejected

AI Query Service: Good ⚠️

The AI-powered query generation has solid fundamentals but could benefit from:

  1. Rate limiting: No per-user/org rate limiting visible on the AI endpoint. Consider adding throttling to prevent abuse and cost overruns.

  2. Prompt injection mitigation: User prompts are embedded directly in AI context. Consider:

    • Adding input sanitization
    • Implementing prompt injection detection
    • Logging suspicious patterns for audit
  3. Cost tracking: While maxSteps: 5 limits iterations, consider adding token/cost limits per request.


Code Quality

Strengths 👍

  • Type safety: Excellent TypeScript usage with discriminated unions for error handling ([QueryError, null] | [null, TSQLQuerySuccess<T>])
  • Separation of concerns: Clean architecture with route handlers, services, presenters, and the TSQL package clearly separated
  • Error handling: Consistent error tuple pattern throughout with position tracking for parser errors
  • Schema design: Well-structured TableSchema type with virtual columns, field mappings, and where transforms

Minor Suggestions 📝

  1. Database migration: The CustomerQuery table has good indexes. Consider adding a partial index for high-value queries if query patterns emerge.

  2. N+1 query concern: In queryService.server.ts lines 73-81, ALL projects and environments are fetched for field mappings. For large organizations, consider lazy-loading only referenced entities.

  3. Error message exposure: Some raw error messages are returned to the frontend. Consider sanitizing messages for security-sensitive operations (e.g., query compilation errors could expose schema details).

  4. Query timeout: No explicit timeout is visible for ClickHouse queries. Consider adding a query timeout (e.g., ClickHouse's max_execution_time setting) to prevent long-running queries from consuming resources.


Test Coverage

Impressive Coverage 👏

  • ~6,700 lines of new test code across 14 test files
  • Security tests (security.test.ts): 601 lines covering cross-tenant access, SQL injection vectors, parameter safety
  • Parser tests: 629 lines testing ANTLR grammar and AST generation
  • Printer tests: 2,244 lines validating SQL generation
  • Integration tests (tsql.test.ts): 1,548 lines with real ClickHouse queries

Security Test Highlights ✅

The security test suite covers critical attack vectors:

  • Stacked statement rejection
  • Quote/backslash escape handling
  • Null byte injection
  • Comment injection (-- and /* */)
  • UNION injection prevention
  • Function allowlisting
  • Cross-tenant bypass attempts (OR 1=1)

Suggestion

Consider adding:

  • Load testing for the AI endpoint
  • Edge case tests for very large queries
  • Tests for concurrent query execution

Performance Considerations

Positives 👍

  • Query stats collection: Granular metrics (read_rows, read_bytes, elapsed_ns, byte_seconds) for monitoring
  • Streaming for large results: Prevents memory exhaustion
  • Connection pooling: Configurable max_open_connections and keep-alive settings
  • Query history: Limited to 20 queries with proper indexing

Areas to Watch 👀

  1. TSQL compilation overhead: Single-pass compilation is efficient, but consider caching compiled queries for high-frequency operations
  2. Chart configuration: Smart cache invalidation based on column schema changes is a nice touch
  3. ClickHouse settings: Consider exposing readonly=1 and max_rows_to_read settings for additional safety
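A sketch of point 3 as a small helper (the helper name and the string typing of the limit are assumptions; readonly and max_rows_to_read are real ClickHouse settings you would pass per-query via your client's settings option):

```typescript
// Build the per-query safety settings suggested above.
function querySafetySettings(maxRowsToRead: number) {
  return {
    readonly: 1, // reject any DDL/DML a hostile query might try to smuggle in
    max_rows_to_read: String(maxRowsToRead), // hard ceiling on rows scanned
  };
}
```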

Database Migration Review

The migration looks well-structured:

  • Appropriate foreign key relationships with CASCADE delete
  • Good index on (organizationId, createdAt DESC) for query history
  • costInCents field enables usage tracking

Summary

Category | Rating | Notes
SQL Injection Prevention | ⭐⭐⭐⭐⭐ | Parameterized queries, ANTLR parser, comprehensive escaping
Tenant Isolation | ⭐⭐⭐⭐⭐ | Automatic guards, bypass prevention, table allowlisting
Code Quality | ⭐⭐⭐⭐ | Well-structured, type-safe, minor improvements suggested
Test Coverage | ⭐⭐⭐⭐⭐ | Extensive security and integration tests
Performance | ⭐⭐⭐⭐ | Good fundamentals, query caching could help at scale
AI Service Security | ⭐⭐⭐ | Solid auth, needs rate limiting and prompt injection mitigation

Recommended Actions Before Merge

High Priority:

  • Add rate limiting to AI query endpoint
  • Add query execution timeouts

Medium Priority:

  • Sanitize error messages before exposing to frontend
  • Consider lazy-loading for field mappings in large orgs

Low Priority (can be follow-up PRs):

  • Add query result caching
  • Implement prompt injection detection
  • Document security model in code comments

This is excellent work that demonstrates strong security engineering. The TSQL package is a well-designed, secure abstraction over ClickHouse that provides robust tenant isolation. Great job on the comprehensive test coverage! 🎉

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
apps/webapp/app/components/code/TSQLResultsTable.tsx (1)

92-323: Consider memoizing CellValue for better performance with large result sets.

The CellValue component is rendered for every cell in the table and performs multiple conditional checks and transformations. Without memoization, this component will re-render unnecessarily when parent components update, which could impact performance with large query results.

⚡ Suggested optimization

Wrap the component with React.memo:

-function CellValue({
+const CellValue = React.memo(function CellValue({
   value,
   column,
   prettyFormatting = true,
 }: {
   value: unknown;
   column: OutputColumnMetadata;
   prettyFormatting?: boolean;
 }) {
   // ... existing implementation
-}
+});

This ensures the component only re-renders when value, column, or prettyFormatting actually change.

📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4f48b07 and 3767503.

📒 Files selected for processing (3)
  • apps/webapp/app/components/code/TSQLResultsTable.tsx
  • apps/webapp/app/components/primitives/Table.tsx
  • apps/webapp/app/components/primitives/Tooltip.tsx
🧰 Additional context used
📓 Path-based instructions (6)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.{ts,tsx}: Use types over interfaces for TypeScript
Avoid using enums; prefer string unions or const objects instead

Files:

  • apps/webapp/app/components/primitives/Tooltip.tsx
  • apps/webapp/app/components/primitives/Table.tsx
  • apps/webapp/app/components/code/TSQLResultsTable.tsx
{packages/core,apps/webapp}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use zod for validation in packages/core and apps/webapp

Files:

  • apps/webapp/app/components/primitives/Tooltip.tsx
  • apps/webapp/app/components/primitives/Table.tsx
  • apps/webapp/app/components/code/TSQLResultsTable.tsx
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use function declarations instead of default exports

Files:

  • apps/webapp/app/components/primitives/Tooltip.tsx
  • apps/webapp/app/components/primitives/Table.tsx
  • apps/webapp/app/components/code/TSQLResultsTable.tsx
apps/webapp/app/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)

Access all environment variables through the env export of env.server.ts instead of directly accessing process.env in the Trigger.dev webapp

Files:

  • apps/webapp/app/components/primitives/Tooltip.tsx
  • apps/webapp/app/components/primitives/Table.tsx
  • apps/webapp/app/components/code/TSQLResultsTable.tsx
apps/webapp/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)

apps/webapp/**/*.{ts,tsx}: When importing from @trigger.dev/core in the webapp, use subpath exports from the package.json instead of importing from the root path
Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp

Files:

  • apps/webapp/app/components/primitives/Tooltip.tsx
  • apps/webapp/app/components/primitives/Table.tsx
  • apps/webapp/app/components/code/TSQLResultsTable.tsx
**/*.{js,ts,jsx,tsx,json,md,css,scss}

📄 CodeRabbit inference engine (AGENTS.md)

Format code using Prettier

Files:

  • apps/webapp/app/components/primitives/Tooltip.tsx
  • apps/webapp/app/components/primitives/Table.tsx
  • apps/webapp/app/components/code/TSQLResultsTable.tsx
🧬 Code graph analysis (2)
apps/webapp/app/components/primitives/Tooltip.tsx (2)
packages/core/src/v3/schemas/style.ts (1)
  • Variant (8-8)
apps/webapp/app/utils/cn.ts (1)
  • cn (77-79)
apps/webapp/app/components/primitives/Table.tsx (3)
apps/webapp/app/components/primitives/Tooltip.tsx (2)
  • InfoIconTooltip (110-139)
  • SimpleTooltip (141-141)
apps/webapp/app/hooks/useCopy.ts (1)
  • useCopy (3-22)
apps/webapp/app/utils/cn.ts (1)
  • cn (77-79)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (24)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (2, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (4, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (7, 8)
  • GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - npm)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (8, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (3, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (3, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (2, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (8, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (5, 8)
  • GitHub Check: units / packages / 🧪 Unit Tests: Packages (1, 1)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (7, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (6, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (1, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (4, 8)
  • GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - pnpm)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (6, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (5, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (1, 8)
  • GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - pnpm)
  • GitHub Check: typecheck / typecheck
  • GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - npm)
  • GitHub Check: claude-review
  • GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (5)
apps/webapp/app/components/code/TSQLResultsTable.tsx (1)

364-423: Well-structured table component with comprehensive type support.

The component properly handles empty states, provides tooltips for column descriptions, and integrates the copyable cell functionality. The use of index as key for rows (Line 401) is acceptable for static query results.

apps/webapp/app/components/primitives/Tooltip.tsx (1)

110-139: Clean optimization to conditionally render tooltips.

The addition of the enabled prop with a default value of true maintains backward compatibility while allowing consumers to control when the tooltip is mounted. The refactoring to extract the icon improves readability.

apps/webapp/app/components/primitives/Table.tsx (3)

109-134: LGTM: Props spreading enables flexible tr attributes.

The enhancement to spread additional props to the underlying tr element is a good pattern that maintains type safety while allowing consumers to pass native HTML attributes (e.g., data-*, aria-*, event handlers) without breaking existing behavior.


160-197: LGTM: Hover-gated tooltip mounting improves performance.

The hover tracking optimization for InfoIconTooltip is well-implemented. By only enabling the tooltip on hover, you reduce the initial render cost for tables with many header tooltips while maintaining the same user experience.


291-343: The absolute positioning of the copy button is intentional and safe.

The CopyableTableCell's copy button positioning at -right-2 is a deliberate floating button design that works correctly within the table structure. The Table container uses overflow-x-auto to handle any horizontal overflow, the TableBody has overflow-y-auto (not overflow-hidden), and individual cells don't have overflow constraints that would clip the button. The z-10 z-index ensures visibility. Real-world usage in TSQLResultsTable and the storybook example both render without issues.

@claude
claude bot commented Jan 8, 2026

PR Review: TRQL and the Query Page

This is an impressive and substantial PR that introduces TRQL (Trigger Query Language), a custom SQL-like query language for safely querying ClickHouse data. The implementation is well-architected with strong security considerations. Here's my detailed review:

✅ Strengths

Security Architecture (Excellent)

  • Tenant isolation is implemented correctly with automatic injection of organization_id, project_id, and environment_id guards into every query
  • Parameterized queries prevent SQL injection - string values are never directly interpolated into SQL
  • Table and column allowlisting via schema definitions prevents access to unauthorized data
  • Function allowlisting in functions.ts restricts which ClickHouse functions users can call
  • The security test suite (security.test.ts) comprehensively tests SQL injection vectors, cross-tenant access, and bypass attempts

Code Quality

  • Clean separation of concerns: lexer/parser (ANTLR), AST types, validator, printer
  • Comprehensive type definitions throughout
  • Good use of the visitor pattern for AST traversal
  • Error handling with custom error classes (SyntaxError, QueryError, NotImplementedError)
  • Attribution and licensing for PostHog's HogQL in NOTICE.md

Test Coverage

  • 8 test files covering parser, printer, escape, schema, validator, results, and security
  • Integration tests with actual ClickHouse via testcontainers (tsql.test.ts)
  • The security tests are particularly thorough

Feature Set

  • Virtual columns with computed expressions
  • Value mapping for user-friendly display names
  • Field mappings for runtime value translation
  • Query statistics tracking and billing integration
  • AI-powered query generation with validation

🔍 Areas for Consideration

1. Potential Performance Concern - Field Mappings Query

In queryService.server.ts:73-85, the field mappings are built by querying all projects and environments for the organization on every query execution:

const projects = await prisma.project.findMany({
  where: { organizationId },
  select: { id: true, externalRef: true },
});

const environments = await prisma.runtimeEnvironment.findMany({
  where: { project: { organizationId } },
  select: { id: true, slug: true },
});

For organizations with many projects/environments, this could add latency. Consider:

  • Caching these mappings with a short TTL
  • Or lazily loading only when the relevant columns are selected

2. Database Migration Indexing

The CustomerQuery migration creates indexes, but consider adding a composite index if you plan to frequently query by userId + organizationId:

CREATE INDEX "CustomerQuery_organizationId_userId_idx" ON "public"."CustomerQuery" ("organizationId", "userId");

3. Error Message Exposure

In escape.ts, the error message includes the identifier value:

throw new QueryError(
  `The identifier "${identifier}" is not permitted as it contains the "%" character`
);

This is fine since it's for user-provided query syntax, but ensure these don't leak into client-facing error messages with sensitive internal identifiers.

4. AI Query Service - Model Hardcoding

In aiQueryService.server.ts:49, the model is hardcoded:

private readonly model: LanguageModelV1 = openai("gpt-4o-mini")

Consider making this configurable via environment variable for easier model swapping and cost management.
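A minimal sketch of that, assuming an invented QUERY_AI_MODEL variable name (in the webapp this should be read through the env export of env.server.ts, per the repo conventions):

```typescript
// QUERY_AI_MODEL is a hypothetical environment variable, not one that exists
// in the codebase today.
function resolveModelName(env: Record<string, string | undefined>): string {
  return env.QUERY_AI_MODEL ?? "gpt-4o-mini";
}

// Usage sketch, assuming the existing openai() factory:
//   private readonly model = openai(resolveModelName(env));
```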

5. Query History Cost Calculation

In queryService.server.ts:98:

const costInCents = byteSeconds * env.CENTS_PER_QUERY_BYTE_SECOND;

Verify that byteSeconds is parsed correctly from the stats; it is cast to a float, so edge cases (0, negative, and extremely large values) should be handled explicitly.
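A defensive version might look like the following sketch. The sanity cap is an invented example value, not a real limit from the codebase.

```typescript
// Arbitrary sanity cap for illustration; tune to real workloads.
const MAX_BYTE_SECONDS = 1e15;

function computeCostInCents(byteSeconds: number, centsPerByteSecond: number): number {
  // Non-finite or non-positive usage should never produce a charge.
  if (!Number.isFinite(byteSeconds) || byteSeconds <= 0) return 0;
  const clamped = Math.min(byteSeconds, MAX_BYTE_SECONDS);
  return clamped * centsPerByteSecond;
}
```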

6. Minor Type Safety

In several places, as any casts are used in the parser visitor:

ctx.catchBlock().map((c: CatchBlockContext) => this.visitCatchBlock(c))

These are largely unavoidable due to ANTLR4's type system, but worth documenting why they're necessary.

🔒 Security Checklist

Check                      Status
Tenant isolation enforced  ✅
SQL injection prevented    ✅
Table allowlisting         ✅
Column allowlisting        ✅
Function allowlisting      ✅
Parameterized queries      ✅
Query timeout/limits       ✅ (maxRows, max_execution_time)
Rate limiting              ⚠️ Consider for production

📁 Test Coverage Summary

  • grammar/parser.test.ts - Grammar parsing tests
  • query/parser.test.ts - AST conversion tests
  • query/printer.test.ts - SQL generation tests
  • query/escape.test.ts - String/identifier escaping
  • query/schema.test.ts - Schema validation
  • query/validator.test.ts - Query validation
  • query/results.test.ts - Result transformation
  • query/security.test.ts - Security scenarios
  • clickhouse/tsql.test.ts - Integration tests

The test coverage is comprehensive and appropriate for a security-critical feature.

📝 Minor Suggestions

  1. Consider adding JSDoc examples to the main compileTSQL function
  2. The tsqlCompletion.test.ts file exists in two locations (webapp and test folder) - consolidate if they're duplicates
  3. Add a rate limit or query complexity limit to prevent resource exhaustion

Summary

This is a well-engineered, security-conscious implementation of a custom query language. The tenant isolation and SQL injection protections are solid. The main considerations are around performance optimization for field mappings and ensuring proper rate limiting in production.

Recommendation: Approve with minor considerations 🚀


🤖 Generated with Claude Code

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
apps/webapp/app/components/code/TSQLResultsTable.tsx (2)

233-240: Consider extracting environment types to a const.

The hardcoded array of environment type strings could be extracted to a shared constant for better maintainability and consistency with other parts of the codebase.

♻️ Suggested refactor

At the top of the file or in a shared constants file:

const VALID_ENVIRONMENT_TYPES = ["PRODUCTION", "STAGING", "DEVELOPMENT", "PREVIEW"] as const;

Then update the check:

      case "environmentType": {
        if (
          typeof value === "string" &&
-          ["PRODUCTION", "STAGING", "DEVELOPMENT", "PREVIEW"].includes(value)
+          VALID_ENVIRONMENT_TYPES.includes(value as any)
        ) {
          return (
            <EnvironmentLabel
              environment={{ type: value as "PRODUCTION" | "STAGING" | "DEVELOPMENT" | "PREVIEW" }}
            />
          );
        }
        return <span>{String(value)}</span>;
      }

438-454: Consider using a unique identifier for React keys.

Line 439 uses the array index as the key for table rows. If the data has a unique identifier (e.g., a row ID or a combination of column values), using that would provide better React reconciliation behavior when rows are reordered or updated.

Example if rows have a unique field:

rows.map((row, i) => {
  const uniqueKey = row.id ?? row.runId ?? i; // Use unique field if available
  return (
    <TableRow key={uniqueKey}>
      {/* ... */}
    </TableRow>
  );
})
apps/webapp/app/components/primitives/DateTime.tsx (1)

159-161: Consider clarifying the timeZone prop usage in component signatures.

Several components accept DateTimeProps which includes a timeZone prop, but don't use it for formatting:

  • SmartDateTime (line 159): Doesn't destructure timeZone, always uses localTimeZone
  • DateTimeShort (line 295): Doesn't destructure timeZone, always uses localTimeZone
  • DateTimeAccurate (line 227): Accepts timeZone = "UTC" but uses it only for the tooltip, not for formatting the display

While this appears intentional (to always display times in the user's local timezone), the current API might confuse callers who pass a timeZone prop expecting it to affect the displayed time.

Consider these options for clarity

Option 1: Create separate prop types for components that don't use timeZone:

type SmartDateTimeProps = Omit<DateTimeProps, 'timeZone' | 'showTimezone' | 'showTooltip' | 'includeSeconds' | 'includeTime' | 'hideDate'>;

export const SmartDateTime = ({ date, previousDate = null, hour12 = true }: SmartDateTimeProps) => {
  // ...
};

Option 2: Add JSDoc comments clarifying that timeZone prop is ignored:

/**
 * Displays date/time in the user's local timezone.
 * Note: timeZone prop is not supported - always uses local timezone.
 */
export const SmartDateTime = ({ date, previousDate = null, hour12 = true }: DateTimeProps) => {
  // ...
};

Also applies to: 227-236, 295-297

📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3767503 and e4d86dd.

📒 Files selected for processing (2)
  • apps/webapp/app/components/code/TSQLResultsTable.tsx
  • apps/webapp/app/components/primitives/DateTime.tsx
🧰 Additional context used
📓 Path-based instructions (6)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.{ts,tsx}: Use types over interfaces for TypeScript
Avoid using enums; prefer string unions or const objects instead

Files:

  • apps/webapp/app/components/code/TSQLResultsTable.tsx
  • apps/webapp/app/components/primitives/DateTime.tsx
{packages/core,apps/webapp}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use zod for validation in packages/core and apps/webapp

Files:

  • apps/webapp/app/components/code/TSQLResultsTable.tsx
  • apps/webapp/app/components/primitives/DateTime.tsx
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use function declarations instead of default exports

Files:

  • apps/webapp/app/components/code/TSQLResultsTable.tsx
  • apps/webapp/app/components/primitives/DateTime.tsx
apps/webapp/app/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)

Access all environment variables through the env export of env.server.ts instead of directly accessing process.env in the Trigger.dev webapp

Files:

  • apps/webapp/app/components/code/TSQLResultsTable.tsx
  • apps/webapp/app/components/primitives/DateTime.tsx
apps/webapp/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)

apps/webapp/**/*.{ts,tsx}: When importing from @trigger.dev/core in the webapp, use subpath exports from the package.json instead of importing from the root path
Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp

Files:

  • apps/webapp/app/components/code/TSQLResultsTable.tsx
  • apps/webapp/app/components/primitives/DateTime.tsx
**/*.{js,ts,jsx,tsx,json,md,css,scss}

📄 CodeRabbit inference engine (AGENTS.md)

Format code using Prettier

Files:

  • apps/webapp/app/components/code/TSQLResultsTable.tsx
  • apps/webapp/app/components/primitives/DateTime.tsx
🧠 Learnings (1)
📓 Common learnings
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: Applies to internal-packages/database/**/*.{ts,tsx} : Use Prisma for database interactions in internal-packages/database with PostgreSQL
🧬 Code graph analysis (1)
apps/webapp/app/components/code/TSQLResultsTable.tsx (2)
internal-packages/tsql/src/query/schema.ts (2)
  • OutputColumnMetadata (225-240)
  • column (301-313)
apps/webapp/app/components/primitives/Table.tsx (4)
  • Table (51-69)
  • TableHeader (76-92)
  • TableHeaderCell (148-198)
  • TableCell (212-289)
🔇 Additional comments (9)
apps/webapp/app/components/code/TSQLResultsTable.tsx (6)

1-32: LGTM!

Imports are properly structured and the constant value is reasonable for truncating long strings in table cells.


37-88: LGTM!

Helper functions are well-implemented with proper handling of edge cases and comprehensive type guards for ClickHouse types.


94-154: LGTM!

The hover state optimization is a good performance pattern, and the plain text mode correctly handles truncation with tooltips.


267-361: LGTM!

Type-based rendering is comprehensive and correctly implements the hover optimization for DateTime tooltips as described in the commit messages.


363-383: LGTM!

Both components properly handle cases where the referenced data is not available in the current context with appropriate fallbacks.


257-263: Queue type determination logic is correct.

The pattern correctly identifies queue types. All task queues in the system use the "task/" prefix (as confirmed by SDK schemas and test fixtures), while custom queues don't have this prefix. The type assignment and name extraction via replace() is consistent with the codebase conventions where only two queue types exist: "task" and "custom".

apps/webapp/app/components/primitives/DateTime.tsx (3)

9-40: LGTM! SSR-safe timezone caching implementation.

The useLocalTimeZone hook correctly uses useSyncExternalStore to provide SSR compatibility. The module-level cache is safe here because the server snapshot always returns "UTC" and client-side each browser has its own runtime. The no-op subscription is appropriate since timezone doesn't change during the app's lifetime.


64-64: LGTM! Backward-compatible timezone handling.

The DateTime component correctly uses the new hook while maintaining backward compatibility by respecting the timeZone prop when provided (line 81: timeZone ?? localTimeZone).


190-225: LGTM! Consistent helper function signatures.

The formatting helper functions (formatSmartDateTime, formatTimeOnly, formatDateTimeAccurate, formatDateTimeShort) now consistently accept a timeZone parameter, making them more flexible and testable. The components properly pass the timezone value from the hook.

Also applies to: 274-293, 304-321

@claude
claude bot commented Jan 8, 2026

PR Review: TRQL and the Query Page

This is a substantial PR introducing a custom query language (TRQL) for safe user queries against ClickHouse. I've reviewed the key components across security, code quality, performance, and test coverage.


🔒 Security Assessment

Strengths:

  1. Tenant Isolation - The createTenantGuard method in printer.ts:1366-1424 properly injects organization/project/environment guards into every query. This is ANDed with user-provided WHERE clauses, preventing OR-based bypasses.

  2. Parameterized Queries - String values are passed through context.addValue() (printer.ts:1843), generating typed ClickHouse parameters like {tsql_xxx: String} rather than string interpolation.

  3. Function Allowlisting - Only known-safe functions from TSQL_CLICKHOUSE_FUNCTIONS and TSQL_AGGREGATIONS are permitted. Dangerous functions like file() are rejected (security.test.ts:326-328).

  4. Table Allowlisting - Unknown tables and system tables are rejected (security.test.ts:193-209).

  5. Stacked Query Prevention - The parser rejects semicolons that would start new statements (security.test.ts:344-356).

  6. Comprehensive Security Tests - security.test.ts covers cross-tenant access, SQL injection vectors, comment injection, identifier injection, and UNION-based attacks.

Potential Concerns:

  1. expression Field Security - Virtual columns defined in schema use raw SQL expressions (schema.ts:97). Ensure these expressions are only set by trusted code, never from user input. Consider adding validation or a comment warning developers.

  2. AI Service Prompt Injection - The AIQueryService (aiQueryService.server.ts) passes user prompts directly to the LLM. While the output is validated through validateQuery, there's a theoretical risk of prompt injection causing unexpected behavior. The 5-step limit (maxSteps: 5) helps mitigate runaway calls.


📝 Code Quality & Best Practices

Strengths:

  1. Clean Architecture - Good separation of concerns: parsing, AST, schema validation, and printing are distinct modules.

  2. TypeScript Types - Strong typing throughout with ClickHouseType, TableSchema, ColumnSchema etc.

  3. Port Attribution - Clear documentation that this is ported from PostHog's HogQL, with NOTICE.md acknowledging the origin.

  4. Value Mapping - The valueMap feature for translating user-friendly values (e.g., "Completed" → "COMPLETED_SUCCESSFULLY") is well-designed.

Suggestions:

  1. printer.ts Size - At 2291 lines, this file is quite large. Consider splitting into smaller modules (e.g., separate type inference, expression visitors).

  2. Error Messages - Some errors could be more user-friendly. For example, Unknown function: ${name} could suggest similar functions.

  3. extractQueryFromResponse - The regex-based query extraction in aiQueryService.server.ts:472-485 could be fragile. Consider more robust parsing if edge cases emerge.
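The second suggestion (friendlier unknown-function errors) could be sketched with a small edit-distance helper. The allowlist below is a stand-in for TSQL_CLICKHOUSE_FUNCTIONS, and the distance threshold of 2 is a judgment call.

```typescript
// Standard Levenshtein distance with a single-row DP buffer.
function editDistance(a: string, b: string): number {
  const dp: number[] = Array.from({ length: b.length + 1 }, (_, j) => j);
  for (let i = 1; i <= a.length; i++) {
    let prev = dp[0];
    dp[0] = i;
    for (let j = 1; j <= b.length; j++) {
      const tmp = dp[j];
      dp[j] = a[i - 1] === b[j - 1] ? prev : 1 + Math.min(prev, dp[j], dp[j - 1]);
      prev = tmp;
    }
  }
  return dp[b.length];
}

function unknownFunctionMessage(name: string, allowlist: string[]): string {
  let best: string | undefined;
  let bestDist = Infinity;
  for (const candidate of allowlist) {
    const d = editDistance(name.toLowerCase(), candidate.toLowerCase());
    if (d < bestDist) {
      bestDist = d;
      best = candidate;
    }
  }
  // Only suggest when the nearest name is plausibly a typo.
  return best && bestDist <= 2
    ? `Unknown function: ${name}. Did you mean ${best}?`
    : `Unknown function: ${name}`;
}
```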


⚡ Performance Considerations

Strengths:

  1. Query Stats - The system returns read_rows, elapsed_ns, and other metrics, enabling performance monitoring.

  2. Max Rows Limit - context.maxRows caps results, preventing unbounded result sets.

  3. Parameter Reuse - The PrinterContext caches parameter names, avoiding redundant allocations.

Suggestions:

  1. LIMIT Enforcement - The code caps LIMIT at maxRows (printer.ts:411-424), but only for top-level queries. Subqueries could still produce large intermediate result sets; consider whether this is intentional.

  2. Virtual Column Expressions - Complex expressions in virtual columns (e.g., dateDiff('millisecond', started_at, completed_at)) are expanded inline. For frequently-used expressions, ensure ClickHouse can optimize them.


🧪 Test Coverage

Strengths:

  1. Excellent Security Tests - security.test.ts (601 lines) comprehensively covers injection vectors and tenant isolation.

  2. Integration Tests - tsql.test.ts tests against a real ClickHouse container via testcontainers.

  3. Parser Tests - parser.test.ts, printer.test.ts cover AST generation and SQL output.

  4. Escape Tests - escape.test.ts validates character escaping for various edge cases.

  5. Validation Tests - validator.test.ts covers column validation and enum value checking.

Suggestions:

  1. AI Service Tests - The aiQuery.eval.ts appears to be an evaluation file. Consider adding unit tests that mock the LLM responses to ensure consistent behavior.

  2. Edge Cases in Printer - Consider adding tests for deeply nested subqueries and complex JOIN scenarios to ensure tenant guards are properly injected at all levels.


📋 Minor Issues

  1. Type Assertion Style - Throughout printer.ts, there are patterns like (node as Field).expression_type === "field". Consider using a type guard function for cleaner code.

  2. Console Warnings - Ensure no console.log statements remain in production code.

  3. Magic Numbers - maxSteps: 5 in AI service could be a named constant.
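The first point (type guards instead of casts) could look like this sketch; the AST shapes here are simplified stand-ins for the real printer.ts types.

```typescript
type Field = { expression_type: "field"; chain: string[] };
type Constant = { expression_type: "constant"; value: unknown };
type Expr = Field | Constant;

// User-defined type guard: callers get narrowing instead of `as Field` casts.
function isField(node: Expr): node is Field {
  return node.expression_type === "field";
}

function describe(node: Expr): string {
  // Inside each branch TypeScript has narrowed `node`, so no cast is needed.
  return isField(node)
    ? `field ${node.chain.join(".")}`
    : `constant ${String(node.value)}`;
}
```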


✅ Summary

This is a well-designed and security-conscious implementation of a custom query language. The tenant isolation and SQL injection protections are thorough. The main areas for improvement are:

  1. Code organization (splitting large files)
  2. Additional AI service testing
  3. Documentation warnings about the expression field security implications

Recommendation: This PR is ready for merge after addressing any critical security review comments. The security posture is solid for a query language exposed to users.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In @apps/webapp/app/components/code/QueryResultsChart.tsx:
- Around lines 164-181: The numeric-timestamp branch in tryParseDate treats
  numbers as milliseconds only, so valid Unix-seconds timestamps are rejected.
  Update the number handling to first attempt new Date(value) and validate the
  year range (1970-2100); if that fails, attempt new Date(value * 1000)
  (treating the number as seconds) and validate again, returning the corrected
  Date when valid and null otherwise. Use the same year bounds for both checks
  and preserve the function's signature and behavior for Date/string inputs.

In
@apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx:
- Around lines 2027-2033: The formatBytes function can produce NaN or an
  out-of-range unit index for negative or extremely large values. Validate the
  input first (treat negative byte counts as 0, or return a clear string such
  as "0 B"), extend the sizes array with larger units (e.g., "TB", "PB"), and
  compute the exponent safely: use safeBytes = Math.max(1, bytes) when taking
  the log to avoid NaN, then clamp the index with i = Math.min(i,
  sizes.length - 1) before formatting. Ensure that bytes === 0 still returns
  "0 B".
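The two fixes above could be sketched as follows. This is illustrative only: names and thresholds are assumptions, and the date parser uses a magnitude heuristic instead of the try-milliseconds-first order, since with a 1970 lower bound a seconds-scale value read as milliseconds lands in early 1970 and would wrongly pass the range check.

```typescript
// Numeric timestamps: values below ~1e12 read as milliseconds fall in early
// 1970, so treat them as Unix seconds instead.
function tryParseNumericDate(value: number): Date | null {
  if (!Number.isFinite(value)) return null;
  const millis = Math.abs(value) < 1e12 ? value * 1000 : value;
  const d = new Date(millis);
  if (Number.isNaN(d.getTime())) return null;
  const year = d.getUTCFullYear();
  return year >= 1970 && year <= 2100 ? d : null;
}

// Byte formatting with guarded input and a clamped unit index.
const BYTE_UNITS = ["B", "KB", "MB", "GB", "TB", "PB"] as const;

function formatBytes(bytes: number): string {
  if (!Number.isFinite(bytes) || bytes <= 0) return "0 B";
  const i = Math.min(
    Math.floor(Math.log(bytes) / Math.log(1024)),
    BYTE_UNITS.length - 1
  );
  const value = bytes / 1024 ** i;
  return `${value.toFixed(i === 0 ? 0 : 1)} ${BYTE_UNITS[i]}`;
}
```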
🧹 Nitpick comments (10)
apps/webapp/app/components/primitives/DateTime.tsx (1)

364-374: Simplify the memoization pattern.

The current implementation memoizes a function creator, but since the function is called immediately rather than passed as a prop, it would be clearer to memoize the computed value directly.

♻️ Proposed refactor to memoize the value directly
-  const getUtcOffset = useMemo(
-    () => () => {
-      if (title !== "Local") return "";
-      const offset = -new Date().getTimezoneOffset();
-      const sign = offset >= 0 ? "+" : "-";
-      const hours = Math.abs(Math.floor(offset / 60));
-      const minutes = Math.abs(offset % 60);
-      return `(UTC ${sign}${hours}${minutes ? `:${minutes.toString().padStart(2, "0")}` : ""})`;
-    },
-    [title]
-  );
+  const utcOffset = useMemo(() => {
+    if (title !== "Local") return "";
+    const offset = -new Date().getTimezoneOffset();
+    const sign = offset >= 0 ? "+" : "-";
+    const hours = Math.abs(Math.floor(offset / 60));
+    const minutes = Math.abs(offset % 60);
+    return `(UTC ${sign}${hours}${minutes ? `:${minutes.toString().padStart(2, "0")}` : ""})`;
+  }, [title]);

Then update the usage on line 381:

-        <span className="font-normal text-text-dimmed">{getUtcOffset()}</span>
+        <span className="font-normal text-text-dimmed">{utcOffset}</span>
apps/webapp/app/components/code/AIQueryInput.tsx (3)

59-59: Consider using pathBuilder utilities for consistency.

The resource path is constructed manually using template literals. According to the PR context, pathBuilder utilities are available that should be used for constructing resource paths to ensure consistency and ease maintenance.

♻️ Suggested refactor
// Import the path builder utility
import { queryAIGeneratePath } from "~/utils/pathBuilder";

// Then use it instead of manual construction:
const resourcePath = queryAIGeneratePath(organization, project, environment);

Note: Verify the exact function name and signature in apps/webapp/app/utils/pathBuilder.ts.


224-229: Consider shorter error timeout duration.

The error message auto-hides after 15 seconds, which is notably longer than the typical 3-5 second duration for UI error messages. While this might be intentional for AI-generated errors that users need more time to read, consider whether a shorter duration (e.g., 8-10 seconds) would provide a better UX.


234-236: Consider extracting inline gradient to a constant.

The gradient style is defined inline, which makes it harder to maintain and reuse if needed elsewhere. Consider extracting it to a CSS variable or a constant.

♻️ Suggested refactor
// At the top of the file or in a styles constant
const AI_INPUT_GRADIENT = "linear-gradient(to bottom right, #E543FF, #286399)";

// Then use it:
<div
  className="rounded-md p-px"
  style={{ background: AI_INPUT_GRADIENT }}
>

Or define it in your CSS/Tailwind configuration for better reusability.

apps/webapp/app/components/code/QueryResultsChart.tsx (2)

43-47: Unused columns prop.

The columns prop is declared in QueryResultsChartProps but is never used within the component. Consider removing it if not needed, or document the intended future use.

♻️ Suggested fix
 interface QueryResultsChartProps {
   rows: Record<string, unknown>[];
-  columns: OutputColumnMetadata[];
   config: ChartConfiguration;
 }

Note: Also update the call sites that pass columns if you remove it.


295-332: Null sorting order may be unexpected.

Lines 307-309 place null values first when sorting ascending, which differs from typical SQL behavior where NULLs sort last in ASC. This might surprise users familiar with SQL conventions.

♻️ Consider standard SQL null ordering
     // Handle null/undefined
     if (aVal == null && bVal == null) return 0;
-    if (aVal == null) return sortDirection === "asc" ? -1 : 1;
-    if (bVal == null) return sortDirection === "asc" ? 1 : -1;
+    if (aVal == null) return sortDirection === "asc" ? 1 : -1;  // nulls last in ASC
+    if (bVal == null) return sortDirection === "asc" ? -1 : 1;  // nulls first in DESC
apps/webapp/app/components/code/TSQLResultsTable.tsx (1)

438-439: Using array index as React key.

Using i as the key for TableRow is acceptable here since query results are static after rendering and rows aren't reordered or individually modified. However, if the table ever supports row operations, this could cause issues.

apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx (3)

505-528: Clipboard operations lack error handling.

navigator.clipboard.writeText returns a Promise that can reject (e.g., permission denied, insecure context). Consider adding try/catch with user feedback.

♻️ Add error handling for clipboard operations
-  const handleCopyCSV = () => {
+  const handleCopyCSV = async () => {
     const csv = rowsToCSV(rows, columns);
-    navigator.clipboard.writeText(csv);
+    try {
+      await navigator.clipboard.writeText(csv);
+    } catch (err) {
+      console.error("Failed to copy to clipboard:", err);
+      // Optionally show a toast notification
+    }
     setIsOpen(false);
   };

Apply similar changes to handleCopyJSON.


693-697: setTimeout workaround for re-triggering prompts.

The setTimeout(..., 0) pattern to reset and re-set autoSubmitPrompt ensures the same prompt can be triggered multiple times. This works but is slightly fragile—a unique key approach might be more robust.

♻️ Alternative: use a trigger counter
const [promptTrigger, setPromptTrigger] = useState<{ prompt: string; key: number } | undefined>();

// In onClick:
setPromptTrigger({ prompt: example, key: Date.now() });

// Pass to AIQueryInput with key prop or combined value

2065-2104: Unused variable in highlightSQL.

Line 2068 declares const suffix = ""; which is always empty and adds nothing to the output. This appears to be leftover code or incomplete implementation.

♻️ Remove dead code
 function highlightSQL(query: string): React.ReactNode[] {
   // Normalize whitespace for display (let CSS line-clamp handle truncation)
   const normalized = query.replace(/\s+/g, " ").slice(0, 200);
-  const suffix = "";
 
   // ... rest of function ...
 
-  if (suffix) {
-    parts.push(suffix);
-  }
-
   return parts;
 }
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e4d86dd and c05fc0d.

📒 Files selected for processing (5)
  • apps/webapp/app/components/code/AIQueryInput.tsx
  • apps/webapp/app/components/code/QueryResultsChart.tsx
  • apps/webapp/app/components/code/TSQLResultsTable.tsx
  • apps/webapp/app/components/primitives/DateTime.tsx
  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
🧰 Additional context used
📓 Path-based instructions (6)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.{ts,tsx}: Use types over interfaces for TypeScript
Avoid using enums; prefer string unions or const objects instead

Files:

  • apps/webapp/app/components/code/TSQLResultsTable.tsx
  • apps/webapp/app/components/code/QueryResultsChart.tsx
  • apps/webapp/app/components/code/AIQueryInput.tsx
  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
  • apps/webapp/app/components/primitives/DateTime.tsx
{packages/core,apps/webapp}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use zod for validation in packages/core and apps/webapp

Files:

  • apps/webapp/app/components/code/TSQLResultsTable.tsx
  • apps/webapp/app/components/code/QueryResultsChart.tsx
  • apps/webapp/app/components/code/AIQueryInput.tsx
  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
  • apps/webapp/app/components/primitives/DateTime.tsx
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use function declarations instead of default exports

Files:

  • apps/webapp/app/components/code/TSQLResultsTable.tsx
  • apps/webapp/app/components/code/QueryResultsChart.tsx
  • apps/webapp/app/components/code/AIQueryInput.tsx
  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
  • apps/webapp/app/components/primitives/DateTime.tsx
apps/webapp/app/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)

Access all environment variables through the env export of env.server.ts instead of directly accessing process.env in the Trigger.dev webapp

Files:

  • apps/webapp/app/components/code/TSQLResultsTable.tsx
  • apps/webapp/app/components/code/QueryResultsChart.tsx
  • apps/webapp/app/components/code/AIQueryInput.tsx
  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
  • apps/webapp/app/components/primitives/DateTime.tsx
apps/webapp/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)

apps/webapp/**/*.{ts,tsx}: When importing from @trigger.dev/core in the webapp, use subpath exports from the package.json instead of importing from the root path
Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp

Files:

  • apps/webapp/app/components/code/TSQLResultsTable.tsx
  • apps/webapp/app/components/code/QueryResultsChart.tsx
  • apps/webapp/app/components/code/AIQueryInput.tsx
  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
  • apps/webapp/app/components/primitives/DateTime.tsx
**/*.{js,ts,jsx,tsx,json,md,css,scss}

📄 CodeRabbit inference engine (AGENTS.md)

Format code using Prettier

Files:

  • apps/webapp/app/components/code/TSQLResultsTable.tsx
  • apps/webapp/app/components/code/QueryResultsChart.tsx
  • apps/webapp/app/components/code/AIQueryInput.tsx
  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
  • apps/webapp/app/components/primitives/DateTime.tsx
🧠 Learnings (3)
📓 Common learnings
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: Applies to internal-packages/database/**/*.{ts,tsx} : Use Prisma for database interactions in internal-packages/database with PostgreSQL
📚 Learning: 2025-12-08T15:19:56.823Z
Learnt from: 0ski
Repo: triggerdotdev/trigger.dev PR: 2760
File: apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.runs.$runParam/route.tsx:278-281
Timestamp: 2025-12-08T15:19:56.823Z
Learning: In apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.runs.$runParam/route.tsx, the tableState search parameter uses intentional double-encoding: the parameter value contains a URL-encoded URLSearchParams string, so decodeURIComponent(value("tableState") ?? "") is required to fully decode it before parsing with new URLSearchParams(). This pattern allows bundling multiple filter/pagination params as a single search parameter.

Applied to files:

  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
📚 Learning: 2025-11-27T16:26:58.661Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/webapp.mdc:0-0
Timestamp: 2025-11-27T16:26:58.661Z
Learning: Applies to apps/webapp/**/*.{ts,tsx} : Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp

Applied to files:

  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
🧬 Code graph analysis (4)
apps/webapp/app/components/code/TSQLResultsTable.tsx (6)
internal-packages/tsql/src/query/schema.ts (2)
  • OutputColumnMetadata (225-240)
  • column (301-313)
internal-packages/tsql/src/index.ts (2)
  • OutputColumnMetadata (73-73)
  • column (83-83)
apps/webapp/app/utils/pathBuilder.ts (2)
  • v3RunPathFromFriendlyId (314-316)
  • v3ProjectPath (144-146)
apps/webapp/app/components/runs/v3/TaskRunStatus.tsx (4)
  • isTaskRunStatus (275-277)
  • isRunFriendlyStatus (282-284)
  • runStatusFromFriendlyTitle (243-249)
  • descriptionForTaskRunStatus (92-94)
apps/webapp/app/components/primitives/DateTime.tsx (1)
  • DateTimeAccurate (300-300)
apps/webapp/app/components/primitives/Table.tsx (3)
  • Table (51-69)
  • TableCell (212-289)
  • CopyableTableCell (295-343)
apps/webapp/app/components/code/QueryResultsChart.tsx (3)
apps/webapp/app/components/code/ChartConfigPanel.tsx (1)
  • ChartConfiguration (14-22)
apps/webapp/app/components/primitives/Chart.tsx (1)
  • ChartConfig (10-18)
apps/webapp/app/components/primitives/Paragraph.tsx (1)
  • Paragraph (88-107)
apps/webapp/app/components/code/AIQueryInput.tsx (5)
apps/webapp/app/hooks/useProject.tsx (1)
  • useProject (20-24)
apps/webapp/app/hooks/useEnvironment.tsx (1)
  • useEnvironment (19-23)
apps/webapp/app/components/primitives/Buttons.tsx (1)
  • Button (296-329)
apps/webapp/app/components/primitives/Spinner.tsx (1)
  • Spinner (8-62)
apps/webapp/app/utils/cn.ts (1)
  • cn (77-79)
apps/webapp/app/components/primitives/DateTime.tsx (1)
apps/webapp/app/components/primitives/Tooltip.tsx (2)
  • SimpleTooltip (141-141)
  • TooltipContent (141-141)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (24)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (1, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (2, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (5, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (7, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (4, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (8, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (5, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (8, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (3, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (1, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (3, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (7, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (6, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (6, 8)
  • GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - pnpm)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (4, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (2, 8)
  • GitHub Check: typecheck / typecheck
  • GitHub Check: units / packages / 🧪 Unit Tests: Packages (1, 1)
  • GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - pnpm)
  • GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - npm)
  • GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - npm)
  • GitHub Check: claude-review
  • GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (31)
apps/webapp/app/components/primitives/DateTime.tsx (2)

9-40: Excellent SSR-compatible timezone implementation.

The approach using useSyncExternalStore is the correct pattern for SSR-safe timezone handling. The caching mechanism appropriately optimizes repeated timezone resolution, and the hook is properly exported for reuse across components.


280-300: Well-implemented memoization optimization.

The custom comparison function areDateTimePropsEqual correctly compares Date objects by their timestamp values rather than reference equality, preventing unnecessary re-renders when parent components pass new Date instances with identical values. This optimization is particularly valuable for components rendered in lists or tables.

apps/webapp/app/components/code/AIQueryInput.tsx (5)

1-21: LGTM! Well-structured imports and lazy loading.

The lazy loading of StreamdownRenderer with SSR prevention is well implemented, and the theme context wrapper is properly configured.


71-165: LGTM! Robust streaming implementation.

The submitQuery callback correctly handles:

  • Abort controller lifecycle
  • Streaming response parsing with buffering
  • Error cases including abort scenarios
  • Proper state management

The dependency array is correct since mode is used as a default parameter value.


240-259: LGTM! Well-implemented textarea with proper keyboard handling.

The textarea correctly handles Enter key for submission (without Shift) while allowing Shift+Enter for new lines, and includes proper disabled state management.


260-304: LGTM! Proper button state management.

The loading states, mode switching, and disabled conditions are well implemented. The tooltip for the disabled edit button is particularly helpful for user understanding.


327-398: LGTM! Excellent thinking panel with clear status indicators.

The thinking panel effectively shows:

  • Loading state with spinner
  • Success/error states with colored indicators
  • Appropriate action buttons (Cancel/Dismiss)
  • Streamed content with proper fallback

The use of Suspense for the StreamdownRenderer is well-handled.

apps/webapp/app/components/code/QueryResultsChart.tsx (8)

1-24: LGTM on imports and setup.

Clean organization with proper type imports and component imports from the primitives library.


25-41: Good color palette design.

The centralized color palette with modular access via getSeriesColor is clean and maintainable.


64-88: Good time granularity detection logic.

The range-based granularity detection is well-structured with appropriate thresholds. Edge case of fewer than 2 dates returning "days" as default is reasonable.


193-241: Solid data transformation logic for non-grouped mode.

The transformation correctly handles date formatting, preserves raw dates for tooltips, and maps Y-axis columns to numeric values.


244-281: Good pivoting logic for grouped data.

The grouping implementation correctly sums values for duplicate x+group combinations and handles the series extraction well.


334-381: Well-structured memoization.

The use of useMemo for data transformation, sorting, time granularity, and chart config is appropriate for performance optimization.


444-511: Clean chart rendering implementation.

The conditional rendering for bar/area/line charts with proper stacking configuration is well-implemented. Good use of the memo HOC for the component.


517-565: Smart Y-axis formatter with range-based precision.

The dynamic precision based on data range ensures readable axis labels across various data scales.

apps/webapp/app/components/code/TSQLResultsTable.tsx (7)

1-30: Appropriate imports and dependencies.

Good use of subpath exports from @trigger.dev/core/v3 as per coding guidelines.


32-53: Clean utility functions.

truncateString and valueToString are well-implemented with proper null handling.


55-88: Type detection covers common ClickHouse types.

The type checking functions handle both base types and their Nullable variants appropriately.


94-119: Hover-based tooltip optimization.

Using per-cell hover state to conditionally enable tooltips is a reasonable performance tradeoff. For very large tables (1000+ rows), this creates many state instances, but for typical query result sizes this should be fine.


124-264: Comprehensive custom render type handling.

The switch statement covers a good variety of render types (runId, runStatus, duration, cost, machine, environment, etc.). Each case properly falls back to string rendering when the value type doesn't match.


363-383: Good use of context hooks for linked data.

ProjectCellValue and EnvironmentCellValue correctly use organization/project context to resolve display names and links, with proper fallbacks for unmatched values.


402-459: Well-structured table component.

The memoized component with proper column alignment, empty state handling, and copyable cells provides a good user experience.

apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx (9)

1-76: Comprehensive imports for feature-rich page.

Imports are well-organized. Note: Direct Prisma import on line 66 is acceptable per the learnings for database interactions.


77-108: Feature flag access control is well-implemented.

The hasQueryAccess function properly checks admin status first, then falls back to organization feature flags with proper validation.


116-150: Loader with proper authentication and authorization.

Good use of requireUser, proper 404 responses for missing resources, and redirect for unauthorized access.


152-242: Action with comprehensive error handling.

Uses zod for input validation (per coding guidelines), proper status codes (400, 403, 404, 500), and structured error responses.


251-328: QueryEditorForm isolation pattern.

Good use of forwardRef with useImperativeHandle to expose controlled methods while keeping internal state isolated. This prevents parent re-renders from affecting the editor.


330-494: Well-structured Page component.

Good separation of concerns with the resizable panel layout, isolated editor form, and conditional rendering for results/charts.


739-1871: Comprehensive TRQL guide content.

The guide provides extensive documentation for users including table of contents, code examples with "Try it" buttons, and comprehensive function references with tooltips. Well-structured for discoverability.


1905-1987: Good schema and column help rendering.

TableSchemaContent and ColumnHelpItem provide useful schema documentation with copyable column names, types, examples, and allowed values.


2106-2161: Query history popover is well-implemented.

Good UX with highlighted SQL, datetime display, user attribution, and scope information.

Comment on lines +2027 to +2033
function formatBytes(bytes: number): string {
if (bytes === 0) return "0 B";
const k = 1024;
const sizes = ["B", "KB", "MB", "GB"];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return `${parseFloat((bytes / Math.pow(k, i)).toFixed(1))} ${sizes[i]}`;
}

⚠️ Potential issue | 🟡 Minor

formatBytes may produce incorrect output for edge cases.

If bytes is negative or extremely large (exceeding GB), Math.log(bytes) returns NaN or an index exceeding the sizes array. Consider adding guards.

🔧 Add guards for edge cases
 function formatBytes(bytes: number): string {
   if (bytes === 0) return "0 B";
+  if (bytes < 0) return "-" + formatBytes(-bytes);
   const k = 1024;
   const sizes = ["B", "KB", "MB", "GB"];
-  const i = Math.floor(Math.log(bytes) / Math.log(k));
+  const i = Math.min(Math.floor(Math.log(bytes) / Math.log(k)), sizes.length - 1);
   return `${parseFloat((bytes / Math.pow(k, i)).toFixed(1))} ${sizes[i]}`;
 }

@claude

claude bot commented Jan 8, 2026

PR Review: TRQL and the Query Page

This is a substantial PR (~42k additions) that introduces TRQL (TRigger Query Language), a SQL-like query interface for ClickHouse with tenant isolation. Overall, this is well-architected with strong security considerations, though I have some suggestions for improvement.


Security Assessment ✅

Strengths:

  1. Excellent tenant isolation - The printer.ts correctly injects organization/project/environment guards into every query automatically via createTenantGuard(). The guards are ANDed with user conditions, preventing bypass via OR 1=1 attacks.

  2. Comprehensive SQL injection prevention:

    • String values are parameterized (context.addValue()) rather than interpolated
    • Identifiers are escaped via escapeClickHouseIdentifier() with proper backtick escaping
    • The parser rejects stacked statements (semicolons for multiple queries)
    • Function allowlisting via TSQL_CLICKHOUSE_FUNCTIONS and TSQL_AGGREGATIONS
  3. Schema-based validation - Only allowlisted tables/columns can be queried. Attempts to access system.tables or unknown tables throw QueryError.

  4. Parameterized queries - All user-provided string values are sent as typed ClickHouse parameters ({tsql_xxx: Type}) rather than string interpolation.

  5. Thorough security test suite - security.test.ts covers cross-tenant access, SQL injection vectors, UNION injection, and subquery guards.

Minor Concerns:

  • The % character rejection in identifiers (escape.ts:68-71) is good, but consider documenting why this is necessary (likely related to ClickHouse LIKE pattern injection)
  • Consider adding rate limiting for the query endpoint to prevent DoS via expensive queries
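For readers unfamiliar with the escaping strategy referenced above, it amounts to roughly the following. This is a hypothetical sketch; the real escapeClickHouseIdentifier in escape.ts may differ in details.

```typescript
// Illustrative only: reject %, double embedded backticks, wrap in backticks.
function escapeClickHouseIdentifier(identifier: string): string {
  // Reject % outright rather than trying to escape it (see the concern above
  // about ClickHouse LIKE pattern injection).
  if (identifier.includes("%")) {
    throw new Error(`Unsafe character in identifier: ${identifier}`);
  }
  // Double any embedded backticks, then wrap the whole identifier in backticks.
  return "`" + identifier.replace(/`/g, "``") + "`";
}
```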

Code Quality 👍

Well Done:

  1. Clean separation of concerns:

    • @internal/tsql - Parser, AST, validation (reusable)
    • @internal/clickhouse - Query execution layer
    • apps/webapp - UI components and routes
  2. TypeScript throughout with good type definitions (ClickHouseType, OutputColumnMetadata, etc.)

  3. Attribution - Proper NOTICE.md for HogQL derivation

  4. Good error handling - Custom error types (QueryError, SyntaxError, NotImplementedError)

Suggestions:

  1. printer.ts:2290 - The file is quite large (~2300 lines). Consider splitting visitor methods into separate files by category (e.g., visitors/select.ts, visitors/expressions.ts).

  2. route.tsx - The query page route is also very large (~2160 lines). Consider extracting:

    • TRQLGuideContent and function categories to a separate help module
    • Chart config and results components to their own files
  3. Type assertion pattern - The AST visitor uses many type assertions like (col as Field).expression_type === "field". Consider using discriminated union type guards for cleaner code:

    function isField(expr: Expression): expr is Field {
      return expr.expression_type === "field";
    }

Potential Issues 🔍

  1. aiQueryService.server.ts:47-49 - The default model is gpt-4o-mini. Ensure this is intentional and consider making it configurable via environment variable for flexibility.

  2. queryService.server.ts:98 - Cost calculation uses env.CENTS_PER_QUERY_BYTE_SECOND but I don't see this value defined in the PR. Ensure this env var exists or has a sensible default.

  3. Query history storage - The CustomerQuery table stores full query text. For very long queries, consider truncation or a separate storage mechanism.

  4. printer.ts:411-425 - The maxRows limit is applied by modifying a Constant node's value in place (constLimit.value = Math.min()). This mutates the AST, which could cause issues if the AST is reused. Consider creating a new node instead.
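
The fix for point 4 could look roughly like this (the Constant shape here is assumed, not the actual TSQL AST type):

```typescript
// Illustrative node shape; the real AST type lives in @internal/tsql.
type Constant = { expression_type: "constant"; value: number };

// Instead of mutating in place (constLimit.value = Math.min(...)),
// return a fresh node so the caller's AST stays reusable.
function clampLimit(constLimit: Constant, maxRows: number): Constant {
  return { ...constLimit, value: Math.min(constLimit.value, maxRows) };
}
```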


Performance Considerations ⚡

  1. Field mappings in queryService.server.ts:73-86 - Two database queries run on every query execution to build field mappings. Consider caching these per-organization with a short TTL.

  2. Parser initialization - ANTLR parser/lexer creation on every query. Consider object pooling if query volume is high.

  3. SELECT * expansion (printer.ts:590-709) - Correctly expands asterisks to explicit column lists, which is good for ClickHouse performance.
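
The caching suggestion in point 1 could be sketched as a small per-key TTL cache. Names here are placeholders rather than the actual queryService internals, and the loader is shown synchronously for brevity (the real field-mapping load would be async).

```typescript
// Minimal TTL cache: returns a cached value if it hasn't expired, otherwise
// invokes the loader and stores the result with a fresh expiry.
function createTtlCache<T>(ttlMs: number, now: () => number = Date.now) {
  const entries = new Map<string, { value: T; expiresAt: number }>();
  return {
    get(key: string, load: () => T): T {
      const hit = entries.get(key);
      if (hit && hit.expiresAt > now()) return hit.value;
      const value = load();
      entries.set(key, { value, expiresAt: now() + ttlMs });
      return value;
    },
  };
}
```

Keyed by organization id with a short TTL (e.g. 30–60s), this would collapse the two per-query database lookups into an occasional refresh.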


Test Coverage ✅

Excellent coverage:

  • security.test.ts - 600+ lines of security scenarios
  • escape.test.ts - Input sanitization tests
  • printer.test.ts - SQL generation tests
  • tsql.test.ts - Integration tests with actual ClickHouse
  • aiQuery.eval.ts - AI query generation evaluation

Suggestion:

  • Add negative tests for the AI query service (malicious prompts attempting to generate dangerous queries)

Overall Assessment

This is a high-quality PR with thoughtful security design. The TSQL approach (parsing to AST, validating against schema, then printing with guards) is a robust pattern for safe user-facing query interfaces.

Recommendation: Approve with minor suggestions

The security model is sound, tenant isolation is properly enforced, and the code is well-structured. The main suggestions are around code organization (splitting large files) and adding some defensive measures (rate limiting, env var validation).

Great work on this significant feature! 🎉


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @apps/webapp/app/components/code/QueryResultsChart.tsx:
- Around line 240-290: The internal keys __rawDate and __granularity may collide
with user columns; update the code that builds and reads points (the data
mapping block and the groupedByX -> data conversion) to store internal metadata
under a namespaced nested property (e.g., a single internal object like __meta
or similar) instead of top-level keys; change all uses that set or read
__rawDate and __granularity (locations around the formatX/tryParseDate mapping
and the groupedByX map creation/consumption) to use this nested property so
external column names cannot clash, and ensure any tooltip/consumer code that
currently reads __rawDate or __granularity is updated to read the new nested
path.
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c05fc0d and 5c67061.

📒 Files selected for processing (1)
  • apps/webapp/app/components/code/QueryResultsChart.tsx
🔇 Additional comments (7)
apps/webapp/app/components/code/QueryResultsChart.tsx (7)

1-41: LGTM!

The imports and color palette are well-structured. The modulo-based color assignment ensures consistent colors across all series counts.


43-59: LGTM!

Type definitions are clear and follow TypeScript best practices.


164-193: Pragmatic date parsing with heuristic year range.

The dual-attempt approach (milliseconds first, then seconds) with year validation (1970-2100) is a practical solution for handling different timestamp formats. This aligns with the commit message about improving date parsing reliability.
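For illustration, a hedged sketch of such a parser. Note this version uses a magnitude cutoff to distinguish seconds from milliseconds rather than two sequential attempts, and the name `tryParseEpoch` is illustrative — the PR's actual `tryParseDate` may differ:

```typescript
function tryParseEpoch(value: unknown): Date | null {
  const n = typeof value === "number" ? value : Number(value);
  if (!Number.isFinite(n)) return null;
  // Heuristic: epoch seconds are ~1e9 today, epoch millis ~1e12, so treat
  // anything below 1e12 as seconds. This is one possible approach, not
  // necessarily the PR's exact logic.
  const millis = Math.abs(n) < 1e12 ? n * 1000 : n;
  const d = new Date(millis);
  const year = d.getUTCFullYear();
  // Reject implausible results (mirrors the 1970-2100 validation above).
  return year >= 1970 && year <= 2100 ? d : null;
}
```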


432-433: Verify X-axis angle for "minutes" granularity.

The X-axis labels are angled at -45 degrees only for "hours" and "seconds" granularity, but not for "minutes". This might be intentional based on label length, but verify that "minutes" labels don't cause overlap or readability issues at 0 degrees.


346-524: LGTM!

The component is well-structured with appropriate memoization, validation, and conditional rendering logic. The chart type selection and configuration are correctly implemented.


529-577: LGTM!

The dynamic Y-axis formatter intelligently handles different value ranges with appropriate precision and abbreviations.


579-587: LGTM!

The EmptyState component is clean and follows established UI patterns.

Comment on lines 240 to 290
    const data = rows.map((row) => {
      const point: Record<string, unknown> = {
        [xAxisColumn]: formatX(row[xAxisColumn]),
        // Store raw date for tooltip
        __rawDate: tryParseDate(row[xAxisColumn]),
        __granularity: granularity,
      };
      for (const yCol of yAxisColumns) {
        point[yCol] = toNumber(row[yCol]);
      }
      return point;
    });

    return { data, series: yAxisColumns, dateValues };
  }

  // With grouping: pivot data so each group value becomes a series
  const yCol = yAxisColumns[0]; // Use first Y column when grouping
  const groupValues = new Set<string>();
  const groupedByX = new Map<string, { values: Record<string, number>; rawDate: Date | null }>();

  for (const row of rows) {
    const xValue = formatX(row[xAxisColumn]);
    const rawDate = tryParseDate(row[xAxisColumn]);
    const groupValue = String(row[groupByColumn] ?? "Unknown");
    const yValue = toNumber(row[yCol]);

    groupValues.add(groupValue);

    if (!groupedByX.has(xValue)) {
      groupedByX.set(xValue, { values: {}, rawDate });
    }

    const existing = groupedByX.get(xValue)!;
    // Sum values if there are multiple rows with same x + group
    existing.values[groupValue] = (existing.values[groupValue] ?? 0) + yValue;
  }

  // Convert to array format
  const series = Array.from(groupValues).sort();
  const data = Array.from(groupedByX.entries()).map(([xValue, { values, rawDate }]) => {
    const point: Record<string, unknown> = {
      [xAxisColumn]: xValue,
      __rawDate: rawDate,
      __granularity: granularity,
    };
    for (const group of series) {
      point[group] = values[group] ?? 0;
    }
    return point;
  });

⚠️ Potential issue | 🟡 Minor

Consider namespace collision with special keys.

The use of __rawDate and __granularity as internal keys could theoretically conflict if the source data contains columns with these names. While unlikely, consider using Symbol keys or a nested object structure to guarantee isolation.

♻️ Alternative approach using Symbols
+const RAW_DATE_KEY = Symbol('rawDate');
+const GRANULARITY_KEY = Symbol('granularity');
+
 // Then use these symbols instead of string keys:
-        __rawDate: tryParseDate(row[xAxisColumn]),
-        __granularity: granularity,
+        [RAW_DATE_KEY]: tryParseDate(row[xAxisColumn]),
+        [GRANULARITY_KEY]: granularity,

Note: This would require updating all references to these keys throughout the file.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In @apps/webapp/app/components/code/QueryResultsChart.tsx around lines 240 -
290, The internal keys __rawDate and __granularity may collide with user
columns; update the code that builds and reads points (the data mapping block
and the groupedByX -> data conversion) to store internal metadata under a
namespaced nested property (e.g., a single internal object like __meta or
similar) instead of top-level keys; change all uses that set or read __rawDate
and __granularity (locations around the formatX/tryParseDate mapping and the
groupedByX map creation/consumption) to use this nested property so external
column names cannot clash, and ensure any tooltip/consumer code that currently
reads __rawDate or __granularity is updated to read the new nested path.
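A hedged sketch of the nested-metadata approach the prompt describes. The `__meta` key, `ChartPoint` type, and `buildPoint` helper are illustrative names, not the PR's actual code:

```typescript
type ChartPoint = Record<string, unknown> & {
  __meta: { rawDate: Date | null; granularity: string };
};

function buildPoint(
  row: Record<string, unknown>,
  xAxisColumn: string,
  rawDate: Date | null,
  granularity: string
): ChartPoint {
  return {
    [xAxisColumn]: row[xAxisColumn],
    // All internal metadata lives under one key, so user columns named
    // __rawDate or __granularity can no longer shadow it.
    __meta: { rawDate, granularity },
  };
}
```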

@claude

claude bot commented Jan 8, 2026

PR Review: TRQL and the Query Page

This is a substantial PR introducing TRQL (TRiggerQueryLanguage) - a domain-specific query language for safely querying ClickHouse with tenant isolation. Overall, this is a well-architected feature with strong security fundamentals.

🌟 Strengths

1. Security Architecture (Excellent)

  • Tenant Isolation: The multi-level tenant guards (organization_id, project_id, environment_id) are automatically injected and cannot be bypassed - even with OR clauses (security.test.ts:130-151)
  • SQL Injection Prevention: Comprehensive parameterized queries using typed parameters ({param: String}), proper escaping in escape.ts, and stacked query prevention
  • Table Allowlisting: Only explicitly defined tables in the schema can be queried (security.test.ts:192-210)
  • Function Allowlisting: System functions like file() are blocked (security.test.ts:323-341)
  • The security test suite is thorough and covers key attack vectors

2. Test Coverage (Strong)

  • security.test.ts - 600+ lines of security tests
  • tsql.test.ts - 1500+ lines of integration tests with ClickHouse testcontainers
  • Parser, printer, escape, schema, validator, and results all have dedicated test files
  • AI query service has evaluation tests

3. Code Architecture

  • Clean separation: Parser → AST → Printer → ClickHouse SQL
  • Virtual columns with computed expressions are well designed
  • The field-mapping approach for value translation (internal → user-facing) is elegant
  • The compileTSQL API is intuitive

⚠️ Areas for Improvement

1. Error Handling in AI Service

// aiQueryService.server.ts:182-186
if (text.toLowerCase().includes("cannot") || text.toLowerCase().includes("unable")) {
  return { success: false, error: text.slice(0, 200) };
}

This string matching is fragile. Consider using structured error responses from the AI or implementing more robust error detection.

2. Query Cost Calculation

// queryService.server.ts:98
const costInCents = byteSeconds * env.CENTS_PER_QUERY_BYTE_SECOND;

The cost calculation relies on byte_seconds from query stats - ensure this is always available and valid. Consider adding validation/fallback.
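A sketch of a guarded version: fall back to zero and flag the result when the stat is missing or invalid, instead of propagating NaN into billing. The stats shape and parameter names are assumptions based on the snippet above (the real rate comes from env.server.ts):

```typescript
function computeCostInCents(
  stats: { byte_seconds?: number | null },
  centsPerByteSecond: number
): { costInCents: number; estimated: boolean } {
  const byteSeconds = stats.byte_seconds;
  if (typeof byteSeconds !== "number" || !Number.isFinite(byteSeconds) || byteSeconds < 0) {
    // Missing/invalid stat: charge nothing but flag the row for follow-up.
    return { costInCents: 0, estimated: true };
  }
  return { costInCents: byteSeconds * centsPerByteSecond, estimated: false };
}
```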

3. Rate Limiting Consideration
The query endpoint doesn't appear to have rate limiting beyond the feature flag access control. For an analytics query endpoint, consider adding:

  • Per-user query rate limits
  • Maximum query complexity limits (already have maxRows in settings)
  • Query timeout handling (already implemented via max_execution_time)

4. Potential Performance Issue

// queryService.server.ts:73-85
const projects = await prisma.project.findMany({...});
const environments = await prisma.runtimeEnvironment.findMany({...});

These queries run on every TSQL execution. For organizations with many projects/environments, consider caching these field mappings.
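One way to avoid the repeated lookups is a small in-memory TTL cache keyed by organization id. This is a minimal sketch under assumed names (the real implementation might prefer the webapp's existing caching utilities and would need invalidation when projects/environments change):

```typescript
function createTTLCache<T>(ttlMs: number) {
  const entries = new Map<string, { value: T; expiresAt: number }>();
  return {
    async getOrLoad(key: string, load: () => Promise<T>): Promise<T> {
      const hit = entries.get(key);
      if (hit && hit.expiresAt > Date.now()) return hit.value;
      const value = await load(); // e.g. the prisma.findMany calls above
      entries.set(key, { value, expiresAt: Date.now() + ttlMs });
      return value;
    },
  };
}
```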

5. Missing Changeset
This PR adds new packages and significant features. Per CONTRIBUTING.md, changes to packages should include a changeset (pnpm run changeset:add).

🔍 Minor Suggestions

  1. Duplicate code: extractQueryFromText exists in both aiQueryService.server.ts and ai-generate.tsx - consider consolidating

  2. Type safety: In route.tsx:209, z.record(z.any()) loses type information. Consider a more specific schema if possible.

  3. Documentation: The internal-packages/tsql/README.md is minimal. Given the complexity and security-critical nature of this package, more comprehensive documentation would help future maintainers.

✅ Security Checklist (All Pass)

  • Tenant isolation enforced at all query levels (SELECT, JOIN, UNION, subquery)
  • SQL injection prevented via parameterized queries
  • Table/column allowlisting enforced
  • Function allowlisting enforced
  • Stacked queries prevented (parser rejects semicolons)
  • OR clause bypass prevention
  • Cross-tenant attack protection tested

Summary

This is a well-implemented feature with strong security foundations. The security test coverage is particularly impressive. The main areas for improvement are around error handling robustness, performance optimization for field mappings, and adding rate limiting for production use.

Recommendation: Approve with minor suggestions above.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In @apps/webapp/app/components/code/QueryResultsChart.tsx:
- Around line 675-681: The current useMemo block for data always forces
date-based sorting by timestamp (isDateBased branch) and ignores user
preferences; update the logic in the useMemo that computes data so it either (A)
disables/hides sort controls in ChartConfigPanel when isDateBased is true, or
(B) respects sortByColumn and sortDirection for date-based axes by calling
sortData(unsortedData, sortByColumn, sortDirection) even when isDateBased, or
(preferred) (C) perform bucketing by xDataKey first then apply sortData(…,
sortByColumn, sortDirection) so date bucketing is preserved but user sort order
is honored; adjust references to useMemo, isDateBased, sortData, unsortedData,
xDataKey, sortByColumn, sortDirection and update ChartConfigPanel UI as needed
to reflect disabled controls if you choose option A.
- Around line 193-275: fillTimeGaps currently assumes all series are summable
and silently rebuckets timestamps when estimatedPoints > maxPoints; update the
function to (1) add a clear comment at the top of fillTimeGaps stating it
assumes summable/count-like metrics and that summation is used when multiple
points fall in the same bucket, and mark a TODO to introduce an
aggregationMethod parameter later; and (2) emit a visible warning when the
function increases the interval (i.e., when effectiveInterval !== interval) so
callers know data was re-bucketed (use console.warn or a provided logger).
Include references to the function name fillTimeGaps, the effectiveInterval
variable, and the series handling block that currently sums values so reviewers
can locate the changes.
🧹 Nitpick comments (5)
apps/webapp/app/components/code/ChartConfigPanel.tsx (2)

133-142: Redundant auto-sort logic.

Lines 133-142 appear to duplicate the auto-sort logic from lines 119-122. Both blocks check if the X-axis is a datetime column and auto-set the sort column accordingly. The second block will rarely (if ever) execute if the first block runs successfully.

♻️ Consider consolidating the logic
   // Auto-select X-axis (prefer datetime, then first categorical)
   if (!config.xAxisColumn) {
     const defaultX = dateTimeColumns[0] ?? categoricalColumns[0] ?? columns[0];
     if (defaultX) {
       updates.xAxisColumn = defaultX.name;
       needsUpdate = true;

       // Auto-set sort to x-axis ASC if it's a datetime column
       if (isDateTimeType(defaultX.type) && !config.sortByColumn) {
         updates.sortByColumn = defaultX.name;
         updates.sortDirection = "asc";
       }
     }
   }

   // Auto-select Y-axis (first numeric column)
   if (config.yAxisColumns.length === 0 && numericColumns.length > 0) {
     updates.yAxisColumns = [numericColumns[0].name];
     needsUpdate = true;
   }

-  // Auto-set sort by x-axis ASC if x-axis is datetime and no sort is configured
-  if (
-    config.xAxisColumn &&
-    !config.sortByColumn &&
-    !updates.sortByColumn &&
-    dateTimeColumns.some((col) => col.name === config.xAxisColumn)
-  ) {
-    updates.sortByColumn = config.xAxisColumn;
-    updates.sortDirection = "asc";
-    needsUpdate = true;
-  }

460-480: TypeBadge string truncation could split surrogate pairs.

Line 472 uses displayType.slice(0, 10), which slices by UTF-16 code units and could split a surrogate pair (a character outside the Basic Multilingual Plane). While ClickHouse type names are ASCII and this is unlikely to occur, consider using a safer truncation approach for consistency.

♻️ Consider safer string truncation
 function TypeBadge({ type }: { type: string }) {
   // Simplify type for display
   let displayType = type;
   if (type.startsWith("Nullable(")) {
     displayType = type.slice(9, -1) + "?";
   }
   if (type.startsWith("LowCardinality(")) {
     displayType = type.slice(15, -1);
   }

   // Shorten long type names
   if (displayType.length > 12) {
-    displayType = displayType.slice(0, 10) + "…";
+    // Use substring for safer truncation or Array.from to handle multi-byte chars
+    displayType = Array.from(displayType).slice(0, 10).join("") + "…";
   }

   return (
     <span className="rounded bg-charcoal-750 px-1 py-0.5 font-mono text-xxs text-text-dimmed">
       {displayType}
     </span>
   );
 }

Note that String.prototype.substring slices by the same UTF-16 code units as slice, so swapping one for the other gains nothing; the Array.from approach (which iterates by code point) is the robust option:

if (displayType.length > 12) {
  displayType = Array.from(displayType).slice(0, 10).join("") + "…";
}
apps/webapp/app/components/code/QueryResultsChart.tsx (2)

395-424: Arbitrary year range in date parsing.

Lines 408-410 and 416-418 restrict parsed dates to years between 1970-2100. While reasonable for most use cases, this is an arbitrary constraint that could reject valid dates outside this range (e.g., historical data before 1970 or projections beyond 2100).

Consider:

  1. Making the year range configurable via props if needed for specific use cases
  2. Documenting this limitation in a comment
  3. Using more lenient bounds (e.g., 1900-2200) for broader compatibility

440-595: Document the 80% threshold for date detection.

Line 468 uses an 80% threshold to determine if the X-axis should be treated as date-based:

const isDateBased = dateValues.length >= rows.length * 0.8;

This means if 20% or more of values fail to parse as dates, they will be silently filtered out (line 513). While some tolerance is reasonable for handling nulls and invalid values, this threshold is arbitrary and could lead to unexpected data loss.

Recommendation:

  1. Add a comment explaining why 80% was chosen
  2. Consider raising the threshold to 90-95% for higher confidence
  3. Consider logging a warning when rows are filtered out due to invalid dates
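The recommendations above can be sketched as a single helper that makes the threshold explicit and surfaces dropped rows. All names here are illustrative, and the 0.9 default is the suggested tightening rather than the PR's current 0.8:

```typescript
function classifyXAxis(
  values: unknown[],
  parse: (v: unknown) => Date | null,
  threshold = 0.9 // illustrative; the PR currently uses 0.8
): { isDateBased: boolean; validDates: Date[] } {
  const validDates = values.map(parse).filter((d): d is Date => d !== null);
  const ratio = values.length === 0 ? 0 : validDates.length / values.length;
  const isDateBased = ratio >= threshold;
  if (isDateBased && validDates.length < values.length) {
    // Make silent data loss visible instead of quietly filtering rows.
    console.warn(
      `Chart x-axis: dropping ${values.length - validDates.length} rows that did not parse as dates`
    );
  }
  return { isDateBased, validDates };
}
```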
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx (1)

2069-2108: Optimize regex creation in highlightSQL.

Lines 2075-2078 create a new RegExp object every time highlightSQL is called. Since SQL_KEYWORDS is constant, the regex pattern should be created once at module level for better performance.

⚡ Performance optimization
 const SQL_KEYWORDS = [
   "SELECT",
   "FROM",
   // ... rest of keywords
 ];

+// Create regex pattern once at module level
+const SQL_KEYWORD_PATTERN = new RegExp(
+  `\\b(${SQL_KEYWORDS.map((k) => k.replace(/\s+/g, "\\s+")).join("|")})\\b`,
+  "gi"
+);
+
 function highlightSQL(query: string): React.ReactNode[] {
   // Normalize whitespace for display (let CSS line-clamp handle truncation)
   const normalized = query.replace(/\s+/g, " ").slice(0, 200);
   const suffix = "";

-  // Create a regex pattern that matches keywords as whole words (case insensitive)
-  const keywordPattern = new RegExp(
-    `\\b(${SQL_KEYWORDS.map((k) => k.replace(/\s+/g, "\\s+")).join("|")})\\b`,
-    "gi"
-  );
-
   const parts: React.ReactNode[] = [];
   let lastIndex = 0;
   let match;

-  while ((match = keywordPattern.exec(normalized)) !== null) {
+  // Reset lastIndex for global regex reuse
+  SQL_KEYWORD_PATTERN.lastIndex = 0;
+  
+  while ((match = SQL_KEYWORD_PATTERN.exec(normalized)) !== null) {
     // Add text before the match
     if (match.index > lastIndex) {
       parts.push(normalized.slice(lastIndex, match.index));
     }
     // Add the highlighted keyword
     parts.push(
       <span key={match.index} className="text-[#c678dd]">
         {match[0]}
       </span>
     );
-    lastIndex = keywordPattern.lastIndex;
+    lastIndex = SQL_KEYWORD_PATTERN.lastIndex;
   }

   // Add remaining text
   if (lastIndex < normalized.length) {
     parts.push(normalized.slice(lastIndex));
   }

   if (suffix) {
     parts.push(suffix);
   }

   return parts;
 }

Note: When reusing a global regex, remember to reset lastIndex before each use.

📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5c67061 and eb5ad2b.

📒 Files selected for processing (4)
  • apps/webapp/app/components/code/ChartConfigPanel.tsx
  • apps/webapp/app/components/code/QueryResultsChart.tsx
  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
  • apps/webapp/app/tailwind.css
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/webapp/app/tailwind.css
🧰 Additional context used
📓 Path-based instructions (6)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.{ts,tsx}: Use types over interfaces for TypeScript
Avoid using enums; prefer string unions or const objects instead

Files:

  • apps/webapp/app/components/code/QueryResultsChart.tsx
  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
  • apps/webapp/app/components/code/ChartConfigPanel.tsx
{packages/core,apps/webapp}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use zod for validation in packages/core and apps/webapp

Files:

  • apps/webapp/app/components/code/QueryResultsChart.tsx
  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
  • apps/webapp/app/components/code/ChartConfigPanel.tsx
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use function declarations instead of default exports

Files:

  • apps/webapp/app/components/code/QueryResultsChart.tsx
  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
  • apps/webapp/app/components/code/ChartConfigPanel.tsx
apps/webapp/app/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)

Access all environment variables through the env export of env.server.ts instead of directly accessing process.env in the Trigger.dev webapp

Files:

  • apps/webapp/app/components/code/QueryResultsChart.tsx
  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
  • apps/webapp/app/components/code/ChartConfigPanel.tsx
apps/webapp/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)

apps/webapp/**/*.{ts,tsx}: When importing from @trigger.dev/core in the webapp, use subpath exports from the package.json instead of importing from the root path
Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp

Files:

  • apps/webapp/app/components/code/QueryResultsChart.tsx
  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
  • apps/webapp/app/components/code/ChartConfigPanel.tsx
**/*.{js,ts,jsx,tsx,json,md,css,scss}

📄 CodeRabbit inference engine (AGENTS.md)

Format code using Prettier

Files:

  • apps/webapp/app/components/code/QueryResultsChart.tsx
  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
  • apps/webapp/app/components/code/ChartConfigPanel.tsx
🧠 Learnings (3)
📓 Common learnings
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: Applies to internal-packages/database/**/*.{ts,tsx} : Use Prisma for database interactions in internal-packages/database with PostgreSQL
📚 Learning: 2025-12-08T15:19:56.823Z
Learnt from: 0ski
Repo: triggerdotdev/trigger.dev PR: 2760
File: apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.runs.$runParam/route.tsx:278-281
Timestamp: 2025-12-08T15:19:56.823Z
Learning: In apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.runs.$runParam/route.tsx, the tableState search parameter uses intentional double-encoding: the parameter value contains a URL-encoded URLSearchParams string, so decodeURIComponent(value("tableState") ?? "") is required to fully decode it before parsing with new URLSearchParams(). This pattern allows bundling multiple filter/pagination params as a single search parameter.

Applied to files:

  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
📚 Learning: 2025-11-27T16:26:58.661Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/webapp.mdc:0-0
Timestamp: 2025-11-27T16:26:58.661Z
Learning: Applies to apps/webapp/**/*.{ts,tsx} : Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp

Applied to files:

  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
🧬 Code graph analysis (1)
apps/webapp/app/components/code/ChartConfigPanel.tsx (4)
internal-packages/tsql/src/query/printer.ts (1)
  • isDateTimeType (1178-1188)
apps/webapp/app/utils/cn.ts (1)
  • cn (77-79)
apps/webapp/app/components/primitives/Buttons.tsx (1)
  • Button (296-329)
apps/webapp/app/components/primitives/Switch.tsx (1)
  • Switch (64-119)
🔇 Additional comments (2)
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx (2)

252-328: LGTM: QueryEditorForm state isolation is well-designed.

The component deliberately isolates the query state from the parent via useState(defaultQuery) and exposes control through the forwardRef handle. This design prevents the editor from being reset when parent state changes (e.g., when new results come in), while still allowing external control through history selection and AI-generated queries.

The useImperativeHandle approach is appropriate here for the "imperative escape hatch" pattern where the parent needs fine-grained control without controlling the entire component state.


330-498: LGTM: Well-structured route component with thoughtful state management.

The Page component demonstrates several good practices:

  1. Smart defaults (lines 335-337): Uses most recent history item or falls back to default query
  2. Selective chart config reset (lines 353-357): Only resets when column schema actually changes, preserving user configuration across re-runs with different WHERE clauses
  3. Proper loading states: Navigation state tracking for loading indicators
  4. Imperative API: Uses ref handle to control the editor form imperatively for history/examples/AI

The component properly handles the complex interaction between query editing, execution, results display, and chart configuration.

Comment on lines 193 to 275
function fillTimeGaps(
  data: Record<string, unknown>[],
  xDataKey: string,
  series: string[],
  minTime: number,
  maxTime: number,
  interval: number,
  granularity: TimeGranularity,
  maxPoints = 1000
): Record<string, unknown>[] {
  const range = maxTime - minTime;
  const estimatedPoints = Math.ceil(range / interval);

  // If filling would create too many points, increase the interval to stay within limits
  let effectiveInterval = interval;
  if (estimatedPoints > maxPoints) {
    effectiveInterval = Math.ceil(range / maxPoints);
    // Round up to a nice interval
    const MINUTE = 60 * 1000;
    const HOUR = 60 * MINUTE;
    if (effectiveInterval < 5 * MINUTE) effectiveInterval = 5 * MINUTE;
    else if (effectiveInterval < 10 * MINUTE) effectiveInterval = 10 * MINUTE;
    else if (effectiveInterval < 15 * MINUTE) effectiveInterval = 15 * MINUTE;
    else if (effectiveInterval < 30 * MINUTE) effectiveInterval = 30 * MINUTE;
    else if (effectiveInterval < HOUR) effectiveInterval = HOUR;
    else if (effectiveInterval < 2 * HOUR) effectiveInterval = 2 * HOUR;
    else if (effectiveInterval < 4 * HOUR) effectiveInterval = 4 * HOUR;
    else if (effectiveInterval < 6 * HOUR) effectiveInterval = 6 * HOUR;
    else if (effectiveInterval < 12 * HOUR) effectiveInterval = 12 * HOUR;
    else effectiveInterval = 24 * HOUR;
  }

  // Create a map of existing data points by timestamp (bucketed to the effective interval)
  const existingData = new Map<number, Record<string, unknown>>();
  for (const point of data) {
    const timestamp = point[xDataKey] as number;
    // Bucket to the nearest interval
    const bucketedTime = Math.floor(timestamp / effectiveInterval) * effectiveInterval;

    // If there's already data for this bucket, aggregate it
    const existing = existingData.get(bucketedTime);
    if (existing) {
      // Sum the values
      for (const s of series) {
        const existingVal = (existing[s] as number) || 0;
        const newVal = (point[s] as number) || 0;
        existing[s] = existingVal + newVal;
      }
    } else {
      // Clone the point with the bucketed timestamp
      existingData.set(bucketedTime, {
        ...point,
        [xDataKey]: bucketedTime,
        __rawDate: new Date(bucketedTime),
      });
    }
  }

  // Generate all time slots and fill with zeros where missing
  const filledData: Record<string, unknown>[] = [];
  const startTime = Math.floor(minTime / effectiveInterval) * effectiveInterval;

  for (let t = startTime; t <= maxTime; t += effectiveInterval) {
    const existing = existingData.get(t);
    if (existing) {
      filledData.push(existing);
    } else {
      // Create a zero-filled data point
      const zeroPoint: Record<string, unknown> = {
        [xDataKey]: t,
        __rawDate: new Date(t),
        __granularity: granularity,
        __originalX: new Date(t).toISOString(),
      };
      for (const s of series) {
        zeroPoint[s] = 0;
      }
      filledData.push(zeroPoint);
    }
  }

  return filledData;
}

⚠️ Potential issue | 🟠 Major

Data aggregation in fillTimeGaps may produce incorrect results for non-summable metrics.

Lines 236-240 aggregate multiple data points that fall into the same time bucket by summing their values. This approach is only correct for count-like metrics, but will produce incorrect results for:

  • Averages (should compute weighted average, not sum of averages)
  • Percentages or ratios (cannot be summed)
  • Percentile values (e.g., p50, p95)
  • Min/max values (should preserve actual min/max)

Additional concern: Lines 206-223 automatically increase the interval when too many data points would be created. Users may not expect this automatic bucketing, which could mask important variations in the data.

Recommendations:

  1. Document this aggregation behavior clearly in component props or user-facing help
  2. Consider adding an aggregationMethod prop to let users control how values are combined
  3. Consider warning users when data is being bucketed/aggregated
  4. For now, add a comment explaining that this function assumes summable metrics
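Recommendation 2 could be implemented by dispatching on an explicit aggregation method when combining the values in a bucket. A hedged sketch (the union name and helper are illustrative; per repo convention this uses a string union rather than an enum):

```typescript
type AggregationMethod = "sum" | "avg" | "min" | "max";

function combineBucket(values: number[], method: AggregationMethod): number {
  if (values.length === 0) return 0;
  switch (method) {
    case "sum":
      return values.reduce((a, b) => a + b, 0);
    case "avg":
      // Note: for true weighted averages the caller would also need counts.
      return values.reduce((a, b) => a + b, 0) / values.length;
    case "min":
      return Math.min(...values);
    case "max":
      return Math.max(...values);
  }
}
```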
🤖 Prompt for AI Agents
In @apps/webapp/app/components/code/QueryResultsChart.tsx around lines 193 -
275, fillTimeGaps currently assumes all series are summable and silently
rebuckets timestamps when estimatedPoints > maxPoints; update the function to
(1) add a clear comment at the top of fillTimeGaps stating it assumes
summable/count-like metrics and that summation is used when multiple points fall
in the same bucket, and mark a TODO to introduce an aggregationMethod parameter
later; and (2) emit a visible warning when the function increases the interval
(i.e., when effectiveInterval !== interval) so callers know data was re-bucketed
(use console.warn or a provided logger). Include references to the function name
fillTimeGaps, the effectiveInterval variable, and the series handling block that
currently sums values so reviewers can locate the changes.

Comment on lines +675 to +681
const data = useMemo(() => {
  if (isDateBased) {
    // Always sort by timestamp for date-based axes
    return sortData(unsortedData, xDataKey, "asc");
  }
  return sortData(unsortedData, sortByColumn, sortDirection);
}, [unsortedData, sortByColumn, sortDirection, isDateBased, xDataKey]);

⚠️ Potential issue | 🟠 Major

Sorting logic ignores user preferences for date-based axes.

For date-based axes, line 678 always sorts by timestamp in ascending order, completely ignoring the sortByColumn and sortDirection configuration. This means:

  1. The ChartConfigPanel allows users to set custom sorting even when the X-axis is a date
  2. Users expect that setting "Order by: compute_cost DESC" would show the highest-cost data points
  3. The actual rendering ignores this and always sorts by timestamp

Impact: Users may be confused why their sort configuration has no effect on date-based charts.

Recommendation: Either:

  1. Disable the sort controls in ChartConfigPanel when the X-axis is detected as date-based (preferred for consistency)
  2. Honor the user's sort preference and document that date-based axes may show gaps
  3. Apply sort after time-based bucketing but before rendering
🤖 Prompt for AI Agents
In @apps/webapp/app/components/code/QueryResultsChart.tsx around lines 675 -
681, The current useMemo block for data always forces date-based sorting by
timestamp (isDateBased branch) and ignores user preferences; update the logic in
the useMemo that computes data so it either (A) disables/hides sort controls in
ChartConfigPanel when isDateBased is true, or (B) respects sortByColumn and
sortDirection for date-based axes by calling sortData(unsortedData,
sortByColumn, sortDirection) even when isDateBased, or (preferred) (C) perform
bucketing by xDataKey first then apply sortData(…, sortByColumn, sortDirection)
so date bucketing is preserved but user sort order is honored; adjust references
to useMemo, isDateBased, sortData, unsortedData, xDataKey, sortByColumn,
sortDirection and update ChartConfigPanel UI as needed to reflect disabled
controls if you choose option A.

@claude

claude bot commented Jan 8, 2026

PR Review: TRQL and the Query Page

Thank you for this comprehensive feature! I have reviewed the implementation and have the following feedback organized by severity.


Critical Issues

1. Database Migration: Indexes Without CONCURRENTLY

File: internal-packages/database/prisma/migrations/20251220201230_add_customer_query/migration.sql (Lines 25, 28)

Per CONTRIBUTING.md (line 179), indexes must use CONCURRENTLY to avoid table locks in production. The current migration creates indexes without this keyword which will lock the CustomerQuery table during deployment.
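A hedged sketch of the fix, assuming the index covers (organizationId, createdAt DESC) on CustomerQuery as described later in this thread; the index name is illustrative. Note that CREATE INDEX CONCURRENTLY cannot run inside a transaction block, so the migration may need to be marked as non-transactional:

```sql
-- Illustrative migration fragment; the actual index name may differ.
CREATE INDEX CONCURRENTLY IF NOT EXISTS "CustomerQuery_organizationId_createdAt_idx"
  ON "CustomerQuery" ("organizationId", "createdAt" DESC);
```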

2. Window Function Validation Missing (Security)

File: internal-packages/tsql/src/query/printer.ts (Lines 2205-2216)

Window functions bypass the function allowlist validation. Unlike regular function calls that go through findTSQLFunction(), window functions directly use the name field without checks.

Risk: An attacker could call dangerous ClickHouse functions like file() via window function syntax.

Fix: Add validation similar to visitCall() - check that the function name exists in the allowlist before using it.
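A minimal sketch of the suggested check; the set contents and helper name are assumptions, not the real printer.ts API:

```typescript
// Hypothetical allowlist standing in for the set findTSQLFunction() consults.
const ALLOWED_FUNCTIONS = new Set(["count", "sum", "avg", "row_number", "rank"]);

// Reject any window-function name that is not on the allowlist, mirroring
// what visitCall() does for regular function calls.
function assertAllowedWindowFunction(name: string): string {
  if (!ALLOWED_FUNCTIONS.has(name.toLowerCase())) {
    throw new Error(`Unsupported function: ${name}`);
  }
  return name;
}
```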

3. Missing Authorization Check on AI Query Endpoint

File: apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query.ai-generate.tsx (Lines 18-68)

The AI query endpoint only checks requireUserId() but does NOT verify hasQueryAccess() like the main query route does. This could allow users without the feature flag to access AI query generation.
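A hedged sketch of the missing guard, with hasQueryAccess reduced to a synchronous stand-in for the webapp's real helper; the status-code helper is purely illustrative:

```typescript
type QueryAccessCheck = (userId: string, orgSlug: string) => boolean;

// Hypothetical stand-in: in this sketch the feature flag is off for everyone.
const hasQueryAccess: QueryAccessCheck = (_userId, _orgSlug) => false;

// The point of the fix: authentication (requireUserId) alone is not enough;
// the endpoint must also run the same feature-flag/access check as the main
// query route before generating queries.
function aiGenerateGuard(userId: string | null, orgSlug: string): number {
  if (!userId) return 401; // requireUserId equivalent
  if (!hasQueryAccess(userId, orgSlug)) return 403; // the missing check
  return 200;
}
```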


High Priority Issues

4. N+1 Query Pattern on Every Query Execution

File: apps/webapp/app/services/queryService.server.ts (Lines 73-86)

Every query execution fetches ALL projects and ALL environments for the organization to build field mappings, regardless of actual scope.

Recommendation: Cache field mappings or filter based on actual scope.

5. Missing Result Pagination/Limiting

File: apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx (Lines 206-234)

No pagination or server-side result limiting. A query returning millions of rows will exhaust browser memory.

Recommendation: Enforce maximum result set size (e.g., 10,000 rows) and add pagination UI.

6. Parameter Logging in Error Cases

File: internal-packages/clickhouse/src/client/client.ts (Lines 139-145, 288-294, 454-460)

Raw query parameters are logged in error messages. If parameters contain sensitive data (PII, tokens), they will appear in logs.

7. Duplicate Identifier Escape Implementation with Bug

File: internal-packages/tsql/src/query/property_types.ts (Lines 22-52)

This file has a duplicate implementation of escapeTSQLIdentifier that differs from escape.ts and contains a bug on line 43: it maps the letter "a" instead of the bell character (\x07).

Fix: Remove duplicate and import from escape.ts.
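To make the bug concrete, here is an illustrative escape table (the real tables live in escape.ts and property_types.ts, and are assumptions here): the buggy variant mangles any identifier containing the letter "a", while the fixed variant only rewrites the actual bell character:

```typescript
// Buggy: keys on the letter "a" instead of the bell character.
const buggyEscapes: Record<string, string> = { a: "\\a", "\n": "\\n" };
// Fixed: keys on the bell character "\x07".
const fixedEscapes: Record<string, string> = { "\x07": "\\a", "\n": "\\n" };

// Hypothetical character-by-character escaper, shaped like the one the
// review describes in escape.ts.
function escapeIdentifier(id: string, table: Record<string, string>): string {
  return id
    .split("")
    .map((ch) => table[ch] ?? ch)
    .join("");
}
```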


Medium Priority Issues

8. AST Mutation in LIMIT Capping

File: internal-packages/tsql/src/query/printer.ts (Lines 415-424)

The code mutates the AST in-place during printing, which could cause bugs if the AST is reused.
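A minimal non-mutating alternative, with hypothetical names and cap value: derive the effective limit at print time instead of writing it back into the AST node, so the AST can safely be printed more than once:

```typescript
// Illustrative AST node shape and cap; not the real printer.ts types.
type SelectQuery = { limit: number | null; [key: string]: unknown };

const MAX_LIMIT = 1000;

// Pure function: reads the node, never assigns into it.
function effectiveLimit(query: SelectQuery): number {
  if (query.limit === null) return MAX_LIMIT;
  return Math.min(query.limit, MAX_LIMIT);
}
```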

9. Float for Financial Data

File: internal-packages/database/prisma/schema.prisma (Line 2415)

costInCents uses Float which can have precision issues for financial calculations. Consider Decimal(10, 2) if used for billing.

10. Expensive Time Gap Filling

File: apps/webapp/app/components/code/QueryResultsChart.tsx (Lines 193-275)

Gap-filling algorithm can create thousands of synthetic data points per series which could cause performance issues with many series.


Minor Issues

  • Inconsistent Error Types: escape.ts uses QueryError, property_types.ts uses generic Error, safeIdentifier silently removes % instead of throwing
  • Silent Error Handling: AIQueryInput.tsx and TSQLEditor.tsx silently ignore parse/format errors
  • Unused Parameter: printer.ts line 1366 has unused _tableAlias in createTenantGuard
  • Missing Collision Check: printer_context.ts addNamedValue() can silently overwrite existing parameters

Positive Findings

The overall architecture is solid:

  1. Strong tenant isolation - Tests demonstrate protection against OR 1=1 bypass attacks
  2. Comprehensive security tests - 23+ test scenarios covering SQL injection and cross-tenant access
  3. Parameterized queries - Proper use of ClickHouse parameterized queries prevents injection
  4. Type safety - Full Zod validation for inputs and outputs
  5. Virtual column support - Well-implemented server-side computed fields
  6. Good test coverage - The tsql.test.ts and security.test.ts files are thorough

Summary

Priority   Issue                        Effort
Critical   Index CONCURRENTLY           Easy
Critical   Window function validation   Easy
Critical   AI endpoint authorization    Easy
High       N+1 query pattern            Medium
High       Result pagination            Medium
High       Parameter logging            Medium
High       Duplicate escape code        Easy
Medium     AST mutation                 Easy
Medium     Float for costs              Easy

The critical issues should be addressed before merging. Great work on this substantial feature!

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx (2)

2072-2111: SQL highlighting function works correctly.

The function properly highlights SQL keywords using regex with word boundaries, ensuring only whole-word matches are highlighted. The normalization and truncation provide reasonable display limits for the history popover.

The suffix variable (line 2075) is always an empty string and the check on line 2106 is unnecessary. Consider removing these lines for clarity:

♻️ Simplify by removing unused suffix variable
 function highlightSQL(query: string): React.ReactNode[] {
   // Normalize whitespace for display (let CSS line-clamp handle truncation)
   const normalized = query.replace(/\s+/g, " ").slice(0, 200);
-  const suffix = "";

   // Create a regex pattern that matches keywords as whole words (case insensitive)
   const keywordPattern = new RegExp(
     `\\b(${SQL_KEYWORDS.map((k) => k.replace(/\s+/g, "\\s+")).join("|")})\\b`,
     "gi"
   );

   const parts: React.ReactNode[] = [];
   let lastIndex = 0;
   let match;

   while ((match = keywordPattern.exec(normalized)) !== null) {
     // Add text before the match
     if (match.index > lastIndex) {
       parts.push(normalized.slice(lastIndex, match.index));
     }
     // Add the highlighted keyword
     parts.push(
       <span key={match.index} className="text-[#c678dd]">
         {match[0]}
       </span>
     );
     lastIndex = keywordPattern.lastIndex;
   }

   // Add remaining text
   if (lastIndex < normalized.length) {
     parts.push(normalized.slice(lastIndex));
   }

-  if (suffix) {
-    parts.push(suffix);
-  }
-
   return parts;
 }

503-555: Consider using function declarations for internal components.

Per coding guidelines, consider refactoring internal components from const expressions to function declarations. For example:

// Current
const ExportResultsButton = ({ rows, columns }: { ... }) => { ... }

// Preferred
function ExportResultsButton({ rows, columns }: { ... }) { ... }

This applies to: ExportResultsButton, AITabContent, TryableCodeBlock, FunctionCategory, TableSchemaContent, ExamplesContent, ColumnHelpItem, ScopeItem, and QueryHistoryPopover. The QueryEditorForm uses forwardRef and is acceptable as-is.

As per coding guidelines: Use function declarations instead of default exports (while default export for the Page component is required by Remix, internal components should use function declarations).

Also applies to: 668-714, 716-744, 1882-1910, 1912-1934, 1936-1957, 1959-1994, 1996-2011, 2113-2168

📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between eb5ad2b and 2ca9d15.

📒 Files selected for processing (2)
  • apps/webapp/app/components/code/TSQLEditor.tsx
  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/webapp/app/components/code/TSQLEditor.tsx
🧰 Additional context used
📓 Path-based instructions (6)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.{ts,tsx}: Use types over interfaces for TypeScript
Avoid using enums; prefer string unions or const objects instead

Files:

  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
{packages/core,apps/webapp}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use zod for validation in packages/core and apps/webapp

Files:

  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use function declarations instead of default exports

Files:

  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
apps/webapp/app/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)

Access all environment variables through the env export of env.server.ts instead of directly accessing process.env in the Trigger.dev webapp

Files:

  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
apps/webapp/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)

apps/webapp/**/*.{ts,tsx}: When importing from @trigger.dev/core in the webapp, use subpath exports from the package.json instead of importing from the root path
Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp

Files:

  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
**/*.{js,ts,jsx,tsx,json,md,css,scss}

📄 CodeRabbit inference engine (AGENTS.md)

Format code using Prettier

Files:

  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
🧠 Learnings (4)
📓 Common learnings
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: Applies to internal-packages/database/**/*.{ts,tsx} : Use Prisma for database interactions in internal-packages/database with PostgreSQL
📚 Learning: 2025-11-27T16:26:58.661Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/webapp.mdc:0-0
Timestamp: 2025-11-27T16:26:58.661Z
Learning: Applies to apps/webapp/**/*.{ts,tsx} : Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp

Applied to files:

  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
📚 Learning: 2025-12-08T15:19:56.823Z
Learnt from: 0ski
Repo: triggerdotdev/trigger.dev PR: 2760
File: apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.runs.$runParam/route.tsx:278-281
Timestamp: 2025-12-08T15:19:56.823Z
Learning: In apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.runs.$runParam/route.tsx, the tableState search parameter uses intentional double-encoding: the parameter value contains a URL-encoded URLSearchParams string, so decodeURIComponent(value("tableState") ?? "") is required to fully decode it before parsing with new URLSearchParams(). This pattern allows bundling multiple filter/pagination params as a single search parameter.

Applied to files:

  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
📚 Learning: 2025-11-27T16:26:37.432Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: The webapp at apps/webapp is a Remix 2.1 application using Node.js v20

Applied to files:

  • apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (24)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (1, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (6, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (2, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (4, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (7, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (1, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (4, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (5, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (3, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (5, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (8, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (8, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (7, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (6, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (3, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (2, 8)
  • GitHub Check: units / packages / 🧪 Unit Tests: Packages (1, 1)
  • GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - npm)
  • GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - pnpm)
  • GitHub Check: typecheck / typecheck
  • GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - npm)
  • GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - pnpm)
  • GitHub Check: claude-review
  • GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (16)
apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.query/route.tsx (16)

1-75: Imports are well-organized and appropriate.

The imports follow a logical grouping pattern (external libraries, internal packages, components, utilities, services) which aids readability. The use of zod for validation aligns with coding guidelines.


77-108: Access control implementation is sound.

The function properly enforces access control by checking admin status and organization membership before validating the feature flag. The membership check on line 90 ensures users can only access organizations they belong to, preventing unauthorized access.


185-201: Proper input validation with clear error messages.

The use of safeParse with appropriate error handling ensures invalid inputs are rejected gracefully with informative error messages.


205-241: Action properly handles query execution with error handling.

The action function correctly executes queries with proper error handling, history tracking, and consistent response structure. The try-catch block ensures unexpected errors are handled gracefully.

Line 209 uses z.record(z.any()) as the validation schema, which is very permissive and bypasses type checking for result rows. Confirm this is intentional—if the result structure is known, a stricter schema would provide better type safety and runtime validation.


244-328: QueryEditorForm uses forwardRef pattern effectively.

The component properly exposes imperative methods via useImperativeHandle with correct dependencies, allowing parent components to control the editor state. The use of useCallback for handleHistorySelected prevents unnecessary re-renders.


330-362: Page component has well-structured state management.

The use of a stable columnsKey (lines 347-349) to detect schema changes is a clever approach. The effect on line 353 correctly resets chart configuration only when the column schema changes, preserving config across re-runs with different WHERE clauses. The explanatory comment on line 352 is helpful.


363-501: UI composition is clean and well-organized.

The component properly composes a multi-panel layout with resizable sections, proper loading states, error handling, and conditional rendering based on results. The use of refs for imperative editor control is appropriate for coordinating between the help sidebar and editor.


503-555: Export functionality is well-implemented.

The component provides comprehensive export options (CSV/JSON, copy/download) with proper file naming using timestamps. All handlers correctly close the popover after action, ensuring good UX.


557-609: Example queries are practical and well-documented.

The examples cover common use cases (failed runs, performance analysis, cost tracking) with clear descriptions. The queries demonstrate important TRQL features like filtering, aggregation, and time-based analysis.


611-714: Help sidebar provides comprehensive assistance.

The tabbed interface organizes help content effectively. The AI tab's use of setTimeout to reset autoSubmitPrompt (lines 702-703) is a valid pattern to enable re-triggering the same prompt, though slightly unusual.


716-1878: Comprehensive TRQL documentation with interactive examples.

The guide provides extensive, well-organized documentation covering all aspects of TRQL. The integration of "Try it" buttons with code examples enables users to experiment immediately, which is excellent for learning. The function reference is thorough and includes helpful examples for each function.


1882-2011: Helper components are focused and well-implemented.

Each component has a single, clear responsibility. The ScopeItem component appropriately uses context hooks to render scope-specific labels with a sensible fallback for unexpected scope values.


2013-2040: Utility functions correctly format statistics and bytes.

Both functions use appropriate algorithms for their purpose. The formatQueryStats function properly converts nanoseconds to milliseconds and conditionally formats based on duration. The formatBytes function uses the standard logarithmic approach for unit conversion.


2113-2168: Query history popover provides good UX.

The component effectively displays query history with syntax highlighting, timestamps, and user attribution. The scrollable container with max-height and disabled state when empty ensures good user experience across different scenarios.


330-330: Note: Default export is required for Remix routes.

The default export here is necessary for Remix routing conventions and correctly follows framework requirements.


116-150: Loader properly enforces authentication and authorization.

The loader correctly authenticates users, validates parameters, checks access permissions, and handles errors with appropriate HTTP responses. Good security posture overall.

The query history loaded via QueryPresenter is organization-scoped. The call() method accepts only organizationId and provides no filtering by project or environment, so the displayed history shows the last 20 queries across the entire organization regardless of which project/environment the user is currently viewing. This appears to be the intended design.

@claude

claude bot commented Jan 8, 2026

PR Review: TRQL and the Query Page

This is a substantial PR that introduces TRQL (TRiggerQueryLanguage), a custom SQL-like query language that compiles to ClickHouse SQL with automatic tenant isolation. The implementation is derived from PostHog's HogQL and represents a significant feature addition.

Overall Assessment

This is a well-architected feature with strong security foundations. The code demonstrates careful thought about SQL injection prevention and multi-tenant data isolation. The test coverage is impressive, particularly for security-related functionality.


Security Analysis ✅

Strengths

  1. Multi-tenant Isolation - The tenant guard injection system is robust:

    • Organization, project, and environment IDs are automatically injected as parameterized WHERE clauses
    • Guards are applied to all query types: SELECT, JOINs, UNIONs, and subqueries
    • The security tests in security.test.ts verify these guards cannot be bypassed via OR clauses or explicit conditions
  2. SQL Injection Prevention:

    • All user-provided string values are parameterized (using ClickHouse's {param: Type} syntax)
    • Identifier escaping properly handles special characters and prevents backtick injection
    • The parser rejects stacked statements (semicolon-separated queries)
    • Function allowlisting prevents access to dangerous ClickHouse functions like file()
  3. Table/Column Allowlisting:

    • Only explicitly defined tables in the schema can be queried
    • System tables are rejected
    • Database prefixes are blocked
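As a sketch of the mechanism (not the real printer output, and with illustrative table and column names): a tenant guard built with ClickHouse's {name: Type} placeholder syntax keeps the organization id out of the SQL string entirely, and an injected OR 1=1 stays confined inside the parenthesized user clause, so it cannot widen the guard:

```typescript
// Hypothetical guard builder. The real printer works on an AST; this string
// version only illustrates the shape of the generated SQL and parameters.
function withTenantGuard(
  userWhere: string,
  organizationId: string
): { sql: string; params: Record<string, string> } {
  const guarded = `(organization_id = {organizationId: String}) AND (${userWhere})`;
  return {
    sql: `SELECT * FROM task_runs WHERE ${guarded}`,
    params: { organizationId }, // sent separately, never spliced into the SQL
  };
}
```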

Minor Security Considerations

  1. AI-Generated Queries (aiQueryService.server.ts): The AI service validates queries before returning them, which is good. However, consider adding rate limiting to the AI generation endpoint to prevent abuse.

  2. Query Cost Tracking: The CENTS_PER_QUERY_BYTE_SECOND billing mechanism is a good addition for cost control. Consider adding configurable limits per organization to prevent runaway costs.


Code Quality ✅

Strengths

  1. Strong TypeScript typing throughout the codebase with proper use of discriminated unions for AST nodes
  2. Comprehensive test suite with unit tests for escaping, parsing, printing, and security
  3. Clean separation of concerns - parser, validator, printer, and executor are well-isolated
  4. Good documentation in code comments explaining the PostHog HogQL derivation

Suggestions

  1. Duplicated Query Extraction Logic (aiQueryService.server.ts:472-486 and ai-generate.tsx:186-199):

    // This function appears in two files
    function extractQueryFromText(text: string): string | null {
      const codeBlockMatch = text.match(/```(?:sql)?\s*([\s\S]*?)```/i);
      // ...
    }

    Consider extracting this to a shared utility to maintain DRY principles.

  2. Error Handling Consistency: The route action returns different error formats:

    • Some return { error: string } with 400 status
    • Others use 403/404 status codes
      Consider standardizing the error response format.
  3. Magic Numbers: In queryService.server.ts, the cost calculation uses env.CENTS_PER_QUERY_BYTE_SECOND which is good, but ensure this env var has validation and a sensible default.


Performance Considerations ⚠️

Potential Issues

  1. Field Mappings Query (queryService.server.ts:73-85):

    const projects = await prisma.project.findMany({
      where: { organizationId },
      select: { id: true, externalRef: true },
    });
    const environments = await prisma.runtimeEnvironment.findMany({
      where: { project: { organizationId } },
      select: { id: true, slug: true },
    });

    These queries run for every query execution. For large organizations, consider:

    • Caching these mappings with a reasonable TTL
    • Lazy loading only when needed (if the query references those fields)
  2. Regex in Escape Functions (escape.ts): The character-by-character processing with .split("").map().join() is correct but could be optimized for very long strings using a single regex replace.

  3. AST Visitor Pattern: The printer uses a recursive visitor pattern which is clean but could potentially cause stack overflow for extremely deeply nested queries. Consider implementing a max depth limit.


Test Coverage ✅

The test coverage is excellent:

  • security.test.ts - 600+ lines of security-focused tests
  • escape.test.ts - Thorough coverage of escaping edge cases
  • printer.test.ts - 2200+ lines testing SQL generation
  • validator.test.ts - Schema validation tests
  • tsqlCompletion.test.ts and tsqlLinter.test.ts - Editor tooling tests

Suggestion

Consider adding integration tests that execute actual queries against a test ClickHouse instance to verify the end-to-end flow, particularly for edge cases around tenant isolation.


Database Migration ✅

The migration (20251220201230_add_customer_query/migration.sql) is well-structured:

  • Proper indexes on (organizationId, createdAt DESC) for history queries
  • CASCADE deletes maintain referential integrity
  • SET NULL on userId deletion is appropriate for audit trail preservation

Minor Issues & Nits

  1. Grammar File Maintenance: The generated ANTLR files (TSQLLexer.ts, TSQLParser.ts) are committed. Consider documenting the regeneration process in the README.

  2. Environment Variable: New env var CENTS_PER_QUERY_BYTE_SECOND should be documented in .env.example if not already.

  3. Feature Flag Check (route.tsx:77-108): The hasQueryAccess function queries the database for every request. Consider caching this check or moving it to middleware.


Summary

Recommendation: Approve with minor suggestions

This is a high-quality implementation of a complex feature. The security model is well thought out, the code is clean and well-tested, and the architecture allows for future extensibility. The suggestions above are minor optimizations and consistency improvements rather than blocking issues.

Key strengths:

  • 🔒 Strong security model with comprehensive tests
  • 📐 Clean architecture derived from battle-tested HogQL
  • ✅ Excellent test coverage
  • 📝 Well-documented code

The main areas for future improvement:

  • ⚡ Consider caching field mappings for performance
  • 🔄 Extract duplicated utility functions
  • 📊 Add integration tests with real ClickHouse

Great work on this substantial feature! 🎉

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
apps/webapp/app/components/code/ChartConfigPanel.tsx (3)

14-22: Consider using type instead of interface for ChartConfiguration.

The coding guidelines specify preferring types over interfaces for TypeScript definitions.

♻️ Refactor to use type alias
-export interface ChartConfiguration {
+export type ChartConfiguration = {
   chartType: ChartType;
   xAxisColumn: string | null;
   yAxisColumns: string[];
   groupByColumn: string | null;
   stacked: boolean;
   sortByColumn: string | null;
   sortDirection: SortDirection;
-}
+};

As per coding guidelines.


283-292: Consider preserving user's sort configuration when changing X-axis.

When manually selecting a datetime X-axis column, the sort order is always overridden to that column ascending, even if the user had previously configured a different sort. This differs from the initial auto-select behavior (lines 142-150) which only sets sort when none is configured.

This could be surprising if a user has carefully configured their sort order. Consider only auto-setting the sort when config.sortByColumn is null:

♻️ Preserve existing sort configuration
 setValue={(value) => {
   const updates: Partial<ChartConfiguration> = { xAxisColumn: value || null };
   // Auto-set sort to x-axis ASC if selecting a datetime column
-  if (value) {
+  if (value && !config.sortByColumn) {
     const selectedCol = columns.find((c) => c.name === value);
     if (selectedCol && isDateTimeType(selectedCol.type)) {
       updates.sortByColumn = value;
       updates.sortDirection = "asc";
     }
   }
   updateConfig(updates);
 }}

469-489: TypeBadge could better handle nested type wrappers.

The current sequential processing (Nullable first, then LowCardinality) doesn't optimally handle nested types like LowCardinality(Nullable(String)). The LowCardinality wrapper would be stripped, leaving Nullable(String), but the Nullable indicator isn't applied since that check already passed.

For the initial implementation this is acceptable, but you might consider recursively stripping both wrappers and collecting indicators:

♻️ Enhanced nested type handling
 function TypeBadge({ type }: { type: string }) {
-  // Simplify type for display
   let displayType = type;
-  if (type.startsWith("Nullable(")) {
-    displayType = type.slice(9, -1) + "?";
-  }
-  if (type.startsWith("LowCardinality(")) {
-    displayType = type.slice(15, -1);
-  }
+  let suffix = "";
+  
+  // Strip wrappers recursively
+  while (displayType.startsWith("Nullable(") || displayType.startsWith("LowCardinality(")) {
+    if (displayType.startsWith("Nullable(")) {
+      displayType = displayType.slice(9, -1);
+      suffix = "?";
+    } else if (displayType.startsWith("LowCardinality(")) {
+      displayType = displayType.slice(15, -1);
+    }
+  }
+  
+  displayType = displayType + suffix;
 
   // Shorten long type names
   if (displayType.length > 12) {
     displayType = displayType.slice(0, 10) + "…";
   }
 
   return (
     <span className="rounded bg-charcoal-750 px-1 py-0.5 font-mono text-xxs text-text-dimmed">
       {displayType}
     </span>
   );
 }
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2ca9d15 and 5cef0b4.

📒 Files selected for processing (1)
  • apps/webapp/app/components/code/ChartConfigPanel.tsx
🔇 Additional comments (4)
apps/webapp/app/components/code/ChartConfigPanel.tsx (4)

42-75: LGTM! Type detection helpers are well-implemented.

The type detection functions correctly handle ClickHouse types including Nullable wrappers and parameterized types like DateTime64(3).


79-156: LGTM! Auto-selection logic is well-designed.

The use of columnsKey for change detection and refs to avoid dependency issues demonstrates thoughtful handling of React's reactivity model. The auto-selection logic appropriately defaults to datetime columns for the X-axis and only auto-sets sorting when none is configured.


313-416: LGTM! Selection controls and conditional toggles are well-structured.

The consistent use of the __none__ sentinel value for optional selections and conditional rendering of the Stacked and Sort Direction controls based on configuration state creates a clean, intuitive interface.


421-467: LGTM! Helper components are clean and straightforward.

Both ConfigField and SortDirectionToggle serve their purposes well as simple internal components. The styling and interaction patterns are consistent with the rest of the component.

@claude

claude bot commented Jan 8, 2026

PR Review: TRQL and the Query Page

This is an impressive and substantial PR introducing a new query language (TRQL) and a full-featured query page for trigger.dev. The implementation shows excellent attention to security, particularly around tenant isolation and SQL injection prevention. Below is my detailed review.


🔒 Security Analysis

Strengths:

  1. Robust Tenant Isolation - The createTenantGuard() method in printer.ts:1366-1424 correctly injects mandatory tenant guards (organization_id, project_id, environment_id) into all queries. The guards are properly ANDed with user WHERE clauses, preventing bypass via OR conditions.

  2. Parameterized Queries - String values are properly parameterized via PrinterContext.addValue() rather than interpolated directly into SQL. The escape functions in escape.ts correctly handle special characters.

  3. Function Allowlisting - Only functions defined in functions.ts are permitted. Dangerous functions like file() or system functions are correctly rejected (verified by security.test.ts:323-342).

  4. Comprehensive Security Tests - The security.test.ts file covers:

    • Cross-tenant access prevention (lines 66-127)
    • SQL injection via strings, comments, identifiers (lines 213-343)
    • Stacked query prevention (lines 344-356)
    • UNION-based injection (lines 358-371)
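
To make the guard pattern described above concrete, here is a minimal TypeScript sketch of ANDing tenant conditions around a user-supplied WHERE clause. The names (`TenantContext`, `withTenantGuard`) are hypothetical and the shape is illustrative, not the actual `createTenantGuard()` API in `printer.ts`:

```typescript
type TenantContext = {
  organizationId: string;
  projectId: string;
  environmentId: string;
};

// Combine mandatory tenant conditions with the user's WHERE clause.
// Because the guard is ANDed at the top level and both sides are
// parenthesized, a user-supplied "... OR 1=1" cannot widen the result
// set beyond the tenant's own rows.
function withTenantGuard(
  userWhere: string | undefined,
  params: Record<string, string>,
  tenant: TenantContext
): string {
  // Tenant values go through query parameters, never string interpolation.
  params["tenant_org"] = tenant.organizationId;
  params["tenant_project"] = tenant.projectId;
  params["tenant_env"] = tenant.environmentId;

  const guard =
    "organization_id = {tenant_org:String} AND " +
    "project_id = {tenant_project:String} AND " +
    "environment_id = {tenant_env:String}";

  // Parenthesize the user clause so operator precedence cannot bypass the guard.
  return userWhere ? `(${guard}) AND (${userWhere})` : guard;
}
```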

Potential Concerns:

  1. AI Query Service (aiQueryService.server.ts) - The AI-generated queries are validated before execution, which is good. However, consider adding rate limiting to prevent abuse of the AI endpoint.

  2. Feature Flag Check - The hasQueryAccess() function in the route file performs authorization correctly, but ensure the feature flag cannot be manipulated by users.


🐛 Potential Bugs or Issues

  1. Type Coercion in Chart Config - In QueryResultsChart.tsx, when processing numeric data for charts, ensure proper handling of nullable columns that might contain empty strings or null values to prevent NaN in visualizations.

  2. Error Boundary Missing - The main query route component doesn't appear to have an error boundary. If the TSQL compilation fails in an unexpected way, the UI might crash.

  3. History Query Race Condition - In the loader/action flow, if a user submits a query while the history is loading, there could be a mismatch. Consider optimistic updates or loading states.

  4. Parser Error Messages - When parseTSQLSelect() fails, the error message could be more user-friendly with line/column information (already present in the AST nodes as start/end properties).
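
As a sketch of how those start/end offsets could be surfaced, assuming a helper that maps a character offset back to a line/column pair (the function names here are hypothetical, not part of the TSQL package):

```typescript
// Convert a character offset into 1-based line/column coordinates.
function positionOf(source: string, offset: number): { line: number; column: number } {
  const before = source.slice(0, offset);
  const line = before.split("\n").length;           // 1-based line number
  const column = offset - before.lastIndexOf("\n"); // 1-based column number
  return { line, column };
}

// Render a user-friendly error message using the AST node's start offset.
function formatSyntaxError(source: string, message: string, offset: number): string {
  const { line, column } = positionOf(source, offset);
  return `Syntax error at line ${line}, column ${column}: ${message}`;
}
```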


⚡ Performance Considerations

  1. Query Limits - The maxRows limit in QuerySettings (defaulting based on context) is good. Consider also adding:

    • Query execution timeout (visible in the options but ensure ClickHouse settings are applied)
    • Maximum query complexity/depth limits
  2. ANTLR4 Parser Performance - ANTLR parsers can be memory-intensive. For very long queries, consider adding input size validation before parsing.

  3. Result Transformation - The transformResults() function iterates over all rows and columns. For large result sets, this could be a bottleneck. Consider lazy transformation or streaming if needed.

  4. Chart Rendering - For large datasets (>1000 rows), the chart component might struggle. Consider adding data sampling or aggregation on the server side.


📊 Test Coverage

Well Tested:

  • Parser and printer (parser.test.ts, printer.test.ts with 2244 lines!)
  • Security scenarios (security.test.ts with 601 lines)
  • Escape functions (escape.test.ts)
  • Integration tests against real ClickHouse (tsql.test.ts)
  • Frontend linting and completion (tsqlLinter.test.ts, tsqlCompletion.test.ts)

Consider Adding:

  • Tests for the AI query service edge cases
  • E2E tests for the query page UI flow
  • Tests for CSV/JSON export functionality
  • Tests for very large result sets
  • Tests for concurrent query execution

💻 Code Quality

Positive:

  1. TypeScript Port Quality - The HogQL port is well-documented with comments linking to the original Python implementation. The code maintains the same structure making it easier to sync updates.

  2. Separation of Concerns - Clean separation between:

    • Grammar/Parser (ANTLR generated)
    • AST types and visitors
    • Schema validation
    • SQL printing
    • ClickHouse execution
  3. Schema-Driven Design - The TableSchema type provides a single source of truth for:

    • Column names and types
    • Virtual columns with expressions
    • Value mappings (user-friendly ↔ internal)
    • Tenant column definitions
  4. Good Error Types - Custom error classes (QueryError, SyntaxError, NotImplementedError) with position information.

Suggestions:

  1. Magic Strings - Consider using constants for frequently used strings like column types ("String", "DateTime64", etc.) to prevent typos.

  2. Documentation - The NOTICE.md in the tsql package is great for attribution. Consider adding API documentation for the main entry points.


📁 Database Migration

The CustomerQuery table migration (20251220201230_add_customer_query) looks good:

  • Proper indexes for organization_id + createdAt queries
  • Foreign key cascades for cleanup
  • SET NULL on userId so query history is preserved if a user is deleted

🎨 Minor Suggestions

  1. Rename TSQL - "TSQL" might be confused with Microsoft's T-SQL. The PR description mentions "TRQL" (pronounced Treacle) - consider using this consistently in the codebase.

  2. Error Message Consistency - Some error messages use single quotes, others double quotes for identifiers. Standardize for consistency.

  3. Virtual Column Documentation - Add JSDoc comments explaining the difference between regular columns, virtual columns, and columns with whereTransform.


✅ Summary

This is a well-architected PR with strong security foundations. The TSQL/TRQL implementation follows best practices for query languages with proper sandboxing, parameterization, and tenant isolation. The test coverage is comprehensive, especially for security-critical paths.

Recommended before merge:

  1. Add rate limiting to the AI query endpoint
  2. Add an error boundary to the query page
  3. Consider the performance suggestions for large datasets

Great work on this feature! 🎉

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
apps/webapp/app/components/code/ChartConfigPanel.tsx (1)

45-78: Consider extracting type detection helpers to a shared utility.

The type detection functions (isNumericType, isDateTimeType, isStringType) are well-implemented. However, a similar isDateTimeType exists in internal-packages/tsql/src/query/printer.ts with slight differences. Your version includes the check type.startsWith("DateTime64(") which handles parameterized DateTime64 types like DateTime64(3), making it more comprehensive.

♻️ Consider creating a shared utility module

These type detection helpers could be extracted to a shared utility module (e.g., apps/webapp/app/utils/clickhouse-types.ts) to avoid duplication and ensure consistency across the codebase. This would also make it easier to maintain and extend type detection logic in the future.

apps/webapp/app/components/code/QueryResultsChart.tsx (1)

149-187: Consider using mode instead of minimum for interval detection.

The current implementation uses the minimum gap (line 166) as a heuristic for data interval, which works for many cases but could be affected by outliers (e.g., one data point that's very close to another).

♻️ Consider calculating the mode (most common gap) for more robust detection

Using the most frequently occurring gap would be more resilient to outliers:

// After line 162, add:
// Count gap frequencies
const gapCounts = new Map<number, number>();
for (const gap of gaps) {
  // Round to nearest minute to group similar gaps
  const rounded = Math.round(gap / MINUTE) * MINUTE;
  gapCounts.set(rounded, (gapCounts.get(rounded) || 0) + 1);
}

// Find the most common gap
let mostCommonGap = minGap;
let maxCount = 0;
for (const [gap, count] of gapCounts) {
  if (count > maxCount) {
    maxCount = count;
    mostCommonGap = gap;
  }
}

// Use mostCommonGap instead of minGap for snapping logic
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 5cef0b4 and 7e5a0cb.

📒 Files selected for processing (2)
  • apps/webapp/app/components/code/ChartConfigPanel.tsx
  • apps/webapp/app/components/code/QueryResultsChart.tsx
🔇 Additional comments (12)
apps/webapp/app/components/code/ChartConfigPanel.tsx (4)

82-105: Column categorization logic is sound.

The categorization correctly identifies numeric, datetime, and categorical columns. Note that datetime columns are intentionally included in both dateTimeColumns and categoricalColumns (line 94), which allows them to be used for grouping purposes while maintaining their datetime nature for axis selection.


118-159: Auto-selection logic works correctly.

The auto-select defaults effect properly handles initial column selection with good prioritization:

  • X-axis: prefers datetime, then categorical, then any column
  • Y-axis: selects first numeric column
  • Sort: auto-sets to X-axis ascending when it's a datetime column

The use of columnsKey to detect actual column changes and refs to access current config without dependency issues is well-implemented.


241-451: Chart configuration UI is well-structured.

The UI properly handles:

  • Chart type selection with visual feedback
  • Dynamic option lists based on column types
  • Conditional visibility for stacked toggle (only when grouping) and sort direction (only when sorting)
  • Auto-configuration when selecting datetime columns for X-axis

The immediate auto-sort feedback in the X-axis onChange handler (lines 294-305) provides better UX compared to waiting for the next useEffect run.


453-521: Helper components are clean and functional.

The helper components are well-designed:

  • ConfigField provides consistent field layout with proper handling of empty labels
  • SortDirectionToggle offers clear ascending/descending selection
  • TypeBadge intelligently simplifies ClickHouse type names for display (strips wrapper types, truncates long names)
apps/webapp/app/components/code/QueryResultsChart.tsx (8)

72-143: Time granularity detection and formatting are well-implemented.

The detectTimeGranularity function uses sensible thresholds to determine the appropriate time scale, and formatDateByGranularity provides contextually appropriate formatting for each granularity level. The use of native date formatting APIs ensures locale-aware display.


193-288: Time gap filling with aggregation is correctly implemented.

This complex function properly handles:

  • Limiting maximum points to prevent performance issues (lines 207-224)
  • Bucketing data points to align with time intervals (lines 226-252)
  • Filling missing time slots with zeros (lines 254-287)
  • Applying aggregation to multiple values in the same bucket (line 269)

The logic ensures charts display continuous time series data with visible gaps rendered as zeros rather than connecting distant points.
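
The bucketing-and-zero-filling idea can be sketched as follows, assuming evenly spaced intervals and sum aggregation; the component's actual gap-filling logic (with max-point limits and configurable aggregation) is more involved:

```typescript
type TimePoint = { t: number; value: number };

// Bucket points to interval boundaries, then emit one point per bucket
// from the first to the last, with missing slots filled as zeros.
function fillGapsWithZeros(points: TimePoint[], intervalMs: number): TimePoint[] {
  if (points.length === 0) return [];

  const byBucket = new Map<number, number>();
  for (const p of points) {
    const bucket = Math.floor(p.t / intervalMs) * intervalMs;
    byBucket.set(bucket, (byBucket.get(bucket) ?? 0) + p.value); // sum aggregation
  }

  const buckets = [...byBucket.keys()];
  const start = Math.min(...buckets);
  const end = Math.max(...buckets);

  const filled: TimePoint[] = [];
  for (let t = start; t <= end; t += intervalMs) {
    filled.push({ t, value: byBucket.get(t) ?? 0 }); // missing slots become 0
  }
  return filled;
}
```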


323-379: Tick generation creates well-aligned, human-friendly labels.

The generateTimeTicks function intelligently:

  • Selects appropriate intervals from predefined "nice" values
  • Aligns ticks to natural boundaries (midnight, hour marks)
  • Ensures a reasonable number of ticks (2-8)
  • Provides padding to avoid cutting off edge data points

408-437: Date parsing handles multiple formats robustly.

The tryParseDate function defensively handles:

  • Date objects with validation
  • ISO string formats (regex check for YYYY-MM-DD)
  • Numbers as milliseconds (prioritized)
  • Numbers as Unix timestamps in seconds (fallback)
  • Invalid dates return null rather than throwing

The year range check (1970-2100) prevents accepting unrealistic timestamp values.
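
The defensive parsing described above might look roughly like this; names, the exact year bounds, and the ms-before-seconds fallback order are illustrative rather than the component's actual code:

```typescript
// Parse a value that may be a Date, an ISO-ish string, a millisecond
// timestamp, or a Unix-seconds timestamp. Returns null instead of throwing.
function tryParseDate(value: unknown): Date | null {
  if (value instanceof Date) {
    return isNaN(value.getTime()) ? null : value;
  }

  if (typeof value === "string" && /^\d{4}-\d{2}-\d{2}/.test(value)) {
    const d = new Date(value);
    return isNaN(d.getTime()) ? null : d;
  }

  if (typeof value === "number") {
    // Try milliseconds first; accept only plausible years.
    const asMs = new Date(value);
    if (!isNaN(asMs.getTime()) && asMs.getFullYear() >= 1970 && asMs.getFullYear() <= 2100) {
      return asMs;
    }
    // Fall back to interpreting the number as Unix seconds.
    const asSeconds = new Date(value * 1000);
    if (!isNaN(asSeconds.getTime()) && asSeconds.getFullYear() >= 1970 && asSeconds.getFullYear() <= 2100) {
      return asSeconds;
    }
  }

  return null;
}
```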


453-638: Data transformation logic is comprehensive and correct.

This critical function handles multiple complex scenarios:

Date detection (lines 471-499):

  • Uses an 80% threshold to determine if the X-axis is date-based
  • Calculates time domain with 2% padding for visual spacing
  • Generates appropriate tick positions

Non-grouped mode (lines 507-568):

  • Groups rows by X-axis value to handle duplicates
  • Applies configured aggregation to multiple values at the same X coordinate
  • Fills time gaps for continuous time series

Grouped mode (lines 570-637):

  • Pivots data so each group value becomes a separate chart series
  • Applies aggregation within each group
  • Fills time gaps while maintaining group structure

The defensive checks (lines 520 and 586) skip invalid dates for date-based axes, preventing chart rendering errors.


640-708: Utility functions are robust and well-implemented.

The helper functions handle edge cases properly:

  • toNumber: Converts various types to numbers with safe fallback
  • aggregateValues: Correctly implements all aggregation types (sum, avg, count, min, max) with empty array handling
  • sortData: Comprehensive sorting logic that tries date comparison first (using __rawDate), then numeric, then string comparison, with proper null handling
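
For reference, an aggregation helper along the lines described might be sketched as below; this is a hedged approximation, and the actual `aggregateValues` in `QueryResultsChart.tsx` may differ in detail:

```typescript
type AggregationType = "sum" | "avg" | "count" | "min" | "max";

// Reduce a bucket of values to a single number. Empty arrays fall back
// to 0 rather than producing NaN or -Infinity.
function aggregateValues(values: number[], type: AggregationType): number {
  if (values.length === 0) return 0;
  switch (type) {
    case "sum":
      return values.reduce((a, b) => a + b, 0);
    case "avg":
      return values.reduce((a, b) => a + b, 0) / values.length;
    case "count":
      return values.length;
    case "min":
      return Math.min(...values);
    case "max":
      return Math.max(...values);
  }
}
```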

710-923: Chart rendering component handles multiple visualization scenarios correctly.

The QueryResultsChart component properly:

Data processing (lines 715-743):

  • Transforms data with memoization for performance
  • Forces chronological order for date-based axes (line 740), which is essential for time series visualization

Axis configuration (lines 752-853):

  • Creates dynamic formatters based on data characteristics
  • Uses different X-axis configurations for date-based (continuous numeric scale) vs categorical axes
  • Provides adaptive Y-axis formatting based on value ranges

Rendering (lines 855-920):

  • Conditionally renders BarChart, AreaChart (for stacked), or LineChart
  • Properly configures stacking, tooltips, and legends
  • Includes validation and empty states

The distinction between date-based continuous axes and categorical axes ensures proper visual representation of time series data.


925-986: Y-axis formatter and empty state are well-designed.

The createYAxisFormatter function provides intelligent formatting:

  • Uses K/M abbreviations for large numbers (≥1K, ≥1M)
  • Dynamically adjusts decimal places based on data range (more precision for smaller ranges)
  • Handles edge cases (infinite values, zero range)

The EmptyState component provides clear user feedback for various validation failures.
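
The K/M abbreviation behavior described above can be sketched as follows; this simplified version uses a fixed decimal precision rather than the range-adaptive precision the component implements:

```typescript
// Format a Y-axis value with K/M abbreviations for large numbers.
function formatAxisValue(value: number): string {
  if (!Number.isFinite(value)) return ""; // guard against Infinity/NaN
  const abs = Math.abs(value);
  if (abs >= 1_000_000) return (value / 1_000_000).toFixed(1) + "M";
  if (abs >= 1_000) return (value / 1_000).toFixed(1) + "K";
  return String(value);
}
```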
