
feat: add distributed session scaling E2E tests and related configurations #352

Merged

frontegg-david merged 5 commits into main from add-scaling-e2e-tests on Apr 9, 2026
Conversation

@frontegg-david
Contributor

frontegg-david commented on Apr 9, 2026

Summary by CodeRabbit

  • New Features

    • Transport security controls (CORS, bind-address, DNS rebinding protections), orphan-session scanner, Redis-backed transport bus, SSE affinity + relay for distributed deployments, and a distributed demo/test cluster.
  • Documentation

    • Added “Transport Security” guide and updated High-Availability docs with orphan scanner, SSE routing, and load‑balancer affinity guidance.
  • Tests

    • New distributed E2E and unit tests covering Redis, host validation, security audits, orphan scanning, and the Redis transport bus.

@coderabbitai
Contributor

coderabbitai bot commented Apr 9, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the `reviews.auto_review.auto_pause_after_reviewed_commits` setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough


Adds distributed/HA runtime and test infrastructure: a Redis-backed transport bus, an HaManager orphan-session scanner with takeover, distributed SSE affinity/relay, host/Origin validation middleware, a security audit with bind-address resolution, scope/transport wiring for distributed mode, a distributed E2E harness, and testcontainers Redis support. Also includes broad typing/import cleanups across tool implementations and docs.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Distributed E2E app & tooling**<br>`apps/e2e/demo-e2e-distributed/src/main.ts`, `apps/e2e/demo-e2e-distributed/src/apps/distributed-app/index.ts`, `apps/e2e/demo-e2e-distributed/src/apps/distributed-app/tools/echo.tool.ts`, `apps/e2e/demo-e2e-distributed/src/apps/distributed-app/tools/node-info.tool.ts`, `apps/e2e/demo-e2e-distributed/jest.e2e.config.ts`, `apps/e2e/demo-e2e-distributed/project.json`, `apps/e2e/demo-e2e-distributed/tsconfig.*`, `apps/e2e/demo-e2e-distributed/webpack.config.js` | New demo distributed application, build/test configs, tools, and entrypoint wired to Redis/env for multi-node runs. |
| **Distributed E2E tests & containers**<br>`apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts`, `libs/testing/src/containers/redis-container.ts`, `libs/testing/src/server/distributed-test-cluster.ts`, `libs/testing/src/server/port-registry.ts` | New multi-node E2E spec; Redis testcontainer helper; DistributedTestCluster for launching test nodes; added port range. |
| **Redis Transport Bus & tests**<br>`libs/sdk/src/transport/bus/redis-transport-bus.ts`, `libs/sdk/src/transport/bus/__tests__/redis-transport-bus.spec.ts`, `libs/sdk/src/transport/bus/index.ts` | New RedisTransportBus implementation with typed client/options, advertise/revoke/lookup/destroyRemote flows, and unit tests; re-exports added. |
| **HA manager & orphan scanner**<br>`libs/sdk/src/ha/ha-manager.ts`, `libs/sdk/src/ha/__tests__/ha-manager.spec.ts` | HaManager: new orphan-scanner types and implementation (periodic scan, Lua CAS takeover, handler callback), lifecycle controls, and tests. |
| **Scope & HA/transport wiring**<br>`libs/sdk/src/common/entries/scope.entry.ts`, `libs/sdk/src/scope/scope.instance.ts` | Scope adds an optional haManager; creates HaManager before TransportService in distributed mode, injects RedisTransportBus, starts the orphan scanner, and adjusts event-store auto-enable logic. |
| **Transport registry, SSE flow & tests**<br>`libs/sdk/src/transport/transport.registry.ts`, `libs/sdk/src/transport/flows/handle.sse.flow.ts`, `libs/sdk/src/transport/__tests__/transport.registry.spec.ts` | TransportService now accepts an optional TransportBus, enabling distributed mode; session persistence/lookup extended to SSE; the SSE handler sets a machine-id cookie/header for affinity and relays messages to the owning pod via the HA channel when needed; tests updated. |
| **Host validation middleware & tests**<br>`libs/sdk/src/server/middleware/host-validation.middleware.ts`, `libs/sdk/src/server/middleware/__tests__/host-validation.middleware.spec.ts` | New host/Origin validation middleware (DNS rebinding protection) with allowlists and 403 responses; comprehensive tests added. |
| **Security audit, bind resolution & server integration**<br>`libs/sdk/src/server/security/security-audit.ts`, `libs/sdk/src/server/security/__tests__/security-audit.spec.ts`, `libs/sdk/src/server/server.instance.ts` | New security-audit utilities (auditSecurityDefaults, logSecurityFindings, resolveBindAddress); the server runs audits at start, logs findings, and uses the resolved bindAddress for host.start. |
| **Host adapters / signatures / Express changes**<br>`libs/sdk/src/common/interfaces/server.interface.ts`, `libs/sdk/src/server/adapters/base.host.adapter.ts`, `libs/sdk/src/server/adapters/express.host.adapter.ts` | Start signatures extended to accept bindAddress; the Express adapter accepts security options, registers the host-validation middleware when enabled, and listens on the resolved host. |
| **HTTP security types & schema**<br>`libs/sdk/src/common/types/options/http/interfaces.ts`, `libs/sdk/src/common/types/options/http/schema.ts`, `libs/sdk/src/common/types/options/http/index.ts` | Introduced a SecurityOptions type and added optional security to HttpOptionsInterface and the Zod schema. |
| **Transport constructor signature**<br>`libs/sdk/src/transport/transport.registry.ts` | TransportService constructor signature updated to accept an optional `bus?: TransportBus` (enables distributed mode). |
| **Testing infra & dev deps**<br>`package.json`, many `libs/testing` files | Added testcontainers devDependency; added the Redis container helper and distributed-cluster test utilities. |
| **Wide tool typing & import cleanup**<br>many `apps/**/tools/*.tool.ts`, `libs/**/tools/*.ts`, docs/examples (see diff) | Removed explicit `ToolContext<...>` generics across many tool implementations and examples; reordered/type-only imports; typing cleanup only, no runtime changes. |
| **Docs & navigation**<br>`docs/frontmcp/deployment/transport-security.mdx`, `docs/frontmcp/deployment/high-availability.mdx`, `docs/docs.json` | Added the Transport Security doc; updated the HA doc with orphan-scanner and SSE routing details; inserted a nav entry. |
| **Misc tests & small fixes**<br>`libs/sdk/src/server/__tests__/server.instance.spec.ts`, `libs/sdk/src/transport/__tests__/transport.registry.spec.ts`, others | Adjusted tests to type-only imports and expectation fixes aligned with new behavior (CORS adapter args, protocol coverage). |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant HaMgr as HaManager
    participant Scanner as Orphan Scanner
    participant Redis as Redis
    participant Handler as onOrphan Handler

    HaMgr->>Scanner: startOrphanScanner(options)
    Scanner->>Redis: SCAN / fetch session keys by prefix
    Redis-->>Scanner: list of session keys
    loop per session key
        Scanner->>Redis: HGET session key -> JSON
        Redis-->>Scanner: session JSON (owner, protocol, ...)
        alt owner not alive and protocol recreatable
            Scanner->>Redis: EVAL Lua CAS to claim ownership
            Redis-->>Scanner: claimed / not claimed
            alt claimed
                Scanner->>Handler: onOrphan(sessionId, previousNodeId)
            end
        else skip
        end
    end
```
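The compare-and-set takeover in the diagram can be sketched with a small Lua script run via EVAL. The script body, `claimSession`, and the `EvalClient` interface below are illustrative assumptions for this sketch, not the SDK's actual API.

```typescript
// Hypothetical sketch of the Lua CAS takeover from the diagram: a scanner
// claims a session only if its stored owner still matches the dead node it
// observed, so two competing scanners cannot both win the same session.
const CLAIM_SCRIPT = `
local current = redis.call('HGET', KEYS[1], 'nodeId')
if current == ARGV[1] then
  redis.call('HSET', KEYS[1], 'nodeId', ARGV[2])
  return 1
end
return 0
`;

// Minimal client surface this sketch assumes (EVAL with keys/args).
interface EvalClient {
  eval(script: string, keys: string[], args: string[]): Promise<number>;
}

async function claimSession(
  redis: EvalClient,
  sessionKey: string,
  deadNodeId: string,
  myNodeId: string,
): Promise<boolean> {
  // 1 means the script observed the expected owner and swapped it atomically.
  return (await redis.eval(CLAIM_SCRIPT, [sessionKey], [deadNodeId, myNodeId])) === 1;
}
```

Because the compare and the write happen inside one script, Redis's single-threaded execution makes the claim atomic without WATCH/MULTI.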
```mermaid
sequenceDiagram
    participant Client
    participant WrongNode as Node A
    participant Redis as Redis
    participant Owner as Node B

    Client->>WrongNode: POST /message (session token)
    WrongNode->>WrongNode: check in-memory transport -> not found
    WrongNode->>Redis: lookup bus key for session
    Redis-->>WrongNode: { nodeId: "node-1", channel: "ha:node-1" }
    WrongNode->>Redis: PUBLISH channel { method: "sse:relay-message", payload }
    Redis-->>Owner: deliver message on HA channel
    Owner->>Owner: forward to active SSE connection
    WrongNode-->>Client: 202 Accepted
```
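The relay path above can be sketched in a few lines: when the session is not local, look up the owner in the bus and publish the message on its HA channel. `relayIfRemote`, `BusClient`, and the `bus:session:` key prefix are hypothetical names for this sketch, not the SDK's real identifiers.

```typescript
// Sketch of Node A's decision in the diagram, under assumed names:
// serve locally, relay to the owning node, or report the session unknown.
interface BusClient {
  get(key: string): Promise<string | null>;
  publish(channel: string, message: string): Promise<void>;
}

async function relayIfRemote(
  bus: BusClient,
  sessionId: string,
  payload: unknown,
  localSessions: Set<string>,
): Promise<'local' | 'relayed' | 'unknown'> {
  if (localSessions.has(sessionId)) return 'local';
  // Bus lookup: which node advertised ownership of this session?
  const raw = await bus.get(`bus:session:${sessionId}`);
  if (!raw) return 'unknown';
  const { channel } = JSON.parse(raw) as { nodeId: string; channel: string };
  // Relay the message to the owner over its HA channel, then 202 to the client.
  await bus.publish(channel, JSON.stringify({ method: 'sse:relay-message', payload }));
  return 'relayed';
}
```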
```mermaid
sequenceDiagram
    participant Server as FrontMCP Server
    participant Audit as Security Audit
    participant Express as ExpressHostAdapter
    participant HostVal as Host Validation Middleware

    Server->>Audit: resolveBindAddress(security, deploymentMode)
    Audit-->>Server: bindAddress
    Server->>Audit: auditSecurityDefaults(config, isProduction)
    Audit-->>Server: findings
    Server->>Audit: logSecurityFindings(findings, logger)
    Server->>Express: new ExpressHostAdapter({ cors, security })
    alt dnsRebindingProtection enabled
        Express->>HostVal: createHostValidationMiddleware(options)
        Express->>Express: app.use(hostValidationMiddleware)
    end
    Server->>Express: host.start(port, bindAddress)
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

Poem

🐰 I hopped through keys of Redis gold,

Two nodes danced, their cookies bold,
Orphans found a gentle claim,
Headers, hosts and audits named,
A clustered carrot tale retold.


Contributor

coderabbitai bot left a comment


Actionable comments posted: 14

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
libs/sdk/src/transport/transport.registry.ts (1)

479-493: ⚠️ Potential issue | 🟠 Major

Persist the real protocol for SSE sessions.

This branch now stores both streamable-http and sse, but storedSession.session.protocol is still hardcoded to 'streamable-http'. That makes downstream HA and recovery logic misclassify SSE sessions, including any protocol-aware filtering you add for orphan takeover.

Suggested fix

```diff
     if (sessionStore && (type === 'streamable-http' || type === 'sse')) {
       const storedSession: StoredSession = {
         session: {
           id: sessionId,
           authorizationId: key.tokenHash,
-          protocol: 'streamable-http',
+          protocol: type,
           createdAt: Date.now(),
           nodeId: getMachineId(),
         },
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/transport/transport.registry.ts` around lines 479 - 493, The
storedSession is incorrectly hardcoding session.protocol to 'streamable-http'
even when type === 'sse'; update the creation of StoredSession in the block
inside transport.registry.ts so storedSession.session.protocol uses the actual
protocol variable (e.g., protocol: type or a conditional mapping when type is
'sse' vs 'streamable-http') so SSE sessions are persisted with protocol 'sse'
and downstream HA/recovery and any protocol-aware filtering (refer to the
storedSession object and session.session.protocol assignment) will classify
sessions correctly.
libs/sdk/src/scope/scope.instance.ts (1)

183-199: ⚠️ Potential issue | 🟠 Major

Stop the HA manager if scope initialization fails.

haManager.start() happens before the rest of initialize() finishes. If any later step rejects, this process keeps publishing heartbeats even though the scope never became ready, so other pods will treat it as healthy and avoid taking over its sessions. Wrap the remaining initialization in a cleanup path that stops HA on failure, or defer start() until the scope is fully ready.
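The cleanup-path option can be sketched as a small wrapper; `HaManagerLike`, `initializeWithHaCleanup`, and `initializeRest` are illustrative names for this sketch, not the scope's actual methods.

```typescript
// Sketch of the suggested fix: if any later initialization step rejects,
// stop the HA manager so heartbeats do not keep advertising a scope that
// never became ready, then rethrow the original failure.
interface HaManagerLike {
  start(): void;
  stop(): Promise<void>;
}

async function initializeWithHaCleanup(
  haManager: HaManagerLike,
  initializeRest: () => Promise<void>,
): Promise<void> {
  haManager.start();
  try {
    await initializeRest();
  } catch (err) {
    // Best-effort teardown; the original error is what callers should see.
    await haManager.stop().catch(() => undefined);
    throw err;
  }
}
```

The alternative the comment mentions, deferring `start()` until the scope is fully ready, avoids the try/catch entirely at the cost of a slightly later first heartbeat.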

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/scope/scope.instance.ts` around lines 183 - 199, The HA manager
is started early in initialize() via HaManager.create(...) and
this.haManager.start(), which can leave heartbeats running if later
initialization fails; modify initialize() to either delay calling
this.haManager.start() until after all remaining initialization steps complete
and the scope is ready, or add a cleanup path that calls await
this.haManager.stop() (or HaManager.stop) in the initialize() catch/finally when
any later step rejects; ensure references to this.haManager, HaManager.create,
this.haManager.start and this.haManager.stop are used so the manager is either
started only after readiness or reliably stopped on initialization failure.
🧹 Nitpick comments (7)
libs/sdk/src/server/middleware/__tests__/host-validation.middleware.spec.ts (1)

62-88: Consider adding test for valid host with invalid origin.

The combined tests cover "invalid host even if origin is valid", but there's no explicit test for "valid host but invalid origin" in the combined section.

💡 Additional test case

```typescript
    it('rejects when origin is invalid (even if host is valid)', () => {
      const next = jest.fn();
      const res = createMockRes();
      middleware(createMockReq({ host: 'localhost:3001', origin: 'https://evil.com' }), res as ServerResponse, next);
      expect(next).not.toHaveBeenCalled();
      expect(res.statusCode).toBe(403);
      expect(res.body).toEqual(expect.objectContaining({ message: 'Invalid Origin header' }));
    });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/server/middleware/__tests__/host-validation.middleware.spec.ts`
around lines 62 - 88, Add a test that asserts the middleware created by
createHostValidationMiddleware rejects requests where the host is allowed but
the Origin header is invalid: create a test (e.g., using createMockReq with
host: 'localhost:3001' and origin: 'https://evil.com'), call the middleware with
a mocked response (createMockRes) and next jest.fn(), then assert next was not
called, res.statusCode is 403 and res.body contains { message: 'Invalid Origin
header' }; place this alongside the existing combined tests to cover the “valid
host but invalid origin” case.
libs/sdk/src/server/middleware/host-validation.middleware.ts (2)

51-60: Same array handling consideration for Origin header.

Similar to the Host header, apply consistent handling for the Origin header.

🛡️ Suggested defensive handling

```diff
     // Validate Origin header (only if present and allowedOrigins configured)
     if (allowedOriginsSet) {
-      const origin = req.headers?.['origin'] as string | undefined;
-      if (origin && !allowedOriginsSet.has(origin)) {
+      const originRaw = req.headers?.['origin'];
+      const origin = Array.isArray(originRaw) ? originRaw[0] : originRaw;
+      if (origin && !allowedOriginsSet.has(origin)) {
         res.status(403).json({
           error: 'Forbidden',
           message: 'Invalid Origin header',
         });
         return;
       }
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/server/middleware/host-validation.middleware.ts` around lines 51
- 60, The Origin header handling must mirror the Host logic by defensively
handling array values: in the host-validation.middleware (look for
allowedOriginsSet and the origin retrieval via req.headers['origin']) coerce the
header to a single string before checking the set (use Array.isArray to pick the
first element or undefined if empty), then perform the
allowedOriginsSet.has(origin) check and return the 403 JSON as before; ensure
you treat undefined/empty arrays the same as a missing header so the behavior
matches the Host header handling.

39-48: Handle potential array-valued Host header.

The host header could theoretically be an array if multiple values are sent. The current code casts to string and performs Set lookup, which would fail for array values.

🛡️ Suggested defensive handling

```diff
     // Validate Host header
     if (allowedHostsSet) {
       const host = req.headers?.['host'];
-      if (!host || !allowedHostsSet.has(host as string)) {
+      const hostValue = Array.isArray(host) ? host[0] : host;
+      if (!hostValue || !allowedHostsSet.has(hostValue)) {
         res.status(403).json({
           error: 'Forbidden',
           message: 'Invalid Host header',
         });
         return;
       }
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/server/middleware/host-validation.middleware.ts` around lines 39
- 48, The Host header value in host-validation.middleware.ts can be an array,
but the current block casts to string and calls allowedHostsSet.has(host as
string), which mis-handles array values; update the validation to first
normalize/check the header type: if req.headers['host'] is a string, check
allowedHostsSet.has(host); if it's an array, check whether any element in that
array is present in allowedHostsSet (or otherwise normalize/join appropriately)
before rejecting; adjust the conditional around allowedHostsSet and the early
return so array-valued headers are correctly validated.
libs/sdk/src/common/types/options/http/schema.ts (1)

60-60: Consider validating custom IP addresses.

The bindAddress field accepts any string beyond 'loopback' and 'all'. Invalid IP addresses like "not-an-ip" would pass schema validation but fail at runtime when binding.

💡 Optional: Add IP format validation

```diff
-      bindAddress: z.union([z.literal('loopback'), z.literal('all'), z.string()]).optional(),
+      bindAddress: z
+        .union([
+          z.literal('loopback'),
+          z.literal('all'),
+          z.string().regex(/^(\d{1,3}\.){3}\d{1,3}$|^[a-fA-F0-9:]+$/, 'Invalid IP address format'),
+        ])
+        .optional(),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/common/types/options/http/schema.ts` at line 60, The bindAddress
union currently allows any string (z.string()) which lets invalid IPs slip
through; update the schema for bindAddress to validate custom addresses by
replacing the plain z.string() with either a regex-based validator (e.g.,
z.string().regex(/^(?:\d{1,3}\.){3}\d{1,3}$|^\[[0-9a-f:]+\]$/) for IPv4/IPv6) or
a z.string().refine(val => isIP(val) || isHostname(val), { message: 'Invalid IP
or hostname' }) using an isIP helper; keep the existing z.literal('loopback')
and z.literal('all') branches intact so only the string branch is tightened.
libs/testing/src/server/port-registry.ts (1)

66-73: Comment ordering is misleading.

The "Distributed E2E tests (50440-50459)" comment appears before the "ESM E2E tests (50400-50439)" comment, but the port ranges are in reverse numerical order (distributed starts at 50440, ESM starts at 50400). This makes the file harder to navigate.

📝 Suggested reordering for clarity

```diff
-  // Distributed E2E tests (50440-50459)
-  'demo-e2e-distributed': { start: 50440, size: 20 },
-
   // ESM E2E tests (50400-50439)
   'esm-package-server': { start: 50400, size: 10 },
   'esm-package-server-hot-reload': { start: 50410, size: 10 },
   'esm-package-server-cli': { start: 50420, size: 10 },
   'demo-e2e-unix-socket': { start: 50430, size: 10 },
+
+  // Distributed E2E tests (50440-50459)
+  'demo-e2e-distributed': { start: 50440, size: 20 },
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/testing/src/server/port-registry.ts` around lines 66 - 73, The file's
comment order is misleading because the "Distributed E2E tests (50440-50459)"
comment and entry for 'demo-e2e-distributed' appear above the "ESM E2E tests
(50400-50439)" block even though the ESM range (50400) is numerically earlier;
reorder the sections so comment headers match ascending port ranges—move the
"ESM E2E tests (50400-50439)" comment and its entries ('esm-package-server',
'esm-package-server-hot-reload', 'esm-package-server-cli',
'demo-e2e-unix-socket') above the "Distributed E2E tests" comment/entry
('demo-e2e-distributed') to restore logical numeric order and improve
navigation.
libs/testing/src/containers/redis-container.ts (1)

43-45: Add stop timeout for graceful shutdown.

The stop() function calls container.stop() without a timeout. In CI environments, a hanging container could delay test cleanup. Testcontainers v11 supports a timeout parameter (in milliseconds) to ensure containers shut down gracefully, similar to how the container startup uses withStartupTimeout(30_000).

💡 Add stop timeout

```diff
    stop: async () => {
-     await container.stop();
+     await container.stop({ timeout: 10_000 });
    },
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/testing/src/containers/redis-container.ts` around lines 43 - 45, The
stop helper currently calls container.stop() with no timeout, which can hang in
CI; update the stop method in the exported object (the stop async function that
calls container.stop()) to pass a millisecond timeout (e.g., 30_000) to
container.stop(timeout) so the Redis testcontainer shuts down gracefully within
a bounded time and fails cleanly if it cannot.
libs/sdk/src/transport/__tests__/transport.registry.spec.ts (1)

313-314: Add a companion positive test for 'sse' stored-session lookup.

This negative case is good, but since 'sse' is now part of the Redis lookup path, add one explicit 'sse' success-path assertion to lock behavior.

💡 Suggested test addition

```diff
+    it('should query Redis for sse stored sessions', async () => {
+      mockRedisSessionStore.get.mockResolvedValue(null);
+      await service.getStoredSession('sse', 'test-token', 'session-123');
+      expect(mockRedisSessionStore.get).toHaveBeenCalledWith('session-123');
+    });
```

Based on learnings, test files should cover all relevant code paths, including newly introduced behavior branches.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/transport/__tests__/transport.registry.spec.ts` around lines 313
- 314, Add a positive test case that mirrors the existing negative case but
verifies the successful lookup path for 'sse': create/store a mock session entry
for provider type 'sse' (matching the shape used elsewhere in these tests), call
service.getStoredSession('sse', 'test-token', 'session-123'), and assert the
returned value equals the stored session; reference the test harness/service and
the getStoredSession method so the new spec verifies the 'sse' Redis lookup
branch succeeds.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts`:
- Around line 111-115: The test assumes tools.list() returns an object with a
tools property but it actually returns Tool[], so change the mapping to operate
on tools0 and tools1 directly (e.g., replace tools0.tools.map and
tools1.tools.map with tools0.map and tools1.map) when building toolNames0 and
toolNames1; keep the existing sort() and any type annotations in the test around
toolNames0/toolNames1 and the client calls (client0.tools.list(),
client1.tools.list()) so the assertions receive string[] as expected.
- Around line 49-124: Add an E2E test that creates a session on node-0 and then
reuses that same session against node-1 (and again after stopping node-0) to
exercise session handoff and recovery: use McpTestClient.create({ baseUrl:
nodes[0].info.baseUrl }).buildAndConnect() to open a session and capture its
session identifier or cookie from the client (or from the result of
client.tools.call('echo'...)), then create a new McpTestClient pointed at
nodes[1].info.baseUrl but configured to reuse the same session token/ID and
assert client.tools.call('echo'...) returns the same session-scoped response;
finally call cluster.stopNode(0) and repeat the request against node-1 to verify
recovery and that the session continues to work. Ensure to close all clients
(client.close) in finally blocks.

In `@libs/sdk/src/common/entries/scope.entry.ts`:
- Line 69: The new abstract member haManager on ScopeEntry is a breaking change
for external subclasses; change it to a concrete optional field instead of
abstract so existing subclasses aren't forced to implement it — replace the
abstract declaration (abstract haManager?: HaManager) with a normal optional
property (haManager?: HaManager) on the ScopeEntry class, keep its type
HaManager, and ensure any internal usages of this.haManager handle undefined
safely (or initialize it where needed) so the public API remains
backward-compatible.

In `@libs/sdk/src/common/interfaces/server.interface.ts`:
- Line 74: The abstract start signature allows passing a bindAddress for Unix
sockets which is ignored; update the API to enforce correct call shapes by
replacing the single method start(portOrSocketPath?: number | string,
bindAddress?: string) in server.interface.ts with either overloads (start(port:
number, bindAddress?: string): Promise<void> | void; start(socketPath: string):
Promise<void> | void;) or a discriminated options union (e.g., { port: number;
bindAddress?: string } | { socketPath: string }) and then update
implementations: adjust ExpressHostAdapter.start (express.host.adapter.ts) and
Server.instance.start (server.instance.ts) to accept the new signature and only
pass bindAddress to server.listen when using the numeric port branch, and
remove/omit bindAddress for socketPath branch so socket deployments no longer
silently drop the value. Ensure all callers compile against the new
overload/union and update tests/usages accordingly.

In `@libs/sdk/src/ha/__tests__/ha-manager.spec.ts`:
- Around line 168-180: The test currently only checks manager.isStarted() and
doesn’t verify the orphan-scanner was stopped; update the test using
HaManager.create, startOrphanScanner and stop so you seed a session key into the
createMockRedis() instance, pass a jest.fn() as onOrphan, start the scanner,
call manager.stop(), advance timers (or wait a bit) and then assert that the
onOrphan mock was not called after stop; ensure you reference the same
startOrphanScanner/onOrphan and HaManager.stop/isStarted symbols so the test
fails if the scanner interval still runs.
- Around line 215-248: The tests in ha-manager.spec.ts use real setTimeout
waits; replace both occurrences of await new Promise((r) => setTimeout(r, 200))
with the project fake-timer helper to make timing deterministic: call the helper
(via the repo helper useFakeTimers in
libs/sdk/src/__test-utils__/helpers/setup.helpers.ts) before starting timers,
then advance the fake clock by the required amount (e.g., heartbeatIntervalMs +
any grace/TTL) using advance() or advanceTimersByTimeAsync()/advance() so the
HaManager.start() and manager.startOrphanScanner(...) (the HaManager class and
its startOrphanScanner behavior) see the simulated time progress and the tests
assert deterministically; ensure teardown restores real timers if needed.

In `@libs/sdk/src/ha/ha-manager.ts`:
- Around line 119-123: The delayed setTimeout used to start the orphan scanner
can still fire after shutdown because its handle isn't stored or cleared; fix by
saving the timeout handle (e.g., this.scannerTimeout) when calling setTimeout,
call clearTimeout(this.scannerTimeout) during shutdown/stop, and ensure
intervals created in runOrphanScan are cleared via
clearInterval(this.scannerTimer) (and nullified) so the scanner cannot be
restarted after services stop; apply the same change to the other identical
block around lines 133-138 and preserve any unref calls on the saved handles.
- Line 225: The orphan scanner in ha-manager.ts uses HeartbeatRedisClient.keys()
which can block Redis; add a breaking-change-safe plan: extend the
HeartbeatRedisClient interface with an optional async scan(cursor: string,
pattern?: string, count?: number): Promise<{ cursor: string; keys: string[] }>,
update the HA orphan-scanning logic in the method that currently calls keys() to
prefer scan() when present (detect at runtime: if (typeof client.scan ===
'function') use cursor-based scanning, else fall back to keys()), and update SDK
docs/draft/docs/** to describe the new optional scan capability and migration
guidance. Ensure implementations of HeartbeatRedisClient (clients) are updated
to implement scan() where supported.
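The feature-detection plan above can be sketched as follows; the `scan` shape is the optional extension the comment proposes, and `KeysClient`/`listSessionKeys` are illustrative names, not the SDK's real interface.

```typescript
// Sketch of the proposed migration: prefer cursor-based SCAN when the
// client exposes it, falling back to KEYS for clients that predate the
// optional extension. SCAN pages the keyspace instead of blocking Redis.
interface KeysClient {
  keys(pattern: string): Promise<string[]>;
  // Optional extension proposed in the review, detected at runtime.
  scan?(cursor: string, pattern: string, count: number): Promise<{ cursor: string; keys: string[] }>;
}

async function listSessionKeys(client: KeysClient, pattern: string): Promise<string[]> {
  if (typeof client.scan !== 'function') {
    return client.keys(pattern); // legacy path; may block Redis on large keyspaces
  }
  const all: string[] = [];
  let cursor = '0';
  do {
    const page = await client.scan(cursor, pattern, 100);
    cursor = page.cursor;
    all.push(...page.keys);
  } while (cursor !== '0'); // Redis signals completion with cursor "0"
  return all;
}
```

SCAN may return a key more than once across pages, so callers that need exact sets should deduplicate the result.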

In `@libs/sdk/src/scope/scope.instance.ts`:
- Around line 214-228: The orphan scanner currently targets the shared transport
keyspace (see resolvedPrefix and the haManager.startOrphanScanner call) and can
claim SSE sessions; change the scanner configuration or handler so SSE sessions
are excluded from takeover: when starting the orphan scanner
(haManager.startOrphanScanner) add a filter or early-return that checks the
session metadata/protocol and skips any session whose protocol is "sse" (or
otherwise marks SSE), so the onOrphan callback does not rewrite nodeId for SSE
sessions; update any place that reads session data (the onOrphan handler or the
haManager filter option) to load and inspect protocol/type before claiming and
only proceed for recreatable protocols.

In `@libs/sdk/src/server/adapters/express.host.adapter.ts`:
- Around line 63-71: The current logic enables host validation when
security.strict is true but passes undefined allowedHosts/allowedOrigins to
createHostValidationMiddleware, resulting in a no-op validator; update the code
that builds the hostValidation (the call to createHostValidationMiddleware and
the surrounding conditional using securityOpts,
securityOpts.dnsRebindingProtection, and securityOpts.strict) to either (a)
derive a reasonable strict-mode allowlist (for example using the server's
configured host/hostname and localhost loopback addresses) and pass it as
allowedHosts/allowedOrigins, or (b) throw/fail-fast with a clear error when
security.strict is true but no allowedHosts/allowedOrigins are provided; ensure
the error path prevents calling this.app.use(hostValidation) with undefined
lists and clearly references createHostValidationMiddleware, hostValidation,
allowedHosts, and allowedOrigins so reviewers can find and verify the change.

In `@libs/sdk/src/server/security/security-audit.ts`:
- Around line 60-73: The audit currently raises CORS_PERMISSIVE_DEFAULT and
DNS_REBINDING_UNPROTECTED even when security.strict is true and the routine
later emits STRICT_MODE_ENABLED, causing contradictory findings; to fix, detect
strict mode via config.security?.strict (or whatever variable computes
strictModeEnabled) and either compute the effective CORS and
dnsRebindingProtection values after applying strict defaults and base warnings
on those effective settings, or suppress the raw-config warnings (do not push
findings with codes 'CORS_PERMISSIVE_DEFAULT', 'CORS_ORIGIN_TRUE', or
'DNS_REBINDING_UNPROTECTED') when strict mode is active and instead rely on the
existing STRICT_MODE_ENABLED finding; update the logic around findings.push in
the blocks that reference config.cors and config.dnsRebindingProtection so they
consult strict mode first and avoid emitting contradictory warnings.

In `@libs/sdk/src/transport/bus/redis-transport-bus.ts`:
- Around line 101-103: The revoke method currently does an unconditional DEL on
the key returned by busKey(key), which can remove a newer owner's registration;
change revoke to perform an atomic compare-and-delete against this.machineId:
read the value at this.busKey(key) (or use a Redis EVAL Lua script) and only
delete if the stored owner/nodeId equals this.machineId; use this.redis.eval
with a short Lua script or a WATCH/MULTI/EXEC transaction to compare the
existing value and delete atomically, referencing the revoke, busKey,
this.redis, and this.machineId symbols to locate and update the logic.

In `@libs/testing/src/server/distributed-test-cluster.ts`:
- Around line 84-99: reservePort() is called before TestServer.start(), but if
TestServer.start() throws the reserved port's release() is never invoked,
leaking the port; wrap the TestServer.start(...) call in a try/catch (or
try/finally) and call the captured release() when start fails (i.e., before
rethrowing the error) so the port is freed, leaving the successful-start path
unchanged where the node is added to this.nodes.
- Around line 66-75: The start method can leave started child nodes running if
startNode(i) throws; wrap the loop in try/catch and on error iterate the
already-started results to tear them down before rethrowing: after catching an
error from startNode, call the appropriate teardown for each started node (e.g.
this.stopNode(node) or await node.stop()) inside its own try/catch so teardown
errors are swallowed/logged but do not prevent cleaning up, then rethrow the
original error so callers still see the startup failure; update start to use
these semantics around startNode and results.
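Both cluster findings reduce to the same rollback-on-failure pattern, sketched below. `Node` and `startAll` are stand-ins for `TestServer` nodes and the cluster's `start()`; only the try/catch shape is the point.

```typescript
interface Node { stop(): Promise<void>; }

async function startAll(starters: Array<() => Promise<Node>>): Promise<Node[]> {
  const started: Node[] = [];
  try {
    for (const start of starters) {
      started.push(await start());
    }
    return started;
  } catch (err) {
    // Tear down whatever already started; swallow teardown errors so the
    // caller still sees the original startup failure.
    for (const node of started) {
      try { await node.stop(); } catch { /* best effort */ }
    }
    throw err;
  }
}
```

The same shape applies to the reserved port: capture `release()` before `TestServer.start(...)` and invoke it in the catch path before rethrowing.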

---

Outside diff comments:
In `@libs/sdk/src/scope/scope.instance.ts`:
- Around line 183-199: The HA manager is started early in initialize() via
HaManager.create(...) and this.haManager.start(), which can leave heartbeats
running if later initialization fails; modify initialize() to either delay
calling this.haManager.start() until after all remaining initialization steps
complete and the scope is ready, or add a cleanup path that calls await
this.haManager.stop() (or HaManager.stop) in the initialize() catch/finally when
any later step rejects; ensure references to this.haManager, HaManager.create,
this.haManager.start and this.haManager.stop are used so the manager is either
started only after readiness or reliably stopped on initialization failure.

In `@libs/sdk/src/transport/transport.registry.ts`:
- Around line 479-493: The storedSession is incorrectly hardcoding
session.protocol to 'streamable-http' even when type === 'sse'; update the
creation of StoredSession in the block inside transport.registry.ts so
storedSession.session.protocol uses the actual protocol variable (e.g.,
protocol: type or a conditional mapping when type is 'sse' vs 'streamable-http')
so SSE sessions are persisted with protocol 'sse' and downstream HA/recovery and
any protocol-aware filtering (refer to the storedSession object and
session.session.protocol assignment) will classify sessions correctly.

---

Nitpick comments:
In `@libs/sdk/src/common/types/options/http/schema.ts`:
- Line 60: The bindAddress union currently allows any string (z.string()) which
lets invalid IPs slip through; update the schema for bindAddress to validate
custom addresses by replacing the plain z.string() with either a regex-based
validator (e.g., z.string().regex(/^(?:\d{1,3}\.){3}\d{1,3}$|^\[[0-9a-f:]+\]$/)
for IPv4/IPv6) or a z.string().refine(val => isIP(val) || isHostname(val), {
message: 'Invalid IP or hostname' }) using an isIP helper; keep the existing
z.literal('loopback') and z.literal('all') branches intact so only the string
branch is tightened.
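The refine predicate could look like the following, using Node's `net.isIP` as the comment suggests. The hostname regex is a simplified illustration, not a full RFC 1123 validator, and `isValidBindAddress` is a hypothetical name for whatever the zod `.refine(...)` would call.

```typescript
import { isIP } from 'node:net';

// Simplified hostname check: dot-separated alphanumeric labels with hyphens.
const HOSTNAME_RE = /^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)*$/i;

function isValidBindAddress(value: string): boolean {
  if (value === 'loopback' || value === 'all') return true; // existing literal branches
  // isIP returns 4, 6, or 0 for non-IPs.
  return isIP(value) !== 0 || HOSTNAME_RE.test(value);
}
```

In the schema this would become `z.string().refine(isValidBindAddress, { message: 'Invalid IP or hostname' })` alongside the untouched `z.literal` branches.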

In `@libs/sdk/src/server/middleware/__tests__/host-validation.middleware.spec.ts`:
- Around line 62-88: Add a test that asserts the middleware created by
createHostValidationMiddleware rejects requests where the host is allowed but
the Origin header is invalid: create a test (e.g., using createMockReq with
host: 'localhost:3001' and origin: 'https://evil.com'), call the middleware with
a mocked response (createMockRes) and next jest.fn(), then assert next was not
called, res.statusCode is 403 and res.body contains { message: 'Invalid Origin
header' }; place this alongside the existing combined tests to cover the “valid
host but invalid origin” case.

In `@libs/sdk/src/server/middleware/host-validation.middleware.ts`:
- Around line 51-60: The Origin header handling must mirror the Host logic by
defensively handling array values: in the host-validation.middleware (look for
allowedOriginsSet and the origin retrieval via req.headers['origin']) coerce the
header to a single string before checking the set (use Array.isArray to pick the
first element or undefined if empty), then perform the
allowedOriginsSet.has(origin) check and return the 403 JSON as before; ensure
you treat undefined/empty arrays the same as a missing header so the behavior
matches the Host header handling.
- Around line 39-48: The Host header value in host-validation.middleware.ts can
be an array, but the current block casts to string and calls
allowedHostsSet.has(host as string), which mis-handles array values; update the
validation to first normalize/check the header type: if req.headers['host'] is a
string, check allowedHostsSet.has(host); if it's an array, check whether any
element in that array is present in allowedHostsSet (or otherwise normalize/join
appropriately) before rejecting; adjust the conditional around allowedHostsSet
and the early return so array-valued headers are correctly validated.
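Both Host and Origin findings call for the same normalization, sketched here. The `Headers` alias mirrors Node's `IncomingHttpHeaders` value type; `headerValue` and `isAllowed` are illustrative helper names, not the middleware's actual API.

```typescript
type Headers = Record<string, string | string[] | undefined>;

// Coerce a possibly-array header to a single string, treating an empty
// array the same as a missing header.
function headerValue(headers: Headers, name: string): string | undefined {
  const raw = headers[name];
  if (Array.isArray(raw)) return raw.length > 0 ? raw[0] : undefined;
  return raw;
}

function isAllowed(headers: Headers, name: string, allowed: Set<string>): boolean {
  const value = headerValue(headers, name);
  return value !== undefined && allowed.has(value);
}
```

The middleware then checks `isAllowed(req.headers, 'host', allowedHostsSet)` and likewise for `'origin'` before returning its 403 JSON.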

In `@libs/sdk/src/transport/__tests__/transport.registry.spec.ts`:
- Around line 313-314: Add a positive test case that mirrors the existing
negative case but verifies the successful lookup path for 'sse': create/store a
mock session entry for provider type 'sse' (matching the shape used elsewhere in
these tests), call service.getStoredSession('sse', 'test-token', 'session-123'),
and assert the returned value equals the stored session; reference the test
harness/service and the getStoredSession method so the new spec verifies the
'sse' Redis lookup branch succeeds.

In `@libs/testing/src/containers/redis-container.ts`:
- Around line 43-45: The stop helper currently calls container.stop() with no
timeout, which can hang in CI; update the stop method in the exported object
(the stop async function that calls container.stop()) to pass a millisecond
timeout (e.g., 30_000) to container.stop(timeout) so the Redis testcontainer
shuts down gracefully within a bounded time and fails cleanly if it cannot.

In `@libs/testing/src/server/port-registry.ts`:
- Around line 66-73: The file's comment order is misleading because the
"Distributed E2E tests (50440-50459)" comment and entry for
'demo-e2e-distributed' appear above the "ESM E2E tests (50400-50439)" block even
though the ESM range (50400) is numerically earlier; reorder the sections so
comment headers match ascending port ranges—move the "ESM E2E tests
(50400-50439)" comment and its entries ('esm-package-server',
'esm-package-server-hot-reload', 'esm-package-server-cli',
'demo-e2e-unix-socket') above the "Distributed E2E tests" comment/entry
('demo-e2e-distributed') to restore logical numeric order and improve
navigation.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 60df07b7-8698-4608-9f22-4c71d8520e21

📥 Commits

Reviewing files that changed from the base of the PR and between 1cf770f and 2d4ccba.

⛔ Files ignored due to path filters (1)
  • yarn.lock is excluded by !**/yarn.lock, !**/*.lock
📒 Files selected for processing (39)
  • apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts
  • apps/e2e/demo-e2e-distributed/jest.e2e.config.ts
  • apps/e2e/demo-e2e-distributed/project.json
  • apps/e2e/demo-e2e-distributed/src/apps/distributed-app/index.ts
  • apps/e2e/demo-e2e-distributed/src/apps/distributed-app/tools/echo.tool.ts
  • apps/e2e/demo-e2e-distributed/src/apps/distributed-app/tools/node-info.tool.ts
  • apps/e2e/demo-e2e-distributed/src/main.ts
  • apps/e2e/demo-e2e-distributed/tsconfig.app.json
  • apps/e2e/demo-e2e-distributed/tsconfig.json
  • apps/e2e/demo-e2e-distributed/webpack.config.js
  • docs/docs.json
  • docs/frontmcp/deployment/high-availability.mdx
  • docs/frontmcp/deployment/transport-security.mdx
  • libs/sdk/src/common/entries/scope.entry.ts
  • libs/sdk/src/common/interfaces/server.interface.ts
  • libs/sdk/src/common/types/options/http/index.ts
  • libs/sdk/src/common/types/options/http/interfaces.ts
  • libs/sdk/src/common/types/options/http/schema.ts
  • libs/sdk/src/ha/__tests__/ha-manager.spec.ts
  • libs/sdk/src/ha/ha-manager.ts
  • libs/sdk/src/scope/scope.instance.ts
  • libs/sdk/src/server/__tests__/server.instance.spec.ts
  • libs/sdk/src/server/adapters/base.host.adapter.ts
  • libs/sdk/src/server/adapters/express.host.adapter.ts
  • libs/sdk/src/server/middleware/__tests__/host-validation.middleware.spec.ts
  • libs/sdk/src/server/middleware/host-validation.middleware.ts
  • libs/sdk/src/server/security/__tests__/security-audit.spec.ts
  • libs/sdk/src/server/security/security-audit.ts
  • libs/sdk/src/server/server.instance.ts
  • libs/sdk/src/transport/__tests__/transport.registry.spec.ts
  • libs/sdk/src/transport/bus/__tests__/redis-transport-bus.spec.ts
  • libs/sdk/src/transport/bus/index.ts
  • libs/sdk/src/transport/bus/redis-transport-bus.ts
  • libs/sdk/src/transport/flows/handle.sse.flow.ts
  • libs/sdk/src/transport/transport.registry.ts
  • libs/testing/src/containers/redis-container.ts
  • libs/testing/src/server/distributed-test-cluster.ts
  • libs/testing/src/server/port-registry.ts
  • package.json

@coderabbitai coderabbitai bot left a comment

♻️ Duplicate comments (2)
apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts (2)

115-116: ⚠️ Potential issue | 🔴 Critical

tools.list() result is mapped with the wrong shape.

tools0.tools.map(...) / tools1.tools.map(...) will fail if list() returns Tool[]; map the arrays directly.

Suggested fix
-      const toolNames0 = tools0.tools.map((t: { name: string }) => t.name).sort();
-      const toolNames1 = tools1.tools.map((t: { name: string }) => t.name).sort();
+      const toolNames0 = tools0.map((t: { name: string }) => t.name).sort();
+      const toolNames1 = tools1.map((t: { name: string }) => t.name).sort();
#!/bin/bash
set -euo pipefail

# Verify `McpTestClient.tools.list()` declaration/implementation shape
fd -i 'mcp*test*client*.ts' . | xargs -r rg -n -C3 '\blist\s*\('

# Verify current call-site usage in this spec
rg -n -C3 '\btools0\.tools\.map\b|\btools1\.tools\.map\b|\.tools\.list\s*\(' apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts` around lines
115 - 116, The tests assume list() returns an array of Tool but the code uses
tools0.tools and tools1.tools; update the call sites in
session-scaling.e2e.spec.ts to map the arrays directly (use tools0.map(t =>
t.name).sort() and tools1.map(t => t.name).sort()) or, if the API should be an
object with a tools property, adjust McpTestClient.tools.list()
implementation/typing to return that shape; ensure the symbols tools0, tools1
and the McpTestClient.tools.list() declaration/implementation are consistent.

57-125: ⚠️ Potential issue | 🟠 Major

Missing cross-node session reuse/failover validation for the scaling objective.

These tests create independent clients per node but never reuse one session across nodes (or after node shutdown), so they don’t validate handoff/recovery behavior for distributed session scaling.

Based on learnings: Applies to **/*.spec.ts : Test all code paths including error cases, constructor validation, and error class instanceof checks.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts`:
- Around line 115-116: The tests assume list() returns an array of Tool but the
code uses tools0.tools and tools1.tools; update the call sites in
session-scaling.e2e.spec.ts to map the arrays directly (use tools0.map(t =>
t.name).sort() and tools1.map(t => t.name).sort()) or, if the API should be an
object with a tools property, adjust McpTestClient.tools.list()
implementation/typing to return that shape; ensure the symbols tools0, tools1
and the McpTestClient.tools.list() declaration/implementation are consistent.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 342edb17-7c1a-4498-a7e9-8462dd8ff4d2

📥 Commits

Reviewing files that changed from the base of the PR and between 2d4ccba and aca7803.

📒 Files selected for processing (4)
  • apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts
  • apps/e2e/demo-e2e-distributed/src/apps/distributed-app/index.ts
  • apps/e2e/demo-e2e-distributed/src/apps/distributed-app/tools/echo.tool.ts
  • apps/e2e/demo-e2e-distributed/src/apps/distributed-app/tools/node-info.tool.ts
✅ Files skipped from review due to trivial changes (1)
  • apps/e2e/demo-e2e-distributed/src/apps/distributed-app/index.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • apps/e2e/demo-e2e-distributed/src/apps/distributed-app/tools/node-info.tool.ts
  • apps/e2e/demo-e2e-distributed/src/apps/distributed-app/tools/echo.tool.ts

@github-actions

github-actions bot commented Apr 9, 2026

Performance Test Results

Status: ✅ All tests passed

Summary

| Project | Tests | Passed | Warnings | Failed | Leaks |
| --- | ---: | ---: | ---: | ---: | ---: |
| ✅ demo-e2e-agents | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-cache | 11 | 11 | 0 | 0 | 0 |
| ✅ demo-e2e-codecall | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-config | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-direct | 3 | 3 | 0 | 0 | 0 |
| ✅ demo-e2e-elicitation | 1 | 1 | 0 | 0 | 0 |
| ✅ demo-e2e-errors | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-hooks | 3 | 3 | 0 | 0 | 0 |
| ✅ demo-e2e-multiapp | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-notifications | 3 | 3 | 0 | 0 | 0 |
| ✅ demo-e2e-openapi | 2 | 2 | 0 | 0 | 0 |
| ✅ demo-e2e-providers | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-public | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-redis | 15 | 15 | 0 | 0 | 0 |
| ✅ demo-e2e-remember | 4 | 4 | 0 | 0 | 0 |
| ✅ demo-e2e-remote | 5 | 5 | 0 | 0 | 0 |
| ✅ demo-e2e-serverless | 2 | 2 | 0 | 0 | 0 |
| ✅ demo-e2e-skills | 15 | 15 | 0 | 0 | 0 |
| ✅ demo-e2e-standalone | 2 | 2 | 0 | 0 | 0 |
| ✅ demo-e2e-transport-recreation | 3 | 3 | 0 | 0 | 0 |
| ✅ demo-e2e-ui | 4 | 4 | 0 | 0 | 0 |

Total: 101 tests across 21 projects

📊 View full report in workflow run


Generated at: 2026-04-09T10:21:50.943Z
Commit: 5ccad6c2

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

♻️ Duplicate comments (2)
libs/sdk/src/ha/ha-manager.ts (1)

238-240: ⚠️ Potential issue | 🟠 Major

Prefer cursor-based scanning here instead of KEYS.

KEYS ${sessionKeyPrefix}* blocks Redis, and this scanner runs on every heartbeat interval. As session cardinality grows, this can become the HA bottleneck. Prefer SCAN when the client supports it, with KEYS only as a compatibility fallback.

#!/bin/bash
# Verify the orphan scanner still relies on KEYS and whether the HA Redis contract exposes SCAN.
rg -nP 'interface\s+HeartbeatRedisClient|\.keys\(|\.scan\(' libs/sdk/src/ha -C2
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/ha/ha-manager.ts` around lines 238 - 240, The current orphan
scanner uses this.redis.keys(`${sessionKeyPrefix}*`) which blocks Redis at
scale; update the scan logic in the HA manager (referencing sessionKeyPrefix,
sessionKeys and this.redis) to use a cursor-based SCAN loop (e.g.,
this.redis.scan or equivalent async iterator exposed by the Redis client) to
incrementally collect matching keys and only fall back to this.redis.keys when
SCAN is not available on the client; ensure the rest of the code still uses the
resulting sessionKeys set and preserve the existing
allowedProtocols/recreatableProtocols behavior from scannerOptions.
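The SCAN-with-fallback loop suggested above can be sketched as follows. `ScanClient` is a minimal stand-in for the `HeartbeatRedisClient` contract mentioned in the finding; the real client's `scan` signature may differ (this one follows the ioredis variadic style).

```typescript
interface ScanClient {
  scan?(cursor: string, matchToken: 'MATCH', pattern: string, countToken: 'COUNT', count: number): Promise<[string, string[]]>;
  keys(pattern: string): Promise<string[]>;
}

async function collectSessionKeys(redis: ScanClient, prefix: string): Promise<string[]> {
  const pattern = `${prefix}*`;
  const scan = redis.scan?.bind(redis);
  if (!scan) {
    return redis.keys(pattern); // compatibility fallback only
  }
  const keys: string[] = [];
  let cursor = '0';
  do {
    // Cursor-based iteration keeps each call bounded instead of O(keyspace).
    const [next, batch] = await scan(cursor, 'MATCH', pattern, 'COUNT', 100);
    keys.push(...batch);
    cursor = next;
  } while (cursor !== '0');
  return keys;
}
```

SCAN may return duplicates across a rehash, so callers that need set semantics should dedupe the result before acting on it.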
apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts (1)

57-71: ⚠️ Potential issue | 🟠 Major

This still doesn't exercise the session handoff path.

Every test opens a fresh client against the node it asserts, so the suite can pass even if Redis-backed session lookup/takeover is broken. Please add one case that creates a session on node 0, reuses that same session through node 1, and repeats after cluster.stopNode(0) to cover recovery.

Based on learnings, "Test all code paths including error cases, constructor validation, and error class instanceof checks".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts` around lines
57 - 71, Add a new e2e test that actually exercises session handoff: create a
session on node 0 using McpTestClient.create(...).buildAndConnect(), capture the
session identifier from the client (e.g., client.sessionId or from the initial
tool call response), then create a second client against node 1 by calling
McpTestClient.create({ baseUrl: nodes[1].info.baseUrl, sessionId: <captured>
}).buildAndConnect() and assert tool calls (client.tools.call('echo', ...))
succeed and return node-0 behavior; next call cluster.stopNode(0) and repeat the
tool call through the node-1 client to verify takeover and recovery still
returns the expected response; ensure both clients are closed in finally blocks.
🧹 Nitpick comments (2)
libs/testing/src/server/distributed-test-cluster.ts (1)

149-161: Wrap portRelease() in try/catch for consistent best-effort cleanup.

The server.stop() call is properly wrapped in try/catch, but portRelease() is not. If one node's port release throws, Promise.all will reject early and may not wait for other nodes to complete teardown, potentially leaving zombie processes.

♻️ Suggested fix
   async teardown(): Promise<void> {
     const stops = [...this.nodes.entries()].map(async ([_index, node]) => {
       try {
         await node.server.stop();
       } catch {
         // Best effort
       }
       if (node.portRelease) {
-        await node.portRelease();
+        try {
+          await node.portRelease();
+        } catch {
+          // Best effort
+        }
       }
     });
     await Promise.all(stops);
     this.nodes.clear();
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/testing/src/server/distributed-test-cluster.ts` around lines 149 - 161,
In the teardown() method of distributed-test-cluster.ts, make portRelease() a
best-effort call like server.stop(): wrap the await node.portRelease() in its
own try/catch so any exception is swallowed (or logged) without rejecting the
overall Promise.all; this ensures Promise.all(stops) waits for every node's
cleanup. Update the async callback that maps over this.nodes.entries() to catch
errors from node.portRelease() (and optionally log them) while preserving the
existing try/catch around node.server.stop().
apps/e2e/demo-e2e-channels/src/apps/channels/tools/send-channel-notification.tool.ts (1)

16-19: Consider tightening the method contract after the ToolContext generic removal.

Using schema-derived Input and an explicit Output return union here reduces drift risk between runtime schema and TypeScript types.

Suggested refactor
+type Input = z.infer<z.ZodObject<typeof inputSchema>>;
+type Output = { sent: true; channelName: string } | { sent: false; reason: string };
+
 export default class SendChannelNotificationTool extends ToolContext {
-  async execute(input: { channelName: string; content: string; meta?: Record<string, string> }) {
+  async execute(input: Input): Promise<Output> {
     const scope = this.scope as unknown as {
       channelNotifications?: { send: (name: string, content: string, meta?: Record<string, string>) => void };
     };

As per coding guidelines **/*.{ts,tsx}: "Follow existing TypeScript patterns and keep strict typing".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/e2e/demo-e2e-channels/src/apps/channels/tools/send-channel-notification.tool.ts`
around lines 16 - 19, The execute method on SendChannelNotificationTool
currently uses a loose inline input type and an any-ish scope cast; tighten the
contract by introducing a concrete Input interface (e.g., Input { channelName:
string; content: string; meta?: Record<string,string> }) and an explicit Output
union type for possible returns, then update the class to extend
ToolContext<Input, Output> (or otherwise parameterize ToolContext) and change
execute to use those types; also narrow the scope cast to the specific interface
that exposes channelNotifications.send so TypeScript enforces the shape when
calling SendChannelNotificationTool.execute and when accessing
scope.channelNotifications.send.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/e2e/demo-e2e-direct/src/apps/notes/tools/create-note.tool.ts`:
- Around line 1-6: The import of randomUUID should come from the project's
helper module rather than Node's crypto: replace the top-level import of
randomUUID from 'crypto' with an import from '@frontmcp/utils' in
create-note.tool.ts so usages of randomUUID in this file (e.g., in the Tool
implementation) use the utils UUID helper per guidelines.

In
`@apps/e2e/demo-e2e-transport-recreation/src/apps/transport-test/tools/increment-counter.tool.ts`:
- Line 25: The class declaration removed generic parameters from ToolContext
causing loss of schema type safety; update the declaration of
IncrementCounterTool to extend ToolContext with the specific schema generics
(i.e., ToolContext<typeof inputSchema, typeof outputSchema>) so the `@Tool`()
decorator schemas are type-coupled to the execute() method signature and prevent
type drift.

In `@apps/e2e/demo-e2e-unix-socket/src/apps/notes/tools/create-note.tool.ts`:
- Around line 1-6: The import of randomUUID is using Node's crypto; replace the
direct Node import with the centralized utility by changing the import source
for the randomUUID symbol to '@frontmcp/utils' (i.e., update the top-level
import that currently reads "import { randomUUID } from 'crypto';" to import
from '@frontmcp/utils' while leaving the other imports and usages of randomUUID
in this file (create-note.tool.ts) unchanged).

In `@libs/sdk/src/transport/bus/redis-transport-bus.ts`:
- Around line 178-181: The code unconditionally calls this.redis.del(redisKey)
immediately after await this.redis.publish(channel, message), which can race and
remove a live bus registration; change this to either let the owning node revoke
the key on successful destroy or perform an owner-scoped compare-and-set before
deleting (the same pattern used in revoke()), i.e., replace the unconditional
DEL with a guarded delete that verifies ownership (using the same owner ID/CAS
check used in revoke()) or remove the DEL here and rely on the owner to call
revoke() after successful destroy; adjust logic around publish(channel,
message), redisKey and revoke() accordingly.

---

Duplicate comments:
In `@apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts`:
- Around line 57-71: Add a new e2e test that actually exercises session handoff:
create a session on node 0 using McpTestClient.create(...).buildAndConnect(),
capture the session identifier from the client (e.g., client.sessionId or from
the initial tool call response), then create a second client against node 1 by
calling McpTestClient.create({ baseUrl: nodes[1].info.baseUrl, sessionId:
<captured> }).buildAndConnect() and assert tool calls (client.tools.call('echo',
...)) succeed and return node-0 behavior; next call cluster.stopNode(0) and
repeat the tool call through the node-1 client to verify takeover and recovery
still returns the expected response; ensure both clients are closed in finally
blocks.

In `@libs/sdk/src/ha/ha-manager.ts`:
- Around line 238-240: The current orphan scanner uses
this.redis.keys(`${sessionKeyPrefix}*`) which blocks Redis at scale; update the
scan logic in the HA manager (referencing sessionKeyPrefix, sessionKeys and
this.redis) to use a cursor-based SCAN loop (e.g., this.redis.scan or equivalent
async iterator exposed by the Redis client) to incrementally collect matching
keys and only fall back to this.redis.keys when SCAN is not available on the
client; ensure the rest of the code still uses the resulting sessionKeys set and
preserve the existing allowedProtocols/recreatableProtocols behavior from
scannerOptions.

---

Nitpick comments:
In
`@apps/e2e/demo-e2e-channels/src/apps/channels/tools/send-channel-notification.tool.ts`:
- Around line 16-19: The execute method on SendChannelNotificationTool currently
uses a loose inline input type and an any-ish scope cast; tighten the contract
by introducing a concrete Input interface (e.g., Input { channelName: string;
content: string; meta?: Record<string,string> }) and an explicit Output union
type for possible returns, then update the class to extend ToolContext<Input,
Output> (or otherwise parameterize ToolContext) and change execute to use those
types; also narrow the scope cast to the specific interface that exposes
channelNotifications.send so TypeScript enforces the shape when calling
SendChannelNotificationTool.execute and when accessing
scope.channelNotifications.send.

In `@libs/testing/src/server/distributed-test-cluster.ts`:
- Around line 149-161: In the teardown() method of distributed-test-cluster.ts,
make portRelease() a best-effort call like server.stop(): wrap the await
node.portRelease() in its own try/catch so any exception is swallowed (or
logged) without rejecting the overall Promise.all; this ensures
Promise.all(stops) waits for every node's cleanup. Update the async callback
that maps over this.nodes.entries() to catch errors from node.portRelease() (and
optionally log them) while preserving the existing try/catch around
node.server.stop().

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: edc75a6b-5cea-4470-96db-24521a5f2a29

📥 Commits

Reviewing files that changed from the base of the PR and between aca7803 and 1fdddcc.

📒 Files selected for processing (167)
  • CLAUDE.md
  • apps/e2e/demo-e2e-authorities/src/apps/authorities/tools/admin-only.tool.ts
  • apps/e2e/demo-e2e-authorities/src/apps/authorities/tools/combinator.tool.ts
  • apps/e2e/demo-e2e-authorities/src/apps/authorities/tools/editor-or-admin.tool.ts
  • apps/e2e/demo-e2e-authorities/src/apps/authorities/tools/permissions.tool.ts
  • apps/e2e/demo-e2e-authorities/src/apps/authorities/tools/profile-admin.tool.ts
  • apps/e2e/demo-e2e-authorities/src/apps/authorities/tools/profile-multi.tool.ts
  • apps/e2e/demo-e2e-authorities/src/apps/authorities/tools/public.tool.ts
  • apps/e2e/demo-e2e-authorities/src/apps/authorities/tools/tenant-scoped.tool.ts
  • apps/e2e/demo-e2e-cache/src/apps/compute/tools/expensive-operation.tool.ts
  • apps/e2e/demo-e2e-cache/src/apps/compute/tools/get-cache-stats.tool.ts
  • apps/e2e/demo-e2e-cache/src/apps/compute/tools/non-cached.tool.ts
  • apps/e2e/demo-e2e-cache/src/apps/compute/tools/reset-stats.tool.ts
  • apps/e2e/demo-e2e-channels/src/apps/channels/channels/messaging-service.channel.ts
  • apps/e2e/demo-e2e-channels/src/apps/channels/tools/emit-app-event.tool.ts
  • apps/e2e/demo-e2e-channels/src/apps/channels/tools/file-watcher-tools.tool.ts
  • apps/e2e/demo-e2e-channels/src/apps/channels/tools/replay-tools.tool.ts
  • apps/e2e/demo-e2e-channels/src/apps/channels/tools/send-channel-notification.tool.ts
  • apps/e2e/demo-e2e-channels/src/apps/channels/tools/session-tools.tool.ts
  • apps/e2e/demo-e2e-channels/src/apps/channels/tools/simulate-incoming.tool.ts
  • apps/e2e/demo-e2e-cimd/src/apps/notes/tools/create-note.tool.ts
  • apps/e2e/demo-e2e-cimd/src/apps/notes/tools/list-notes.tool.ts
  • apps/e2e/demo-e2e-codecall/src/apps/crm/tools/activities-list.tool.ts
  • apps/e2e/demo-e2e-codecall/src/apps/crm/tools/activities-log.tool.ts
  • apps/e2e/demo-e2e-codecall/src/apps/crm/tools/activities-stats.tool.ts
  • apps/e2e/demo-e2e-codecall/src/apps/crm/tools/crm-reset.tool.ts
  • apps/e2e/demo-e2e-codecall/src/apps/crm/tools/users-create.tool.ts
  • apps/e2e/demo-e2e-codecall/src/apps/crm/tools/users-delete.tool.ts
  • apps/e2e/demo-e2e-codecall/src/apps/crm/tools/users-get.tool.ts
  • apps/e2e/demo-e2e-codecall/src/apps/crm/tools/users-list.tool.ts
  • apps/e2e/demo-e2e-codecall/src/apps/crm/tools/users-update.tool.ts
  • apps/e2e/demo-e2e-config/src/apps/config/tools/check-config.tool.ts
  • apps/e2e/demo-e2e-config/src/apps/config/tools/get-all-config.tool.ts
  • apps/e2e/demo-e2e-config/src/apps/config/tools/get-config.tool.ts
  • apps/e2e/demo-e2e-config/src/apps/config/tools/get-required-config.tool.ts
  • apps/e2e/demo-e2e-config/src/apps/config/tools/test-config-fallback.tool.ts
  • apps/e2e/demo-e2e-direct/src/apps/notes/tools/create-note.tool.ts
  • apps/e2e/demo-e2e-direct/src/apps/notes/tools/get-auth-info.tool.ts
  • apps/e2e/demo-e2e-direct/src/apps/notes/tools/list-notes.tool.ts
  • apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts
  • apps/e2e/demo-e2e-distributed/src/apps/distributed-app/tools/node-info.tool.ts
  • apps/e2e/demo-e2e-elicitation/src/apps/elicitation-demo/tools/confirm-action.tool.ts
  • apps/e2e/demo-e2e-elicitation/src/apps/elicitation-demo/tools/get-user-input.tool.ts
  • apps/e2e/demo-e2e-elicitation/src/apps/elicitation-demo/tools/multi-step.tool.ts
  • apps/e2e/demo-e2e-errors/src/apps/errors/tools/successful.tool.ts
  • apps/e2e/demo-e2e-errors/src/apps/errors/tools/throw-custom-error.tool.ts
  • apps/e2e/demo-e2e-errors/src/apps/errors/tools/throw-internal-error.tool.ts
  • apps/e2e/demo-e2e-errors/src/apps/errors/tools/throw-not-found.tool.ts
  • apps/e2e/demo-e2e-errors/src/apps/errors/tools/throw-validation-error.tool.ts
  • apps/e2e/demo-e2e-feature-flags/src/apps/flagged/tools/always-enabled.tool.ts
  • apps/e2e/demo-e2e-feature-flags/src/apps/flagged/tools/beta-search.tool.ts
  • apps/e2e/demo-e2e-feature-flags/src/apps/flagged/tools/check-flag.tool.ts
  • apps/e2e/demo-e2e-feature-flags/src/apps/flagged/tools/default-true.tool.ts
  • apps/e2e/demo-e2e-feature-flags/src/apps/flagged/tools/experimental-agent.tool.ts
  • apps/e2e/demo-e2e-feature-flags/src/apps/flagged/tools/unflagged.tool.ts
  • apps/e2e/demo-e2e-guard/fixture/src/main.ts
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/combined-guard.tool.ts
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/concurrency-mutex.tool.ts
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/concurrency-queued.tool.ts
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/rate-limited.tool.ts
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/slow.tool.ts
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/timeout.tool.ts
  • apps/e2e/demo-e2e-guard/src/apps/guard/tools/unguarded.tool.ts
  • apps/e2e/demo-e2e-hooks/src/apps/audit/tools/audited.tool.ts
  • apps/e2e/demo-e2e-hooks/src/apps/audit/tools/clear-audit-log.tool.ts
  • apps/e2e/demo-e2e-hooks/src/apps/audit/tools/get-audit-log.tool.ts
  • apps/e2e/demo-e2e-hr/src/apps/hr/tools/employee-directory.tool.ts
  • apps/e2e/demo-e2e-hr/src/apps/hr/tools/employee-profile.tool.ts
  • apps/e2e/demo-e2e-hr/src/apps/hr/tools/generate-offer-letter.tool.ts
  • apps/e2e/demo-e2e-hr/src/apps/hr/tools/headcount-by-department.tool.ts
  • apps/e2e/demo-e2e-hr/src/apps/hr/tools/org-chart.tool.ts
  • apps/e2e/demo-e2e-hr/src/apps/hr/tools/salary-distribution.tool.ts
  • apps/e2e/demo-e2e-hr/src/apps/hr/tools/team-growth.tool.ts
  • apps/e2e/demo-e2e-multiapp/src/apps/calendar/tools/create-event.tool.ts
  • apps/e2e/demo-e2e-multiapp/src/apps/calendar/tools/list-events.tool.ts
  • apps/e2e/demo-e2e-multiapp/src/apps/notes/tools/create-note.tool.ts
  • apps/e2e/demo-e2e-multiapp/src/apps/notes/tools/list-notes.tool.ts
  • apps/e2e/demo-e2e-multiapp/src/apps/tasks/tools/create-task.tool.ts
  • apps/e2e/demo-e2e-multiapp/src/apps/tasks/tools/list-tasks.tool.ts
  • apps/e2e/demo-e2e-notifications/src/apps/notify/tools/long-running-task.tool.ts
  • apps/e2e/demo-e2e-notifications/src/apps/notify/tools/test-notify-method.tool.ts
  • apps/e2e/demo-e2e-notifications/src/apps/notify/tools/test-progress-method.tool.ts
  • apps/e2e/demo-e2e-notifications/src/apps/notify/tools/trigger-progress.tool.ts
  • apps/e2e/demo-e2e-notifications/src/apps/notify/tools/trigger-resource-change.tool.ts
  • apps/e2e/demo-e2e-providers/src/apps/config/tools/get-app-info.tool.ts
  • apps/e2e/demo-e2e-providers/src/apps/config/tools/get-request-info.tool.ts
  • apps/e2e/demo-e2e-public/src/apps/notes/tools/create-note.tool.ts
  • apps/e2e/demo-e2e-public/src/apps/notes/tools/list-notes.tool.ts
  • apps/e2e/demo-e2e-public/src/apps/notes/tools/notes-reset.tool.ts
  • apps/e2e/demo-e2e-redis/src/apps/sessions/tools/get-session-data.tool.ts
  • apps/e2e/demo-e2e-redis/src/apps/sessions/tools/set-session-data.tool.ts
  • apps/e2e/demo-e2e-redis/src/apps/transport/tools/check-session.tool.ts
  • apps/e2e/demo-e2e-redis/src/apps/transport/tools/session-info.tool.ts
  • apps/e2e/demo-e2e-redis/src/apps/transport/tools/session-isolation.tool.ts
  • apps/e2e/demo-e2e-redis/src/apps/transport/tools/update-session-state.tool.ts
  • apps/e2e/demo-e2e-redis/src/apps/vault/tools/add-credential.tool.ts
  • apps/e2e/demo-e2e-redis/src/apps/vault/tools/authorize-app.tool.ts
  • apps/e2e/demo-e2e-redis/src/apps/vault/tools/complete-pending-auth.tool.ts
  • apps/e2e/demo-e2e-redis/src/apps/vault/tools/create-pending-auth.tool.ts
  • apps/e2e/demo-e2e-redis/src/apps/vault/tools/create-vault-entry.tool.ts
  • apps/e2e/demo-e2e-redis/src/apps/vault/tools/delete-vault-entry.tool.ts
  • apps/e2e/demo-e2e-redis/src/apps/vault/tools/get-vault-entry.tool.ts
  • apps/e2e/demo-e2e-redis/src/apps/vault/tools/update-consent.tool.ts
  • apps/e2e/demo-e2e-redis/src/apps/vault/tools/update-vault-entry.tool.ts
  • apps/e2e/demo-e2e-remember/src/apps/memory/tools/check-memory.tool.ts
  • apps/e2e/demo-e2e-remember/src/apps/memory/tools/forget-value.tool.ts
  • apps/e2e/demo-e2e-remember/src/apps/memory/tools/list-memories.tool.ts
  • apps/e2e/demo-e2e-remember/src/apps/memory/tools/recall-value.tool.ts
  • apps/e2e/demo-e2e-remember/src/apps/memory/tools/remember-value.tool.ts
  • apps/e2e/demo-e2e-remote/src/local-mcp-server/apps/local-test/tools/add.tool.ts
  • apps/e2e/demo-e2e-remote/src/local-mcp-server/apps/local-test/tools/echo.tool.ts
  • apps/e2e/demo-e2e-remote/src/local-mcp-server/apps/local-test/tools/ping.tool.ts
  • apps/e2e/demo-e2e-remote/src/local-mcp-server/apps/local-test/tools/slow-operation.tool.ts
  • apps/e2e/demo-e2e-remote/src/mock-mintlify-server/apps/mock-mintlify/tools/search-mintlify.tool.ts
  • apps/e2e/demo-e2e-serverless/src/apps/serverless/tools/cold-start-test.tool.ts
  • apps/e2e/demo-e2e-serverless/src/apps/serverless/tools/serverless-info.tool.ts
  • apps/e2e/demo-e2e-skills/src/apps/skills/plugins/devops-plugin.ts
  • apps/e2e/demo-e2e-skills/src/apps/skills/tools/admin-action.tool.ts
  • apps/e2e/demo-e2e-skills/src/apps/skills/tools/github-add-comment.tool.ts
  • apps/e2e/demo-e2e-skills/src/apps/skills/tools/github-get-pr.tool.ts
  • apps/e2e/demo-e2e-skills/src/apps/skills/tools/slack-notify.tool.ts
  • apps/e2e/demo-e2e-stdio-transport/src/apps/notes/tools/create-note.tool.ts
  • apps/e2e/demo-e2e-stdio-transport/src/apps/notes/tools/list-notes.tool.ts
  • apps/e2e/demo-e2e-transparent/src/apps/tasks/tools/create-task.tool.ts
  • apps/e2e/demo-e2e-transparent/src/apps/tasks/tools/list-tasks.tool.ts
  • apps/e2e/demo-e2e-transport-recreation/src/apps/transport-test/tools/increment-counter.tool.ts
  • apps/e2e/demo-e2e-ui/src/apps/widgets/tools/html-card.tool.ts
  • apps/e2e/demo-e2e-ui/src/apps/widgets/tools/html-table.tool.ts
  • apps/e2e/demo-e2e-ui/src/apps/widgets/tools/hybrid-status.tool.ts
  • apps/e2e/demo-e2e-ui/src/apps/widgets/tools/markdown-list.tool.ts
  • apps/e2e/demo-e2e-ui/src/apps/widgets/tools/markdown-report.tool.ts
  • apps/e2e/demo-e2e-ui/src/apps/widgets/tools/mdx-doc.tool.ts
  • apps/e2e/demo-e2e-ui/src/apps/widgets/tools/mdx-interactive.tool.ts
  • apps/e2e/demo-e2e-ui/src/apps/widgets/tools/react-chart.tool.ts
  • apps/e2e/demo-e2e-ui/src/apps/widgets/tools/react-form.tool.ts
  • apps/e2e/demo-e2e-ui/src/apps/widgets/tools/react-weather.tool.ts
  • apps/e2e/demo-e2e-ui/src/apps/widgets/tools/static-badge.tool.ts
  • apps/e2e/demo-e2e-uipack/src/apps/uipack/tools/build-shell.tool.ts
  • apps/e2e/demo-e2e-uipack/src/apps/uipack/tools/fetch-types.tool.ts
  • apps/e2e/demo-e2e-uipack/src/apps/uipack/tools/load-component.tool.ts
  • apps/e2e/demo-e2e-uipack/src/apps/uipack/tools/resolve-imports.tool.ts
  • apps/e2e/demo-e2e-unix-socket/src/apps/notes/tools/create-note.tool.ts
  • apps/e2e/demo-e2e-unix-socket/src/apps/notes/tools/get-auth-info.tool.ts
  • apps/e2e/demo-e2e-unix-socket/src/apps/notes/tools/list-notes.tool.ts
  • libs/sdk/src/channel/reply/channel-reply.tool.ts
  • libs/sdk/src/common/entries/scope.entry.ts
  • libs/sdk/src/common/interfaces/server.interface.ts
  • libs/sdk/src/direct/__tests__/create-e2e.spec.ts
  • libs/sdk/src/elicitation/README.md
  • libs/sdk/src/elicitation/send-elicitation-result.tool.ts
  • libs/sdk/src/ha/ha-manager.ts
  • libs/sdk/src/plugin/__tests__/plugin-skills.spec.ts
  • libs/sdk/src/scope/scope.instance.ts
  • libs/sdk/src/server/adapters/express.host.adapter.ts
  • libs/sdk/src/server/security/__tests__/security-audit.spec.ts
  • libs/sdk/src/server/security/security-audit.ts
  • libs/sdk/src/transport/bus/__tests__/redis-transport-bus.spec.ts
  • libs/sdk/src/transport/bus/redis-transport-bus.ts
  • libs/sdk/src/transport/transport.registry.ts
  • libs/skills/catalog/frontmcp-development/references/create-tool.md
  • libs/skills/catalog/frontmcp-observability/examples/telemetry-api/tool-custom-spans.md
  • libs/skills/catalog/frontmcp-observability/examples/tracing-setup/basic-tracing.md
  • libs/skills/catalog/frontmcp-observability/references/telemetry-api.md
  • libs/testing/src/containers/redis-container.ts
  • libs/testing/src/example-tools/tool-configs.ts
  • libs/testing/src/server/distributed-test-cluster.ts
  • libs/testing/src/server/port-registry.ts
✅ Files skipped from review due to trivial changes (8)
  • CLAUDE.md
  • libs/testing/src/example-tools/tool-configs.ts
  • libs/skills/catalog/frontmcp-observability/references/telemetry-api.md
  • apps/e2e/demo-e2e-unix-socket/src/apps/notes/tools/list-notes.tool.ts
  • libs/sdk/src/elicitation/README.md
  • libs/testing/src/server/port-registry.ts
  • libs/skills/catalog/frontmcp-development/references/create-tool.md
  • apps/e2e/demo-e2e-uipack/src/apps/uipack/tools/fetch-types.tool.ts
🚧 Files skipped from review as they are similar to previous changes (8)
  • apps/e2e/demo-e2e-distributed/src/apps/distributed-app/tools/node-info.tool.ts
  • libs/sdk/src/common/interfaces/server.interface.ts
  • libs/sdk/src/server/adapters/express.host.adapter.ts
  • libs/sdk/src/server/security/__tests__/security-audit.spec.ts
  • libs/sdk/src/transport/bus/__tests__/redis-transport-bus.spec.ts
  • libs/testing/src/containers/redis-container.ts
  • libs/sdk/src/server/security/security-audit.ts
  • libs/sdk/src/scope/scope.instance.ts

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
libs/sdk/src/transport/bus/__tests__/redis-transport-bus.spec.ts (1)

70-80: Harden the revoke() Lua assertion to catch CAS regressions

This test currently verifies only that 'HGET' is present, so a broken script could still pass while losing the ownership guard or the delete step.

Suggested assertion tightening
       expect(redis.eval).toHaveBeenCalledWith(
-        expect.stringContaining('HGET'),
+        expect.stringMatching(/HGET[\s\S]*if nodeId == ARGV\[1\][\s\S]*DEL/),
         1,
         'mcp:bus:streamable-http:abc123hash:session-001',
         'node-1',
       );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/sdk/src/transport/bus/__tests__/redis-transport-bus.spec.ts` around
lines 70 - 80, The test for revoke() is too loose because it only checks that
the Lua script contains 'HGET' and could miss regressions that remove the
ownership check or the deletion; tighten the assertion in the spec by expecting
the script (the first arg passed to redis.eval when calling bus.revoke(key)) to
include both the ownership check and the deletion operation (e.g., assert it
contains 'HGET' and 'HDEL' or the exact delete token used by your script), and
keep the existing call signature assertions for redis.eval and the key string
matching; refer to the revoke() call and the mocked redis.eval invocation in the
test to locate where to strengthen the expect().
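To see why the stricter matcher is safer, here is a hypothetical compare-and-swap revoke script of the shape the review describes (the `nodeId` field name and exact Lua layout are assumptions, not the repo's actual script):

```typescript
// Hypothetical ownership-guarded revoke script: only the node that owns
// the session entry may delete it (compare-and-swap via HGET + DEL).
const REVOKE_SCRIPT = `
local nodeId = redis.call('HGET', KEYS[1], 'nodeId')
if nodeId == ARGV[1] then
  redis.call('DEL', KEYS[1])
  return 1
end
return 0
`;

// The loose assertion only requires 'HGET' to appear somewhere.
const loose = /HGET/;

// The tightened matcher from the suggested diff pins down all three
// parts: the read, the ownership comparison, and the delete step.
const strict = /HGET[\s\S]*if nodeId == ARGV\[1\][\s\S]*DEL/;

console.log(loose.test(REVOKE_SCRIPT)); // true
console.log(strict.test(REVOKE_SCRIPT)); // true
// A regressed script that deletes unconditionally still passes `loose`
// but is rejected by `strict`:
const regressed = `redis.call('HGET', KEYS[1], 'nodeId')\nredis.call('DEL', KEYS[1])`;
console.log(strict.test(regressed)); // false
```

The point of matching the comparison clause, not just the command names, is that both `HGET` and `DEL` can survive a regression that drops the `if` guard between them.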
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@libs/testing/src/server/distributed-test-cluster.ts`:
- Around line 115-119: stopNode currently stops the node process but leaves the
entry in this.nodes so subsequent getNode can return a stopped node; update
stopNode (method stopNode) to remove the node from this.nodes after successful
await node.server.stop(), and update getNode (method getNode) to verify that a
returned node is active (e.g., check node.server.isRunning() or
node.server.status) and either return null/throw or remove the entry if the node
is not running; ensure both places reference the same map key handling so
stopped nodes are no longer returned by getNode.
- Around line 111-114: The doc comment in the DistributedTestCluster class
references restartNode() but that API doesn't exist; either implement
restartNode or remove the misleading reference. To fix, add a public
restartNode(nodeId: string) method on DistributedTestCluster (or equivalent
class name) that re-initializes the stopped node on the same port and state used
by stopNode(), preserving port allocation and re-registering the node with
startNode() helpers, or alternatively update the stopNode() comment to remove
the "call restartNode() to reuse it" sentence if you don't want to add the API;
ensure you modify the DistributedTestCluster class methods (stopNode,
startNode/startExistingNode) consistently so the new restartNode delegates to
the existing start logic and preserves port reuse.
- Around line 65-77: start() currently always starts nodes at index 0 which
allows a second call to overwrite existing entries and orphan child processes;
update start() to either (a) guard against being called when the cluster is
already running by checking a running flag or this.nodes.size and throwing an
error, or (b) if intended to add more nodes, compute the starting index from the
existing count (e.g., use this.nodes.size or this.nextIndex) so startNode(i) is
called with unique indexes and map entries are not overwritten; also ensure
startNode registers nodes under unique keys (and increment nextIndex) and that
teardown() still cleans up all registered nodes.
- Around line 93-98: The env merge currently spreads caller-supplied
this.options.env after the reserved keys (MACHINE_ID, REDIS_URL, NODE_ENV) so
callers can override those; change the merge so reserved keys win—either spread
this.options.env first and then set MACHINE_ID: machineId, REDIS_URL:
this.options.redisUrl, NODE_ENV: 'test', or explicitly filter out those reserved
keys from this.options.env before merging—update the env object in
distributed-test-cluster.ts (the env property using machineId,
this.options.redisUrl and this.options.env) accordingly so caller env cannot
override the reserved keys.
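The spread-order fix described in the last item can be sketched in isolation (the `buildEnv` helper name is illustrative; the real merge is inline in `distributed-test-cluster.ts`):

```typescript
// Sketch of the fix: spread caller-supplied env first, then set the
// reserved keys, so later properties win and callers cannot override them.
function buildEnv(
  callerEnv: Record<string, string>,
  machineId: string,
  redisUrl: string,
): Record<string, string> {
  return {
    ...callerEnv, // caller values applied first...
    MACHINE_ID: machineId, // ...reserved keys applied last always win
    REDIS_URL: redisUrl,
    NODE_ENV: 'test',
  };
}

const env = buildEnv(
  { MACHINE_ID: 'spoofed', DEBUG: '1' }, // attempted override + a normal key
  'node-0',
  'redis://localhost:6379',
);
console.log(env.MACHINE_ID); // 'node-0' — the reserved key wins
console.log(env.DEBUG); // '1' — non-reserved keys pass through unchanged
```

In object literals, later properties overwrite earlier ones, so reversing the spread order is enough; explicit filtering of reserved keys is only needed if the caller env must also be rejected loudly rather than silently overridden.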

---

Nitpick comments:
In `@libs/sdk/src/transport/bus/__tests__/redis-transport-bus.spec.ts`:
- Around line 70-80: The test for revoke() is too loose because it only checks
that the Lua script contains 'HGET' and could miss regressions that remove the
ownership check or the deletion; tighten the assertion in the spec by expecting
the script (the first arg passed to redis.eval when calling bus.revoke(key)) to
include both the ownership check and the deletion operation (e.g., assert it
contains 'HGET' and 'HDEL' or the exact delete token used by your script), and
keep the existing call signature assertions for redis.eval and the key string
matching; refer to the revoke() call and the mocked redis.eval invocation in the
test to locate where to strengthen the expect().

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: a6cfed71-f442-4384-9bd2-5927d8e318cc

📥 Commits

Reviewing files that changed from the base of the PR and between 1fdddcc and f4610e5.

📒 Files selected for processing (5)
  • apps/e2e/demo-e2e-direct/src/apps/notes/tools/create-note.tool.ts
  • apps/e2e/demo-e2e-unix-socket/src/apps/notes/tools/create-note.tool.ts
  • libs/sdk/src/transport/bus/__tests__/redis-transport-bus.spec.ts
  • libs/sdk/src/transport/bus/redis-transport-bus.ts
  • libs/testing/src/server/distributed-test-cluster.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • apps/e2e/demo-e2e-direct/src/apps/notes/tools/create-note.tool.ts
  • libs/sdk/src/transport/bus/redis-transport-bus.ts

Contributor

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts (1)

57-125: ⚠️ Potential issue | 🟠 Major

Missing cross-node session handoff and failed-node recovery assertions

These tests still create independent clients per node and never prove that the same session survives node switch/failover. Please add one E2E case that: (1) creates session on node-0, (2) reuses that exact session on node-1, and (3) repeats after cluster.stopNode(0).

Suggested test shape
+  skipIf(SKIP)('should reuse the same session across nodes and survive node-0 failure', async () => {
+    const client0 = await McpTestClient.create({ baseUrl: nodes[0].info.baseUrl }).buildAndConnect();
+    let sessionToken: string | undefined;
+    try {
+      const first = await client0.tools.call('echo', { message: 'handoff' });
+      expect(first).toBeSuccessful();
+      // capture session identifier/cookie from client0 (helper/API used in this repo)
+      sessionToken = client0.session?.id ?? client0.session?.token;
+      expect(sessionToken).toBeDefined();
+    } finally {
+      await client0.disconnect();
+    }
+
+    const client1 = await McpTestClient.create({
+      baseUrl: nodes[1].info.baseUrl,
+      // reuse the exact session token/id captured above
+      sessionToken,
+    }).buildAndConnect();
+    try {
+      const handoff = await client1.tools.call('echo', { message: 'handoff' });
+      expect(handoff).toBeSuccessful();
+
+      await cluster.stopNode(0);
+      const recovered = await client1.tools.call('echo', { message: 'recovery' });
+      expect(recovered).toBeSuccessful();
+    } finally {
+      await client1.disconnect();
+    }
+  });

Based on learnings, “Test all code paths including error cases, constructor validation, and error class instanceof checks.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts` around lines
57 - 125, Add a new E2E test in session-scaling.e2e.spec.ts that verifies
cross-node session handoff and failed-node recovery: use
McpTestClient.create(...).buildAndConnect() to create a session on node-0 (call
a tool like 'echo' and capture the session ID from the response/headers or
client.sessionId), then construct a client for node-1 and reuse that exact
session identifier to call the same tool and assert the response is successful
and contains the same session ID and expected payload; next call
cluster.stopNode(0) and repeat the call against node-1 using the same session ID
to assert the session still works after node-0 is stopped. Reference
McpTestClient, client.tools.call, client.tools.list (if needed), and
cluster.stopNode to locate and implement the new test and include explicit
assertions for session ID equality and successful tool responses before and
after the node stop.
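As a minimal sketch of the handoff mechanics the requested test exercises, assuming the standard `Mcp-Session-Id` header of MCP's Streamable HTTP transport (the repo's `McpTestClient` may wrap this differently):

```typescript
// Hypothetical helper: replay a session id captured from node-0 in
// requests to node-1, so node-1 resolves the same distributed session
// from the shared Redis bus instead of creating a new one.
function withSession(
  sessionId: string | undefined,
  headers: Record<string, string> = {},
): Record<string, string> {
  return sessionId ? { ...headers, 'Mcp-Session-Id': sessionId } : { ...headers };
}

// Session captured from node-0's initialize response, reused on node-1:
const h = withSession('sess-123', { Accept: 'application/json' });
console.log(h['Mcp-Session-Id']); // 'sess-123'

// Without a captured id, no session header is attached:
const fresh = withSession(undefined, { Accept: 'application/json' });
console.log('Mcp-Session-Id' in fresh); // false
```

The E2E assertion then reduces to: the same id round-trips through node-1 before and after `cluster.stopNode(0)`.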
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts`:
- Around line 57-125: Add a new E2E test in session-scaling.e2e.spec.ts that
verifies cross-node session handoff and failed-node recovery: use
McpTestClient.create(...).buildAndConnect() to create a session on node-0 (call
a tool like 'echo' and capture the session ID from the response/headers or
client.sessionId), then construct a client for node-1 and reuse that exact
session identifier to call the same tool and assert the response is successful
and contains the same session ID and expected payload; next call
cluster.stopNode(0) and repeat the call against node-1 using the same session ID
to assert the session still works after node-0 is stopped. Reference
McpTestClient, client.tools.call, client.tools.list (if needed), and
cluster.stopNode to locate and implement the new test and include explicit
assertions for session ID equality and successful tool responses before and
after the node stop.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 82685c94-effe-4e46-96e2-bf94a41fa61b

📥 Commits

Reviewing files that changed from the base of the PR and between f4610e5 and 967b290.

📒 Files selected for processing (3)
  • apps/e2e/demo-e2e-distributed/e2e/session-scaling.e2e.spec.ts
  • libs/sdk/src/transport/bus/__tests__/redis-transport-bus.spec.ts
  • libs/testing/src/server/distributed-test-cluster.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • libs/sdk/src/transport/bus/__tests__/redis-transport-bus.spec.ts
  • libs/testing/src/server/distributed-test-cluster.ts

@frontegg-david frontegg-david merged commit 7068175 into main Apr 9, 2026
31 checks passed
@frontegg-david frontegg-david deleted the add-scaling-e2e-tests branch April 9, 2026 13:09