Conversation
✅ Snyk checks have passed. No issues have been found so far.
💻 Catch issues earlier using the plugins for VS Code, JetBrains IDEs, Visual Studio, and Eclipse.
Walkthrough

This pull request refactors MCPJam to support per-session isolation of MCP client managers, replacing the global singleton architecture. New session resolution middleware detects or creates sessionIds via headers and HttpOnly cookies. A client-manager store factory creates isolated managers per session with TTL and capacity-based eviction. Progress tracking and RPC logging are now session-scoped. Supporting documentation outlines the plan for Dockerized deployment on Railway, ensuring no shared state between concurrent users.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
mcpjam-inspector/server/services/rpc-log-bus.ts (1)
20-25: ⚠️ Potential issue | 🟠 Major

Unbounded buffer growth: `bufferByServer` has no size cap or eviction.

`publish` appends to per-server arrays indefinitely. The `limit` parameter in `getBuffer` only trims the response, not the stored data. In a hosted multi-tenant deployment, this is a memory-leak vector: many sessions producing RPC events will cause monotonic growth. Consider a ring buffer or a max size per server key, or periodic pruning akin to the `ProgressStore`'s stale-entry cleanup.
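One possible shape for the suggested cap, sketched as a standalone snippet. The `RpcLogEvent` type and the 500-entry limit are illustrative assumptions, not the project's actual definitions:

```typescript
// Sketch of a size-capped per-server buffer (assumed shapes, not repo code).
type RpcLogEvent = { serverId: string; timestamp: string; message: unknown };

const MAX_EVENTS_PER_SERVER = 500; // illustrative cap
const bufferByServer = new Map<string, RpcLogEvent[]>();

function publish(event: RpcLogEvent): void {
  let buf = bufferByServer.get(event.serverId);
  if (!buf) {
    buf = [];
    bufferByServer.set(event.serverId, buf);
  }
  buf.push(event);
  // Evict the oldest entries once the cap is exceeded, keeping memory bounded.
  if (buf.length > MAX_EVENTS_PER_SERVER) {
    buf.splice(0, buf.length - MAX_EVENTS_PER_SERVER);
  }
}
```

A ring buffer with a fixed-size array and a write cursor would avoid the `splice` cost, but the eviction-on-publish shape above is the smallest change to the existing append-only code.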
🤖 Fix all issues with AI agents
In `@mcpjam-inspector/server/app.ts`:
- Around line 90-132: Extract the duplicated createClientManagerStore
configuration into a shared factory function (e.g.,
createDefaultClientManagerStore) that returns the store configured with
HOSTED_MODE, managerFactory, and sessionStoreOptions from
readSessionStoreOptionsFromEnv(); inside managerFactory re-use the
MCPClientManager construction including the rpcLogger that calls
rpcLogBus.publish and the progressHandler that calls progressStore.publish, and
ensure initElicitationCallback(manager) is invoked before returning the manager;
replace the two inline initializations in app.ts and index.ts with calls to this
new shared factory function.
In `@mcpjam-inspector/server/index.ts`:
- Around line 263-264: The global elicitationSubscribers Set and
broadcastElicitation must be scoped by session to avoid cross-tenant leaks:
replace elicitationSubscribers with a Map<sessionId, Set<Subscriber>> (or
similar), update the SSE /stream subscription logic to register subscribers into
the Map under their sessionId, change broadcastElicitation to accept a sessionId
(or attach sessionId to the event payload) and only iterate/send to the Set for
that session, and update initElicitationCallback and any code that calls
broadcastElicitation to pass the correct sessionId; alternatively, if you prefer
client-side filtering, ensure broadcast events include sessionId in their
payload and only add global subscribers that filter incoming events by
sessionId.
In `@mcpjam-inspector/server/services/progress-store.ts`:
- Around line 180-183: The scoped key creation in getScopedServerKey currently
uses the ambiguous separator "::" and can collide when session IDs contain ":"
(see SESSION_ID_PATTERN in client-manager-session.ts); change getScopedServerKey
to use an unambiguous separator (e.g., a null byte "\0") or consistently encode
sessionId/serverId (e.g., base64 or JSON) before concatenation, and update any
code that parses or compares these keys to use the same scheme so both creation
and lookup remain consistent (reference getScopedServerKey and any consumers
that read these keys).
🧹 Nitpick comments (8)
mcpjam-inspector/server/services/__tests__/client-manager-store.test.ts (1)
84-84: `await Promise.resolve()` may be insufficient for deeply-chained async eviction.

`evictSession` fires `disconnectManager` with `void` (fire-and-forget). The internal `Promise.all` → `await disconnectServer` → `removeServer` chain requires multiple microtask ticks. A single `await Promise.resolve()` works here only because the mocks resolve immediately, but this is fragile: a refactor to the mock or production code could silently break these assertions.

Consider `await vi.waitFor(...)` or `await new Promise(r => setTimeout(r, 0))` for more resilient async settling.

mcpjam-inspector/server/services/progress-store.ts (1)
60-74: Misleading loop variable `serverId`: it's actually a scoped key.

After the refactor, the outer Map key is `sessionId::serverId`, not a bare `serverId`. The variable name `serverId` on line 62 is now misleading.

Naming fix:

```diff
- for (const [serverId, serverProgress] of this.store.entries()) {
+ for (const [scopedKey, serverProgress] of this.store.entries()) {
    for (const [token, event] of serverProgress.entries()) {
      const eventTime = new Date(event.timestamp).getTime();
      if (now - eventTime > PROGRESS_STALE_THRESHOLD_MS) {
        serverProgress.delete(token);
      }
    }
    // Remove empty server maps
    if (serverProgress.size === 0) {
-     this.store.delete(serverId);
+     this.store.delete(scopedKey);
    }
  }
```

mcpjam-inspector/server/services/client-manager-store.ts (2)
70-75: `_sessionKey` is actually used: drop the underscore prefix.

The `_` convention signals "intentionally unused," yet line 72 passes it to `this.managerFactory`. This mismatch could mislead readers.

Fix:

```diff
- getManager(_sessionKey?: string): MCPClientManager {
+ getManager(sessionKey?: string): MCPClientManager {
    if (!this.manager) {
-     this.manager = this.managerFactory(_sessionKey);
+     this.manager = this.managerFactory(sessionKey);
    }
    return this.manager;
  }
```
128-137: Map mutation during iteration: safe here, but worth a brief comment.

Deleting from a `Map` while iterating its `.entries()` is spec-compliant in ES2015+, but it surprises developers from other language backgrounds. A one-line comment would preempt questions.
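For readers unfamiliar with this guarantee, a standalone illustration (not project code):

```typescript
// Deleting entries from a Map while iterating it is well-defined in ES2015+:
// entries removed during iteration are simply never yielded again.
const sessions = new Map<string, number>([
  ["a", 1],
  ["b", 2],
  ["c", 3],
]);

for (const [key, value] of sessions.entries()) {
  if (value < 3) {
    sessions.delete(key); // safe: the iterator tolerates concurrent deletes
  }
}
// Only "c" survives; the loop neither skips nor revisits entries.
```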
40-60: Use Hono's `getCookie` helper from `hono/cookie` instead of manual parsing.

Hono v4.11.7 provides `getCookie(c, cookieName)`, which handles cookie parsing and URL decoding automatically. Switching to it would simplify line 21 and eliminate the parsing logic in `getCookieValue()`.

mcpjam-inspector/server/routes/mcp/servers.ts (1)
294-303: Wildcard CORS on the SSE stream merits attention in hosted mode.

`Access-Control-Allow-Origin: "*"` with `Access-Control-Expose-Headers: "*"` is fine for local desktop use, but in a hosted multi-tenant deployment this opens the SSE stream to any origin. Consider tightening this to `CORS_ORIGINS` when `HOSTED_MODE` is enabled, consistent with the CORS middleware applied elsewhere.

mcpjam-inspector/server/middleware/__tests__/client-manager-session.test.ts (1)
1-68: Solid test coverage of the session resolution logic.

Clean structure, good scenario coverage for the core paths: disabled mode, header precedence, cookie fallback, and auto-generation with `Set-Cookie`.

A useful addition would be a test verifying that the header takes precedence when both header and cookie are present; this would codify the priority contract explicitly.
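The suggested precedence check could be sketched roughly like this, with `resolveSessionId` as a hypothetical stand-in for the middleware's actual resolution logic:

```typescript
// Hypothetical stand-in for the middleware's resolution order (not the real
// implementation): the header wins, the cookie is the fallback.
function resolveSessionId(
  headerValue: string | undefined,
  cookieValue: string | undefined,
): string | undefined {
  return headerValue ?? cookieValue;
}

// The precedence contract a test would codify: when both sources are present,
// the header value is the one resolved.
const resolved = resolveSessionId("from-header", "from-cookie");
```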
mcpjam-inspector/server/routes/mcp/tasks.ts (1)
166-168: `console.error` in server code: use `logger.error()` instead.

This file has multiple `console.error` calls (lines 22, 44, 82, 116, 145, 166, 187) that should use the centralized logger. Since you're already touching these functions, consider a quick cleanup pass. As per coding guidelines: "Do not use `console.log`, `console.warn`, or `console.error` directly in server code. Use the centralized logger utility from `@/utils/logger`."

Example fix for the touched functions. Add the import at the top:

```diff
 import { Hono } from "hono";
 import "../../types/hono";
 import { progressStore } from "../../services/progress-store";
+import { logger } from "../../utils/logger";
```

Then replace occurrences, e.g.:

```diff
- console.error("Error getting progress:", error);
+ logger.error("Error getting progress:", error);
```
```diff
 const mcpClientManagerStore = createClientManagerStore({
   hostedMode: HOSTED_MODE,
   managerFactory: (sessionId) => {
     // Create a manager and wire RPC/progress logging for each isolated session.
     const manager = new MCPClientManager(
       {},
       {
         rpcLogger: ({ direction, message, serverId }) => {
           rpcLogBus.publish({
             sessionId,
             serverId,
             direction,
             timestamp: new Date().toISOString(),
             message,
           });
         },
         progressHandler: ({
           serverId,
           progressToken,
           progress,
           total,
           message,
         }) => {
           // Store progress for UI access using the real progressToken from the notification
           progressStore.publish({
             sessionId,
             serverId,
             progressToken,
             progress,
             total,
             message,
             timestamp: new Date().toISOString(),
           });
         },
       },
     );

-    // Initialize elicitation callback immediately so tasks/result calls work
-    // without needing to hit the elicitation endpoints first
-    initElicitationCallback(mcpClientManager);
+    // Register callback per manager instance so task-related elicitation works.
+    initElicitationCallback(manager);
+    return manager;
   },
   sessionStoreOptions: readSessionStoreOptionsFromEnv(),
 });
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n mcpjam-inspector/server/app.ts | sed -n '85,135p'
```

Repository: MCPJam/inspector
Length of output: 1936
🏁 Script executed:
```shell
cat -n mcpjam-inspector/server/index.ts | sed -n '225,270p'
```

Repository: MCPJam/inspector
Length of output: 1453
🏁 Script executed:
```shell
# Also check for the actual createClientManagerStore and managerFactory pattern
rg -n "createClientManagerStore|managerFactory" --type ts mcpjam-inspector/server/ -A 5 -B 1
```

Repository: MCPJam/inspector
Length of output: 12905
Extract the createClientManagerStore configuration into a shared factory function.
Lines 90–132 in app.ts and lines 228–267 in index.ts contain identical store initialization logic. This duplicated block—including the managerFactory with RPC logger, progress handler, and elicitation callback setup—should be extracted to a reusable function (e.g., createDefaultClientManagerStore()) in a shared utilities module. Both entry points can then invoke this function, eliminating the maintenance burden of keeping two identical copies in sync.
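A rough sketch of what that shared factory could look like. The real `MCPClientManager`, `rpcLogBus`, `progressStore`, and `createClientManagerStore` live in the MCPJam codebase; the minimal stand-ins below are assumptions used only to illustrate the single-point-of-configuration shape:

```typescript
// Illustrative stand-ins (not the project's real types) for sketching the
// shared createDefaultClientManagerStore() the review proposes.
type RpcEvent = { sessionId?: string; serverId: string; direction: string };

interface StoreOptions<M> {
  hostedMode: boolean;
  managerFactory: (sessionId?: string) => M;
}

// Stand-in for createClientManagerStore: caches one manager per session key.
function createClientManagerStore<M>(opts: StoreOptions<M>) {
  const managers = new Map<string, M>();
  return {
    getManager(sessionId = "default"): M {
      let manager = managers.get(sessionId);
      if (manager === undefined) {
        manager = opts.managerFactory(sessionId);
        managers.set(sessionId, manager);
      }
      return manager;
    },
  };
}

// The shared factory both app.ts and index.ts would call, so the rpcLogger /
// progressHandler wiring exists in exactly one place.
function createDefaultClientManagerStore(
  hostedMode: boolean,
  publishRpc: (event: RpcEvent) => void,
) {
  return createClientManagerStore({
    hostedMode,
    managerFactory: (sessionId) => ({
      // Every manager is wired with session-scoped RPC logging on creation.
      logRpc: (serverId: string, direction: string) =>
        publishRpc({ sessionId, serverId, direction }),
    }),
  });
}
```

Both entry points would then call `createDefaultClientManagerStore(...)` instead of duplicating the inline configuration, so a change to the RPC or progress wiring only ever happens once.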
🤖 Prompt for AI Agents
In `@mcpjam-inspector/server/app.ts` around lines 90 - 132, Extract the duplicated
createClientManagerStore configuration into a shared factory function (e.g.,
createDefaultClientManagerStore) that returns the store configured with
HOSTED_MODE, managerFactory, and sessionStoreOptions from
readSessionStoreOptionsFromEnv(); inside managerFactory re-use the
MCPClientManager construction including the rpcLogger that calls
rpcLogBus.publish and the progressHandler that calls progressStore.publish, and
ensure initElicitationCallback(manager) is invoked before returning the manager;
replace the two inline initializations in app.ts and index.ts with calls to this
new shared factory function.
```typescript
    initElicitationCallback(manager);
    return manager;
```
🧩 Analysis chain
🏁 Script executed:
```shell
# Check how broadcastElicitation works and whether it's session-scoped
rg -n 'broadcastElicitation' --type=ts -C5
echo "---"
# Check the elicitation SSE endpoint for session filtering
rg -n 'elicitation.*stream\|elicitation.*subscribe\|elicitation.*sse' --type=ts -C5
```

Repository: MCPJam/inspector
Length of output: 2746
🏁 Script executed:
```shell
# Find where elicitationSubscribers are registered
rg -n 'elicitationSubscribers' --type=ts -B3 -A3
echo "---"
# Look for the SSE endpoint registration
rg -n 'subscribe\|/elicitation' --type=ts -B2 -A10
```

Repository: MCPJam/inspector
Length of output: 2466
🏁 Script executed:
```shell
# Get the full context of the SSE endpoint where subscribers are added
sed -n '80,120p' mcpjam-inspector/server/routes/mcp/elicitation.ts
```

Repository: MCPJam/inspector
Length of output: 1202
🏁 Script executed:
```shell
# Find the complete route definition with endpoint path
rg -n 'elicitation\.get\|\.get.*subscribe\|\.get.*stream' mcpjam-inspector/server/routes/mcp/elicitation.ts -B5 -A20
```

Repository: MCPJam/inspector
Length of output: 42
🏁 Script executed:
```shell
# Get file size and view the complete elicitation.ts
wc -l mcpjam-inspector/server/routes/mcp/elicitation.ts
echo "---"
cat -n mcpjam-inspector/server/routes/mcp/elicitation.ts
```

Repository: MCPJam/inspector
Length of output: 6013
Elicitation broadcasts leak across sessions in hosted mode.

The `broadcastElicitation` function (line 13) broadcasts to all subscribers in a global `elicitationSubscribers` Set without session awareness. When clients connect to the `/stream` endpoint, they're added to this unscoped global Set with no session context. Broadcast events (lines 53–61, 147) contain no `sessionId`, so in hosted mode, elicitation requests from one user's MCP server reach all connected SSE clients, breaking tenant isolation.

Scope subscribers by session (e.g., per-session Sets) or add `sessionId` to broadcast events and filter on the client side.
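The per-session-Set option might be sketched like this. The `Subscriber` type and function shapes are illustrative assumptions, not the project's actual signatures:

```typescript
// Sketch of per-session subscriber scoping (illustrative shapes, not repo code).
type ElicitationEvent = { type: string; payload?: unknown };
type Subscriber = (event: ElicitationEvent) => void;

const subscribersBySession = new Map<string, Set<Subscriber>>();

function subscribe(sessionId: string, subscriber: Subscriber): () => void {
  const set = subscribersBySession.get(sessionId) ?? new Set<Subscriber>();
  subscribersBySession.set(sessionId, set);
  set.add(subscriber);
  // Unsubscribing cleans up the per-session Set so evicted sessions don't leak.
  return () => {
    set.delete(subscriber);
    if (set.size === 0) subscribersBySession.delete(sessionId);
  };
}

function broadcastElicitation(sessionId: string, event: ElicitationEvent): void {
  // Only subscribers registered under this session receive the event.
  subscribersBySession.get(sessionId)?.forEach((sub) => sub(event));
}
```

The SSE `/stream` handler would call `subscribe(sessionId, send)` with the session resolved by the middleware, and `initElicitationCallback` would pass the manager's session ID through to `broadcastElicitation`.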
🤖 Prompt for AI Agents
In `@mcpjam-inspector/server/index.ts` around lines 263 - 264, The global
elicitationSubscribers Set and broadcastElicitation must be scoped by session to
avoid cross-tenant leaks: replace elicitationSubscribers with a Map<sessionId,
Set<Subscriber>> (or similar), update the SSE /stream subscription logic to
register subscribers into the Map under their sessionId, change
broadcastElicitation to accept a sessionId (or attach sessionId to the event
payload) and only iterate/send to the Set for that session, and update
initElicitationCallback and any code that calls broadcastElicitation to pass the
correct sessionId; alternatively, if you prefer client-side filtering, ensure
broadcast events include sessionId in their payload and only add global
subscribers that filter incoming events by sessionId.
```typescript
function getScopedServerKey(serverId: string, sessionId?: string): string {
  const sessionKey = sessionId ?? "";
  return `${sessionKey}::${serverId}`;
}
```
Scoped key separator `::` is ambiguous when session IDs may contain colons.

`SESSION_ID_PATTERN` in client-manager-session.ts permits `:` in session IDs. A sessionId `a::b` with serverId `c` produces key `a::b::c`, which collides with sessionId `a` + serverId `b::c`. Although UUIDs don't contain colons and serverIds are typically simple, the pattern allows it.

A null-byte separator (`\0`) or a different encoding scheme would eliminate the ambiguity.
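A quick standalone demonstration of the ambiguity (helper names are illustrative, mirroring `getScopedServerKey`'s shape):

```typescript
// Shows the collision the "::" separator permits, and why "\0" avoids it.
const scopedKey = (sessionId: string, serverId: string) =>
  `${sessionId}::${serverId}`;
const safeKey = (sessionId: string, serverId: string) =>
  `${sessionId}\0${serverId}`;

// Two distinct (sessionId, serverId) pairs collapse to the same "::" key:
const collidingA = scopedKey("a::b", "c"); // "a::b::c"
const collidingB = scopedKey("a", "b::c"); // "a::b::c"

// A separator no session ID can contain keeps them distinct:
const distinctA = safeKey("a::b", "c");
const distinctB = safeKey("a", "b::c");
```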
Proposed fix:

```diff
 function getScopedServerKey(serverId: string, sessionId?: string): string {
   const sessionKey = sessionId ?? "";
-  return `${sessionKey}::${serverId}`;
+  return `${sessionKey}\0${serverId}`;
 }
```

🤖 Prompt for AI Agents
In `@mcpjam-inspector/server/services/progress-store.ts` around lines 180 - 183,
The scoped key creation in getScopedServerKey currently uses the ambiguous
separator "::" and can collide when session IDs contain ":" (see
SESSION_ID_PATTERN in client-manager-session.ts); change getScopedServerKey to
use an unambiguous separator (e.g., a null byte "\0") or consistently encode
sessionId/serverId (e.g., base64 or JSON) before concatenation, and update any
code that parses or compares these keys to use the same scheme so both creation
and lookup remain consistent (reference getScopedServerKey and any consumers
that read these keys).
Reviews the mcpjam-hosted-multi-tenancy PR that replaces the singleton MCPClientManager with a per-session store for hosted deployment. Identifies critical issues: shared session token across tenants, session fixation via x-mcpjam-session-id header, no authentication layer, and memory leaks from unevicted session data in event buses. https://claude.ai/code/session_01NvstR1k7N1iKUkm7togkU3
No description provided.