feat: replace faye with custom CometD client (long-polling + WebSocket)#95
Conversation
Implement lib/cometd.js — a lightweight Bayeux protocol client using built-in fetch and WebSocket (Node 22+). Includes mock CometD server and 11 tests covering handshake, subscribe, event delivery, extensions, disconnect, and error handling. Prepares for faye removal. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… faye
Replace require('faye') with require('./cometd'). The Subscription and
Client EventEmitter APIs remain backward-compatible. Client now
auto-handshakes and auto-connects during construction.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Test FDCStream.Client connect, Subscription event delivery, and replay ID support using the mock CometD server. Fix Subscription to await client readiness before subscribing. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
faye and its 12 transitive packages are no longer needed — replaced by the custom CometD client in lib/cometd.js. Removes ~1.2MB of dependencies including tough-cookie, tunnel-agent, faye-websocket, etc. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace outdated README with comprehensive documentation covering all features, async/await examples, streaming with replay support, binary content retrieval, Apex REST, and development instructions. Reflects the faye-to-CometD migration and Node.js 22+ requirement. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📝 Walkthrough

Replaces the Faye streaming layer with a new CometD client, adds a mock CometD server and streaming tests, updates README and contributor docs, and removes the `faye` dependency.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application
    participant FDCS as FDCStream.Client
    participant CometD as CometDClient
    participant Server as Mock/SF Streaming<br/>Server
    App->>FDCS: new Client(opts)
    FDCS->>CometD: new CometDClient(endpoint, opts)
    FDCS->>CometD: _init() → handshake() & connect()
    CometD->>Server: POST /meta/handshake
    Server-->>CometD: { clientId, advice, supportedTypes }
    CometD->>CometD: emit transport:up
    CometD->>Server: POST /meta/connect (long-poll)
    App->>FDCS: subscribe(topic, { replayId })
    FDCS->>CometD: subscribe(topic, callback)
    CometD->>Server: POST /meta/subscribe (includes extensions/headers)
    Server-->>CometD: subscribe ack
    Server->>CometD: deliver event (piggyback on connect response)
    CometD->>FDCS: invoke subscription callback
    FDCS->>App: emit 'data'
    App->>FDCS: disconnect()
    FDCS->>CometD: disconnect()
    CometD->>Server: POST /meta/disconnect (best-effort)
    CometD->>CometD: emit transport:down & cleanup
```
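The handshake leg in the diagram is a single Bayeux JSON message. A minimal sketch of what such a client might POST follows; field names are from the Bayeux protocol, but the id scheme and transport list here are illustrative assumptions, not the exact shape used by `lib/cometd.js`:

```javascript
// Build a minimal Bayeux /meta/handshake request message.
// Field names follow the Bayeux protocol; the id counter and
// transport list are illustrative assumptions.
function buildHandshake(id) {
  return {
    channel: '/meta/handshake',
    version: '1.0',
    supportedConnectionTypes: ['long-polling', 'websocket'],
    id: String(id),
  };
}

// The server's reply carries the clientId used on every later message:
// [{ channel: '/meta/handshake', successful: true, clientId: '...', advice: {...} }]
console.log(buildHandshake(1).channel); // → /meta/handshake
```

Every subsequent `/meta/connect`, `/meta/subscribe`, and event message then echoes the `clientId` returned by this exchange.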
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 1
🧹 Nitpick comments (7)
lib/fdcstream.js (1)
119-122: Consider returning the disconnect promise for proper cleanup sequencing.

The underlying `_cometd.disconnect()` is async but the return value is discarded. Callers who need to ensure disconnect completes before proceeding (e.g., in tests or graceful shutdown) cannot await it.

```diff
   /** Disconnect and close the CometD connection. */
-  disconnect() {
-    this._cometd.disconnect();
+  async disconnect() {
+    await this._cometd.disconnect();
   }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@lib/fdcstream.js` around lines 119 - 122, The disconnect() method currently calls this._cometd.disconnect() but discards its asynchronous result; change disconnect() to return the underlying promise so callers can await completion (either by adding async to disconnect and returning await this._cometd.disconnect() or simply returning this._cometd.disconnect()). Update the method signature for the class method disconnect to propagate the promise from this._cometd.disconnect() so tests and shutdown logic can await it.

README.md (1)
152-156: Minor: Consider hyphenating "Multi-User Mode" for consistency.

```diff
-## Single vs Multi User Mode
+## Single vs Multi-User Mode

-- **Multi mode** (default): pass `oauth` with each API call
+- **Multi-user mode** (default): pass `oauth` with each API call
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 152 - 156, Update the README wording to use hyphenated mode names for consistency: replace "Multi mode" and "Single mode" with "Multi-User Mode" and "Single-User Mode" in the "Single vs Multi User Mode" section, and ensure references to the authentication behavior (e.g., the phrase "OAuth is cached on the connection after authenticate()") still read correctly with the new names.

lib/cometd.js (3)
381-395: Unbounded reconnection retries could cause resource exhaustion.
`_scheduleReconnect` recursively calls itself on failure without any backoff or retry limit. Under persistent network issues, this creates infinite retry loops. Consider adding exponential backoff and/or a maximum retry count.

♻️ Suggested improvement with exponential backoff

```diff
+  // In constructor:
+  this._reconnectAttempts = 0;
+  this._maxReconnectAttempts = opts.maxReconnectAttempts || 10;

   _scheduleReconnect() {
     if (this._disconnecting) return;
+    if (this._reconnectAttempts >= this._maxReconnectAttempts) {
+      this.emit('error', new Error('Max reconnection attempts exceeded'));
+      return;
+    }
+
+    const delay = Math.min(
+      this._retryInterval * Math.pow(2, this._reconnectAttempts),
+      30000 // cap at 30 seconds
+    );
+    this._reconnectAttempts++;
+
-    this._connectTimer = setTimeout(async () => {
+    this._connectTimer = setTimeout(async () => {
       try {
         await this.handshake();
         for (const topic of this._subscriptions.keys()) {
           await this._sendSubscribe(topic);
         }
         await this.connect();
+        this._reconnectAttempts = 0; // Reset on success
       } catch {
         this._scheduleReconnect();
       }
-    }, this._retryInterval);
+    }, delay);
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/cometd.js` around lines 381 - 395, The _scheduleReconnect method currently recurses indefinitely on failure; modify it to implement exponential backoff and a maximum retry count: add a retry counter property (e.g., this._reconnectAttempts) that increments on each failed reconnect attempt and is reset to 0 on successful handshake/connect, compute the timeout using exponential backoff based on this._retryInterval (capped to a max, e.g., this._maxRetryInterval), and stop retrying after a configurable max attempts (e.g., this._maxReconnectAttempts) by clearing this._connectTimer and emitting/erroring appropriately; ensure you still respect this._disconnecting, reuse/clear this._connectTimer, and keep the existing logic that calls handshake(), _sendSubscribe(topic) for each subscription, and connect() on success.
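The backoff math in the suggestion can be checked in isolation. A small pure helper, assuming the suggested 30-second cap (the helper name is mine, not existing API):

```javascript
// Exponential backoff: the delay doubles per attempt and is capped at capMs.
function reconnectDelay(attempt, baseMs = 1000, capMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// With a 1s base, attempts 0..5 yield:
// 1000, 2000, 4000, 8000, 16000, 30000 (32000 capped to 30000)
```

Keeping the schedule in a pure function makes the reconnect loop itself trivial to unit-test without any network activity.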
149-172: WebSocket message handler may not be cleaned up on mismatched responses.

The handler added at line 164 is only removed when a matching response is received (line 156) or after the 10-second timeout (line 168). If responses arrive out of order or without matching IDs, handlers could accumulate until timeout.
Consider using a Map of pending requests keyed by message ID for more robust cleanup:
♻️ Suggested improvement

```diff
+  // In constructor:
+  this._pendingRequests = new Map(); // id → { resolve, reject, timer }

   _sendWs(messages) {
     return new Promise((resolve, reject) => {
       const expectedId = messages[0].id;
       const isConnect = messages[0].channel === '/meta/connect';

       if (isConnect) {
         this._pendingConnectResolve = resolve;
+        return;
       }

-      this._ws.send(JSON.stringify(messages));
-
-      if (!isConnect) {
-        const handler = async (event) => { ... };
-        this._ws.addEventListener('message', handler);
-        setTimeout(() => { ... }, 10000);
-      }
+      const timer = setTimeout(() => {
+        this._pendingRequests.delete(expectedId);
+        reject(new Error('CometD WebSocket response timeout'));
+      }, 10000);
+
+      this._pendingRequests.set(expectedId, { resolve, reject, timer });
+      this._ws.send(JSON.stringify(messages));
     });
   }
+
+  // In WebSocket message handler, resolve pending requests by ID
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/cometd.js` around lines 149 - 172, The current per-request message handler (the local handler added to this._ws) can accumulate if messages arrive with other IDs; replace this pattern with a centralized pending-requests registry: add a Map (e.g., this._pendingRequests) keyed by expectedId storing {resolve,reject,timeoutId}, install a single persistent this._ws 'message' listener that parses incoming messages, iterates responses and dispatches each response to the matching entry in this._pendingRequests by id (calling this._applyIncoming, resolve, clearing timeout, and deleting the Map entry), and have the per-request code simply insert its resolver/rejector and a timeout into the Map (timeout should call reject and delete the Map entry). Use the existing symbols expectedId, this._applyIncoming, resolve, reject and this._ws to locate the code to change.
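The registry the prompt describes can be sketched independently of the transport. Class and method names below are mine, not existing `lib/cometd.js` API:

```javascript
// Sketch of a pending-requests registry: one persistent 'message'
// listener settles requests by message id instead of per-request handlers.
class PendingRegistry {
  constructor(timeoutMs = 10000) {
    this.timeoutMs = timeoutMs;
    this.pending = new Map(); // id → { resolve, reject, timer }
  }

  // Register a request and get a promise that settles on response or timeout.
  add(id) {
    return new Promise((resolve, reject) => {
      const timer = setTimeout(() => {
        this.pending.delete(id);
        reject(new Error('CometD WebSocket response timeout'));
      }, this.timeoutMs);
      this.pending.set(id, { resolve, reject, timer });
    });
  }

  // Called from the single 'message' listener for each parsed response.
  settle(id, message) {
    const entry = this.pending.get(id);
    if (!entry) return false; // unknown or already-settled id
    clearTimeout(entry.timer);
    this.pending.delete(id);
    entry.resolve(message);
    return true;
  }
}
```

Because every pending entry is removed on settle or timeout, out-of-order or unmatched responses can no longer leak handlers.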
166-170: Consider making the WebSocket response timeout configurable.

The 10-second timeout is hardcoded. For consistency with other timeouts, consider using `this._timeout` or a dedicated option.

```diff
-      setTimeout(() => {
-        this._ws.removeEventListener('message', handler);
-        reject(new Error('CometD WebSocket response timeout'));
-      }, 10000);
+      setTimeout(() => {
+        this._ws.removeEventListener('message', handler);
+        reject(new Error('CometD WebSocket response timeout'));
+      }, this._timeout);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/cometd.js` around lines 166 - 170, The websocket response timeout is hardcoded to 10000ms in the setTimeout call; update the logic in the method that installs the handler (the setTimeout block referencing this._ws, handler, and reject) to use a configurable value (preferably this._timeout or a dedicated option like this._options.wsResponseTimeout) instead of 10000, falling back to 10000ms if the configurable value is undefined or invalid; ensure the timeout variable is read once before creating the timer and that existing behavior (removing the 'message' listener and rejecting with the same error message) is preserved.

test/mock/cometd-server.js (1)
201-205: Consider moving `require('crypto')` to module top-level.

Inline requires work but are unconventional. Moving it to the top improves readability and follows Node.js conventions.

```diff
 const http = require('http');
+const crypto = require('crypto');

 const DEFAULT_PORT = 34444;
```

Then at line 201:

```diff
-  const crypto = require('crypto');
   const acceptKey = crypto
```
Verify each finding against the current code and only fix it if needed. In `@test/mock/cometd-server.js` around lines 201 - 205, The inline require of crypto in test/mock/cometd-server.js should be moved to the module top-level: add const crypto = require('crypto') at the top of the file and remove the inline require so the acceptKey calculation (createHash('sha1').update(key + '258EAFA5-E914-47DA-95CA-5AB5DC65C97B').digest('base64')) continues to use the top-level crypto variable; this keeps require usage conventional and improves readability around the acceptKey/crypto usage.

test/streaming.js (1)
99-100: Fixed delays may cause test flakiness under load.

The 200ms delays for event delivery work but could become flaky on slower CI systems. Consider using a polling pattern or event-driven waits if flakiness occurs.

```javascript
// Alternative pattern (optional):
await new Promise((resolve) => {
  const check = () => (received.length > 0 ? resolve() : setTimeout(check, 20));
  setTimeout(check, 20);
});
```

Also applies to: 123-123, 247-250
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/streaming.js` around lines 99 - 100, Replace the fixed 200ms sleep used to wait for event delivery (the await new Promise(resolve => setTimeout(...)) call) with a deterministic wait: either poll for the condition (e.g., loop checking received.length > 0 with short timeouts) or subscribe to the event/emitter and resolve when the expected event arrives; change the three occurrences (the timeout at the shown diff plus the ones around lines 123 and 247-250) so tests use a polling helper or event-driven resolve instead of fixed delays to avoid CI flakiness, and keep the check tied to the actual observable used in the test (e.g., the received array or the stream/event emitter).
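The polling idea above can be packaged as a reusable test helper. A sketch; the name and defaults are mine:

```javascript
// Poll until a condition holds or a deadline passes — replaces fixed sleeps.
function waitFor(condition, { intervalMs = 20, timeoutMs = 2000 } = {}) {
  return new Promise((resolve, reject) => {
    const deadline = Date.now() + timeoutMs;
    const check = () => {
      if (condition()) return resolve();
      if (Date.now() > deadline) return reject(new Error('waitFor timed out'));
      setTimeout(check, intervalMs);
    };
    check(); // check immediately, then poll
  });
}
```

In a test this replaces a fixed 200ms sleep with `await waitFor(() => received.length > 0);`, which resolves as soon as the event lands and only fails after the full deadline.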
⛔ Files ignored due to path filters (1)

- `package-lock.json` is excluded by `!**/package-lock.json`

📒 Files selected for processing (6)

- `README.md`
- `lib/cometd.js`
- `lib/fdcstream.js`
- `package.json`
- `test/mock/cometd-server.js`
- `test/streaming.js`
```text
- API version must be fully-qualified (`'v45.0'`, not `42` or `'42'`)
- Streaming subscriptions require the full topic path (e.g. `/topic/MyTopic`)
- Requires Node.js >= 22.0
- Built-in CometD client replaces the faye dependency
```
Update JSDoc in lib/api.js to match new implementation.
Line 163 correctly states "Built-in CometD client replaces the faye dependency", but the JSDoc comment in lib/api.js:1-4 still references "Faye-based Streaming API client". Consider updating that comment for consistency.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` at line 163, Update the JSDoc at the top of lib/api.js to reflect
the new CometD-based implementation instead of the old Faye reference: locate
the header comment block (lines 1-4) that mentions "Faye-based Streaming API
client" and change the description to indicate the built-in CometD client
replaces the faye dependency and any related tags (`@module`, `@summary`) so the
docstring matches the actual CometD implementation used by the module.
Actionable comments posted: 1
🧹 Nitpick comments (2)
tasks/code-refactoring-summary.md (1)
102-117: Add language identifier to implementation sequence code block.

The implementation sequence (Step 1–14) is in a fenced code block without a language identifier. Adding `text` or `markdown` would improve rendering.

📝 Proposed fix

````diff
 Run `npm test` after each step to catch regressions immediately.

-```
+```text
 Step 1   R26  Remove section-divider comments in api.js (cosmetic baseline)
 Step 2   R28  Eliminate dead code + redundant Promise.resolve (clean up noise)
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tasks/code-refactoring-summary.md` around lines 102 - 117, The fenced code block containing the Step 1–14 implementation sequence lacks a language identifier; update the opening fence from ``` to ```text (or ```markdown) so it renders correctly—locate the block with the triple-backtick fences surrounding the step list (the lines starting with "Step 1 R26 ..." through "Step 14 R21 ...") and change the opening fence only, leaving the content intact.

CONTRIBUTING.md (1)
80-96: Add language identifiers to fenced code blocks.

Two fenced code blocks are missing language identifiers, which reduces syntax highlighting and readability:
- Lines 80–84: commit message type/scope format
- Lines 90–96: commit message examples
📝 Proposed fix
````diff
 Common types: `feat`, `fix`, `docs`, `refactor`, `test`, `chore`

 Examples:

-```
+```text
 feat(streaming): add platform event batch subscribe
 fix(auth): handle expired refresh token in single-user mode
 docs: update streaming API guide
````

And for the format block:

````diff
 This project follows [Conventional Commits](https://www.conventionalcommits.org/):

-```
+```text
 <type>(<scope>): <description>

 [optional body]
````
Verify each finding against the current code and only fix it if needed. In `@CONTRIBUTING.md` around lines 80 - 96, Add the language identifier "text" to the two fenced code blocks that show the commit message format and the examples: change the opening fences for the commit message format block (the block containing "<type>(<scope>): <description> [optional body]") and the commit examples block (the one listing feat(streaming)..., fix(auth)..., docs: update...) from ``` to ```text so both blocks render with proper syntax highlighting.
📒 Files selected for processing (17)
- `CLAUDE.md`
- `CODE_OF_CONDUCT.md`
- `CONTRIBUTING.md`
- `README.md`
- `code-refactoring-report.md`
- `code-refactoring-summary.md`
- `code-smell-detector-data.json`
- `code-smell-detector-report.md`
- `code-smell-detector-summary.md`
- `package.json`
- `refactoring-expert-data.json`
- `tasks/code-refactoring-report.md`
- `tasks/code-refactoring-summary.md`
- `tasks/code-smell-detector-data.json`
- `tasks/code-smell-detector-report.md`
- `tasks/code-smell-detector-summary.md`
- `tasks/refactoring-expert-data.json`
💤 Files with no reviewable changes (6)
- refactoring-expert-data.json
- code-smell-detector-data.json
- code-refactoring-report.md
- code-smell-detector-report.md
- code-refactoring-summary.md
- code-smell-detector-summary.md
✅ Files skipped from review due to trivial changes (3)
- tasks/code-smell-detector-data.json
- CLAUDE.md
- tasks/refactoring-expert-data.json
🚧 Files skipped from review as they are similar to previous changes (1)
- package.json
```text
## Change Log

Overview documentation on [changes between versions](https://stwissel.github.io/nforce8/Changelog.html)
```

```text
- **Node.js >= 22.4.0** — uses built-in `fetch` and a stable built-in `WebSocket` (Node’s global `WebSocket` was experimental in 22.0–22.3)
```
Inconsistent Node.js version requirement with CONTRIBUTING.md.
README.md specifies Node.js >= 22.4.0 (line 15), while CONTRIBUTING.md line 37 specifies >= 22.0. The README's more specific requirement (22.4.0 for stable WebSocket) appears correct given the rationale, but CONTRIBUTING.md should match.
🔧 Proposed fix for CONTRIBUTING.md
```diff
 ### Prerequisites

-- **Node.js >= 22.0** (uses built-in `fetch` and `WebSocket`)
+- **Node.js >= 22.4.0** (uses built-in `fetch` and stable `WebSocket`)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```text
### Prerequisites

- **Node.js >= 22.4.0** (uses built-in `fetch` and stable `WebSocket`)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` at line 15, Update the Node.js version requirement in
CONTRIBUTING.md to match README.md by changing the version string ">= 22.0" to
">= 22.4.0" so both docs require the same minimum (the stable WebSocket) —
locate the occurrence of ">= 22.0" in CONTRIBUTING.md and replace it with ">=
22.4.0".
Previously upsert() bypassed the applyBody helper and always sent JSON, producing incorrect requests for binary SObjects like ContentVersion. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
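`applyBody` is named in the commit, but its real signature isn't shown here. The sketch below only illustrates the idea of choosing the content type per record instead of always serializing JSON; the parameter shape and field names are hypothetical:

```javascript
// Hypothetical sketch: route writes through one body helper so binary
// SObjects (e.g. ContentVersion) are not forced into JSON serialization.
// The real applyBody in lib/ may differ; names here are illustrative only.
function applyBody(req, record) {
  if (record.attachment) {
    // Binary payloads go out raw with a binary content type.
    req.headers['Content-Type'] = 'application/octet-stream';
    req.body = record.attachment;
  } else {
    req.headers['Content-Type'] = 'application/json';
    req.body = JSON.stringify(record.fields);
  }
  return req;
}
```

The bug described above is exactly what happens when one call path (here, `upsert()`) bypasses such a helper and unconditionally takes the JSON branch.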
Actionable comments posted: 7
🧹 Nitpick comments (6)
tasks/refactoring-expert-data.json (2)
4-21: Use structured fields instead of embedding ID + description in `item` strings.

Right now, IDs and descriptions are packed into one text field and duplicated between `priority_matrix` and `implementation_sequence`. This is brittle for tooling and easy to desync during edits.

♻️ Suggested schema refactor

```diff
 {
   "total_recommendations": 18,
   "priority_matrix": [
-    {"item": "R01 — Add missing require('crypto') to test/mock/cometd-server.js", "impact": "H", "complexity": "L", "risk": "L"},
+    {"id": "R01", "title": "Add missing require('crypto')", "path": "test/mock/cometd-server.js", "impact": "H", "complexity": "L", "risk": "L"},
     ...
   ],
   "implementation_sequence": [
-    {"order": 1, "item": "R01 — Add missing require('crypto') to test/mock/cometd-server.js", "rationale": "..."},
+    {"order": 1, "id": "R01", "rationale": "..."},
     ...
   ]
 }
```

Also applies to: 37-54
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tasks/refactoring-expert-data.json` around lines 4 - 21, The JSON items embed an ID and description together in the "item" string (e.g. "R01 — Add missing require('crypto')..."), which is brittle and duplicated across priority_matrix and implementation_sequence; refactor each entry into structured fields like {"id":"R01","title":"Add missing require('crypto') to test/mock/cometd-server.js","impact":"H","complexity":"L","risk":"L"} (apply for all R01..R18 and the other occurrences at 37-54), update both priority_matrix and implementation_sequence to use the new object shape, update any code that consumes this file to read id/title instead of parsing "item", and add or update any JSON schema/validation to reflect the new fields to prevent regressions.
23-35: Avoid manually maintained aggregate counts.
`risk_distribution` and `category_distribution` are derived data from recommendation entries, so they can silently drift after future edits. Prefer generating these in build/docs tooling or validating them in CI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tasks/refactoring-expert-data.json` around lines 23 - 35, The JSON contains manually maintained aggregate keys risk_distribution and category_distribution that can drift; remove these static counts and instead compute them from the recommendation entries at build or CI time (e.g., add a build/docs script or a CI validation step that parses the recommendations, counts risk levels and categories, and fails the build if totals don’t match); if you prefer to keep the fields for readability, make them read-only generated artifacts by adding a generation step (computeAggregates or similar) that writes these keys from the source entries and add a CI check to validate they match.

README.md (1)
152-156: Minor grammar: use hyphen in "Multi-User Mode" heading.

The heading "Single vs Multi User Mode" should use a hyphen for consistency with the bold text below it ("Multi-User Mode").

```diff
-## Single vs Multi User Mode
+## Single vs Multi-User Mode
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 152 - 156, The heading "Single vs Multi User Mode" is missing a hyphen; update the README heading to "Single vs Multi-User Mode" to match the phrasing used in the section (and the bold "Multi-User Mode") for consistent grammar and style.

test/mock/cometd-server.js (2)
276-278: Hoist inline `require` to top of file.

The inline `require('events')` inside the method is unconventional. Module imports should be at the top of the file for clarity and to avoid repeated lookups (though Node caches requires).

♻️ Proposed fix

At top of file with other requires:

```diff
 const http = require('http');
+const crypto = require('crypto');
+const EventEmitter = require('events');
```

In the method:

```diff
-const emitter = new (require('events').EventEmitter)();
+const emitter = new EventEmitter();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/mock/cometd-server.js` around lines 276 - 278, The inline require in _createWsWrapper should be hoisted: move const { EventEmitter } = require('events') to the top of the file with the other requires and replace new (require('events').EventEmitter)() inside _createWsWrapper with new EventEmitter(); this avoids repeated lookups and follows project import conventions.
37-41: Add braces after `if` condition for consistency.

Static analysis flagged the missing braces on line 38. While functionally correct, adding braces improves readability and prevents future maintenance errors.

♻️ Proposed fix

```diff
 this.server.listen(this.port, (err) => {
-  if (err) return reject(err);
+  if (err) {
+    return reject(err);
+  }
   resolve();
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/mock/cometd-server.js` around lines 37 - 41, The callback passed to this.server.listen (the anonymous (err) => { ... } handler) uses a single-line if (err) return reject(err); without braces; update that handler to add braces around the if body so it becomes if (err) { return reject(err); } to match project style and improve readability and safety.test/streaming.js (1)
119-140: Consider increasing the delay for CI environments.

The 100ms delay at line 134 to confirm no events arrive after unsubscribe may be insufficient under heavy CI load, potentially causing flaky tests. Consider increasing to 200-300ms or documenting this as a known limitation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/streaming.js` around lines 119 - 140, The 100ms wait after calling sub.cancel() is too short for CI and can cause flakes; increase the delay to a safer value (e.g., 250–300ms) or make it conditional on CI (use process.env.CI to set 300ms vs 100ms locally). Update the timeout in the test around received/sub.cancel()/server.pushEvent('/topic/UnsubTest') so the assertion received.length.should.equal(0) runs after the longer wait to avoid intermittent failures.
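The CI-conditional delay this comment suggests can be sketched as a tiny helper. This is illustrative only: the `QUIET_PERIOD_MS` name, the 300ms/100ms split, and the `process.env.CI` check are assumptions, not code from this PR.

```javascript
// Hypothetical helper for test/streaming.js: wait longer on CI runners,
// where event-loop scheduling is noisier, before asserting "no events arrived".
const QUIET_PERIOD_MS = process.env.CI ? 300 : 100;

// Resolve after the quiet period so stray deliveries have time to surface.
function settle(ms = QUIET_PERIOD_MS) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Usage sketch inside the unsubscribe test:
//   await sub.cancel();
//   server.pushEvent('/topic/UnsubTest', {});
//   await settle();               // give any stray delivery time to arrive
//   received.length.should.equal(0);
```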
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/cometd.js`:
- Around line 286-291: Subscriber callbacks invoked from the incoming-message
dispatch (where this._subscriptions.get(msg.channel) yields callback and
callback(msg.data) is currently called) must be isolated so thrown errors don't
bubble into the transport/async flow; wrap the invocation in a try/catch (for
both occurrences around lines handling data messages and the similar block at
352-356) and handle the exception locally (e.g., log via the existing logger or
this._logger/error handler) without rethrowing so transport state and reconnect
logic are not affected.
- Around line 154-156: The pending /meta/connect promise stored in
_pendingConnectResolve can remain unresolved if the WebSocket closes/errors
while isConnect is true; update the socket close and error handling to settle
that pending promise (call the stored _pendingConnectResolve with an error or
appropriate rejection/resolution) and then clear _pendingConnectResolve to avoid
stale references. Locate the places where isConnect is set and where the socket
'close'/'error' handlers live (references: _pendingConnectResolve, isConnect)
and add logic to detect a pending connect, invoke the resolver with an error or
rejection value, and null out _pendingConnectResolve; apply the same fix to the
other similar blocks referenced around the file (the sections noted at 240-245
and 248-255).
- Around line 165-167: The current filter treats responses with no id (the
expression responses.filter((r) => r.id === expectedId || !r.id)) as matches
which allows unsolicited frames to resolve pending requests; change the
predicate to only match responses with an id equal to expectedId (e.g.,
responses.filter(r => r.id === expectedId)) and remove the fallback || !r.id
logic, and apply the same change to the other occurrence around lines 179-183 so
only responses with a matching r.id resolve the pending request.
- Around line 126-141: The _sendHttp HTTP transport lacks timeout/abort
handling; add an AbortController and timeout logic to prevent fetch from
hanging: create an AbortController, start a timer (use an existing this._timeout
value or a sensible default), pass controller.signal into the fetch call for
this._endpoint with this._headers, clear the timer after fetch completes, and
convert an abort into a clear timeout error (throw a descriptive Error on
timeout). Ensure existing response handling still calls this._applyIncoming for
each response and that you clear the timer in both success and error paths to
avoid leaks.
- Around line 231-234: The native Node WebSocket constructor ignores a third
options arg so creating this._ws with headers is ineffective; replace the native
WebSocket with the 'ws' client (import/require('ws')) and instantiate this._ws =
new WebSocket(wsUrl, ["cometd"], { headers: this._headers }) so headers are
actually sent, or if you cannot add the dependency, encode auth into
this._endpoint/ wsUrl (e.g., query/token) before calling new WebSocket(wsUrl,
["cometd"]) to ensure the server sees the auth; update references to WebSocket,
wsUrl, this._endpoint and this._headers accordingly.
In `@test/mock/cometd-server.js`:
- Around line 1-5: The module fails at runtime because crypto.createHash(...) is
used (around the WebSocket upgrade handling where createHash is called) but the
Node 'crypto' module is not imported; add a top-level import for the Node crypto
module (e.g. const crypto = require('crypto')) and ensure the code that calls
crypto.createHash(...) uses that imported symbol (verify usages such as the call
around the WebSocket upgrade / handshake logic refer to the imported crypto).
---
Nitpick comments:
In `@README.md`:
- Around line 152-156: The heading "Single vs Multi User Mode" is missing a
hyphen; update the README heading to "Single vs Multi-User Mode" to match the
phrasing used in the section (and the bold "Multi-User Mode") for consistent
grammar and style.
In `@tasks/refactoring-expert-data.json`:
- Around line 4-21: The JSON items embed an ID and description together in the
"item" string (e.g. "R01 — Add missing require('crypto')..."), which is brittle
and duplicated across priority_matrix and implementation_sequence; refactor each
entry into structured fields like {"id":"R01","title":"Add missing
require('crypto') to
test/mock/cometd-server.js","impact":"H","complexity":"L","risk":"L"} (apply for
all R01..R18 and the other occurrences at 37-54), update both priority_matrix
and implementation_sequence to use the new object shape, update any code that
consumes this file to read id/title instead of parsing "item", and add or update
any JSON schema/validation to reflect the new fields to prevent regressions.
- Around line 23-35: The JSON contains manually maintained aggregate keys
risk_distribution and category_distribution that can drift; remove these static
counts and instead compute them from the recommendation entries at build or CI
time (e.g., add a build/docs script or a CI validation step that parses the
recommendations, counts risk levels and categories, and fails the build if
totals don’t match); if you prefer to keep the fields for readability, make them
read-only generated artifacts by adding a generation step (computeAggregates or
similar) that writes these keys from the source entries and add a CI check to
validate they match.
In `@test/mock/cometd-server.js`:
- Around line 276-278: The inline require in _createWsWrapper should be hoisted:
move const { EventEmitter } = require('events') to the top of the file with the
other requires and replace new (require('events').EventEmitter)() inside
_createWsWrapper with new EventEmitter(); this avoids repeated lookups and
follows project import conventions.
- Around line 37-41: The callback passed to this.server.listen (the anonymous
(err) => { ... } handler) uses a single-line if (err) return reject(err);
without braces; update that handler to add braces around the if body so it
becomes if (err) { return reject(err); } to match project style and improve
readability and safety.
In `@test/streaming.js`:
- Around line 119-140: The 100ms wait after calling sub.cancel() is too short
for CI and can cause flakes; increase the delay to a safer value (e.g.,
250–300ms) or make it conditional on CI (use process.env.CI to set 300ms vs
100ms locally). Update the timeout in the test around
received/sub.cancel()/server.pushEvent('/topic/UnsubTest') so the assertion
received.length.should.equal(0) runs after the longer wait to avoid intermittent
failures.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1d1d76ce-c977-4f16-b56b-e1d6fdf079a0
📒 Files selected for processing (12)
- CLAUDE.md
- README.md
- lib/cometd.js
- lib/fdcstream.js
- tasks/code-refactoring-report.md
- tasks/code-refactoring-summary.md
- tasks/code-smell-detector-data.json
- tasks/code-smell-detector-report.md
- tasks/code-smell-detector-summary.md
- tasks/refactoring-expert-data.json
- test/mock/cometd-server.js
- test/streaming.js
✅ Files skipped from review due to trivial changes (1)
- tasks/code-smell-detector-data.json
🚧 Files skipped from review as they are similar to previous changes (1)
- CLAUDE.md
async _sendHttp(messages) {
  const res = await fetch(this._endpoint, {
    method: "POST",
    headers: this._headers,
    body: JSON.stringify(messages),
  });
  if (!res.ok) {
    throw new Error("CometD HTTP error: " + res.status);
  }
  const responses = await res.json();
  const incoming = [];
  for (const r of responses) {
    incoming.push(await this._applyIncoming(r));
  }
  return incoming;
}
Add request timeout control to HTTP transport calls.
fetch has no abort/timeout here. A stalled network call can block handshake/connect flow indefinitely and delay reconnect logic.
Proposed fix
async _sendHttp(messages) {
- const res = await fetch(this._endpoint, {
- method: "POST",
- headers: this._headers,
- body: JSON.stringify(messages),
- });
- if (!res.ok) {
- throw new Error("CometD HTTP error: " + res.status);
- }
- const responses = await res.json();
- const incoming = [];
- for (const r of responses) {
- incoming.push(await this._applyIncoming(r));
- }
- return incoming;
+ const controller = new AbortController();
+ const timeoutId = setTimeout(() => controller.abort(), this._timeout);
+ try {
+ const res = await fetch(this._endpoint, {
+ method: 'POST',
+ headers: this._headers,
+ body: JSON.stringify(messages),
+ signal: controller.signal,
+ });
+ if (!res.ok) {
+ throw new Error('CometD HTTP error: ' + res.status);
+ }
+ const responses = await res.json();
+ const incoming = [];
+ for (const r of responses) {
+ incoming.push(await this._applyIncoming(r));
+ }
+ return incoming;
+ } finally {
+ clearTimeout(timeoutId);
+ }
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/cometd.js` around lines 126 - 141, The _sendHttp HTTP transport lacks
timeout/abort handling; add an AbortController and timeout logic to prevent
fetch from hanging: create an AbortController, start a timer (use an existing
this._timeout value or a sensible default), pass controller.signal into the
fetch call for this._endpoint with this._headers, clear the timer after fetch
completes, and convert an abort into a clear timeout error (throw a descriptive
Error on timeout). Ensure existing response handling still calls
this._applyIncoming for each response and that you clear the timer in both
success and error paths to avoid leaks.
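The AbortController pattern the review proposes can be exercised standalone. The sketch below shows the timer/abort/cleanup shape without touching the network; `slowOperation` and `withTimeout` are illustrative stand-ins (not lib/cometd.js code), with `slowOperation` playing the role of `fetch`.

```javascript
// `slowOperation` stands in for fetch(); it rejects when the signal aborts.
function slowOperation(ms, signal) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve('done'), ms);
    signal.addEventListener('abort', () => {
      clearTimeout(timer);
      const err = new Error('aborted');
      err.name = 'AbortError';
      reject(err);
    });
  });
}

// Timeout-guarded async call: abort after `ms`, convert the abort into a
// descriptive error, and always clear the timer to avoid leaks.
async function withTimeout(ms, work) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), ms);
  try {
    return await work(controller.signal);
  } catch (err) {
    if (err.name === 'AbortError') {
      throw new Error(`request timed out after ${ms}ms`);
    }
    throw err;
  } finally {
    clearTimeout(timeoutId); // cleared on both success and error paths
  }
}

async function demo() {
  const ok = await withTimeout(100, (signal) => slowOperation(10, signal));
  let timedOut = false;
  try {
    await withTimeout(20, (signal) => slowOperation(1000, signal));
  } catch (err) {
    timedOut = /timed out/.test(err.message);
  }
  return { ok, timedOut };
}
```

The same shape applies to the real `_sendHttp`: pass `controller.signal` into `fetch` and wrap the whole call in `try/finally`.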
if (isConnect) {
  this._pendingConnectResolve = resolve;
}
Pending /meta/connect promise is never settled on socket drop.
If WS closes/errors while a connect call is waiting, _pendingConnectResolve remains unresolved. That can leave a loop frame hanging and accumulate stale pending work across reconnects.
Proposed fix
this._ws.addEventListener("close", () => {
+ if (this._pendingConnectResolve) {
+ const resolve = this._pendingConnectResolve;
+ this._pendingConnectResolve = null;
+ resolve([{ channel: '/meta/connect', successful: false }]);
+ }
if (!this._disconnecting) {
this._connected = false;
this.emit("transport:down");
this._scheduleReconnect();
}
 });

Also applies to: 240-245, 248-255
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/cometd.js` around lines 154 - 156, The pending /meta/connect promise
stored in _pendingConnectResolve can remain unresolved if the WebSocket
closes/errors while isConnect is true; update the socket close and error
handling to settle that pending promise (call the stored _pendingConnectResolve
with an error or appropriate rejection/resolution) and then clear
_pendingConnectResolve to avoid stale references. Locate the places where
isConnect is set and where the socket 'close'/'error' handlers live (references:
_pendingConnectResolve, isConnect) and add logic to detect a pending connect,
invoke the resolver with an error or rejection value, and null out
_pendingConnectResolve; apply the same fix to the other similar blocks
referenced around the file (the sections noted at 240-245 and 248-255).
const matching = responses.filter(
  (r) => r.id === expectedId || !r.id,
);
WebSocket response matching can resolve the wrong request.
Treating !r.id as a match lets unsolicited data frames satisfy a pending meta request. This can falsely resolve subscribe/unsubscribe/disconnect calls.
Proposed fix
-const matching = responses.filter(
- (r) => r.id === expectedId || !r.id,
-);
-if (matching.length > 0) {
+const hasMatch = responses.some((r) => r.id === expectedId);
+if (hasMatch) {
this._ws.removeEventListener("message", handler);
+ clearTimeout(timeoutId);
const incoming = [];
for (const r of responses) {
incoming.push(await this._applyIncoming(r));
}
resolve(incoming);
}
...
-setTimeout(() => {
+const timeoutId = setTimeout(() => {
this._ws.removeEventListener("message", handler);
reject(new Error("CometD WebSocket response timeout"));
 }, this._wsResponseTimeout);

Also applies to: 179-183
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/cometd.js` around lines 165 - 167, The current filter treats responses
with no id (the expression responses.filter((r) => r.id === expectedId ||
!r.id)) as matches which allows unsolicited frames to resolve pending requests;
change the predicate to only match responses with an id equal to expectedId
(e.g., responses.filter(r => r.id === expectedId)) and remove the fallback ||
!r.id logic, and apply the same change to the other occurrence around lines
179-183 so only responses with a matching r.id resolve the pending request.
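The difference between the permissive and strict predicates can be demonstrated with plain data. The frame shapes below are illustrative Bayeux-style objects, not captured traffic:

```javascript
// Why `r.id === expectedId || !r.id` is too permissive: an unsolicited data
// frame (which carries no id) must not settle a pending /meta request.
const expectedId = '7';
const frames = [
  { channel: '/topic/Account', data: { name: 'Acme' } }, // unsolicited, no id
];

const permissive = frames.filter((r) => r.id === expectedId || !r.id);
const strict = frames.filter((r) => r.id === expectedId);
// permissive matches the data frame and would wrongly resolve the request;
// strict matches nothing, so the request keeps waiting.

// The real reply, carrying the expected id, still matches the strict check:
const reply = [{ id: '7', channel: '/meta/subscribe', successful: true }];
const strictMatch = reply.some((r) => r.id === expectedId);
```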
lib/cometd.js
Outdated
const wsUrl = this._endpoint.replace(/^http/, "ws");
this._ws = new WebSocket(wsUrl, ["cometd"], {
  headers: this._headers,
});
🧩 Analysis chain
🌐 Web query:
In Node.js v22+, does the native/global WebSocket constructor support passing custom HTTP headers (e.g., Authorization) via a third options argument?
💡 Result:
No, the native/global WebSocket constructor in Node.js v22+ does not support passing custom HTTP headers (e.g., Authorization) via a third options argument or any options argument.
Citations:
- 1: https://undici-docs.vramana.dev/docs/api/WebSocket
- 2: websocket: add websocketinit nodejs/undici#2088
- 3: https://nodejs.org/en/learn/getting-started/websocket
- 4: https://nodejs.dev/en/learn/getting-started/websocket
- 5: https://nodejs.org/docs/latest-v22.x/api/globals.html
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Confirm whether a ws-style client is used/imported anywhere.
rg -n "new\\s+WebSocket\\(" -C2
rg -n "require\\(['\"]ws['\"]\\)|from ['\"]ws['\"]" -C2
# Check declared runtime constraints and dependencies.
cat package.json | jq '{engines, dependencies, devDependencies}'

Repository: Stwissel/nforce8
Length of output: 340
WebSocket headers option is unsupported and will be silently ignored.
The native WebSocket constructor in Node.js does not accept a third options argument with headers. The code at line 232:
this._ws = new WebSocket(wsUrl, ["cometd"], {
headers: this._headers,
});

will ignore the headers property. To pass authentication headers, use the ws library instead or set headers via the URL (if the Bayeux server supports it).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/cometd.js` around lines 231 - 234, The native Node WebSocket constructor
ignores a third options arg so creating this._ws with headers is ineffective;
replace the native WebSocket with the 'ws' client (import/require('ws')) and
instantiate this._ws = new WebSocket(wsUrl, ["cometd"], { headers: this._headers
}) so headers are actually sent, or if you cannot add the dependency, encode
auth into this._endpoint/ wsUrl (e.g., query/token) before calling new
WebSocket(wsUrl, ["cometd"]) to ensure the server sees the auth; update
references to WebSocket, wsUrl, this._endpoint and this._headers accordingly.
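The header-less fallback can be sketched as pure URL construction. Everything here is illustrative: the `buildWsUrl` helper, the `access_token` parameter name, and the example hostname are assumptions; whether a given Bayeux server accepts a query-string token is server-specific.

```javascript
// Carry the token in the URL instead of an Authorization header, since the
// native WebSocket constructor takes only (url, protocols).
function buildWsUrl(endpoint, token) {
  const url = new URL(endpoint.replace(/^http/, 'ws')); // https -> wss
  url.searchParams.set('access_token', token); // hypothetical parameter name
  return url.toString();
}

const wsUrl = buildWsUrl('https://example.my.salesforce.com/cometd/59.0', 'SESSION123');
// The constructor call then stays header-free:
//   const ws = new WebSocket(wsUrl, ['cometd']);
```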
// Data messages — dispatch to subscription callback
if (msg.data !== undefined && msg.channel) {
  const callback = this._subscriptions.get(msg.channel);
  if (callback && typeof callback === "function") {
    callback(msg.data);
  }
Isolate subscriber callback failures from transport state.
A thrown callback currently bubbles into transport flow (or async handler context), which can trigger reconnect behavior for application-level handler bugs.
Proposed fix
if (callback && typeof callback === "function") {
- callback(msg.data);
+ try {
+ callback(msg.data);
+ } catch (err) {
+ this.emit('error', err);
+ }
 }

Also applies to: 352-356
🧰 Tools
🪛 GitHub Check: Codacy Static Code Analysis
[warning] 287-287: lib/cometd.js#L287
Unexpected use of undefined.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/cometd.js` around lines 286 - 291, Subscriber callbacks invoked from the
incoming-message dispatch (where this._subscriptions.get(msg.channel) yields
callback and callback(msg.data) is currently called) must be isolated so thrown
errors don't bubble into the transport/async flow; wrap the invocation in a
try/catch (for both occurrences around lines handling data messages and the
similar block at 352-356) and handle the exception locally (e.g., log via the
existing logger or this._logger/error handler) without rethrowing so transport
state and reconnect logic are not affected.
// Auto-handshake and connect
this._ready = this._init();
}

async _init() {
  await this._cometd.handshake();
  await this._cometd.connect();
}
Unhandled rejection if _init() fails.
If handshake() or connect() throws, the _ready promise rejects. Subscriptions await _ready and will catch this, but if no subscriptions are created, the rejection goes unhandled. Consider adding a .catch() to emit an error or log a warning.
🛡️ Proposed fix to prevent unhandled rejection
// Auto-handshake and connect
- this._ready = this._init();
+ this._ready = this._init().catch((err) => {
+ this.emit('error', err);
+ throw err; // Re-throw so subscriptions also see the error
+ });
 }
'use strict';

const http = require('http');

const DEFAULT_PORT = 34444;
Critical: Missing crypto module import causes runtime error.
The file uses crypto.createHash() at line 201 but never imports the crypto module. In Node.js 22+, globalThis.crypto exists (Web Crypto API), but globalThis.crypto.createHash does not exist—only globalThis.crypto.subtle does. This will throw TypeError: crypto.createHash is not a function when the WebSocket upgrade path is exercised.
🐛 Proposed fix
'use strict';
const http = require('http');
+const crypto = require('crypto');
const DEFAULT_PORT = 34444;
🧰 Tools
🪛 GitHub Check: Codacy Static Code Analysis
[notice] 3-3: test/mock/cometd-server.js#L3
Strings must use doublequote.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/mock/cometd-server.js` around lines 1 - 5, The module fails at runtime
because crypto.createHash(...) is used (around the WebSocket upgrade handling
where createHash is called) but the Node 'crypto' module is not imported; add a
top-level import for the Node crypto module (e.g. const crypto =
require('crypto')) and ensure the code that calls crypto.createHash(...) uses
that imported symbol (verify usages such as the call around the WebSocket
upgrade / handshake logic refer to the imported crypto).
The .catch(should.not.exist).finally(done) pattern silently passes when assertions throw. Replaced 13 occurrences with returned promises. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Only removed the wrapper in authenticate() where it's inside a .then(). Kept _notifyAndResolve's Promise.resolve() since it must return a consistent Promise type for direct callers. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…s (R12, R13) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Uses function.length to detect legacy callback-style (arity >= 3) vs modern async/promise handlers, maintaining full backward compatibility. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Each test file now gets its own MockSfdcApi instance with isolated port and server state, eliminating shared module-level state between test files. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 6
♻️ Duplicate comments (6)
test/mock/cometd-server.js (1)
3-5: ⚠️ Potential issue | 🔴 Critical

Still unresolved: import Node's `crypto` module before using `createHash()`.

Line 202 calls `crypto.createHash(...)`, but this file never imports `crypto`. The WebSocket upgrade path will throw as soon as a WS test exercises it.

#!/bin/bash
sed -n '1,8p' test/mock/cometd-server.js
sed -n '194,205p' test/mock/cometd-server.js

🐛 Proposed fix

 const EventEmitter = require('events');
+const crypto = require('crypto');
 const http = require('http');

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/mock/cometd-server.js` around lines 3 - 5, The file calls crypto.createHash(...) (used in the WebSocket upgrade path) but never requires Node's crypto module; add a top-level import like requiring 'crypto' (e.g. const crypto = require('crypto')) near the other requires so createHash() has access; update any tests or helper functions that reference createHash() to use this imported crypto symbol.lib/cometd.js (5)
240-255: ⚠️ Potential issue | 🟠 Major

Still unresolved: settle the pending `/meta/connect` promise on socket shutdown.

If the socket closes/errors while `_sendWs()` is waiting on a connect response, `_pendingConnectResolve` stays live forever. You then schedule a reconnect on top of a hung loop frame.

🔌 Proposed fix

 _connectWebSocket() {
   return new Promise((resolve) => {
     const wsUrl = this._endpoint.replace(/^http/, 'ws');
     this._ws = new WebSocket(wsUrl, ['cometd'], {
       headers: this._headers,
     });
+    const settlePendingConnect = () => {
+      if (this._pendingConnectResolve) {
+        const resolvePending = this._pendingConnectResolve;
+        this._pendingConnectResolve = null;
+        resolvePending([{ channel: '/meta/connect', successful: false }]);
+      }
+    };
     this._ws.addEventListener('open', () => {
       resolve();
     });
     this._ws.addEventListener('close', () => {
+      settlePendingConnect();
       if (!this._disconnecting) {
         this._connected = false;
         this.emit('transport:down');
         this._scheduleReconnect();
       }
     });
     this._ws.addEventListener('error', () => {
+      settlePendingConnect();
       if (!this._connected) {
         // Failed to connect — fall back to long-polling
         this._transport = 'long-polling';
         this._ws = null;
         resolve();

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/cometd.js` around lines 240 - 255, When the WebSocket shuts down in the close or error handlers, ensure any pending `/meta/connect` promise from _sendWs is settled: if _pendingConnectResolve exists, call it (or call and then null it) before changing state or scheduling a reconnect. Update the close handler inside the _ws.addEventListener('close', ...) to call _pendingConnectResolve() and set _pendingConnectResolve = null before emitting 'transport:down' and calling _scheduleReconnect(), and update the error handler in the same way (or call the resolver then switch _transport to 'long-polling' and set _ws = null), ensuring the resolver is invoked exactly once to avoid hung promises.
126-140: ⚠️ Potential issue | 🟠 Major

Still unresolved: add a real timeout to HTTP transport calls.

`fetch()` here can wait forever. A stalled handshake/connect blocks `transport:down`, reconnect, and even `disconnect()` while the long poll is still in flight.

In Node.js global fetch, is there a default request timeout, or must timeouts be implemented explicitly with AbortController/AbortSignal?

⏱️ Proposed fix

 async _sendHttp(messages) {
-  const res = await fetch(this._endpoint, {
-    method: 'POST',
-    headers: this._headers,
-    body: JSON.stringify(messages),
-  });
-  if (!res.ok) {
-    throw new Error('CometD HTTP error: ' + res.status);
-  }
-  const responses = await res.json();
-  const incoming = [];
-  for (const r of responses) {
-    incoming.push(await this._applyIncoming(r));
-  }
-  return incoming;
+  const controller = new AbortController();
+  const timeoutId = setTimeout(() => controller.abort(), this._timeout);
+  try {
+    const res = await fetch(this._endpoint, {
+      method: 'POST',
+      headers: this._headers,
+      body: JSON.stringify(messages),
+      signal: controller.signal,
+    });
+    if (!res.ok) {
+      throw new Error('CometD HTTP error: ' + res.status);
+    }
+    const responses = await res.json();
+    const incoming = [];
+    for (const r of responses) {
+      incoming.push(await this._applyIncoming(r));
+    }
+    return incoming;
+  } catch (err) {
+    if (err.name === 'AbortError') {
+      throw new Error('CometD HTTP request timed out after ' + this._timeout + 'ms');
+    }
+    throw err;
+  } finally {
+    clearTimeout(timeoutId);
+  }
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/cometd.js` around lines 126 - 140, The _sendHttp method currently calls fetch without a timeout; add an AbortController and a configurable timeout (e.g., this._httpTimeout defaulting to a sane value) so the fetch is aborted if it exceeds the timeout. Create an AbortController, pass its signal to fetch, start a setTimeout that calls controller.abort() after the timeout, and clear that timer once fetch/res.json completes; catch the abort and rethrow a descriptive timeout Error (e.g., "CometD HTTP timeout") so callers of _sendHttp/_applyIncoming can handle it. Ensure you still use this._endpoint, this._headers, and this._applyIncoming as before and that the timer is always cleared to avoid leaks.
273-292: ⚠️ Potential issue | 🟠 Major

Still unresolved: isolate subscriber callback failures from transport state.

A thrown consumer callback should not tear down the transport. In the long-poll path, this currently lands in `_connectLoop()`'s catch and triggers reconnect; the same unchecked callback path exists again at Lines 351-355.

🧯 Proposed fix

+_dispatchSubscription(msg) {
+  const callback = this._subscriptions.get(msg.channel);
+  if (!callback || typeof callback !== 'function') return;
+  try {
+    callback(msg.data);
+  } catch (err) {
+    this.emit('error', err);
+  }
+}
+
 _handleMessage(msg) {
   if (msg.channel === '/meta/connect') {
     if (msg.advice) {
       Object.assign(this._advice, msg.advice);
     }
   }
@@
   // Data messages — dispatch to subscription callback
   if (msg.data !== undefined && msg.channel) {
-    const callback = this._subscriptions.get(msg.channel);
-    if (callback && typeof callback === 'function') {
-      callback(msg.data);
-    }
+    this._dispatchSubscription(msg);
   }
 }

Apply the same helper in the long-poll dispatch block at Lines 351-355.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/cometd.js` around lines 273 - 292, Subscriber callback exceptions in _handleMessage are currently allowed to bubble up and can cause transport-level reconnects; change the invocation of subscription callbacks (the result of this._subscriptions.get(msg.channel) currently invoked via callback(msg.data)) to be wrapped in the same safe/isolating helper used in the long-poll path so thrown errors are caught and logged but do not rethrow. Locate _handleMessage and replace the direct callback(msg.data) call with a try/catch (or call the existing helper used in _connectLoop) that logs the error and prevents it from propagating to _connectLoop/transport logic.
229-234: ⚠️ Potential issue | 🟠 Major

Still unresolved: the native `WebSocket` path is not actually sending these headers.

The built-in Node `WebSocket` constructor does not use a third `{ headers }` argument. The fallback at Lines 248-254 hides this in mock-based testing, but real Salesforce WebSocket upgrades will miss auth headers and never validate the advertised transport.

In Node.js v22+, does the built-in/global WebSocket constructor support a third options argument for custom HTTP headers during the opening handshake?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/cometd.js` around lines 229 - 234, The WebSocket instantiation in _connectWebSocket is passing a third options arg (headers) which the Node built-in/global WebSocket does not accept; change the logic so when custom headers exist (this._headers) you create a WebSocket that supports headers (for example require/import the 'ws' package) and use that constructor to set this._ws with (wsUrl, ['cometd'], { headers: this._headers }); otherwise fall back to the global WebSocket without options. Update the conditional that sets this._ws (referencing _connectWebSocket, this._endpoint, this._headers, this._ws) so tests still work with the mock fallback but real Node v22+ connections include the auth headers.
165-167: ⚠️ Potential issue | 🟠 Major

Still unresolved: only resolve non-connect requests on their own `id`.

`|| !r.id` lets unsolicited data frames satisfy subscribe/unsubscribe/disconnect waits. That can resolve the wrong promise and drop the real response later.

🎯 Proposed fix

-const matching = responses.filter(
-  (r) => r.id === expectedId || !r.id,
-);
-if (matching.length > 0) {
+if (responses.some((r) => r.id === expectedId)) {
   this._ws.removeEventListener('message', handler);
   const incoming = [];
   for (const r of responses) {
     incoming.push(await this._applyIncoming(r));
   }
   resolve(incoming);
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/cometd.js` around lines 165 - 167, The filter that builds matching responses currently allows any response without an id (responses.filter((r) => r.id === expectedId || !r.id)), which lets unsolicited frames satisfy non-connect waits; change it so the || !r.id branch is only applied for connect requests. Update the predicate used in responses.filter (and the code/path that calls it) to something like (r) => r.id === expectedId || (isConnectRequest && !r.id), where isConnectRequest is derived from the request type or expectedId value, so subscribe/unsubscribe/disconnect only match exact r.id while connect still accepts id-less frames.
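The stricter predicate the prompt describes can be sketched and unit-tested on its own. `matchesRequest` and `isConnectRequest` are illustrative names for this sketch, not the library's actual API:

```javascript
// Sketch of the suggested id-matching rule: an exact id match always
// satisfies the waiting request, but only /meta/connect may be satisfied
// by id-less (unsolicited) frames arriving on the long-poll channel.
function matchesRequest(response, expectedId, isConnectRequest) {
  if (response.id === expectedId) return true;
  return isConnectRequest && !response.id;
}
```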
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/auth.js`:
- Around line 102-113: The current onRefresh dispatch uses onRefresh.length to
pick legacy vs modern flow and hangs for async handlers declared with three
params; instead always call this.onRefresh once with (newOauth, oldOauth,
callback), capture its return value, and if that return is thenable (result &&
typeof result.then === 'function') await/return it (then resolve to newOauth),
otherwise fall back to the legacy callback-style Promise that resolves/rejects
when the provided callback is invoked; update the logic around
this.onRefresh.call(...) to implement this check and use result to decide
whether to wait for a Promise or the callback.
In `@lib/cometd.js`:
- Around line 149-183: The message handler for non-connect requests is being
added after calling this._ws.send, which can lose fast responses; move the
this._ws.addEventListener('message', handler) (and the timeout setup) to occur
before calling this._ws.send so the per-request handler (the const handler that
parses event.data, filters by expectedId and calls this._applyIncoming) is
registered first; keep the existing logic for isConnect
(this._pendingConnectResolve) intact and ensure you still remove the handler
with this._ws.removeEventListener('message', handler) on match or timeout.
In `@lib/util.js`:
- Around line 60-62: The code in resolveId() iterates ID_FIELD_VARIANTS and
treats values !== undefined as valid, which permits null/empty-string IDs and
leads sobjectPath() to build invalid paths; change the guard on data[variant] to
require a non-empty value (for example ensure data[variant] is not
null/undefined and is a non-empty string or use a truthy check) so only valid,
non-empty IDs are returned; update resolveId() (and any callers relying on it)
to validate the resolved ID before using sobjectPath().
In `@test/mock/cometd-server.js`:
- Around line 29-41: The start() method currently relies on the listen callback
but doesn't handle bind errors emitted via the server 'error' event; add a
one-time 'error' listener on this.server that rejects the Promise if bind/listen
fails, and also add a one-time 'listening' (or use the existing listen callback)
handler to resolve and remove the error listener so you don't leak handlers;
reference the start() method, this.server, this.server.listen(...), and the
'error' event (and optionally 'listening') to locate where to add and remove
these listeners.
In `@test/mock/sfdc-rest-api.js`:
- Around line 23-24: getLastRequest currently returns the oldest request
(this._requestStack[0]) while getGoodServerInstance appends requests with
push(), so update getLastRequest to return the most recent element (the tail)
using this._requestStack[this._requestStack.length - 1] (and safely handle empty
stacks so it returns undefined/null when length is 0) to ensure tests inspect
the latest request.
- Around line 38-50: The getServerInstance promise doesn't handle server 'error'
events (bind failures) because server.listen's callback never receives an err;
update getServerInstance to attach 'error' and 'listening' (or use 'close' on
failure) event handlers on the created server: listen for 'error' to reject (and
remove listeners and close the server if needed), listen for 'listening' to push
the server onto this._serverStack and resolve (then remove the error listener),
and ensure any temporary listeners are cleaned up to avoid leaks; reference the
getServerInstance function, the created local server variable, and
this._serverStack when making these changes.
---
Duplicate comments:
In `@lib/cometd.js`:
- Around line 240-255: When the WebSocket shuts down in the close or error
handlers, ensure any pending `/meta/connect` promise from _sendWs is settled: if
_pendingConnectResolve exists, call it (or call and then null it) before
changing state or scheduling a reconnect. Update the close handler inside the
_ws.addEventListener('close', ...) to call _pendingConnectResolve() and set
_pendingConnectResolve = null before emitting 'transport:down' and calling
_scheduleReconnect(), and update the error handler in the same way (or call the
resolver then switch _transport to 'long-polling' and set _ws = null), ensuring
the resolver is invoked exactly once to avoid hung promises.
- Around line 126-140: The _sendHttp method currently calls fetch without a
timeout; add an AbortController and a configurable timeout (e.g.,
this._httpTimeout defaulting to a sane value) so the fetch is aborted if it
exceeds the timeout. Create an AbortController, pass its signal to fetch, start
a setTimeout that calls controller.abort() after the timeout, and clear that
timer once fetch/res.json completes; catch the abort and rethrow a descriptive
timeout Error (e.g., "CometD HTTP timeout") so callers of
_sendHttp/_applyIncoming can handle it. Ensure you still use this._endpoint,
this._headers, and this._applyIncoming as before and that the timer is always
cleared to avoid leaks.
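The AbortController pattern described above can be sketched as a standalone helper. This is a sketch under the review's assumptions: `postWithTimeout`, the timeout default, and the injectable `fetchImpl` parameter are illustrative names, not the library's actual `_sendHttp` implementation.

```javascript
// Sketch of an HTTP POST with a hard timeout via AbortController.
// fetchImpl is injectable so the timeout path can be tested without a server.
async function postWithTimeout(endpoint, body, headers, timeoutMs, fetchImpl = fetch) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetchImpl(endpoint, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', ...headers },
      body: JSON.stringify(body),
      signal: controller.signal,
    });
    return await res.json();
  } catch (err) {
    if (err.name === 'AbortError') {
      // Rethrow as a descriptive error so callers can distinguish timeouts
      throw new Error('CometD HTTP timeout');
    }
    throw err;
  } finally {
    clearTimeout(timer); // always clear the timer to avoid leaks
  }
}
```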
- Around line 273-292: Subscriber callback exceptions in _handleMessage are
currently allowed to bubble up and can cause transport-level reconnects; change
the invocation of subscription callbacks (the result of
this._subscriptions.get(msg.channel) currently invoked via callback(msg.data))
to be wrapped in the same safe/isolating helper used in the long-poll path so
thrown errors are caught and logged but do not rethrow. Locate _handleMessage
and replace the direct callback(msg.data) call with a try/catch (or call the
existing helper used in _connectLoop) that logs the error and prevents it from
propagating to _connectLoop/transport logic.
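The isolation helper the comment asks for can be sketched as a few lines. `safeInvoke` is a hypothetical name; the log sink is injectable here only so the behavior is testable:

```javascript
// Sketch: invoke a subscriber callback so that a throwing subscriber is
// logged but never propagates into the transport/reconnect logic.
function safeInvoke(callback, data, log = console.error) {
  try {
    callback(data);
  } catch (err) {
    log('CometD subscriber callback error:', err.message);
  }
}
```

Using the same helper in both the WebSocket `_handleMessage` path and the long-poll loop keeps the two transports' error semantics identical.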
- Around line 229-234: The WebSocket instantiation in _connectWebSocket is
passing a third options arg (headers) which the Node built-in/global WebSocket
does not accept; change the logic so when custom headers exist (this._headers)
you create a WebSocket that supports headers (for example require/import the
'ws' package) and use that constructor to set this._ws with (wsUrl, ['cometd'],
{ headers: this._headers }); otherwise fall back to the global WebSocket without
options. Update the conditional that sets this._ws (referencing
_connectWebSocket, this._endpoint, this._headers, this._ws) so tests still work
with the mock fallback but real Node v22+ connections include the auth headers.
- Around line 165-167: The filter that builds matching responses currently
allows any response without an id (responses.filter((r) => r.id === expectedId
|| !r.id)), which lets unsolicited frames satisfy non-connect waits; change it
so the || !r.id branch is only applied for connect requests. Update the
predicate used in responses.filter (and the code/path that calls it) to
something like (r) => r.id === expectedId || (isConnectRequest && !r.id), where
isConnectRequest is derived from the request type or expectedId value, so
subscribe/unsubscribe/disconnect only match exact r.id while connect still
accepts id-less frames.
In `@test/mock/cometd-server.js`:
- Around line 3-5: The file calls crypto.createHash(...) (used in the WebSocket
upgrade path) but never requires Node's crypto module; add a top-level import
like requiring 'crypto' (e.g. const crypto = require('crypto')) near the other
requires so createHash() has access; update any tests or helper functions that
reference createHash() to use this imported crypto symbol.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: eb5580ea-3bd2-4e34-a4ed-1e75c6367093
📒 Files selected for processing (15)
- docs/superpowers/plans/2026-03-30-refactoring-phases-1-4.md
- index.js
- lib/api.js
- lib/auth.js
- lib/cometd.js
- lib/http.js
- lib/optionhelper.js
- lib/util.js
- test/connection.js
- test/crud.js
- test/errors.js
- test/integration.js
- test/mock/cometd-server.js
- test/mock/sfdc-rest-api.js
- test/query.js
✅ Files skipped from review due to trivial changes (1)
- index.js
      if (this.onRefresh.length >= 3) {
        // Legacy callback path
        return new Promise((resolve, reject) => {
          this.onRefresh.call(this, newOauth, oldOauth, (err) => {
            if (err) reject(err);
            else resolve(newOauth);
          });
        });
      }
      // Modern path: onRefresh returns a value or Promise
      return Promise.resolve(this.onRefresh.call(this, newOauth, oldOauth))
        .then(() => newOauth);
🧩 Analysis chain
🏁 Scripts executed against repository Stwissel/nforce8 (output omitted):
  cd lib && cat auth.js | head -150 | tail -60
  cat lib/auth.js
  fd -type f "test" | head -20
  find . -path "*/test/*" -name "*.js" | head -20
  ls -la test/
  cat test/connection.js | grep -A 30 "onRefresh"
  grep -n "onRefresh" test/connection.js | head -20
  cat test/connection.js | grep -B 5 -A 20 "onRefresh" | head -100
Fix onRefresh dispatch to handle async handlers with 3 parameters, which currently hang.
The onRefresh.length >= 3 check is too brittle. An async handler such as async (newOauth, oldOauth, _unused) => { ... } has length === 3, which triggers the legacy callback path. That path ignores the returned promise and waits forever for a callback that never fires. The tests only cover 0- and 2-arg handlers, not this 3-arg migration case.
Instead, call the handler once with all three arguments, then check if the result is thenable. Only settle via callback if no promise was returned:
Suggested fix
const _notifyAndResolve = function (newOauth, oldOauth) {
if (this.onRefresh) {
- if (this.onRefresh.length >= 3) {
- // Legacy callback path
- return new Promise((resolve, reject) => {
- this.onRefresh.call(this, newOauth, oldOauth, (err) => {
- if (err) reject(err);
- else resolve(newOauth);
- });
- });
- }
- // Modern path: onRefresh returns a value or Promise
- return Promise.resolve(this.onRefresh.call(this, newOauth, oldOauth))
- .then(() => newOauth);
+ return new Promise((resolve, reject) => {
+ let settled = false;
+ const done = (err) => {
+ if (settled) return;
+ settled = true;
+ if (err) reject(err);
+ else resolve(newOauth);
+ };
+
+ try {
+ const result = this.onRefresh.call(this, newOauth, oldOauth, done);
+ if (result && typeof result.then === 'function') {
+ Promise.resolve(result).then(() => done(), done);
+ } else if (this.onRefresh.length < 3) {
+ done();
+ }
+ } catch (err) {
+ done(err);
+ }
+ });
}
return Promise.resolve(newOauth);
};
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
      return new Promise((resolve, reject) => {
        let settled = false;
        const done = (err) => {
          if (settled) return;
          settled = true;
          if (err) reject(err);
          else resolve(newOauth);
        };

        try {
          const result = this.onRefresh.call(this, newOauth, oldOauth, done);
          if (result && typeof result.then === 'function') {
            Promise.resolve(result).then(() => done(), done);
          } else if (this.onRefresh.length < 3) {
            done();
          }
        } catch (err) {
          done(err);
        }
      });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/auth.js` around lines 102 - 113, The current onRefresh dispatch uses
onRefresh.length to pick legacy vs modern flow and hangs for async handlers
declared with three params; instead always call this.onRefresh once with
(newOauth, oldOauth, callback), capture its return value, and if that return is
thenable (result && typeof result.then === 'function') await/return it (then
resolve to newOauth), otherwise fall back to the legacy callback-style Promise
that resolves/rejects when the provided callback is invoked; update the logic
around this.onRefresh.call(...) to implement this check and use result to decide
whether to wait for a Promise or the callback.
    return new Promise((resolve, reject) => {
      const expectedId = messages[0].id;
      const isConnect = messages[0].channel === '/meta/connect';

      // For /meta/connect, the response is deferred (long-poll style)
      if (isConnect) {
        this._pendingConnectResolve = resolve;
      }

      this._ws.send(JSON.stringify(messages));

      if (!isConnect) {
        // Non-connect messages get immediate responses
        const handler = async (event) => {
          const data = JSON.parse(event.data);
          const responses = Array.isArray(data) ? data : [data];
          const matching = responses.filter(
            (r) => r.id === expectedId || !r.id,
          );
          if (matching.length > 0) {
            this._ws.removeEventListener('message', handler);
            const incoming = [];
            for (const r of responses) {
              incoming.push(await this._applyIncoming(r));
            }
            resolve(incoming);
          }
        };
        this._ws.addEventListener('message', handler);

        // Timeout safety
        setTimeout(() => {
          this._ws.removeEventListener('message', handler);
          reject(new Error('CometD WebSocket response timeout'));
        }, this._wsResponseTimeout);
Attach the WS reply listener before sending the frame.
Line 158 sends before the per-request message handler exists. Against the in-process mock, a fast response can arrive first and this promise then times out even though the server replied.
🛠️ Proposed fix
_sendWs(messages) {
return new Promise((resolve, reject) => {
const expectedId = messages[0].id;
const isConnect = messages[0].channel === '/meta/connect';
// For /meta/connect, the response is deferred (long-poll style)
if (isConnect) {
this._pendingConnectResolve = resolve;
+ this._ws.send(JSON.stringify(messages));
+ return;
}
- this._ws.send(JSON.stringify(messages));
-
- if (!isConnect) {
- // Non-connect messages get immediate responses
- const handler = async (event) => {
+ // Non-connect messages get immediate responses
+ let timeoutId;
+ const handler = async (event) => {
const data = JSON.parse(event.data);
const responses = Array.isArray(data) ? data : [data];
const matching = responses.filter(
(r) => r.id === expectedId || !r.id,
);
if (matching.length > 0) {
this._ws.removeEventListener('message', handler);
+ clearTimeout(timeoutId);
const incoming = [];
for (const r of responses) {
incoming.push(await this._applyIncoming(r));
}
resolve(incoming);
}
- };
- this._ws.addEventListener('message', handler);
-
- // Timeout safety
- setTimeout(() => {
- this._ws.removeEventListener('message', handler);
- reject(new Error('CometD WebSocket response timeout'));
- }, this._wsResponseTimeout);
- }
+ };
+ this._ws.addEventListener('message', handler);
+ this._ws.send(JSON.stringify(messages));
+
+ // Timeout safety
+ timeoutId = setTimeout(() => {
+ this._ws.removeEventListener('message', handler);
+ reject(new Error('CometD WebSocket response timeout'));
+ }, this._wsResponseTimeout);
});
}
🧰 Tools
🪛 GitHub Check: Codacy Static Code Analysis
[notice] 182-182: lib/cometd.js#L182
Strings must use doublequote.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/cometd.js` around lines 149 - 183, The message handler for non-connect
requests is being added after calling this._ws.send, which can lose fast
responses; move the this._ws.addEventListener('message', handler) (and the
timeout setup) to occur before calling this._ws.send so the per-request handler
(the const handler that parses event.data, filters by expectedId and calls
this._applyIncoming) is registered first; keep the existing logic for isConnect
(this._pendingConnectResolve) intact and ensure you still remove the handler
with this._ws.removeEventListener('message', handler) on match or timeout.
  for (const variant of ID_FIELD_VARIANTS) {
    if (data[variant] !== undefined) {
      return data[variant];
Don't treat empty/null values as resolved record IDs.
!== undefined makes { Id: null } and { Id: '' } look like successful hits. resolveId() feeds that into sobjectPath() in lib/api.js, so callers can quietly build /sobjects/<type> or /sobjects/user/password instead of failing fast on a missing ID. Keeping the previous truthy guard, or validating a non-empty string here, avoids that edge case.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/util.js` around lines 60 - 62, The code in resolveId() iterates
ID_FIELD_VARIANTS and treats values !== undefined as valid, which permits
null/empty-string IDs and leads sobjectPath() to build invalid paths; change the
guard on data[variant] to require a non-empty value (for example ensure
data[variant] is not null/undefined and is a non-empty string or use a truthy
check) so only valid, non-empty IDs are returned; update resolveId() (and any
callers relying on it) to validate the resolved ID before using sobjectPath().
  start() {
    return new Promise((resolve, reject) => {
      this.server = http.createServer((req, res) => this._handleHttp(req, res));

      // WebSocket upgrade handling
      this.server.on('upgrade', (req, socket, head) => {
        this._handleWsUpgrade(req, socket, head);
      });

      this.server.listen(this.port, (err) => {
        if (err) return reject(err);
        resolve();
      });
🧩 Analysis chain
🌐 Web query:
Does Node.js http.Server.listen(port, callback) pass an error argument to the callback, or are bind/listen failures emitted via the "error" event?
💡 Result:
Bind/listen failures are emitted via the "error" event on the server instance. The optional callback passed to http.Server.listen(port, callback) does not receive an error argument; it is only called on successful binding (equivalent to a 'listening' event listener). TypeScript types and official net.Server documentation confirm the callback signature as => void, with no error parameter. Stack Overflow discussions and code examples consistently show that expecting an error in the callback is incorrect; errors like EADDRINUSE trigger the 'error' event instead, often unhandled leading to process termination unless listened for.
Citations:
- 1: https://stackoverflow.com/questions/53079611/handling-node-http-serverlisten-error-with-callback
- 2: https://nodejs.org/api/http.html
- 3: https://stackoverflow.com/questions/65159729/server-listen-callback-function-in-typescript-and-javascript
- 4: https://docs.deno.com/api/node/net/~/Server.prototype.listen
- 5: https://nodejs.org/docs/latest-v18.x/api/net.html
- 6: https://nodejs.org/docs/latest/api/http.html
- 7: https://nodejs.org/dist/latest-v18.x/docs/api/http.html
- 8: https://stackoverflow.com/questions/71219563/try-catch-does-not-catch-the-error-of-listen-method-of-http-the-nodejs-modu
- 9: https://stackoverflow.com/questions/9898372/how-to-fix-error-listen-eaddrinuse-while-using-nodejs
- 10: https://www.endyourif.com/node-js-server-errors-how-to-handle-eaddrinuse/
🏁 Scripts executed (output omitted):
  fd -t f "cometd-server.js" test/
  sed -n '29,41p' test/mock/cometd-server.js
Add error handler to reject on port binding failures.
server.listen(port, callback) does not pass an error argument to the callback; bind/listen failures are emitted via the 'error' event instead. The current code will never reject when the port is already in use, causing the promise to hang and potentially crash the test suite.
Fix
start() {
return new Promise((resolve, reject) => {
this.server = http.createServer((req, res) => this._handleHttp(req, res));
+ this.server.once('error', reject);
// WebSocket upgrade handling
this.server.on('upgrade', (req, socket, head) => {
this._handleWsUpgrade(req, socket, head);
});
- this.server.listen(this.port, (err) => {
- if (err) return reject(err);
- resolve();
- });
+ this.server.listen(this.port, () => resolve());
});
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
  start() {
    return new Promise((resolve, reject) => {
      this.server = http.createServer((req, res) => this._handleHttp(req, res));
      this.server.once('error', reject);
      // WebSocket upgrade handling
      this.server.on('upgrade', (req, socket, head) => {
        this._handleWsUpgrade(req, socket, head);
      });
      this.server.listen(this.port, () => resolve());
    });
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/mock/cometd-server.js` around lines 29 - 41, The start() method
currently relies on the listen callback but doesn't handle bind errors emitted
via the server 'error' event; add a one-time 'error' listener on this.server
that rejects the Promise if bind/listen fails, and also add a one-time
'listening' (or use the existing listen callback) handler to resolve and remove
the error listener so you don't leak handlers; reference the start() method,
this.server, this.server.listen(...), and the 'error' event (and optionally
'listening') to locate where to add and remove these listeners.
  getLastRequest() {
    return this._requestStack[0];
getLastRequest() is returning the oldest request.
getGoodServerInstance() appends with push(), so Line 24 should read the tail. Any test that performs two calls before asserting will inspect stale request data.
🧪 Proposed fix
getLastRequest() {
- return this._requestStack[0];
+ return this._requestStack[this._requestStack.length - 1];
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/mock/sfdc-rest-api.js` around lines 23 - 24, getLastRequest currently
returns the oldest request (this._requestStack[0]) while getGoodServerInstance
appends requests with push(), so update getLastRequest to return the most recent
element (the tail) using this._requestStack[this._requestStack.length - 1] (and
safely handle empty stacks so it returns undefined/null when length is 0) to
ensure tests inspect the latest request.
  getServerInstance(serverListener) {
    return this.clearServerStack().then(() => {
      return new Promise((resolve, reject) => {
        const server = http.createServer(serverListener);
        server.listen(this._port, (err) => {
          if (err) {
            reject(err);
          } else {
            serverStack.push(server);
            this._serverStack.push(server);
            resolve(server);
          }
        });
      })
      .catch(reject);
    });
  };

  const getGoodServerInstance = (response = defaultResponse) => {
    const serverListener = (req, res) => {
      const chunks = [];
      req.on('data', (chunk) => chunks.push(chunk));
      req.on('end', () => {
        req.body = Buffer.concat(chunks).toString();
        requestStack.push(req);
        const headers = Object.assign({ Connection: 'close' }, response.headers);
        res.writeHead(response.code, headers);
        if (response.body) {
          res.end(response.body, 'utf8');
        } else {
          res.end();
        }
      });
🧩 Analysis chain
🌐 Web query:
Does Node.js http.Server.listen(port, callback) pass an error argument to the callback, or are bind/listen failures emitted via the "error" event?
💡 Result:
Bind/listen failures are emitted via the "error" event on the server, not passed as an error argument to the listen callback. The listen callback is invoked only on successful binding (equivalent to the 'listening' event handler) and takes no arguments.
Citations:
- 1: https://stackoverflow.com/questions/53079611/handling-node-http-serverlisten-error-with-callback
- 2: https://docs.deno.com/api/node/net/~/Server.prototype.listen
- 3: https://stackoverflow.com/questions/65159729/server-listen-callback-function-in-typescript-and-javascript
- 4: https://nodejs.org/docs/latest-v18.x/api/net.html
🏁 Script executed (output omitted):
  # Check if the file exists and examine the specific lines
  if [ -f test/mock/sfdc-rest-api.js ]; then
    echo "=== File exists, checking lines 38-50 ==="
    sed -n '35,55p' test/mock/sfdc-rest-api.js
  else
    echo "File not found, searching for similar files..."
    fd -t f "sfdc-rest-api" . 2>/dev/null || echo "No matches found"
  fi
Server startup errors are not caught by this promise.
server.listen() does not pass bind failures to its callback—they are emitted via the 'error' event. The current code checks for an err parameter that will never be received, leaving port binding failures unhandled. This breaks test reliability when ports are occupied.
🛠️ Fix
getServerInstance(serverListener) {
return this.clearServerStack().then(() => {
return new Promise((resolve, reject) => {
const server = http.createServer(serverListener);
- server.listen(this._port, (err) => {
- if (err) {
- reject(err);
- } else {
- this._serverStack.push(server);
- resolve(server);
- }
- });
+ server.once('error', reject);
+ server.listen(this._port, () => {
+ this._serverStack.push(server);
+ resolve(server);
+ });
});
});
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
  getServerInstance(serverListener) {
    return this.clearServerStack().then(() => {
      return new Promise((resolve, reject) => {
        const server = http.createServer(serverListener);
        server.once('error', reject);
        server.listen(this._port, () => {
          this._serverStack.push(server);
          resolve(server);
        });
      });
    });
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/mock/sfdc-rest-api.js` around lines 38 - 50, The getServerInstance
promise doesn't handle server 'error' events (bind failures) because
server.listen's callback never receives an err; update getServerInstance to
attach 'error' and 'listening' (or use 'close' on failure) event handlers on the
created server: listen for 'error' to reject (and remove listeners and close the
server if needed), listen for 'listening' to push the server onto
this._serverStack and resolve (then remove the error listener), and ensure any
temporary listeners are cleaned up to avoid leaks; reference the
getServerInstance function, the created local server variable, and
this._serverStack when making these changes.
Summary
Replace the faye dependency with a custom, lightweight CometD/Bayeux client built on Node.js built-in fetch and WebSocket. This eliminates faye and its 12 transitive dependencies (~1.2MB), while adding dual-transport support and comprehensive test coverage for the Streaming API — which previously had zero tests.
Why
- Node.js 22+ built-in WebSocket and fetch make external deps unnecessary
What changed
- lib/cometd.js (new, ~500 lines) — Custom CometD client implementing the Bayeux protocol (emits transport:up, transport:down)
- lib/fdcstream.js (rewritten) — Swapped require('faye') for require('./cometd'); the Subscription and Client EventEmitter APIs remain identical
- test/mock/cometd-server.js (new, ~470 lines) — Mock Bayeux server
- test/streaming.js (new, ~340 lines) — 15 tests
- README.md — Complete rewrite with current API, async/await examples, streaming docs
- package.json — faye removed, 12 transitive packages eliminated
Metrics
Public API
No breaking changes. All three streaming methods work identically:
CometD Protocol Implementation
The custom client implements Bayeux 1.0 meta channels:
- /meta/handshake
- /meta/connect
- /meta/subscribe
- /meta/unsubscribe
- /meta/disconnect
Transport selection: Prefers WebSocket when server supports it, falls back to long-polling automatically.
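The meta-channel exchange can be sketched as plain Bayeux 1.0 payloads. The field values below (ids, clientId, topic) are illustrative examples, not output of the actual client; only the message shapes follow the Bayeux specification.

```javascript
// Illustrative Bayeux 1.0 message builders for the meta channels listed above.
function handshakeMessage(id) {
  return {
    channel: '/meta/handshake',
    version: '1.0',
    supportedConnectionTypes: ['websocket', 'long-polling'],
    id,
  };
}

function connectMessage(id, clientId, connectionType) {
  // connectionType is the transport negotiated during handshake
  return { channel: '/meta/connect', clientId, connectionType, id };
}

function subscribeMessage(id, clientId, subscription) {
  // subscription is the channel to deliver events on, e.g. '/topic/...'
  return { channel: '/meta/subscribe', clientId, subscription, id };
}
```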
Reconnection: Handles server advice (retry, handshake, none), re-subscribes all active subscriptions after re-handshake.
Files Changed
- lib/cometd.js
- lib/fdcstream.js
- test/mock/cometd-server.js
- test/streaming.js
- README.md
- package.json
- package-lock.json
Test plan
- npm test — 143 tests passing (128 existing + 15 new streaming)
- npm run lint — clean
- npm ls faye — returns empty (removed)
- No remaining require('faye') in codebase
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Documentation
Tests
Chores