
process logging middleware #195

Merged
jbolda merged 28 commits into main from process-logging-middleware
Apr 6, 2026

Conversation

@jbolda
Member

@jbolda jbolda commented Mar 16, 2026

Motivation

With the adjustment in operation priorities, the classic @effectionx/process logging example no longer captures output from a finally block. The issue is subtle and can be worked around, but middleware provides more ergonomic handling of IO-related use cases.

Approach

We set the default to process.std*.write(); additional handling can be added, and that default removed or adjusted, through the @effectionx/context-api function exported from the package.
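
The approach described above can be modeled in a few lines. This is a hypothetical sketch of the around-middleware idea, not the actual @effectionx/context-api implementation (which is built on Effection operations and scopes): a default writer plus a stack of removable interceptors.

```typescript
// Hypothetical model of "around"-style stdio middleware. The real API in
// @effectionx/context-api uses Effection scopes; this sketch only shows
// the shape: a default handler plus interceptors that can be removed.
type Handler = (line: string) => void;

function createStdio(defaultWrite: Handler) {
  let handlers: Handler[] = [defaultWrite];
  return {
    // Register an additional handler; returns a function that removes it.
    around(handler: Handler): () => void {
      handlers.push(handler);
      return () => {
        handlers = handlers.filter((h) => h !== handler);
      };
    },
    // Fan a line out to the default handler and all registered middleware.
    write(line: string): void {
      for (const h of handlers) h(line);
    },
  };
}

// The default forwards to a collector standing in for process.stdout.write().
const captured: string[] = [];
const stdio = createStdio((line) => captured.push(`default:${line}`));
const remove = stdio.around((line) => captured.push(`extra:${line}`));
stdio.write("hello"); // hits the default and the extra middleware
remove();
stdio.write("world"); // extra middleware removed; default only
```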

Summary by CodeRabbit

  • New Features

    • Process objects now support an around(...) middleware API for per-process and global interception of stdout/stderr.
  • Documentation

    • Added examples for streaming stdout consumption and a new section showing middleware-based per-line handling and forwarding.
  • Tests

    • Test suite updated to cover stdio redirection and middleware capture behavior; suite setup now applies global no-op stdio wrapping.
  • Chores

    • Package version bumped to 0.8.0 and runtime dependencies expanded.

@coderabbitai

coderabbitai bot commented Mar 16, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Adds a Stdio middleware context API and exposes it via the package entrypoint; refactors POSIX and Win32 process factories from resource-based producers to generator-based factories that evaluate middleware in an evalScope, forward per-chunk stdio through the Stdio API, and expose per-process around middleware.

Changes

  • Stdio Context & API (process/src/api.ts, process/mod.ts, process/src/exec/types.ts): Adds the Stdio context API (stdout/stderr operations) and StdioApi types; re-exports the API from process/mod.ts; extends the Process type with an around middleware entrypoint.
  • POSIX Process Factory (process/src/exec/posix.ts): Converts createPosixProcess to a generator Operation<Process> using useEvalScope/evalScope.eval, moves cleanup into ensure(...), forwards each stdout/stderr chunk through Stdio.operations and local collectors, and adds a *around(...) method on the returned Process.
  • Win32 Process Factory (process/src/exec/win32.ts): Converts createWin32Process to a generator Operation<Process>, uses evalScope for middleware evaluation, shifts cleanup into ensure(...) with guarded escalation, forwards chunks to Stdio ops and collectors, and adds a *around(...) method on the returned Process.
  • Exec Surface & Types (process/src/exec.ts, process/src/exec/error.ts, process/src/exec/types.ts): Redirects type imports/re-exports to ./exec/types.ts, adds JSDoc to Exec methods, introduces StdioApi, and wires Process.around into the types.
  • Docs & Examples (process/README.md): Updates the stdout streaming example to use each(process.stdout) and adds a "Handling Process Output With Middleware" section demonstrating process.around({ *stdout(...) { ... }, *stderr(...) { ... } }) with process.expect().
  • Tests (process/test/exec.test.ts): Adds a suite-wide beforeEach applying Stdio.around no-op handlers, restructures tests, and adds io api tests that validate global and per-process stdio capture/redirection.
  • Package & Build (process/package.json, process/tsconfig.json): Bumps the version to 0.8.0, adds workspace deps @effectionx/context-api and @effectionx/scope-eval, and adds TS project references to ../scope-eval and ../context-api.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Test as Test / Caller
    participant Factory as Process Factory
    participant Eval as EvalScope
    participant StdioAPI as Stdio API
    participant Host as Host stdout/stderr

    Test->>+Factory: createProcess(command, options)
    Factory->>+Eval: eval process implementation (evalScope.eval)
    Eval->>+StdioAPI: register/enter Stdio.around (middleware)
    loop read stdout/stderr chunks
        Factory->>StdioAPI: yield Stdio.operations.stdout(chunk)
        StdioAPI->>Host: write/transform chunk -> stdout/stderr
        Factory->>Factory: forward chunk to local collectors/signals
    end
    Test->>Factory: proc.around(...) (per-process middleware)
    Factory->>Eval: eval per-process Stdio.around in evalScope
    Factory->>Test: return Process (around, join, expect, stdin/stdout/stderr)
    Factory->>+Host: ensure(...) cleanup (SIGTERM/ctrlc/killTree, await stream close)
    Host-->>-Factory: cleanup complete
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Suggested reviewers

  • taras
  • cowboyd
🚥 Pre-merge checks | ✅ 3 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
✅ Passed checks (3 passed)
  • Title check — ✅ Passed: The title 'process logging middleware' directly describes the main change: introducing middleware for process logging, confirmed by the addition of the Stdio context API and its integration throughout the codebase.
  • Description check — ✅ Passed: The description follows the required template, with both 'Motivation' and 'Approach' sections completed, clearly explaining the problem and the solution.
  • Policy Compliance — ✅ Passed: The pull request complies with all Strict and Recommended policies, including no-agent-marketing, package.json metadata, and version-bump requirements.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@jbolda jbolda force-pushed the process-logging-middleware branch from 512fae3 to 98b3a29 Compare March 24, 2026 20:30
@jbolda jbolda force-pushed the process-logging-middleware branch from 13a1714 to 9e9bec7 Compare March 26, 2026 17:26
@pkg-pr-new

pkg-pr-new bot commented Mar 26, 2026

Open in StackBlitz

npm i https://pkg.pr.new/@effectionx/process@195

commit: 47be6bc

@jbolda jbolda marked this pull request as ready for review March 26, 2026 18:06

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@context-api/mod.ts`:
- Around line 62-63: Add an object constraint to the createApi generic and
remove the unsafe cast: change the function signature to declare A extends
object (i.e., createApi<A extends object>(...)) so null/undefined can't be
passed, and update the fields initialization to call Object.keys(handler)
without casting handler to object (you may still assert the result to (keyof
A)[] for typing). This touches the createApi function and the fields variable
where Object.keys(handler as object) is currently used.

In `@process/package.json`:
- Around line 41-43: The package.json's version field needs a semantic bump
because source/API changed: update the "version" value in process/package.json
(the package's top-level "version" field) from 0.7.4 to 0.8.0 (or appropriate
minor increment) and commit that change so the package version reflects the new
functionality and satisfies the repository's version-bump policy.

In `@process/README.md`:
- Around line 73-81: The example middleware is broken: the generator handler
stdout is missing the line parameter and the code writes to the local Process
object instead of Node's stdio. Update the handlers on process.around so they
accept the line argument (e.g., function* stdout(line) { ... } and function*
stderr(line) { ... }) and write to Node's global stdio (use
global.process.stdout.write(line) and global.process.stderr.write(line) or
otherwise reference the Node process object rather than the local variable) so
the example executes correctly; apply these changes to the stdout and stderr
handlers shown in the snippet.

In `@process/src/exec/posix.ts`:
- Around line 82-86: The stderr pump is calling stdioApi.operations.stdout and
thus bypassing the stderr middleware; update the generator started by spawn (the
block that awaits io.stderr.next()) to forward bytes to
stdioApi.operations.stderr instead of stdioApi.operations.stdout and ensure it
still calls stderr.send(next.value) so the stderr middleware and downstream
consumers receive the correct stderr bytes (refer to io.stderr.next,
stdioApi.operations.stdout → replace with stdioApi.operations.stderr, and
stderr.send).

In `@process/src/exec/win32.ts`:
- Around line 184-186: The returned object is exposing stdioApi.around unbound
so callers mutate the wrong context; bind stdioApi.around to evalScope before
returning so proc.around() runs in the same scope as
stdioApi.operations.stdout/stderr. Update the return to use the bound function
(e.g. const around = stdioApi.around.bind(evalScope) and return around as the
around property) referencing stdioApi.around and evalScope in the same scope
where pid is set.
- Around line 177-180: The current condition uses !childProcess.exitCode which
treats exitCode 0 as false and may erroneously force-kill a cleanly exited
process; change the guard to explicitly check childProcess.exitCode === null
before calling killTree(pid) so the force-kill path only runs while the child is
still running (update the conditional that references pid and
childProcess.exitCode in the win32.ts block that calls killTree).
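
The exit-code guard discussed above is easy to demonstrate in isolation. A sketch, with hypothetical function names standing in for the conditional around killTree in win32.ts:

```typescript
// `!exitCode` is the wrong guard because 0 (a clean exit) is falsy,
// so it treats a cleanly exited child the same as one still running.
function shouldForceKillBuggy(exitCode: number | null): boolean {
  return !exitCode; // true for null (still running) AND for 0 (clean exit)
}

// The fix: only force-kill while the child has not exited at all.
function shouldForceKillFixed(exitCode: number | null): boolean {
  return exitCode === null;
}
```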

In `@process/test/exec.test.ts`:
- Around line 306-340: Tests under the "io api" describe block are
non-deterministic because stdout and stderr are merged into one array and for
the inline variant the middleware is attached after the child has started; fix
by capturing stdout and stderr into separate buffers (e.g., stdoutArr and
stderrArr), assert each with join("") rather than relying on raw chunk
boundaries, and ensure you register the around middleware before spawning the
process (use stdioApi.around or attach proc.around prior to calling exec/spawn)
so the handlers see all output; update both the "allows redirecting stdio to
array" and "allows redirecting stdio inline" tests to follow this pattern and
use proc.expect() only after capturing is guaranteed.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: d3e36241-edf1-42bc-8f3c-860512184e57

📥 Commits

Reviewing files that changed from the base of the PR and between cdf75f3 and 5088fc9.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (11)
  • context-api/mod.ts
  • process/README.md
  • process/package.json
  • process/src/api.ts
  • process/src/exec.ts
  • process/src/exec/error.ts
  • process/src/exec/posix.ts
  • process/src/exec/types.ts
  • process/src/exec/win32.ts
  • process/test/exec.test.ts
  • process/tsconfig.json


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
process/package.json (1)

4-4: ⚠️ Potential issue | 🟠 Major

Version bump is required for this feature change.

The version field remains at 0.7.4 despite this PR adding new functionality (process logging middleware and new dependencies). A semantic version bump is required.

As per coding guidelines: ".policies/version-bump.md: If PR changes package source (non-test), include a semantic version bump in that package's package.json."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@process/package.json` at line 4, The package.json "version" field was not
updated for this PR's new functionality; open the package.json and update the
"version" value (the "version" key) to the next semantic version (e.g., bump
0.7.4 → 0.8.0 for a minor feature addition) so the package reflects the change
and complies with .policies/version-bump.md; commit the updated package.json
with the bumped version.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@process/package.json`:
- Line 4: The package.json "version" field was not updated for this PR's new
functionality; open the package.json and update the "version" value (the
"version" key) to the next semantic version (e.g., bump 0.7.4 → 0.8.0 for a
minor feature addition) so the package reflects the change and complies with
.policies/version-bump.md; commit the updated package.json with the bumped
version.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 764d47d2-f0a6-40b1-8bba-a77ac9ef7772

📥 Commits

Reviewing files that changed from the base of the PR and between 5088fc9 and 24d0856.

📒 Files selected for processing (2)
  • context-api/mod.ts
  • process/package.json


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
process/test/exec.test.ts (1)

307-323: ⚠️ Potential issue | 🟠 Major

io api assertions are still nondeterministic and currently failing CI.

Line 322 and Line 340 assert exact merged stdout/stderr chunk order, which is not stable. Also, in Line 328-338 the inline middleware is attached after spawn, so early writes can race.

Proposed stabilization (assert per-stream content, not merged ordering)
-    it("allows redirecting stdio to array", function* () {
-      let output: string[] = [];
+    it("allows redirecting stdio to array", function* () {
+      let stdout: string[] = [];
+      let stderr: string[] = [];
       yield* stdioApi.around({
         *stdout([bytes]) {
-          output.push(bytes.toString());
+          stdout.push(bytes.toString());
         },
         *stderr([bytes]) {
-          output.push(bytes.toString());
+          stderr.push(bytes.toString());
         },
       });

       let proc = yield* exec("node './fixtures/hello-world.js'", {
         cwd: import.meta.dirname,
       });
       yield* proc.expect();
-      expect(output).toEqual(["hello\n", "world\n", "boom\n"]);
+      expect(stdout.join("")).toEqual("hello\nworld\n");
+      expect(stderr.join("")).toEqual("boom\n");
     });

     it("allows redirecting stdio inline", function* () {
-      let output: string[] = [];
+      let stdout: string[] = [];
+      let stderr: string[] = [];

       let proc = yield* exec("node './fixtures/hello-world.js'", {
         cwd: import.meta.dirname,
       });
       yield* proc.around({
         *stdout([bytes]) {
-          output.push(bytes.toString().trim());
+          stdout.push(bytes.toString());
         },
         *stderr([bytes]) {
-          output.push(bytes.toString().trim());
+          stderr.push(bytes.toString());
         },
       });
       yield* proc.expect();
-      expect(output).toEqual(["hello", "world", "boom"]);
+      expect(stdout.join("")).toContain("hello");
+      expect(stdout.join("")).toContain("world");
+      expect(stderr.join("")).toContain("boom");
     });

As per coding guidelines: ".policies/deterministic-tests.md - Assert on lifecycle invariants, not timing."

Also applies to: 325-340

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@process/test/exec.test.ts` around lines 307 - 323, The test is
nondeterministic because stdio middleware is attached after the child is spawned
and it asserts merged stdout/stderr chunk ordering; move the stdioApi.around
call to wrap/attach before calling exec so no early writes are missed (reference
stdioApi.around and exec/ proc.expect), and change assertions to validate
per-stream content (collect stdout chunks and stderr chunks separately inside
the stdout/ stderr handlers) and assert each stream's concatenated/ordered
content independently rather than relying on interleaved merged ordering.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@process/test/exec.test.ts`:
- Around line 307-323: The test is nondeterministic because stdio middleware is
attached after the child is spawned and it asserts merged stdout/stderr chunk
ordering; move the stdioApi.around call to wrap/attach before calling exec so no
early writes are missed (reference stdioApi.around and exec/ proc.expect), and
change assertions to validate per-stream content (collect stdout chunks and
stderr chunks separately inside the stdout/ stderr handlers) and assert each
stream's concatenated/ordered content independently rather than relying on
interleaved merged ordering.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: e09f154b-ef79-4845-a2c2-6bf6cecf7fa5

📥 Commits

Reviewing files that changed from the base of the PR and between 24d0856 and cafeae6.

📒 Files selected for processing (2)
  • process/package.json
  • process/test/exec.test.ts


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
process/test/exec.test.ts (1)

306-342: ⚠️ Potential issue | 🔴 Critical

Fix the flaky io api tests – chunk boundaries are non-deterministic.

The pipeline failures confirm the issue: expecting ["hello", "world", "boom"] but receiving ["hello\nworld", "boom"]. The tests assume each console.log() produces a separate chunk, but OS buffering can batch multiple lines into a single chunk. Additionally, the "inline" variant registers proc.around() after the process has started, risking missed output.

Recommended fixes:

  1. Capture raw bytes without per-chunk trimming
  2. Join all captured output at the end, then split by newlines for comparison
  3. Keep stdout and stderr in separate arrays to avoid interleaving ambiguity
  4. For the inline test, use the streaming-capture approach or accept that exact ordering of stdout vs stderr may vary
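
The join-then-split idea in the recommendations above can be shown standalone; `linesFrom` is a hypothetical helper name, not part of the package:

```typescript
// Chunk boundaries are OS-dependent: the same logical output may arrive
// as one chunk or several. Concatenating before splitting makes the
// assertion independent of how the writes were batched.
function linesFrom(chunks: Uint8Array[]): string[] {
  const bytes = Uint8Array.from(chunks.flatMap((c) => Array.from(c)));
  return new TextDecoder()
    .decode(bytes)
    .split("\n")
    .filter((line) => line.length > 0);
}

const encoder = new TextEncoder();
// Two batchings of the same output that a per-chunk assertion would
// distinguish, but a join-then-split assertion treats identically:
const batched = linesFrom([encoder.encode("hello\nworld\n")]);
const split = linesFrom([encoder.encode("hello\n"), encoder.encode("world\n")]);
```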
🐛 Proposed fix
   describe("io api", () => {
     it("allows redirecting stdio to array", function* () {
-      let output: string[] = [];
+      let stdoutChunks: Buffer[] = [];
+      let stderrChunks: Buffer[] = [];
       yield* stdioApi.around({
         *stdout([bytes]) {
-          output.push(bytes.toString().trim());
+          stdoutChunks.push(bytes);
         },
         *stderr([bytes]) {
-          output.push(bytes.toString().trim());
+          stderrChunks.push(bytes);
         },
       });

       let proc = yield* exec("node './fixtures/hello-world.js'", {
         cwd: import.meta.dirname,
       });
       yield* proc.expect();
-      expect(output).toEqual(["hello", "world", "boom"]);
+
+      const stdout = Buffer.concat(stdoutChunks).toString();
+      const stderr = Buffer.concat(stderrChunks).toString();
+      expect(stdout).toEqual("hello\nworld\n");
+      expect(stderr).toEqual("boom\n");
     });

     it("allows redirecting stdio inline", function* () {
-      let output: string[] = [];
+      let stdoutChunks: Buffer[] = [];
+      let stderrChunks: Buffer[] = [];

       let proc = yield* exec("node './fixtures/hello-world.js'", {
         cwd: import.meta.dirname,
       });
       yield* proc.around({
         *stdout([bytes]) {
-          output.push(bytes.toString().trim());
+          stdoutChunks.push(bytes);
         },
         *stderr([bytes]) {
-          output.push(bytes.toString().trim());
+          stderrChunks.push(bytes);
         },
       });
       yield* proc.expect();
-      expect(output).toEqual(["hello", "world", "boom"]);
+
+      const stdout = Buffer.concat(stdoutChunks).toString();
+      const stderr = Buffer.concat(stderrChunks).toString();
+      expect(stdout).toEqual("hello\nworld\n");
+      expect(stderr).toEqual("boom\n");
     });
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@process/test/exec.test.ts` around lines 306 - 342, The io tests assume
per-console-line chunks but buffering can merge lines and the inline test can
miss early output; update both tests (the stdioApi.around and proc.around usages
around exec/ proc.expect) to collect raw bytes into separate stdout and stderr
arrays (do not trim per-chunk), then after yield* proc.expect() join each array
into a single string and split on '\n' to produce lines for assertion; for the
inline variant ensure you attach capture via proc.around as early as possible
(or use stdioApi.around to wrap the exec call) so no output is missed, and
assert on the split lines rather than per-chunk values.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@process/test/exec.test.ts`:
- Around line 306-342: The io tests assume per-console-line chunks but buffering
can merge lines and the inline test can miss early output; update both tests
(the stdioApi.around and proc.around usages around exec/ proc.expect) to collect
raw bytes into separate stdout and stderr arrays (do not trim per-chunk), then
after yield* proc.expect() join each array into a single string and split on
'\n' to produce lines for assertion; for the inline variant ensure you attach
capture via proc.around as early as possible (or use stdioApi.around to wrap the
exec call) so no output is missed, and assert on the split lines rather than
per-chunk values.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 27834f53-ee90-4df3-af5f-3be84c260ebc

📥 Commits

Reviewing files that changed from the base of the PR and between cafeae6 and 05dcbd5.

📒 Files selected for processing (1)
  • process/test/exec.test.ts


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
process/test/exec.test.ts (2)

325-341: ⚠️ Potential issue | 🔴 Critical

Race condition: middleware registered after process spawns.

proc.around() is called after exec() has already spawned the child process. Output may be emitted before the handlers are installed, causing missed chunks and non-deterministic test failures.

This has the same Uint8Array[].toString() assertion bug as the previous test.

🔧 Proposed fix: register middleware before awaiting output

If proc.around must be used inline, consider whether the API should support registering handlers via exec() options instead, or document that proc.around must be called before any yield* that consumes streams.

     it("allows redirecting stdio inline", function* () {
-      let output: Uint8Array[] = [];
+      let stdoutChunks: Uint8Array[] = [];
+      let stderrChunks: Uint8Array[] = [];

       let proc = yield* exec("node './fixtures/hello-world.js'", {
         cwd: import.meta.dirname,
       });
+      // Register middleware immediately after spawn, before consuming any output
       yield* proc.around({
         *stdout([bytes]) {
-          output.push(bytes);
+          stdoutChunks.push(bytes);
         },
         *stderr([bytes]) {
-          output.push(bytes);
+          stderrChunks.push(bytes);
         },
       });
       yield* proc.expect();
-      expect(output.toString()).toEqual("hello\nworld\nboom\n");
+
+      const decoder = new TextDecoder();
+      const stdout = stdoutChunks.map((c) => decoder.decode(c)).join("");
+      const stderr = stderrChunks.map((c) => decoder.decode(c)).join("");
+
+      expect(stdout).toEqual("hello\nworld\n");
+      expect(stderr).toEqual("boom\n");
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@process/test/exec.test.ts` around lines 325 - 341, The test registers
proc.around handlers after calling exec("...") which spawns the child and can
miss early stdout/stderr chunks; move the around registration so handlers are
installed before the child can emit output (e.g., change the flow to register
proc.around immediately after creating the proc or use an exec option to pass
stdout/stderr handlers), and also fix the assertion by converting the
concatenated Uint8Array[] into a string properly instead of calling
Uint8Array[].toString(); update references to exec, proc.around, and proc.expect
accordingly.

306-323: ⚠️ Potential issue | 🔴 Critical

Uint8Array[].toString() produces comma-separated byte values, not decoded text.

Calling .toString() on an array of Uint8Array yields "104,101,108,108,111,10" rather than "hello\nworld\nboom\n". This assertion will fail immediately when the test runs.

Additionally, merging stdout and stderr into the same array prevents verifying that each stream is correctly routed to its handler—a critical validation point since stream routing bugs can easily be missed with this approach.

🔧 Proposed fix: decode chunks and capture streams separately
   describe("io api", () => {
     it("allows redirecting stdio to array", function* () {
-      let output: Uint8Array[] = [];
+      let stdoutChunks: Uint8Array[] = [];
+      let stderrChunks: Uint8Array[] = [];
       yield* stdioApi.around({
         *stdout([bytes]) {
-          output.push(bytes);
+          stdoutChunks.push(bytes);
         },
         *stderr([bytes]) {
-          output.push(bytes);
+          stderrChunks.push(bytes);
         },
       });

       let proc = yield* exec("node './fixtures/hello-world.js'", {
         cwd: import.meta.dirname,
       });
       yield* proc.expect();
-      expect(output.toString()).toEqual("hello\nworld\nboom\n");
+
+      const decoder = new TextDecoder();
+      const stdout = stdoutChunks.map((c) => decoder.decode(c)).join("");
+      const stderr = stderrChunks.map((c) => decoder.decode(c)).join("");
+
+      expect(stdout).toEqual("hello\nworld\n");
+      expect(stderr).toEqual("boom\n");
     });
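
The toString pitfall described in this comment is easy to reproduce:

```typescript
// Array.prototype.toString stringifies each Uint8Array element as its
// comma-separated byte values; TextDecoder is needed to recover text.
const chunks: Uint8Array[] = [new TextEncoder().encode("hello\n")];

const raw = chunks.toString(); // comma-separated bytes, not text
const decoded = chunks.map((c) => new TextDecoder().decode(c)).join("");
```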
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@process/test/exec.test.ts` around lines 306 - 323, The test is incorrectly
calling .toString() on an array of Uint8Array and conflating stdout/stderr into
one array; update the stdioApi.around handlers in the test (the generator
callbacks passed to stdioApi.around and the local variable currently named
output) to collect stdout and stderr separately (e.g., stdoutChunks and
stderrChunks), decode each Uint8Array chunk with a TextDecoder (or
Buffer.from(...).toString()) before pushing, then after proc.expect() join the
decoded chunks and assert stdout and stderr strings individually against the
expected values; keep references to stdioApi.around, the handler generator names
(*stdout, *stderr), exec and proc.expect to locate where to change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@process/test/exec.test.ts`:
- Around line 306-342: Change the stderr routing in process/src/exec/posix.ts so
that stderr chunks are dispatched to stdioApi.operations.stderr(...) instead of
stdioApi.operations.stdout(...); locate the loop that reads the child's stderr
(the place currently calling stdioApi.operations.stdout(next.value)) and call
stdioApi.operations.stderr(next.value) there. Also update the tests in
process/test/exec.test.ts ("io api") to collect stdout and stderr into separate
buffers (e.g., stdoutBuf and stderrBuf) and assert each buffer's contents
independently after exec/expect so the *stderr() handler (and its invocation via
proc.around / stdioApi.around) is verified.

---

Duplicate comments:
In `@process/test/exec.test.ts`:
- Around line 325-341: The test registers proc.around handlers after calling
exec("...") which spawns the child and can miss early stdout/stderr chunks; move
the around registration so handlers are installed before the child can emit
output (e.g., change the flow to register proc.around immediately after creating
the proc or use an exec option to pass stdout/stderr handlers), and also fix the
assertion by converting the concatenated Uint8Array[] into a string properly
instead of calling Uint8Array[].toString(); update references to exec,
proc.around, and proc.expect accordingly.
- Around line 306-323: The test is incorrectly calling .toString() on an array
of Uint8Array and conflating stdout/stderr into one array; update the
stdioApi.around handlers in the test (the generator callbacks passed to
stdioApi.around and the local variable currently named output) to collect stdout
and stderr separately (e.g., stdoutChunks and stderrChunks), decode each
Uint8Array chunk with a TextDecoder (or Buffer.from(...).toString()) before
pushing, then after proc.expect() join the decoded chunks and assert stdout and
stderr strings individually against the expected values; keep references to
stdioApi.around, the handler generator names (*stdout, *stderr), exec and
proc.expect to locate where to change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: aca0b932-85a2-4a4c-8e30-29a6b9c5b8c9

📥 Commits

Reviewing files that changed from the base of the PR and between 05dcbd5 and 38880a3.

📒 Files selected for processing (1)
  • process/test/exec.test.ts

@jbolda jbolda requested a review from cowboyd March 27, 2026 07:03

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (2)
process/src/exec/win32.ts (1)

185-193: ⚠️ Potential issue | 🟠 Major

around should be bound to evalScope for middleware to take effect on Win32.

Unlike the POSIX implementation (which wraps Stdio.around in evalScope.eval()), this returns Stdio.around directly. Middleware registered via proc.around() will modify the caller's context, not the context where Stdio.operations.stdout/stderr are invoked. This means Win32 process-level middleware won't intercept output.

Proposed fix (align with POSIX implementation)
     return {
       pid: pid as number,
-      around: Stdio.around,
+      *around(...args: Parameters<typeof Stdio.around>) {
+        const result = yield* evalScope.eval(() => Stdio.around(...args));
+        return unbox(result);
+      },
       stdin,
       stdout,
       stderr,
       join,
       expect,
     } satisfies Yielded<ReturnType<CreateOSProcess>>;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@process/src/exec/win32.ts` around lines 185 - 193, The returned `around` is
currently the raw Stdio.around, so middleware registered via proc.around() runs
in the caller context instead of the process evalScope; update the return to
bind `around` through the process `evalScope` like the POSIX side (i.e., replace
direct `Stdio.around` with `evalScope.eval(Stdio.around)` or equivalent) so that
`Stdio.operations.stdout/stderr` invoke middleware in the process context;
modify the object returned by the function that yields CreateOSProcess (the
value satisfying Yielded<ReturnType<CreateOSProcess>>) to use the eval-bound
`around`.
process/README.md (1)

73-82: ⚠️ Potential issue | 🟡 Minor

The middleware example references the wrong process object.

The example writes to process.stdout.write(line) and process.stderr.write(line), but process here refers to the local Process variable from exec(), not Node's global process. The Process interface has stdout/stderr as Stream<string>, not writable streams.

Suggested fix
 await main(function* () {
   let process = yield* exec("npm install");

   yield* process.around({
     *stdout(line) {
       // it does this by default
-      process.stdout.write(line);
+      globalThis.process.stdout.write(line);
     },
     *stderr(line) {
       // it does this by default
-      process.stderr.write(line);
+      globalThis.process.stderr.write(line);
     },
   });

   // Wait for the process to complete
   yield* process.expect();
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@process/README.md` around lines 73 - 82, The README example is shadowing
Node's global process with the local Process from exec() and thus calls
process.stdout.write/process.stderr.write on the wrong object; update the
example inside process.around to reference Node's global process (e.g.
globalThis.process.stdout.write(line) and globalThis.process.stderr.write(line))
or rename the local variable to avoid shadowing so the example clearly uses the
real Node writable streams; mention the affected symbols: process.around,
exec(), and the Process interface stdout/stderr to locate the code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@process/src/api.ts`:
- Line 2: Replace the circular import by stopping the direct import of StdioApi
from "mod.ts" and instead import StdioApi from the module that actually
declares/exports that interface (the original API implementation file) so you no
longer depend on mod.ts which re-exports api.ts; update the import statement in
this file to a direct named import of StdioApi from its defining module (keeping
the same symbol name) to break the circular dependency.

In `@process/src/exec/types.ts`:
- Around line 46-53: The JSDoc example for proc.around's stdout handler
incorrectly calls yield* next(args) with an undefined variable args; update the
example inside the stdout middleware to pass the handler parameter (line) to
next instead (i.e., call next with the same line value), so the stdout example
uses the actual variable provided to the handler and avoids the undefined args
reference.


ℹ️ Review info

Run ID: 17e298ad-89b5-443a-9029-8003a9e0bc3b

📥 Commits

Reviewing files that changed from the base of the PR and between 05dcbd5 and 341799e.

📒 Files selected for processing (8)
  • process/README.md
  • process/mod.ts
  • process/src/api.ts
  • process/src/exec.ts
  • process/src/exec/posix.ts
  • process/src/exec/types.ts
  • process/src/exec/win32.ts
  • process/test/exec.test.ts

Comment thread process/src/api.ts Outdated
Comment thread process/src/exec/types.ts

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
process/src/exec/win32.ts (1)

89-109: ⚠️ Potential issue | 🟠 Major

Resolve stdoutDone/stderrDone in finally.

There is now extra fallible work inside both reader loops before the resolver is completed. If either loop throws or is canceled, teardown can block forever waiting on the pending *.Done operation. Move close() and resolve() into finally in both loops.

🔧 Suggested fix
     yield* spawn(function* () {
-      let next = yield* io.stdout.next();
-      while (!next.done) {
-        yield* Stdio.operations.stdout(next.value);
-        stdout.send(next.value);
-        next = yield* io.stdout.next();
-      }
-      stdout.close();
-      io.stdoutDone.resolve();
+      try {
+        let next = yield* io.stdout.next();
+        while (!next.done) {
+          yield* Stdio.operations.stdout(next.value);
+          stdout.send(next.value);
+          next = yield* io.stdout.next();
+        }
+      } finally {
+        stdout.close();
+        io.stdoutDone.resolve();
+      }
     });

     yield* spawn(function* () {
-      let next = yield* io.stderr.next();
-      while (!next.done) {
-        yield* Stdio.operations.stderr(next.value);
-        stderr.send(next.value);
-        next = yield* io.stderr.next();
-      }
-      stderr.close();
-      io.stderrDone.resolve();
+      try {
+        let next = yield* io.stderr.next();
+        while (!next.done) {
+          yield* Stdio.operations.stderr(next.value);
+          stderr.send(next.value);
+          next = yield* io.stderr.next();
+        }
+      } finally {
+        stderr.close();
+        io.stderrDone.resolve();
+      }
     });

Based on learnings, "Resources must clean up properly on scope exit in Effection code."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@process/src/exec/win32.ts` around lines 89 - 109, The reader loops inside the
two spawn tasks (the generator passed to spawn that iterates io.stdout and
io.stderr) currently call stdout.close()/stderr.close() and
io.stdoutDone.resolve()/io.stderrDone.resolve() only after the while loop, which
can hang teardown if an exception or cancellation occurs; move the close() and
corresponding io.*Done.resolve() calls into a finally block inside each spawn
generator so that stdout.close()/io.stdoutDone.resolve() and
stderr.close()/io.stderrDone.resolve() are always executed on normal completion,
error, or cancellation (keep the body that yields Stdio.operations.stdout/stderr
and sends values unchanged).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@process/src/exec/win32.ts`:
- Around line 175-181: The code currently does an unconditional yield*
all([io.stdoutDone.operation, io.stderrDone.operation]) which can block forever
and prevent the killTree(pid) fallback from running; change this to race between
the stdio drain operations and the child process exiting so the cleanup path
remains reachable. Concretely, replace the all(...) wait with a race that
listens to io.stdoutDone.operation, io.stderrDone.operation, and a watcher for
childProcess.exitCode becoming non-null (or child exit event), then after the
race completes check if pid is set and childProcess.exitCode is still null and
call killTree(pid); keep use of existing symbols (all, race,
io.stdoutDone.operation, io.stderrDone.operation, childProcess.exitCode,
killTree, pid) so the forced cleanup runs even if stdio never drains.
- Around line 187-190: The public around(...) methods need explicit return
types: change the method signatures for around in both implementations (the one
that yields evalScope.eval(() => Stdio.around(...args)) and returns
unbox(result)) to declare ReturnType<typeof Stdio.around> (which is
Operation<void>) as the return type; update both the win32 around and the posix
around signatures to include this explicit return type while leaving the body
(evalScope.eval, Stdio.around call, unbox(result)) unchanged.


ℹ️ Review info

Run ID: 18fe37aa-c9b7-4277-9e3f-2151d64de26e

📥 Commits

Reviewing files that changed from the base of the PR and between cf59625 and 1645afb.

📒 Files selected for processing (1)
  • process/src/exec/win32.ts

Comment thread process/src/exec/win32.ts
Comment thread process/src/exec/win32.ts Outdated

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
process/src/exec/posix.ts (1)

71-85: ⚠️ Potential issue | 🔴 Critical

Resolve the process only after the stdio middleware drains.

Lines 74 and 85 can now suspend inside Stdio.operations.*(), but join()/expect() still resolve from the child "close" event alone. On POSIX, proc.around(() => proc.expect()) can therefore unwind before the last chunk finishes its middleware path, so the tail of stdout/stderr can miss the middleware/finally handling. process/src/exec/win32.ts:123-128 already waits for stdoutDone/stderrDone before resolving processResult.

🔧 Suggested change
     yield* spawn(function* () {
       let value = yield* once<ProcessResultValue>(childProcess, "close");
+      yield* all([io.stdoutDone.operation, io.stderrDone.operation]);
       processResult.resolve(Ok(value));
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@process/src/exec/posix.ts` around lines 71 - 85, The POSIX stdio spawn
coroutines currently resolve io.stdoutDone/io.stderrDone before the
Stdio.operations middleware has fully drained; update the spawned generators
around Stdio.operations.stdout and Stdio.operations.stderr so they only call
stdout.close() and io.stdoutDone.resolve() (and
stderr.close()/io.stderrDone.resolve()) after the while-loop and after the
yield* Stdio.operations.*(next.value) call completes for the last chunk, and
ensure the parent process resolution (the code that uses join()/expect()) waits
for those io.stdoutDone/io.stderrDone signals; in short, move or delay closing
and resolving (stdout.close()/stderr.close() and
io.stdoutDone.resolve()/io.stderrDone.resolve()) until after the
Stdio.operations calls finish so proc.around(() => proc.expect()) cannot resolve
before middleware drains.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@process/src/exec/posix.ts`:
- Around line 126-136: The finalizer inside ensure currently wraps both the
process.kill call and the pipe drain awaits in one try/catch, so if
process.kill(-childProcess.pid, "SIGTERM") throws the catch returns early and
io.stdoutDone/ io.stderrDone are never awaited; change the code in the
ensure(function* () { ... }) generator so that only the kill call is
try/catch-wrapped (or move the drain awaits into a finally block) and always
yield* all([io.stdoutDone.operation, io.stderrDone.operation]) regardless of
whether process.kill throws; reference childProcess.pid, process.kill, and
io.stdoutDone/io.stderrDone when making the update.

In `@process/src/exec/win32.ts`:
- Around line 157-185: The finalizer in ensure() calls ctrlc(pid) and writes
"Y\n" then immediately checks childProcess.exitCode and may call killTree(pid),
so the graceful shutdown never gets time to complete; after the stdin cleanup
(after stdin.end()) insert a wait for stdio to finish by yielding the io
stdout/stderr completion (e.g. yield* all([io.stdoutDone.operation,
io.stderrDone.operation])) or alternatively race that wait against a short
timeout, and only then check childProcess.exitCode and call killTree(pid) if
still running; update references around ensure, ctrlc, childProcess,
io.stdoutDone.operation, io.stderrDone.operation and killTree accordingly.


ℹ️ Review info

Run ID: 09e5fe80-16c6-4079-99c0-0a9151ca4086

📥 Commits

Reviewing files that changed from the base of the PR and between 1645afb and 49ac4f7.

📒 Files selected for processing (2)
  • process/src/exec/posix.ts
  • process/src/exec/win32.ts

Comment thread process/src/exec/posix.ts
Comment thread process/src/exec/win32.ts

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
process/src/exec/win32.ts (1)

180-188: ⚠️ Potential issue | 🟠 Major

Stdio drain wait can still block the force-kill fallback.

If the child process ignores ctrlc(pid) and keeps its stdout/stderr pipes open, line 182 will block indefinitely, preventing the killTree(pid) fallback from ever executing. The previous review flagged this concern, and it appears unresolved.

Consider racing the stdio drain against a bounded timeout or the process exit event:

🔧 Possible fix direction
-      // depending on how we shutdown, this may already be closed and
-      // will pass immediately over the operations
-      yield* all([io.stdoutDone.operation, io.stderrDone.operation]);
-
-      if (pid && childProcess.exitCode === null) {
+      // Race stdio drain against process exit; if neither completes quickly
+      // and process is still alive, escalate to force-kill
+      const exitOrDrain = yield* race([
+        all([io.stdoutDone.operation, io.stderrDone.operation]),
+        processResult.operation,
+      ]);
+      
+      if (pid && childProcess.exitCode === null) {

This ensures the force-kill path remains reachable even when stdio never drains. Alternatively, use sleep() with a grace period if racing against processResult is not sufficient.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@process/src/exec/win32.ts` around lines 180 - 188, The current yield*
all([io.stdoutDone.operation, io.stderrDone.operation]) can block forever and
prevent the killTree(pid) fallback; change the logic in the shutdown sequence to
race the stdio-drain operations against a bounded wait (either the child process
exit/event or a timeout) so the path to killTree(pid) remains reachable: use a
race between the combined stdio drain (io.stdoutDone.operation +
io.stderrDone.operation) and a timeout or the child process exit detection
(childProcess.exitCode !== null / process exit event), and only await the stdio
drain if the race wins; otherwise proceed to call killTree(pid) (refer to
io.stdoutDone.operation, io.stderrDone.operation, childProcess.exitCode,
killTree, and any existing processResult/ctrlc logic when wiring the race).

ℹ️ Review info

Run ID: 844a2f2f-76ee-461c-9720-265c9ec91c8c

📥 Commits

Reviewing files that changed from the base of the PR and between 49ac4f7 and 0288c05.

📒 Files selected for processing (1)
  • process/src/exec/win32.ts


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
process/src/exec/win32.ts (1)

181-189: ⚠️ Potential issue | 🟠 Major

killTree() is still unreachable when Ctrl-C leaves stdio open.

Line 183 waits for both drain resolvers before Line 189 can force-kill the tree. If the child ignores Ctrl-C and keeps either pipe open, teardown blocks forever and never reaches the fallback. This needs a bounded grace/race before the unconditional wait, then a final drain after killTree(pid).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@process/src/exec/win32.ts` around lines 181 - 189, The current teardown waits
unbounded on io.stdoutDone.operation and io.stderrDone.operation (the yield*
all([...]) before checking pid/childProcess.exitCode), so if a child ignores
Ctrl-C and leaves a pipe open killTree(pid) never runs; change this to first
race the combined drain promises against a short timeout/grace period (e.g.,
race between yield* all([io.stdoutDone.operation, io.stderrDone.operation]) and
a sleep/timeout), and if the timeout wins and pid && childProcess.exitCode ===
null then call killTree(pid); after calling killTree(pid) perform a second
bounded wait for the remaining drains (await or yield* the operations again) to
ensure cleanup completes.
process/src/exec/posix.ts (1)

127-137: ⚠️ Potential issue | 🔴 Critical

Don't send SIGTERM to a stale process-group ID on normal teardown.

ensure() runs when the scope unwinds, not just on cancellation. If this scope outlives the child, -pid can already belong to a different process group, and if process.kill() throws you still skip the stdio drain wait. Gate the signal on the child still being live and move the drain wait out of the catch.

Suggested change
     yield* ensure(function* () {
       try {
-        if (typeof childProcess.pid === "undefined") {
-          throw new Error("no pid for childProcess");
-        }
-        process.kill(-childProcess.pid, "SIGTERM");
-        yield* all([io.stdoutDone.operation, io.stderrDone.operation]);
+        if (
+          typeof pid !== "undefined" &&
+          childProcess.exitCode === null &&
+          childProcess.signalCode === null
+        ) {
+          process.kill(-pid, "SIGTERM");
+        }
       } catch (_e) {
         // do nothing, process is probably already dead
       }
+      yield* all([io.stdoutDone.operation, io.stderrDone.operation]);
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@process/src/exec/posix.ts` around lines 127 - 137, The teardown currently
sends SIGTERM to -childProcess.pid inside ensure() and catches any error, which
can skip waiting for stdio drains if process.kill throws; change ensure() so it
first checks the child is still alive (e.g., ensure typeof childProcess.pid !==
"undefined" and that sending a harmless signal like 0 to the child PID succeeds)
before calling process.kill(-childProcess.pid, "SIGTERM"), and ensure the yield*
all([io.stdoutDone.operation, io.stderrDone.operation]) call happens outside the
catch block so stdio drains are awaited regardless of whether process.kill
threw; update the block referencing ensure, childProcess.pid, process.kill, and
io.stdoutDone.operation/io.stderrDone.operation accordingly.

ℹ️ Review info

Run ID: 37a0ece1-6873-41dd-96a2-3f200652a32f

📥 Commits

Reviewing files that changed from the base of the PR and between 0288c05 and 5b889fb.

📒 Files selected for processing (3)
  • process/src/api.ts
  • process/src/exec/posix.ts
  • process/src/exec/win32.ts

This reverts commit 5b889fb.
@jbolda jbolda merged commit 54cda1f into main Apr 6, 2026
10 of 11 checks passed
@jbolda jbolda deleted the process-logging-middleware branch April 6, 2026 17:21

3 participants