Merged
7 changes: 6 additions & 1 deletion .github/workflows/ci.yml
@@ -169,10 +169,15 @@ jobs:
run: bun install --frozen-lockfile
- name: Unit Tests
run: bun run test:unit
- name: Upload Coverage
- name: Isolated Tests
run: bun run test:isolated --coverage --coverage-reporter=lcov --coverage-dir=coverage-isolated
- name: Merge Coverage Reports
run: bun run script/merge-lcov.ts coverage/lcov.info coverage-isolated/lcov.info > coverage/merged.lcov
- name: Coverage Report
uses: getsentry/codecov-action@main
with:
token: ${{ secrets.GITHUB_TOKEN }}
files: ./coverage/merged.lcov

build-binary:
name: Build Binary (${{ matrix.target }})
1 change: 1 addition & 0 deletions .gitignore
@@ -12,6 +12,7 @@ dist-bin

# code coverage
coverage
coverage-isolated
*.lcov

# test artifacts
8 changes: 4 additions & 4 deletions AGENTS.md
@@ -666,14 +666,11 @@ mock.module("./some-module", () => ({
* **Install script: BSD sed and awk JSON parsing breaks OCI digest extraction**: The install script parses OCI manifests with awk (no jq). Key trap: BSD sed `\n` is literal, not a newline. Fix: a single awk pass tracking the last-seen `"digest"`, printing when `"org.opencontainers.image.title"` matches the target. The config digest (`sha256:44136fa...`) is a 2-byte `{}` blob — downloading it instead of the real binary causes `gunzip: unexpected end of file`.

<!-- lore:019cb963-cb63-722d-9365-b34336f4766d -->
* **Bun.mmap() unusable on running executables on ANY platform**: `Bun.mmap()` always opens files with `PROT_WRITE`/`O_RDWR` regardless of the `shared` flag. This fails on the running binary: macOS sends an uncatchable SIGKILL (AMFI rejects writable mappings on signed Mach-O), Linux returns ETXTBSY (the kernel blocks opening a running executable for write). The platform-conditional fix (PR #340: darwin→arrayBuffer, else→mmap) was insufficient — mmap fails on BOTH platforms. Fixed by using `new Uint8Array(await Bun.file(oldPath).arrayBuffer())` unconditionally in bspatch.ts. Costs ~100 MB of heap for the old binary but is the only approach that works cross-platform.
* **macOS SIGKILL on MAP_SHARED mmap of signed Mach-O binaries**: `Bun.mmap()` always opens files with `PROT_WRITE`/`O_RDWR` regardless of the `shared` flag. This fails on the running binary: macOS sends an uncatchable SIGKILL (AMFI rejects writable mappings on signed Mach-O), Linux returns ETXTBSY (the kernel blocks opening a running executable for write). Fix: copy-then-mmap — copy the running binary to a temp file, then mmap the copy (separate inode, no AMFI/ETXTBSY restrictions). Falls back to `arrayBuffer()` if copy/mmap fails. Do NOT use a child-process probe (`process.execPath -e ...`) — compiled Bun binaries don't support `-e`, causing the child to rerun the main program.

<!-- lore:019c969a-1c90-7041-88a8-4e4d9a51ebed -->
* **Multiple mockFetch calls replace each other — use unified mocks for multi-endpoint tests**: Bun test mocking gotchas: (1) `mockFetch()` replaces `globalThis.fetch` — calling it twice replaces the first mock. Use a single unified fetch mock dispatching by URL pattern. (2) `mock.module()` pollutes the module registry for ALL subsequent test files. Tests using it must live in `test/isolated/` and run via `test:isolated`. (3) For `Bun.spawn`, use direct property assignment in `beforeEach`/`afterEach`.
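  A unified mock along these lines avoids the clobbering problem (a hypothetical sketch — the route shape and helper name are illustrative, not the repo's actual test utilities):

  ```typescript
  // Hypothetical helper: one fetch replacement that routes by URL pattern,
  // so stubbing a second endpoint never replaces the first mock.
  type MockRoute = { pattern: RegExp; status?: number; body: unknown };

  export function createUnifiedFetchMock(routes: MockRoute[]) {
    return async (input: string | URL | Request): Promise<Response> => {
      const url =
        typeof input === "string"
          ? input
          : input instanceof URL
            ? input.href
            : input.url;
      const route = routes.find((r) => r.pattern.test(url));
      if (!route) {
        // Fail loudly on unmocked endpoints instead of hitting the network
        return new Response(`unmocked: ${url}`, { status: 501 });
      }
      return new Response(JSON.stringify(route.body), {
        status: route.status ?? 200,
        headers: { "content-type": "application/json" },
      });
    };
  }
  ```

  Assign the returned function to `globalThis.fetch` once per test; adding an endpoint is then a data change, not a second mock call.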

<!-- lore:019c8a7a-5321-7a48-a86c-1340ee3e90db -->
* **Several commands bypass telemetry by importing buildCommand from @stricli/core directly**: src/lib/command.ts wraps Stricli's buildCommand to auto-capture flag/arg telemetry via Sentry. But trace/list, trace/view, log/view, api.ts, and help.ts import buildCommand directly from @stricli/core, silently skipping telemetry. Fix: change their imports to use ../../lib/command.js. Consider adding a Biome lint rule (noRestrictedImports equivalent) to prevent future regressions.

<!-- lore:019c9741-d78e-73b1-87c2-e360ef6c7475 -->
* **useTestConfigDir without isolateProjectRoot causes DSN scanning of the repo tree**: `useTestConfigDir()` creates temp dirs under `.test-tmp/` in the repo tree. Without `{ isolateProjectRoot: true }`, `findProjectRoot` walks up and finds the repo's `.git`, causing DSN detection to scan real source code and trigger network calls against test mocks (timeouts). Always pass `isolateProjectRoot: true` when tests exercise `resolveOrg`, `detectDsn`, or `findProjectRoot`.

@@ -688,6 +685,9 @@ mock.module("./some-module", () => ({
<!-- lore:019cb162-d3ad-7b05-ab4f-f87892d517a6 -->
* **Shared pagination infrastructure: buildPaginationContextKey and parseCursorFlag**: List commands with cursor pagination use `buildPaginationContextKey(type, identifier, flags)` for composite context keys and `parseCursorFlag(value)`, which accepts the `"last"` magic value. Critical: `resolveCursor()` must be called inside the `org-all` override closure, not before `dispatchOrgScopedList` — otherwise cursor validation errors fire before the correct mode-specific error.

<!-- lore:019cbd5f-ec35-7e2d-8386-6d3a67adf0cf -->
* **Telemetry instrumentation pattern: withTracingSpan + captureException for handled errors**: For graceful-fallback operations, use `withTracingSpan` from `src/lib/telemetry.ts` for child spans and `captureException` from `@sentry/bun` (named import — Biome forbids namespace imports) with `level: 'warning'` for non-fatal errors. `withTracingSpan` uses `onlyIfParent: true`, so it is a no-op without an active transaction. When returning `withTracingSpan(...)` directly, drop `async` and use `Promise.resolve(null)` for early returns. User-visible fallbacks should use `log.warn()`, not `log.debug()` — debug is invisible at the default level. Also: several commands bypass telemetry by importing `buildCommand` from `@stricli/core` directly instead of `../../lib/command.js`. Affected: trace/list, trace/view, log/view, api.ts, help.ts.
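  The control flow of that pattern, with stand-in implementations (the real `withTracingSpan` and `captureException` live in `src/lib/telemetry.ts` and `@sentry/bun`; these stubs only illustrate the shape):

  ```typescript
  // Stand-ins for illustration only — NOT the real telemetry APIs.
  function withTracingSpan<T>(name: string, fn: () => Promise<T>): Promise<T> {
    // Real version opens a child span (onlyIfParent: true → no-op without
    // an active transaction); this stub just runs the callback.
    return fn();
  }

  function captureException(err: unknown, opts?: { level?: "warning" }): void {
    // Real version forwards to @sentry/bun (named import, not namespace).
  }

  // Graceful-fallback pattern: return withTracingSpan(...) directly (no
  // `async` wrapper) and report non-fatal errors at warning level.
  export function readOptionalFile(
    read: () => Promise<string>
  ): Promise<string | null> {
    return withTracingSpan("file.read", async () => {
      try {
        return await read();
      } catch (err) {
        captureException(err, { level: "warning" });
        return null; // caller falls back to defaults
      }
    });
  }
  ```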

### Preference

<!-- lore:019c9700-0fc3-730c-82c3-a290d5ecc2ea -->
187 changes: 187 additions & 0 deletions script/merge-lcov.ts
@@ -0,0 +1,187 @@
#!/usr/bin/env bun
/**
* Merge multiple LCOV coverage files by taking the MAX hit count per line.
*
* The getsentry/codecov-action aggregates multiple coverage files by summing
* all statements — if a source file appears in two lcov files, its hit counts
* get doubled rather than merged. This script merges lcov files properly:
* for each source file, it takes the maximum DA (line hit count) across all
* input files, producing a single lcov with the best coverage from any suite.
*
* All summary fields (LF, LH, FNF, FNH, BRF, BRH) are recomputed from the
* merged data rather than taking the max from inputs.
*
* Usage: bun run script/merge-lcov.ts file1.lcov file2.lcov [file3.lcov ...]
* Output: merged lcov to stdout
*/

export type FileData = {
/** DA entries: line number → max hit count */
da: Map<number, number>;
/** FN entries: function name → "lineNo,name" */
fn: Map<string, string>;
/** FNDA entries: function name → max execution count */
fnda: Map<string, number>;
/** BRDA entries: "line,block,branch" key → max taken count (string, "-" = not taken) */
brda: Map<string, string>;
/** Max FNF seen from inputs (fallback when no FN/FNDA entries available) */
fnfMax: number;
/** Max FNH seen from inputs (fallback when no FN/FNDA entries available) */
fnhMax: number;
};

function createFileData(): FileData {
return {
da: new Map(),
fn: new Map(),
fnda: new Map(),
brda: new Map(),
fnfMax: 0,
fnhMax: 0,
};
}

const files = process.argv.slice(2);
if (files.length === 0) {
console.error("Usage: merge-lcov.ts <file1.lcov> [file2.lcov ...]");
process.exit(1);
}

/** Merged data per source file path */
const merged = new Map<string, FileData>();
/** Track insertion order of source files */
const fileOrder: string[] = [];

for (const filePath of files) {
const content = await Bun.file(filePath).text();
let currentSF = "";
let data: FileData | null = null;

for (const line of content.split("\n")) {
if (line.startsWith("SF:")) {
currentSF = line.slice(3);
if (!merged.has(currentSF)) {
merged.set(currentSF, createFileData());
fileOrder.push(currentSF);
}
data = merged.get(currentSF) ?? null;
} else if (line.startsWith("DA:") && data) {
const comma = line.indexOf(",", 3);
const lineNo = Number.parseInt(line.slice(3, comma), 10);
const hits = Number.parseInt(line.slice(comma + 1), 10);
if (!data.da.has(lineNo) || hits > (data.da.get(lineNo) ?? 0)) {
data.da.set(lineNo, hits);
}
} else if (line.startsWith("FN:") && data) {
const val = line.slice(3);
const comma = val.indexOf(",");
const name = val.slice(comma + 1);
data.fn.set(name, val);
} else if (line.startsWith("FNDA:") && data) {
const val = line.slice(5);
const comma = val.indexOf(",");
const count = Number.parseInt(val.slice(0, comma), 10);
const name = val.slice(comma + 1);
const prev = data.fnda.get(name) ?? 0;
if (count > prev) {
data.fnda.set(name, count);
}
} else if (line.startsWith("FNF:") && data) {
const v = Number.parseInt(line.slice(4), 10);
if (v > data.fnfMax) {
data.fnfMax = v;
}
} else if (line.startsWith("FNH:") && data) {
const v = Number.parseInt(line.slice(4), 10);
if (v > data.fnhMax) {
data.fnhMax = v;
}
} else if (line.startsWith("BRDA:") && data) {
// BRDA format: line,block,branch,taken
// Use "line,block,branch" as key, take max of "taken"
const val = line.slice(5);
const lastComma = val.lastIndexOf(",");
const branchKey = val.slice(0, lastComma);
const taken = val.slice(lastComma + 1);
const prev = data.brda.get(branchKey);
if (prev === undefined || prev === "-") {
// No previous value or previous was "not taken"
data.brda.set(branchKey, taken);
} else if (taken !== "-") {
// Both are numeric — take the max
const prevNum = Number.parseInt(prev, 10);
const takenNum = Number.parseInt(taken, 10);
if (takenNum > prevNum) {
data.brda.set(branchKey, taken);
}
}
}
// LF, LH, FNF, FNH, BRF, BRH, end_of_record are all skipped — we recompute them
}
}

// Output merged lcov
const out: string[] = [];

for (const sf of fileOrder) {
const data = merged.get(sf);
if (!data) {
continue;
}

out.push(`SF:${sf}`);

// FN lines
for (const val of data.fn.values()) {
out.push(`FN:${val}`);
}

// FNDA lines + compute FNF/FNH
let fnh = 0;
for (const [name, count] of data.fnda) {
out.push(`FNDA:${count},${name}`);
if (count > 0) {
fnh += 1;
}
}

// Prefer recomputed FNF/FNH from merged FN/FNDA entries when available.
// Bun's lcov output only has FNF/FNH summary lines (no individual FN/FNDA),
// so fall back to max-of-inputs for those.
if (data.fn.size > 0) {
out.push(`FNF:${data.fn.size}`);
out.push(`FNH:${fnh}`);
} else if (data.fnfMax > 0) {
out.push(`FNF:${data.fnfMax}`);
out.push(`FNH:${data.fnhMax}`);
}

// DA lines sorted by line number + compute LF/LH
const sortedLines = [...data.da.entries()].sort((a, b) => a[0] - b[0]);
let lh = 0;
for (const [lineNo, hits] of sortedLines) {
out.push(`DA:${lineNo},${hits}`);
if (hits > 0) {
lh += 1;
}
}
out.push(`LF:${data.da.size}`);
out.push(`LH:${lh}`);

// BRDA lines + compute BRF/BRH
let brh = 0;
for (const [key, taken] of data.brda) {
out.push(`BRDA:${key},${taken}`);
if (taken !== "-" && Number.parseInt(taken, 10) > 0) {
brh += 1;
}
}
if (data.brda.size > 0) {
out.push(`BRF:${data.brda.size}`);
out.push(`BRH:${brh}`);
}

out.push("end_of_record");
}

process.stdout.write(`${out.join("\n")}\n`);
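
The per-line merge rule the script applies to DA records can be illustrated in isolation (a simplified sketch, not part of the script itself):

```typescript
// Simplified illustration of the merge rule: for each line number, keep
// the maximum hit count seen across all input lcov files, so a line
// covered by any suite counts as covered exactly once.
export function mergeDaMax(
  inputs: Array<Map<number, number>>
): Map<number, number> {
  const out = new Map<number, number>();
  for (const da of inputs) {
    for (const [line, hits] of da) {
      const prev = out.get(line);
      if (prev === undefined || hits > prev) {
        out.set(line, hits);
      }
    }
  }
  return out;
}
```

Summing instead of taking the max (as the codecov action does when a file appears in two reports) would double hit counts; the max keeps the best coverage from any suite without inflation.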
100 changes: 85 additions & 15 deletions src/lib/bspatch.ts
@@ -5,15 +5,20 @@
* TRDIFF10 format (produced by zig-bsdiff with `--use-zstd`). Designed for
* minimal memory usage during CLI self-upgrades:
*
* - Old binary: loaded via `Bun.file().arrayBuffer()` (~100 MB heap)
* - Old binary: copy-then-mmap for 0 JS heap (CoW on btrfs/xfs/APFS),
* falling back to `arrayBuffer()` if copy/mmap fails
* - Diff/extra blocks: streamed via `DecompressionStream('zstd')`
* - Output: written incrementally to disk via `Bun.file().writer()`
* - Integrity: SHA-256 computed inline via `Bun.CryptoHasher`
*
* Total heap usage: ~100 MB for old file + ~1-2 MB for streaming buffers.
* `Bun.mmap()` is NOT usable here because the old file is the running binary:
* - macOS: AMFI sends uncatchable SIGKILL (PROT_WRITE on signed Mach-O)
* - Linux: ETXTBSY from `open()` with write flags on a running executable
* `Bun.mmap()` cannot target the running binary directly because it opens
* with PROT_WRITE/O_RDWR:
* - macOS: AMFI sends uncatchable SIGKILL (writable mapping on signed Mach-O)
* - Linux: ETXTBSY from `open()` (kernel blocks write-open on running ELF)
*
* The copy-then-mmap strategy sidesteps both: the copy is a regular file
* with no running process, so mmap succeeds. On CoW-capable filesystems
* (btrfs, xfs, APFS) the copy is near-instant with zero extra disk I/O.
*
* TRDIFF10 format (from zig-bsdiff):
* ```
@@ -25,6 +30,11 @@
* ```
*/

import { constants, copyFileSync } from "node:fs";
import { unlink } from "node:fs/promises";
import { tmpdir } from "node:os";
import { join } from "node:path";

/** TRDIFF10 header magic bytes */
const TRDIFF10_MAGIC = "TRDIFF10";

@@ -210,12 +220,70 @@ function createZstdStreamReader(compressed: Uint8Array): BufferedStreamReader {
);
}

/** Result of loading the old binary for patching */
type OldFileHandle = {
/** Memory-mapped or in-memory view of the old binary */
data: Uint8Array;
/** Cleanup function to call after patching (removes temp copy, if any) */
cleanup: () => void | Promise<void>;
};

/**
* Load the old binary for read access during patching.
*
* Strategy: copy to temp file, then try mmap on the copy. The copy is a
* regular file (no running process), so `Bun.mmap()` succeeds on both
* Linux and macOS — ETXTBSY (Linux) and AMFI SIGKILL (macOS) only affect
* the running binary's inode, not a copy. On CoW filesystems (btrfs, xfs,
* APFS) the copy is a metadata-only reflink (near-instant).
*
* Falls back to `Bun.file().arrayBuffer()` (~100 MB heap) if copy or
* mmap fails for any reason.
*/
let loadCounter = 0;

async function loadOldBinary(oldPath: string): Promise<OldFileHandle> {
loadCounter += 1;
const tempCopy = join(
tmpdir(),
`sentry-patch-old-${process.pid}-${loadCounter}`
);
try {
// COPYFILE_FICLONE: attempt CoW reflink first (near-instant on btrfs/xfs/APFS),
// silently falls back to regular copy on filesystems that don't support it.
copyFileSync(oldPath, tempCopy, constants.COPYFILE_FICLONE);

// mmap the copy — safe because it's a separate inode, not the running
// binary. MAP_PRIVATE avoids write-back to disk.
const data = Bun.mmap(tempCopy, { shared: false });
return {
data,
cleanup: () =>
unlink(tempCopy).catch(() => {
/* Best-effort cleanup — OS will reclaim on reboot */
}),
};
} catch {
// Copy or mmap failed — fall back to reading into JS heap
await unlink(tempCopy).catch(() => {
/* May not exist if copyFileSync failed */
});
return {
data: new Uint8Array(await Bun.file(oldPath).arrayBuffer()),
cleanup: () => {
// Data is in JS heap — no temp file to clean up
},
};
}
}

/**
* Apply a TRDIFF10 binary patch with streaming I/O for minimal memory usage.
*
* Reads the old file into memory via `Bun.file().arrayBuffer()`, then streams
* diff/extra blocks (~16 KB buffers) via `DecompressionStream('zstd')`,
* writes output via `Bun.file().writer()`, and computes SHA-256 inline.
* Copies the old file to a temp path and mmaps the copy (0 JS heap), falling
* back to `arrayBuffer()` if mmap fails. Streams diff/extra blocks via
* `DecompressionStream('zstd')`, writes output via `Bun.file().writer()`,
* and computes SHA-256 inline.
*
* @param oldPath - Path to the existing (old) binary file
* @param patchData - Complete TRDIFF10 patch file contents
@@ -246,12 +314,10 @@ export async function applyPatch(
);
const extraReader = createZstdStreamReader(patchData.subarray(extraStart));

// Bun.mmap() is NOT usable for the old file during self-upgrades because
// it always opens with PROT_WRITE, and the old file is the running binary:
// - macOS: AMFI sends uncatchable SIGKILL on writable mapping of signed Mach-O
// - Linux: open() returns ETXTBSY when opening a running executable for write
// Reading into memory costs ~100 MB heap but avoids both platform restrictions.
const oldFile = new Uint8Array(await Bun.file(oldPath).arrayBuffer());
// Load old binary via copy-then-mmap (0 JS heap) or arrayBuffer fallback.
// See loadOldBinary() for why direct mmap of the running binary is impossible.
const { data: oldFile, cleanup: cleanupOldFile } =
await loadOldBinary(oldPath);

// Streaming output: write directly to disk, no output buffer in memory
const writer = Bun.file(destPath).writer();
@@ -300,7 +366,11 @@
oldpos += seekBy;
}
} finally {
await writer.end();
try {
await writer.end();
} finally {
await cleanupOldFile();
}
}

// Validate output size matches header