feat(update): core binary self-update via GitHub Releases #153
oxoxDev wants to merge 20 commits into tinyhumansai:main
Conversation
… backend

Implements the update module for openhuman-core sidecar self-update:
- Config schema (UpdateConfig, UpdateMode: auto/prompt/manual)
- GitHub Releases resolver with ETag caching and target-triple asset matching
- SHA256 digest verification for downloaded binaries
- Staged binary write (.next) with atomic swap and rollback (.bak)
- RPC controllers: update_status, update_set_policy, update_check, update_apply
- Background check on core server startup
- Preflight swap on CLI startup and Tauri sidecar spawn
- Release workflow: root Cargo.toml version sync, core asset upload, validation
- Frontend TS types and RPC wrappers
- JSON-RPC E2E test covering full check+apply+staged flow

Addresses tinyhumansai#82

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add structured [update]-prefixed log::debug! calls at key checkpoints:
- Preflight swap entry/result
- Background check evaluation, cadence, and mode
- Release fetch URL and ETag
- Version comparison result
- Asset download start/completion with byte count
- Digest verification (present/skipped)
- Staged binary write and activation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
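The "version comparison result" checkpoint above is, at its core, a numeric comparison of the running version against the latest release tag. A minimal sketch of such a step, assuming plain `MAJOR.MINOR.PATCH` tags with an optional leading `v` (this is illustrative, not the PR's actual resolver code, which may use a semver crate):

```rust
use std::cmp::Ordering;

// Illustrative compare_versions sketch. Assumes plain `MAJOR.MINOR.PATCH`
// tags, optionally prefixed with `v` (e.g. "v1.3.0"); any unparsable
// component falls back to 0.
fn compare_versions(current: &str, latest: &str) -> Ordering {
    let parse = |v: &str| -> Vec<u64> {
        v.trim_start_matches('v')
            .split('.')
            .map(|part| part.parse().unwrap_or(0))
            .collect()
    };
    // Vec<u64> compares lexicographically: major, then minor, then patch.
    parse(current).cmp(&parse(latest))
}
```

An update would then be "available" exactly when `compare_versions(current, latest)` returns `Ordering::Less`.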
In auto mode (default), maybe_background_check() now downloads, verifies, and stages the newer binary after detecting an available update. Previously it only recorded availability without acting on it. The next restart will activate the staged binary via the preflight swap. Extracts download_and_stage() helper shared between update_apply and the background check to avoid duplication. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds openhuman.update_dismiss controller that persists the dismissed version in config so prompt mode stops prompting for that specific release. Required for the UI to let users skip a version.
- ops.rs: update_dismiss(version) sets last_dismissed_version
- schemas.rs: dismiss schema with version input + handler
- tauriCommands.ts: openhumanUpdateDismiss() frontend wrapper

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Previously update_status reconstructed the latest UpdateAsset with empty strings when only the version was cached. Now check_for_update persists tag, asset name, download URL, release URL, and digest to UpdateConfig, and update_status reconstructs them faithfully. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds a doc comment explaining the duplication between the Tauri host's apply_staged_sidecar_update and the core crate's canonical implementation, since the Tauri crate does not depend on openhuman_core. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Notes that SHA256 verification is optional until companion .sha256 files are added to the release pipeline, and flags it as a follow-up hardening item. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
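The optional-verification policy noted here amounts to a three-way decision: skip when no companion digest exists, accept on match, fail on mismatch. A hedged sketch of that decision (`verify_digest` is an illustrative name; the actual hex digest of the downloaded bytes would be computed with a SHA-256 implementation):

```rust
// Sketch of the "digest optional" policy: `expected` is the hex digest from a
// companion .sha256 release asset (None until the pipeline publishes one),
// `actual` is the hex SHA-256 of the downloaded bytes. Returns whether
// verification actually ran. Names here are illustrative, not the PR's API.
fn verify_digest(expected: Option<&str>, actual: &str) -> Result<bool, String> {
    match expected {
        // No companion digest published yet: skip, but report it so the
        // caller can log the skip.
        None => Ok(false),
        Some(want) if want.eq_ignore_ascii_case(actual.trim()) => Ok(true),
        Some(want) => Err(format!("digest mismatch: expected {want}, got {actual}")),
    }
}
```

Once `.sha256` files land in the release pipeline, the `None` branch becomes unreachable for official assets and verification is effectively mandatory.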
Extends json_rpc_update_check_and_apply_stages_binary test to also exercise:
- openhuman.update_set_policy: switch to manual mode with 48h interval
- openhuman.update_dismiss: suppress prompt for dismissed version

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Adds a GitHub Releases–backed self-update system for the openhuman-core sidecar: new config types, resolver, store, ops, RPC controllers, background checks, staged binary write/apply with backup/rollback, Tauri frontend RPC wrappers and preflight activation on startup, plus CI changes to publish core CLI artifacts.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Frontend as Frontend (Tauri)
    participant RPC as Core RPC Server
    participant GitHubAPI as GitHub Releases API
    participant FS as Local FileSystem
    participant BG as Background Task
    BG->>RPC: maybe_background_check()
    RPC->>RPC: evaluate schedule (last_check_at + interval)
    RPC->>GitHubAPI: fetch_latest_release (If-None-Match: etag)
    GitHubAPI-->>RPC: release data or 304
    RPC->>RPC: compare_versions(current, latest)
    alt update available
        RPC->>GitHubAPI: download asset
        GitHubAPI-->>RPC: binary payload
        RPC->>RPC: verify_digest (optional)
        RPC->>FS: write_staged_binary(bytes)
        FS-->>RPC: staged_path
        RPC->>RPC: persist metadata
    end
    Frontend->>RPC: openhumanUpdateCheck()
    RPC->>GitHubAPI: fetch_latest_release(etag)
    GitHubAPI-->>RPC: latest
    RPC-->>Frontend: UpdateCheckStatus
    Frontend->>RPC: openhumanUpdateApply()
    RPC->>GitHubAPI: download asset (if needed) / verify
    RPC->>FS: write_staged_binary(bytes)
    FS-->>RPC: staged_path
    RPC-->>Frontend: UpdateApplyStatus
    Note over Frontend,RPC: On next startup
    Frontend->>RPC: run_core_from_args()
    RPC->>FS: apply_staged_update_preflight()
    FS->>FS: rename active -> .bak, .next -> active (rollback on failure)
    FS-->>RPC: activation result
    RPC->>RPC: continue startup
```
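The diagram's final activation step (rename active -> .bak, .next -> active, rollback on failure) can be sketched with plain `std::fs` renames. This is an illustration under the `.next`/`.bak` naming from the walkthrough, not the crate's actual store implementation:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Illustrative preflight activation: move the current binary aside, swap in
/// the staged one, and only drop the backup once the swap succeeded.
/// `apply_staged` is a hypothetical name for this sketch.
fn apply_staged(target: &Path) -> io::Result<bool> {
    let staged = target.with_extension("next");
    let backup = target.with_extension("bak");
    if !staged.exists() {
        return Ok(false); // nothing staged, nothing to do
    }
    // Move the current binary aside first so we can roll back on failure.
    if target.exists() {
        fs::rename(target, &backup)?;
    }
    // Activate the staged binary; restore the backup if the rename fails.
    if let Err(e) = fs::rename(&staged, target) {
        let _ = fs::rename(&backup, target); // best-effort rollback
        return Err(e);
    }
    // Only remove the backup after the new binary is in place.
    let _ = fs::remove_file(&backup);
    Ok(true)
}
```

On the same filesystem, `fs::rename` replaces the destination atomically on POSIX systems, which is what makes the swap-then-delete ordering safe to interrupt.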
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 5
🧹 Nitpick comments (4)
src/core/jsonrpc.rs (1)
421-423: Consider adding panic/error logging for the background task.

The spawned task is fire-and-forget with no error handling. If maybe_background_check() panics, it will silently terminate. Consider logging any unexpected termination for observability:

♻️ Optional: Add error/panic logging

```diff
 tokio::spawn(async {
-    crate::openhuman::update::rpc::maybe_background_check().await;
+    if let Err(e) = std::panic::AssertUnwindSafe(async {
+        crate::openhuman::update::rpc::maybe_background_check().await;
+    })
+    .catch_unwind()
+    .await
+    {
+        log::error!("[update] background check panicked: {:?}", e);
+    }
 });
```

Alternatively, if maybe_background_check already handles its errors internally and logs them, this may be acceptable as-is. As per coding guidelines, substantial debug logging should be added on new/changed flows.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/core/jsonrpc.rs` around lines 421-423: the fire-and-forget tokio::spawn calling crate::openhuman::update::rpc::maybe_background_check() should be made resilient by catching panics/errors and logging unexpected termination; wrap the spawned future so it awaits maybe_background_check(), uses catch_unwind (or handle Result if maybe_background_check returns one) and logs any Err/panic via the project logger (e.g., log::error! or crate logger) with context like "maybe_background_check background task failed". Ensure you still spawn the task but add this error/panic handling around maybe_background_check() so failures are observable.

app/src-tauri/src/core_process.rs (1)
202-239: Code duplication acknowledged but consider adding a sync reminder.

The inline comment correctly documents that this function mirrors store::apply_staged_update_for_path. However, since the Tauri crate cannot depend on openhuman_core, any future changes to the store.rs version won't automatically propagate here. Consider adding a more explicit cross-reference to ensure future maintainers sync both implementations:

📝 Suggested comment enhancement

```diff
 /// Activate a staged `.next` sidecar binary before spawning the child process.
 /// Mirrors `openhuman_core::openhuman::update::store::apply_staged_update_for_path`
-/// — kept inline because the Tauri crate does not depend on `openhuman_core`.
+/// — kept inline because the Tauri crate does not depend on `openhuman_core`.
+/// SYNC WARNING: If you modify the logic here, also update `src/openhuman/update/store.rs`.
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@app/src-tauri/src/core_process.rs` around lines 202-239: update the inline comment above apply_staged_sidecar_update to include an explicit cross-reference and sync reminder to the canonical implementation (openhuman_core::openhuman::update::store::apply_staged_update_for_path), e.g. note that this is a duplicate kept because Tauri cannot depend on openhuman_core and must be manually kept in sync, include the symbol name and file (store::apply_staged_update_for_path) and a short checklist/TODO with a pointer to update both implementations whenever one changes, and optionally reference an issue or PR template to track such syncs.

app/src/utils/tauriCommands.ts (1)
1039-1087: Mirror the existing [memory] logging for the new update wrappers.

These helpers are now the frontend entrypoints for the updater, but they don't emit any app-side trace when a call starts, succeeds, or fails. A tiny [update] wrapper here, similar to the pattern starting at Line 191, would make it much easier to tell whether a field failure came from the UI call site or the Rust core. As per coding guidelines, "Add substantial debug logging on new/changed flows using namespaced debug logs in React/app code".

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@app/src/utils/tauriCommands.ts` around lines 1039-1087: add namespaced "[update]" debug logs to each frontend updater wrapper (openhumanUpdateStatus, openhumanUpdateSetPolicy, openhumanUpdateCheck, openhumanUpdateApply, openhumanUpdateDismiss) mirroring the existing "[memory]" logging pattern: log a trace when the call starts (include method and params), log on success with the response, and catch/log errors with the error object before rethrowing; place logs around the callCoreRpc invocation so you emit start/success/failure traces for each wrapper.

src/openhuman/update/ops.rs (1)
153-280: Split the RPC entrypoints out of ops.rs.

update_status, update_set_policy, update_dismiss, update_check, and update_apply are the RPC layer, while this file also contains orchestration helpers like check_for_update() and download_and_stage(). Moving the public handlers to src/openhuman/update/rpc.rs would keep the new update domain aligned with the repo layout before it grows further. As per coding guidelines, "Keep implementation in openhuman::/, controllers in openhuman::/rpc.rs, routes in core_server/".

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/openhuman/update/ops.rs` around lines 153-280: split the RPC handlers out of the ops module by moving the public functions update_status, update_set_policy, update_dismiss, update_check, and update_apply into a new rpc.rs module (openhuman::update::rpc) and leaving orchestration helpers like check_for_update and download_and_stage in ops.rs; keep the function signatures unchanged, add the new module to the update mod declarations, update any use/import paths that referenced the moved functions to point to the rpc module, and ensure ops.rs exposes any helper functions needed by rpc.rs (or make them pub(crate)) so compilation and callers continue to work.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/release.yml:
- Around line 744-748: The publish-release job's requiredPatterns array still
includes the Windows core regex openhuman-core_.*_x86_64-pc-windows-msvc\.exe$
while the Windows build matrix entry is disabled, causing "Missing required
installer assets" failures; fix by either re-enabling the Windows matrix entry
in the workflow matrix that generates core artifacts or remove/comment the
Windows pattern from the requiredPatterns array so the publish-release
validation no longer expects the Windows installer.
In `@src/openhuman/update/ops.rs`:
- Around line 217-223: The load-or-init -> mutate -> save sequences (e.g., in
the block using Config::load_or_init(), setting config.update.mode /
config.update.check_interval_hours, then config.save() and calling
update_status()) must be serialized to avoid last-writer-wins with
maybe_background_check(); wrap these update paths with a global async mutex or
route all mutations through a single serialized update worker (e.g., a
module-level tokio::sync::Mutex<()>/Mutex<Config> or a channel to an updater
task) so only one caller can load/mutate/save at a time; apply the same
protection to the other affected blocks (lines mentioned: 226-231, 234-248,
258-267, 282-347) and ensure update_status() or any post-save notifications run
after the lock is released or from the serialized worker to preserve order.
- Around line 119-123: The current handling in update_apply() treats an ETag 304
(resolved.not_modified) as Ok(None), which loses the persisted asset metadata
and breaks check-now/apply-later; instead, when resolved.not_modified is true
(and likewise at the similar branch around where lines 258-261 return Ok(None)),
fetch the persisted asset metadata from config.update.last_seen_* (or equivalent
fields), compare its version against CARGO_PKG_VERSION, and if the persisted
asset is newer return Ok(Some(persisted_asset_metadata)) and set
config.update.last_result appropriately (e.g., "not_modified_but_newer_cached"),
otherwise return Ok(None) only when the cached/persisted metadata is not newer
than the running CARGO_PKG_VERSION; update both the resolved.not_modified branch
and the other branch (lines ~258-261) accordingly so the cached asset is reused
when appropriate.
- Around line 30-41: The download_asset function currently builds a reqwest
client without timeouts; replace the reqwest::Client::builder() usage in
download_asset with the existing build_runtime_proxy_client_with_timeouts(...)
helper to configure explicit timeouts (e.g., total 30s, connect 10s) and
preserve the existing headers/behavior; do the same in fetch_latest_release() in
resolver.rs so API calls use a client created by
build_runtime_proxy_client_with_timeouts with appropriate timeouts (suggested
30s total, 10s connect) to prevent hanging update_apply()/auto-update tasks.
In `@src/openhuman/update/store.rs`:
- Around line 55-69: The current swap flow removes the backup too eagerly which
breaks resumability; change the logic in the activation path that uses
backup_binary_path, target_bin and staged so you do not remove the .bak before
the new binary is successfully moved: if backup.exists() and target_bin is
missing and staged.exists(), treat this as an interrupted activation and either
restore the backup to target_bin or finish the rename from staged to target_bin
before deleting the backup; otherwise, when performing the normal swap, move
target_bin to backup only as part of an atomic sequence where failure rolls back
(i.e., attempt std::fs::rename(staged, target_bin) first or ensure you only
remove the .bak after staged -> target_bin succeeds). Also add a regression test
that simulates the interrupted state (.bak + .next present, target missing) and
verifies activation resumes correctly.
---
Nitpick comments:
In `@app/src-tauri/src/core_process.rs`:
- Around line 202-239: Update the inline comment above
apply_staged_sidecar_update to include an explicit cross-reference and sync
reminder to the canonical implementation
(openhuman_core::openhuman::update::store::apply_staged_update_for_path), e.g.
note that this is a duplicate kept because Tauri cannot depend on openhuman_core
and must be manually kept in sync, include the symbol name and file
(store::apply_staged_update_for_path) and a short checklist / TODO with a
pointer to update both implementations whenever one changes and optionally
reference an issue or PR template to track such syncs.
In `@app/src/utils/tauriCommands.ts`:
- Around line 1039-1087: Add namespaced "[update]" debug logs to each frontend
updater wrapper (openhumanUpdateStatus, openhumanUpdateSetPolicy,
openhumanUpdateCheck, openhumanUpdateApply, openhumanUpdateDismiss) mirroring
the existing "[memory]" logging pattern: log a trace when the call starts
(include method and params), log on success with the response, and catch/log
errors with the error object before rethrowing; place logs around the
callCoreRpc invocation so you emit start/success/failure traces for each
wrapper.
In `@src/core/jsonrpc.rs`:
- Around line 421-423: The fire-and-forget tokio::spawn calling
crate::openhuman::update::rpc::maybe_background_check() should be made resilient
by catching panics/errors and logging unexpected termination; wrap the spawned
future so it awaits maybe_background_check(), uses catch_unwind (or handle
Result if maybe_background_check returns one) and logs any Err/panic via the
project logger (e.g., log::error! or crate logger) with context like
"maybe_background_check background task failed". Ensure you still spawn the task
but add this error/panic handling around maybe_background_check() so failures
are observable.
In `@src/openhuman/update/ops.rs`:
- Around line 153-280: Split the RPC handlers out of the ops module by moving
the public functions update_status, update_set_policy, update_dismiss,
update_check, and update_apply into a new rpc.rs module (openhuman::update::rpc)
and leaving orchestration helpers like check_for_update and download_and_stage
in ops.rs; keep the function signatures unchanged, add the new module to the
update mod declarations, update any use/import paths that referenced the moved
functions to point to the rpc module, and ensure ops.rs exposes any helper
functions needed by rpc.rs (or make them pub(crate)) so compilation and callers
continue to work.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2827c017-7ac3-4d3b-a3e6-62518fb047f5
📒 Files selected for processing (19)
.github/workflows/release.yml
app/src-tauri/src/core_process.rs
app/src/utils/tauriCommands.ts
docs/src-tauri/README.md
src/core/all.rs
src/core/jsonrpc.rs
src/lib.rs
src/openhuman/config/mod.rs
src/openhuman/config/schema/mod.rs
src/openhuman/config/schema/types.rs
src/openhuman/config/schema/update.rs
src/openhuman/mod.rs
src/openhuman/update/mod.rs
src/openhuman/update/ops.rs
src/openhuman/update/resolver.rs
src/openhuman/update/schemas.rs
src/openhuman/update/store.rs
src/openhuman/update/types.rs
tests/json_rpc_e2e.rs
```rust
async fn download_asset(url: &str) -> Result<Vec<u8>, String> {
    log::debug!("[update] downloading asset from {url}");
    let client = reqwest::Client::builder()
        .build()
        .map_err(|e| format!("failed to build http client: {e}"))?;

    let response = client
        .get(url)
        .header("User-Agent", "openhuman-core-updater")
        .send()
        .await
        .map_err(|e| format!("failed to download update asset: {e}"))?;
```
🧩 Analysis chain
🏁 Script executed:
cd src && find . -name "*.rs" -path "*/openhuman/*" | head -20

Repository: tinyhumansai/openhuman
Length of output: 727
🏁 Script executed:
cat -n src/openhuman/update/ops.rs | head -60

Repository: tinyhumansai/openhuman
Length of output: 2351
🏁 Script executed:
# Check if download_asset is called from other places and if timeouts are set elsewhere
rg "download_asset|reqwest::Client|timeout" src/openhuman/update/ -A 2 -B 2

Repository: tinyhumansai/openhuman
Length of output: 1296
🏁 Script executed:
# Check reqwest usage patterns in the entire codebase
rg "reqwest::Client" src/ --type rust -B 2 -A 5 | head -100

Repository: tinyhumansai/openhuman
Length of output: 7133
🏁 Script executed:
# Check if there's a build_runtime_proxy_client function that might provide timeout handling
rg "build_runtime_proxy_client" src/ -B 2 -A 10

Repository: tinyhumansai/openhuman
Length of output: 18822
🏁 Script executed:
# Check if there's any timeout configuration at the application level
rg "timeout|Duration" src/openhuman/config/ -A 2 -B 2

Repository: tinyhumansai/openhuman
Length of output: 14698
🏁 Script executed:
# Check the update flow to understand when download_asset is called
rg "download_and_stage|update_apply" src/openhuman/update/ops.rs -B 3 -A 8 | head -80

Repository: tinyhumansai/openhuman
Length of output: 1575
🏁 Script executed:
# Verify the resolver.rs pattern and its usage
cat -n src/openhuman/update/resolver.rs | head -100

Repository: tinyhumansai/openhuman
Length of output: 3827
🏁 Script executed:
# Check if resolver client is also used in critical paths
rg "fetch_latest_release" src/openhuman/update/ -B 2 -A 5

Repository: tinyhumansai/openhuman
Length of output: 1756
Add explicit HTTP timeouts to the update clients.
Both download_asset() and the client in fetch_latest_release() (resolver.rs) lack timeout configuration. A stalled network transfer can hang update_apply() and the auto-update background task indefinitely. Use the build_runtime_proxy_client_with_timeouts() helper for consistency with the rest of the codebase:
Suggested change

```diff
 async fn download_asset(url: &str) -> Result<Vec<u8>, String> {
     log::debug!("[update] downloading asset from {url}");
-    let client = reqwest::Client::builder()
-        .build()
-        .map_err(|e| format!("failed to build http client: {e}"))?;
+    let client = crate::openhuman::config::build_runtime_proxy_client_with_timeouts(
+        "update.asset",
+        300, // 5-minute timeout for binary download
+        10,  // 10-second connect timeout
+    );
```

Apply the same pattern to fetch_latest_release() in resolver.rs with appropriate timeouts (e.g., 30s total, 10s connect for API calls).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/update/ops.rs` around lines 30 - 41, The download_asset
function currently builds a reqwest client without timeouts; replace the
reqwest::Client::builder() usage in download_asset with the existing
build_runtime_proxy_client_with_timeouts(...) helper to configure explicit
timeouts (e.g., total 30s, connect 10s) and preserve the existing
headers/behavior; do the same in fetch_latest_release() in resolver.rs so API
calls use a client created by build_runtime_proxy_client_with_timeouts with
appropriate timeouts (suggested 30s total, 10s connect) to prevent hanging
update_apply()/auto-update tasks.
This binary needs to be manually merged in the next release. Do not merge this now.
Windows core binary is not yet built in CI, so the required-asset validation was failing. Remove the pattern until Windows builds land. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace bare reqwest::Client::builder() in fetch_latest_release() with build_runtime_proxy_client_with_timeouts to respect proxy settings and enforce 30s total / 10s connect timeouts for API calls. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- update_apply() now falls back to cached asset metadata when check_for_update returns not_modified (ETag 304), instead of erroring with "no newer update is available"
- Add UPDATE_CONFIG_MUTEX (tokio::sync::Mutex) to serialize all config load→mutate→save sequences, preventing concurrent mutation races between background check and user-triggered RPCs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
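The serialization this commit describes can be illustrated with one module-level mutex held across the whole load→mutate→save sequence. A simplified synchronous sketch with a stand-in config type (the PR uses tokio::sync::Mutex around the real Config; names and fields here are illustrative):

```rust
use std::sync::Mutex;

// Stand-in for the real UpdateConfig; fields are illustrative.
#[derive(Default, Clone)]
struct UpdateConfig {
    mode: String,
    check_interval_hours: u64,
}

// One lock guards every load -> mutate -> save sequence, so a background
// check and a user-triggered RPC cannot interleave and produce a
// last-writer-wins clobber.
static UPDATE_CONFIG_MUTEX: Mutex<Option<UpdateConfig>> = Mutex::new(None);

fn set_policy(mode: &str, interval_hours: u64) -> UpdateConfig {
    let mut guard = UPDATE_CONFIG_MUTEX.lock().unwrap();
    let cfg = guard.get_or_insert_with(UpdateConfig::default); // "load_or_init"
    cfg.mode = mode.to_string(); // mutate under the lock
    cfg.check_interval_hours = interval_hours;
    cfg.clone() // "save" would persist before the guard drops
}
```

The key property is that the lock spans the read and the write together; locking only around the save would still allow two callers to read the same stale config and overwrite each other.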
Detect and recover from two interrupted-swap states on preflight:
1. Target missing + backup exists (no staged) → restore from backup
2. Target missing + both staged and backup exist → activate staged

Adds two unit tests covering both recovery paths.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
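The two recovery states enumerated above reduce to a filesystem-state check at preflight. An illustrative version under the `.next`/`.bak` naming from this PR (`recover_interrupted` is a hypothetical name; the real logic lives in store.rs):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Illustrative preflight recovery for the two interrupted-swap states:
/// a crash between "move old binary to .bak" and ".next -> target" leaves
/// the target missing, with either the staged binary or only the backup left.
fn recover_interrupted(target: &Path) -> io::Result<&'static str> {
    let staged = target.with_extension("next");
    let backup = target.with_extension("bak");
    if target.exists() {
        return Ok("healthy"); // nothing to recover
    }
    if staged.exists() {
        // State 2: activation never finished; resume it.
        fs::rename(&staged, target)?;
        let _ = fs::remove_file(&backup);
        Ok("activated-staged")
    } else if backup.exists() {
        // State 1: no staged binary left; restore the backup.
        fs::rename(&backup, target)?;
        Ok("restored-backup")
    } else {
        Ok("target-missing")
    }
}
```

Note the ordering: the staged binary wins over the backup when both exist, because an interrupted activation means the staged binary was the intended end state.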
WebSearchConfig and WebhookConfig were duplicated in the re-export block after rebasing on upstream changes. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/release.yml (1)
738-747: ⚠️ Potential issue | 🔴 Critical: Remove the Windows installer check until the Windows job is back.
This array is being updated for the new core assets, but Line 743 still requires a Windows installer even though the Windows matrix entry is commented out above.
publish-release will stay red even if the new openhuman-core_* uploads succeed.

Suggested fix

```diff
 const requiredPatterns = [
   /OpenHuman_.*_aarch64\.dmg$/,
   /OpenHuman_.*_x64\.dmg$/,
   /OpenHuman_.*_amd64\.AppImage$/,
   /OpenHuman_.*_amd64\.deb$/,
-  /(OpenHuman_.*_x64-setup\.exe$|OpenHuman_.*_x64.*\.msi$)/,
+  // Re-enable once the windows-latest matrix target is restored.
+  // /(OpenHuman_.*_x64-setup\.exe$|OpenHuman_.*_x64.*\.msi$)/,
   /openhuman-core_.*_aarch64-apple-darwin$/,
   /openhuman-core_.*_x86_64-apple-darwin$/,
   /openhuman-core_.*_x86_64-unknown-linux-gnu$/,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release.yml around lines 738 - 747, The requiredPatterns array currently includes a Windows installer regex (/(OpenHuman_.*_x64-setup\.exe$|OpenHuman_.*_x64.*\.msi$)/) which enforces a Windows asset even though the Windows matrix job is disabled; remove that Windows pattern from the requiredPatterns array (or temporarily comment it out) so publish-release no longer fails when only the openhuman-core_* assets are uploaded, keeping the other patterns intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/release.yml:
- Around line 127-135: When updating the root crate version (the block that
reads coreCargoPath, builds updatedCoreCargo using nextVersion and writes via
fs.writeFileSync), also regenerate and stage Cargo.lock and include it in the
same commit: after writing the updated Cargo.toml run a lockfile refresh (e.g.,
cargo generate-lockfile or cargo update -p openhuman at the repo root) to update
Cargo.lock, then add/stage Cargo.lock before committing; apply the same change
for the second analogous block (the other version-bump at lines similar to
155-163) so every bump of the root package version is committed together with
the refreshed Cargo.lock.
In `@src/openhuman/update/resolver.rs`:
- Around line 90-98: The function expected_asset_name uses cfg(windows) so the
.exe suffix is chosen by build host instead of the resolved target; change
expected_asset_name to call target_triple() and test the returned triple string
(e.g., triple.contains("windows")) to decide whether to append ".exe", then
format and return the asset name accordingly (update the function
expected_asset_name to perform a runtime check on target_triple() and append
".exe" when the triple indicates Windows).
- Around line 194-197: The code currently falls back from
asset.browser_download_url to asset.url when building download_url, which can
cause download_asset() in ops.rs to fetch the API JSON endpoint instead of the
binary; change the logic in the resolver where download_url is set (the variable
named download_url built from asset) to require browser_download_url only and
return an error if browser_download_url is None (remove the or(asset.url)
fallback), updating the error message to say the asset is missing a
browser_download_url so download_asset() receives a proper direct-download URL.
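The expected_asset_name fix requested above, checking the resolved triple at runtime instead of compile-time cfg(windows), could look like the sketch below. The `openhuman-core_<version>_<triple>[.exe]` naming is inferred from the workflow's required-asset patterns, not taken verbatim from the PR:

```rust
// Decide the .exe suffix from the resolved target triple rather than
// cfg(windows), so a resolver running on one host can still name assets
// correctly for another target. Naming mirrors the workflow's
// `openhuman-core_.*_<triple>` required-asset regexes.
fn expected_asset_name(version: &str, triple: &str) -> String {
    let suffix = if triple.contains("windows") { ".exe" } else { "" };
    format!("openhuman-core_{version}_{triple}{suffix}")
}
```

In the PR, `triple` would come from the existing `target_triple()` helper; here it is just a parameter so the sketch stands alone.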
---
Outside diff comments:
In @.github/workflows/release.yml:
- Around line 738-747: The requiredPatterns array currently includes a Windows
installer regex (/(OpenHuman_.*_x64-setup\.exe$|OpenHuman_.*_x64.*\.msi$)/)
which enforces a Windows asset even though the Windows matrix job is disabled;
remove that Windows pattern from the requiredPatterns array (or temporarily
comment it out) so publish-release no longer fails when only the
openhuman-core_* assets are uploaded, keeping the other patterns intact.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 49712c69-bb07-45d5-871c-ba04fdb28902
📒 Files selected for processing (5)
.github/workflows/release.yml
src/openhuman/config/mod.rs
src/openhuman/update/ops.rs
src/openhuman/update/resolver.rs
src/openhuman/update/store.rs
🚧 Files skipped from review as they are similar to previous changes (3)
- src/openhuman/config/mod.rs
- src/openhuman/update/store.rs
- src/openhuman/update/ops.rs
```rust
let download_url = asset
    .browser_download_url
    .or(asset.url)
    .ok_or_else(|| format!("asset '{expected_name}' missing download URL"))?;
```
🧩 Analysis chain
🌐 Web query:
GitHub REST release asset object: what is the difference between browser_download_url and url, and does GET /repos/{owner}/{repo}/releases/assets/{asset_id} require Accept: application/octet-stream to return the binary rather than asset metadata?
💡 Result:
In the GitHub REST API release asset object, url is the API endpoint URL to retrieve metadata about the asset (e.g., https://api.github.com/repos/OWNER/REPO/releases/assets/ASSET_ID), while browser_download_url is the direct download URL for the asset's binary content, suitable for browsers or simple HTTP clients without authentication. The GET /repos/{owner}/{repo}/releases/assets/{asset_id} endpoint returns asset metadata (JSON) by default with Accept: application/vnd.github+json. To download the binary content instead, set Accept: application/octet-stream; the API will then either stream the binary directly (200 OK) or redirect to it (302 Found). Clients should handle both responses, often using -L in curl for redirects.
Citations:
- 1: https://docs.github.com/en/rest/releases/assets
- 2: https://docs.github.com/rest/releases/releases
- 3: https://docs.github.com/pt/rest/releases/assets
- 4: https://docs.github.com/enterprise-cloud@latest/rest/releases/assets
🏁 Script executed:
# First, let's understand the context around lines 194-197
cat -n src/openhuman/update/resolver.rs | head -220 | tail -40

Repository: tinyhumansai/openhuman
Length of output: 1313
🏁 Script executed:
# Now let's see how download_url is used after this assignment
rg "download_url" src/openhuman/update/resolver.rs -A 5

Repository: tinyhumansai/openhuman
Length of output: 594
🏁 Script executed:
# Search for where the download actually happens to see if Accept headers are set
rg -r "Accept.*octet-stream|accept.*octet|application/octet-stream" src/openhuman --type rust

Repository: tinyhumansai/openhuman
Length of output: 365
🏁 Script executed:
# Also check for HTTP client setup and how assets are downloaded
rg "asset.*download|download.*asset|reqwest|http.*get" src/openhuman/update/ --type rust -i

Repository: tinyhumansai/openhuman
Length of output: 1359
🏁 Script executed:
# Let's examine the download_asset function implementation
cat -n src/openhuman/update/ops.rs | grep -A 20 "async fn download_asset"

Repository: tinyhumansai/openhuman
Length of output: 941
🏁 Script executed:
# Also check the full download_and_stage function
cat -n src/openhuman/update/ops.rs | grep -A 30 "async fn download_and_stage"

Repository: tinyhumansai/openhuman
Length of output: 1731
🏁 Script executed:
# Let's also check if there's any Accept header config in the reqwest client setup
rg "reqwest.*Client|ClientBuilder|headers|header" src/openhuman/update/ --type rust -B 2 -A 2

Repository: tinyhumansai/openhuman
Length of output: 1265
Remove the fallback to asset.url when browser_download_url is absent.
The asset.url field is the GitHub API endpoint (e.g., https://api.github.com/repos/OWNER/REPO/releases/assets/ASSET_ID), while browser_download_url is the direct binary download URL. When download_asset() in ops.rs requests the asset.url endpoint without the Accept: application/octet-stream header, GitHub returns JSON metadata instead of the binary, causing digest verification to fail or staging invalid content.
Suggested fix
let download_url = asset
.browser_download_url
- .or(asset.url)
- .ok_or_else(|| format!("asset '{expected_name}' missing download URL"))?;
+ .ok_or_else(|| format!("asset '{expected_name}' missing browser_download_url"))?;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/update/resolver.rs` around lines 194 - 197, The code currently
falls back from asset.browser_download_url to asset.url when building
download_url, which can cause download_asset() in ops.rs to fetch the API JSON
endpoint instead of the binary; change the logic in the resolver where
download_url is set (the variable named download_url built from asset) to
require browser_download_url only and return an error if browser_download_url is
None (remove the or(asset.url) fallback), updating the error message to say the
asset is missing a browser_download_url so download_asset() receives a proper
direct-download URL.
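For illustration only, the required-URL logic after the suggested fix can be sketched outside the repo. ReleaseAsset and resolve_download_url here are stand-ins for the resolver's actual types, not the real implementation:

```rust
// Hypothetical mirror of the resolver's URL selection after the fix:
// require browser_download_url and never fall back to the API asset URL,
// since the API endpoint returns JSON metadata unless the request carries
// an Accept: application/octet-stream header.
#[allow(dead_code)]
struct ReleaseAsset {
    browser_download_url: Option<String>,
    url: Option<String>, // GitHub API endpoint for the asset; unused after the fix
}

fn resolve_download_url(asset: &ReleaseAsset, expected_name: &str) -> Result<String, String> {
    asset
        .browser_download_url
        .clone()
        .ok_or_else(|| format!("asset '{expected_name}' missing browser_download_url"))
}
```

The design point is that failing loudly on a missing browser_download_url is safer than silently staging JSON metadata that will only be caught (at best) by digest verification.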
Accept upstream additions (ArchetypeConfig, OrchestratorConfig) while keeping update module types (UpdateConfig, UpdateMode) and core asset patterns in release workflow. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 2
♻️ Duplicate comments (1)
.github/workflows/release.yml (1)
127-135: ⚠️ Potential issue | 🟠 Major
Stage Cargo.lock alongside the root Cargo.toml bump.
Line 127 updates root Cargo.toml and Line 161 commits it, but Cargo.lock is still not refreshed/staged in this flow. That can leave the release tag inconsistent for locked dependency workflows.

#!/bin/bash
set -euo pipefail
python - <<'PY'
from pathlib import Path
import re
import tomllib

cargo_toml = Path("Cargo.toml")
cargo_lock = Path("Cargo.lock")
if not cargo_toml.exists():
    print("Cargo.toml not found")
    raise SystemExit(1)
root = tomllib.loads(cargo_toml.read_text())
root_ver = (root.get("package") or {}).get("version")
print(f"root Cargo.toml package.version: {root_ver!r}")
if not cargo_lock.exists():
    print("Cargo.lock not found")
    raise SystemExit(0)
text = cargo_lock.read_text()
m = re.search(r'\[\[package\]\]\s+name = "openhuman"\s+version = "([^"]+)"', text, re.S)
lock_ver = m.group(1) if m else None
print(f'Cargo.lock "openhuman" version: {lock_ver!r}')
if root_ver and lock_ver and root_ver != lock_ver:
    print("MISMATCH: Cargo.lock appears stale for root package version.")
else:
    print("OK: versions match (or lock entry missing).")
PY

Also applies to: 161-161
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release.yml around lines 127 - 135, The root Cargo.toml bump code updates coreCargo (coreCargoPath) and writes updatedCoreCargo but does not refresh or stage Cargo.lock; after writing updatedCoreCargo, regenerate the lockfile (e.g., run a command like cargo generate-lockfile or cargo update in the repo workspace) or programmatically update Cargo.lock so its openhuman package version matches, then ensure you git add Cargo.lock (stage it) before the existing commit step so the commit that includes the bumped Cargo.toml also includes the updated Cargo.lock.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/openhuman/config/schema/types.rs`:
- Around line 118-119: The config currently uses serde(default) so missing
[update] blocks deserialize to UpdateConfig::default() which sets
UpdateMode::Auto; change this so legacy/missing configs default to a
non-autoupdate mode (e.g., UpdateMode::Prompt or UpdateMode::Manual) instead of
Auto. Concretely: replace the blanket #[serde(default)] usage for the update
field with an explicit default function (e.g., #[serde(default =
"UpdateConfig::missing_config_default")]) or change UpdateConfig::default() to
return UpdateConfig with mode = UpdateMode::Prompt/Manual; ensure
UpdateMode::Auto remains only when explicitly configured.
- Around line 118-119: The Config.update field (type UpdateConfig) is not
processed by apply_env_overrides(), so env vars like OPENHUMAN_UPDATE_* are
ignored; update the apply_env_overrides() function in
src/openhuman/config/schema/load.rs to detect and apply OPENHUMAN_UPDATE_*
environment variables to Config.update (map each supported UpdateConfig field to
a corresponding OPENHUMAN_UPDATE_<FIELD> env var, parse types as needed, and set
the values on the Config.update instance), following the same pattern used for
existing root-level overrides (api_key, model, workspace, temperature,
web_search, storage, proxy, learning) so UpdateConfig fields are overridden from
env vars.
---
Duplicate comments:
In @.github/workflows/release.yml:
- Around line 127-135: The root Cargo.toml bump code updates coreCargo
(coreCargoPath) and writes updatedCoreCargo but does not refresh or stage
Cargo.lock; after writing updatedCoreCargo, regenerate the lockfile (e.g., run a
command like cargo generate-lockfile or cargo update in the repo workspace) or
programmatically update Cargo.lock so its openhuman package version matches,
then ensure you git add Cargo.lock (stage it) before the existing commit step so
the commit that includes the bumped Cargo.toml also includes the updated
Cargo.lock.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f59d5714-166e-4207-bd20-033011b91332
📒 Files selected for processing (4)
- .github/workflows/release.yml
- src/openhuman/config/mod.rs
- src/openhuman/config/schema/mod.rs
- src/openhuman/config/schema/types.rs
✅ Files skipped from review due to trivial changes (2)
- src/openhuman/config/schema/mod.rs
- src/openhuman/config/mod.rs
#[serde(default)]
pub update: UpdateConfig,
Implicit Auto-update default for legacy configs is risky
At Line 118 and Line 168, #[serde(default)] + UpdateConfig::default() means existing config files without an [update] block will inherit UpdateMode::Auto (per src/openhuman/config/schema/update.rs), enabling unattended update behavior without explicit operator opt-in. Please default implicit/missing config to prompt or manual, and reserve auto for explicit configuration.
Suggested adjustment
+fn default_update_config_safe() -> UpdateConfig {
+ UpdateConfig {
+ mode: UpdateMode::Prompt, // or UpdateMode::Manual
+ ..UpdateConfig::default()
+ }
+}
+
pub struct Config {
@@
- #[serde(default)]
+ #[serde(default = "default_update_config_safe")]
pub update: UpdateConfig,
@@
- update: UpdateConfig::default(),
+ update: default_update_config_safe(),

Also applies to: 168-168
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/config/schema/types.rs` around lines 118 - 119, The config
currently uses serde(default) so missing [update] blocks deserialize to
UpdateConfig::default() which sets UpdateMode::Auto; change this so
legacy/missing configs default to a non-autoupdate mode (e.g.,
UpdateMode::Prompt or UpdateMode::Manual) instead of Auto. Concretely: replace
the blanket #[serde(default)] usage for the update field with an explicit
default function (e.g., #[serde(default =
"UpdateConfig::missing_config_default")]) or change UpdateConfig::default() to
return UpdateConfig with mode = UpdateMode::Prompt/Manual; ensure
UpdateMode::Auto remains only when explicitly configured.
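As a minimal sketch of the safe-default pattern (these are simplified stand-ins, not the repo's actual UpdateConfig/UpdateMode definitions):

```rust
// A dedicated default function keeps Auto strictly opt-in: a config file
// with no [update] block falls back to Prompt instead of unattended updates.
#[derive(Debug, PartialEq, Clone, Copy)]
enum UpdateMode {
    Auto,
    Prompt,
    Manual,
}

// Intended for use as #[serde(default = "default_update_mode_safe")] on the
// config field, so deserialization of legacy configs never yields Auto.
fn default_update_mode_safe() -> UpdateMode {
    UpdateMode::Prompt
}

fn effective_mode(configured: Option<UpdateMode>) -> UpdateMode {
    configured.unwrap_or_else(default_update_mode_safe)
}
```

The key property: Auto can only appear when an operator wrote it explicitly.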
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# 1) Locate the schema loader and confirm it's the file handling env overrides
fd -p 'load.rs$' src/openhuman/config/schema
# 2) Inspect update-related env mapping/override logic
rg -n -C4 'update|UPDATE|env|override' src/openhuman/config/schema/load.rs
# 3) Cross-check the new field exists in Config root
rg -n -C2 'pub update:\s*UpdateConfig' src/openhuman/config/schema/types.rs
Length of output: 16852
Add env var override support for Config.update field in apply_env_overrides()
The new update: UpdateConfig field added at line 119 is missing environment variable override handling in src/openhuman/config/schema/load.rs. The apply_env_overrides() method handles configuration for other root-level fields (api_key, model, workspace, temperature, web_search, storage, proxy, learning, etc.) but does not include any handling for update.* environment variables.
Add corresponding env var override logic (e.g., OPENHUMAN_UPDATE_*) to the apply_env_overrides() method to comply with the coding guideline: "Rust config uses TOML-based Config struct in src/openhuman/config/schema/types.rs with env var overrides applied in src/openhuman/config/schema/load.rs".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/config/schema/types.rs` around lines 118 - 119, The
Config.update field (type UpdateConfig) is not processed by
apply_env_overrides(), so env vars like OPENHUMAN_UPDATE_* are ignored; update
the apply_env_overrides() function in src/openhuman/config/schema/load.rs to
detect and apply OPENHUMAN_UPDATE_* environment variables to Config.update (map
each supported UpdateConfig field to a corresponding OPENHUMAN_UPDATE_<FIELD>
env var, parse types as needed, and set the values on the Config.update
instance), following the same pattern used for existing root-level overrides
(api_key, model, workspace, temperature, web_search, storage, proxy, learning)
so UpdateConfig fields are overridden from env vars.
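A rough sketch of what the override hook could look like; the struct, field set, and env var names (OPENHUMAN_UPDATE_MODE, OPENHUMAN_UPDATE_REPO) are assumptions for illustration, not the repo's actual schema:

```rust
use std::env;

// Simplified stand-in for the [update] section of the config.
struct UpdateConfigSketch {
    mode: String,
    repo: String,
}

// Mirrors the pattern used for other root-level overrides: each supported
// field gets a corresponding OPENHUMAN_UPDATE_<FIELD> env var that, when set,
// replaces the value loaded from TOML.
fn apply_update_env_overrides(cfg: &mut UpdateConfigSketch) {
    if let Ok(v) = env::var("OPENHUMAN_UPDATE_MODE") {
        cfg.mode = v;
    }
    if let Ok(v) = env::var("OPENHUMAN_UPDATE_REPO") {
        cfg.repo = v;
    }
}
```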
Add cargo generate-lockfile step and stage Cargo.lock alongside the version-bumped Cargo.toml files so release tags have consistent lockfile. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Existing configs without an [update] block were silently getting Auto mode, enabling unattended downloads without explicit opt-in. Default to Prompt so users must acknowledge updates. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace #[cfg(windows)] compile-time check with runtime inspection of the resolved target triple. Fixes incorrect asset name when OPENHUMAN_UPDATE_TARGET overrides to a Windows triple from a non-Windows host. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- check_for_update() now returns the cached UpdateAsset when ETag 304 confirms the release is unchanged but the cached version is newer, fixing the check-now/apply-later flow for all callers
- Simplify update_apply() since the root cause is now handled upstream
- Add Accept: application/octet-stream header to download_asset() so the GitHub API url fallback returns binary instead of JSON metadata

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/openhuman/update/resolver.rs (1)
67-87: Prefer Cargo’s compiled target triple as the fallback.
This rebuilds the triple from target_os + ARCH, which drops ABI/vendor details like musl vs gnu. If the supported artifact matrix ever includes a target outside these hard-coded shapes, asset resolution can drift from the actual release name. option_env!("TARGET") keeps the default aligned with the compiled binary while still letting OPENHUMAN_UPDATE_TARGET override at runtime.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/update/resolver.rs` around lines 67 - 87, In target_triple(), replace the runtime std::env::var("TARGET") lookup with the compile-time option_env!("TARGET") so the fallback uses Cargo's compiled target triple (option_env!("TARGET")) before falling back to rebuilding from std::env::consts; specifically, update the chain that currently calls std::env::var("TARGET") to consult option_env!("TARGET") (converting the Option<&'static str> to a String) and keep the existing reconstruction logic in the existing unwrap_or_else block only as the last resort.
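The suggested precedence can be sketched as follows. Note one assumption: Cargo only sets TARGET for build scripts, so option_env!("TARGET") is None in a normal crate build unless build.rs re-exports it (e.g. via cargo:rustc-env); this sketch mirrors the review's suggestion rather than the repo's actual target_triple():

```rust
// Precedence: runtime override, then compile-time target (if exported to
// rustc via build.rs), then a rebuilt triple as a last resort.
fn target_triple() -> String {
    // 1. Explicit runtime override always wins.
    if let Ok(t) = std::env::var("OPENHUMAN_UPDATE_TARGET") {
        return t;
    }
    // 2. Cargo's compiled triple, if the build exported TARGET to rustc.
    if let Some(t) = option_env!("TARGET") {
        return t.to_string();
    }
    // 3. Last resort: reconstruct from consts. This loses vendor/ABI detail
    //    (e.g. musl vs gnu), which is exactly the drift the review warns about.
    format!("{}-unknown-{}", std::env::consts::ARCH, std::env::consts::OS)
}
```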
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/openhuman/update/ops.rs`:
- Around line 120-163: When resolved.not_modified is true in update_status(),
don’t reuse the cached last_seen_* metadata unless it was recorded for the same
target; compare the current target used by fetch_latest_release (the same
target/resolver logic that produced resolved) against
config.update.last_seen_target (or whatever field holds the last target) and
only build the cached UpdateAsset if they match. Update the conditional that
produces cached (the block that reads config.update.last_seen_version,
last_seen_download_url, last_seen_tag, last_seen_asset_name,
last_seen_digest_sha256, last_seen_release_url) to first verify target equality,
and mirror the same target check in the other affected branch (the second block
around lines 205-230) so update_apply() cannot get a mismatched-target binary.
- Around line 101-107: The current Err(error) arm for the
apply_staged_update_for_path() call should only swallow Windows
lock/share-violation errors; change that branch to detect the Windows
sharing-lock message (e.g., match if the error String contains "sharing
violation" or "used by another process" / the typical Windows text) and return
Ok(false) only in that case, otherwise propagate the original error (return
Err(error)) so backup/rename/activation failures are surfaced; update the Err
arm where apply_staged_update_for_path() is handled accordingly.
---
Nitpick comments:
In `@src/openhuman/update/resolver.rs`:
- Around line 67-87: In target_triple(), replace the runtime
std::env::var("TARGET") lookup with the compile-time option_env!("TARGET") so
the fallback uses Cargo's compiled target triple (option_env!("TARGET")) before
falling back to rebuilding from std::env::consts; specifically, update the chain
that currently calls std::env::var("TARGET") to consult option_env!("TARGET")
(converting the Option<&'static str> to a String) and keep the existing
reconstruction logic in the existing unwrap_or_else block only as the last
resort.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 70edf025-37be-40d6-9a99-77f0a56696d0
📒 Files selected for processing (4)
- .github/workflows/release.yml
- src/openhuman/config/schema/update.rs
- src/openhuman/update/ops.rs
- src/openhuman/update/resolver.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- src/openhuman/config/schema/update.rs
- .github/workflows/release.yml
Err(error) => {
    #[cfg(windows)]
    {
        log::warn!(
            "[update] staged update present but could not be activated yet (will retry): {error}"
        );
        return Ok(false);
Only suppress the Windows lock/share-violation case here.
The PR intent is “retry on Windows lock failures”, but this branch converts every apply_staged_update_for_path() error into Ok(false). In src/openhuman/update/store.rs:43-115, the same String error type is used for backup-restore failures, rename failures, and activation failures, so permission/corruption problems will now be silently retried forever instead of surfaced. Preserve typed I/O detail here so only the sharing-violation path is swallowed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/update/ops.rs` around lines 101 - 107, The current Err(error)
arm for the apply_staged_update_for_path() call should only swallow Windows
lock/share-violation errors; change that branch to detect the Windows
sharing-lock message (e.g., match if the error String contains "sharing
violation" or "used by another process" / the typical Windows text) and return
Ok(false) only in that case, otherwise propagate the original error (return
Err(error)) so backup/rename/activation failures are surfaced; update the Err
arm where apply_staged_update_for_path() is handled accordingly.
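One way to narrow the swallowed errors is a predicate over the stringly-typed store error. The matched substrings below are assumptions about how Windows sharing violations surface in those messages, not verified against the repo:

```rust
// Returns true only for the Windows file-lock case that should be retried;
// every other apply error should be propagated so it surfaces to the caller.
fn is_windows_share_violation(error: &str) -> bool {
    let e = error.to_ascii_lowercase();
    e.contains("sharing violation")
        || e.contains("used by another process")
        || e.contains("os error 32") // ERROR_SHARING_VIOLATION
}
```

The Err arm would then `return Ok(false)` only when this predicate holds, and `return Err(error)` otherwise, so backup/rename/activation failures are no longer retried silently forever.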
let resolved = fetch_latest_release(config.update.last_etag.as_deref()).await?;

config.update.last_check_at = Some(now_rfc3339());
config.update.last_error = None;

if resolved.not_modified {
    log::debug!("[update] release not modified (ETag match), checking cached asset");
    // Reuse persisted asset metadata when ETag 304 confirms release unchanged
    let cached = config
        .update
        .last_seen_version
        .as_deref()
        .and_then(|v| compare_versions(v, current_version).ok())
        .map(|o| o.is_gt())
        .unwrap_or(false)
        .then(|| {
            let url = config
                .update
                .last_seen_download_url
                .clone()
                .unwrap_or_default();
            UpdateAsset {
                version: config.update.last_seen_version.clone().unwrap_or_default(),
                tag: config.update.last_seen_tag.clone().unwrap_or_default(),
                name: config
                    .update
                    .last_seen_asset_name
                    .clone()
                    .unwrap_or_default(),
                download_url: url,
                digest_sha256: config.update.last_seen_digest_sha256.clone(),
                release_url: config
                    .update
                    .last_seen_release_url
                    .clone()
                    .unwrap_or_default(),
            }
        });
    if cached.is_some() {
        config.update.last_result = Some("update_available".to_string());
    } else {
        config.update.last_result = Some("not_modified".to_string());
    }
    return Ok(cached);
Key the cached asset metadata by target, not just by version/ETag.
fetch_latest_release() resolves assets against the current target in src/openhuman/update/resolver.rs:67-98, but the cached last_seen_* fields are reused here without validating that they still belong to that target. If OPENHUMAN_UPDATE_TARGET changes between checks, a 304 Not Modified can resurrect the previous asset metadata and let update_apply() stage the wrong binary. Apply the same target check before rebuilding latest in update_status().
Also applies to: 205-230
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/update/ops.rs` around lines 120 - 163, When
resolved.not_modified is true in update_status(), don’t reuse the cached
last_seen_* metadata unless it was recorded for the same target; compare the
current target used by fetch_latest_release (the same target/resolver logic that
produced resolved) against config.update.last_seen_target (or whatever field
holds the last target) and only build the cached UpdateAsset if they match.
Update the conditional that produces cached (the block that reads
config.update.last_seen_version, last_seen_download_url, last_seen_tag,
last_seen_asset_name, last_seen_digest_sha256, last_seen_release_url) to first
verify target equality, and mirror the same target check in the other affected
branch (the second block around lines 205-230) so update_apply() cannot get a
mismatched-target binary.
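The gate being asked for reduces to a small predicate. A persisted last_seen_target field is an assumption here (the review notes it may need to be added), so this is a sketch of the intended check rather than existing code:

```rust
// Cached last_seen_* metadata may only be reused when it is both newer than
// the running binary and was recorded for the target the resolver would pick
// now; otherwise a 304 Not Modified could resurrect a wrong-target asset.
fn can_reuse_cached_asset(
    last_seen_target: Option<&str>,
    current_target: &str,
    last_seen_version_is_newer: bool,
) -> bool {
    last_seen_version_is_newer && last_seen_target == Some(current_target)
}
```

Both affected branches (the not_modified path in update_status() and the one around lines 205-230) would call this before rebuilding the UpdateAsset from cached fields.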
merging into #372
Summary
- Self-update for the openhuman-core sidecar binary using GitHub Releases as the source of truth
- Update modes: auto (default, downloads + stages silently), prompt (detects but waits for confirmation), manual (check only on explicit request)
- RPC controllers: update_status, update_check, update_apply, update_set_policy, update_dismiss
- Release workflow uploads openhuman-core artifacts per target and validates their presence before publishing

Problem
Solution
- New module src/openhuman/update/ with clean separation: types.rs, ops.rs, resolver.rs, store.rs, schemas.rs
- Staged binary written as .next, atomic rename-swap with .bak rollback on next startup
- Persisted check metadata backs update_status even across restarts
- update_dismiss allows prompt-mode users to suppress repeated prompts for a specific version
- Release workflow: sync the root Cargo.toml version, upload openhuman-core_<version>_<target> assets, and gate on their presence

Submission Checklist
- cargo test covers semver normalization/comparison, staged swap roundtrip, expected asset name generation
- json_rpc_update_check_and_apply_stages_binary in tests/json_rpc_e2e.rs exercises the full flow: status baseline, check (detects update), apply (downloads + verifies + stages), post-apply status, set_policy, and dismiss — all against a mock GitHub release server
- Module docs: //! on mod.rs and update.rs, doc comment on Tauri swap mirror

Impact
- New [update] section in config TOML (fully backward-compatible with #[serde(default)])
- .sha256 files to be added to the release pipeline (documented as follow-up)

Related
- Follow-up: add .sha256 sidecar files to release pipeline for mandatory digest verification

Summary by CodeRabbit
New Features
Documentation
Tests