Sync upstream rust-v0.80.0 #181
base: dev
The problem is that the `tokio` task owns an `Arc` reference to the session, and that task only exits when the broadcast channel is closed. But the channel is never closed unless the session is dropped, so it's basically a snake biting its own tail. The most notable symptom was that none of the `Drop` implementations were triggered (temporary files, shell snapshots, session cleanup, etc.) when closing the session (through a `/new`, for example). The fix is to weaken the `Arc` and upgrade it on the fly.
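The fix can be sketched with std primitives (a plain thread stands in for the tokio task; the `Session` type and names here are illustrative, not the real codex API):

```rust
use std::sync::{Arc, Weak};
use std::thread;
use std::time::Duration;

struct Session {
    id: u32,
}

// The background task holds only a Weak reference, so it no longer keeps
// the session alive on its own.
fn spawn_worker(session: Weak<Session>) -> thread::JoinHandle<()> {
    thread::spawn(move || loop {
        // Upgrade on the fly; exit once the session has actually been dropped.
        match session.upgrade() {
            Some(s) => {
                let _ = s.id; // do work with the session
                thread::sleep(Duration::from_millis(5));
            }
            None => break,
        }
    })
}
```

Because the worker only ever holds the `Arc` transiently inside the loop body, dropping the last external `Arc` lets all `Drop` implementations run and the worker observes `None` on the next upgrade.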
We should not have any `PathBuf` fields in `ConfigToml` or any of the transitive structs it includes; we should use `AbsolutePathBuf` instead so that we do not have to keep track of the file from which `ConfigToml` was loaded in order to resolve relative paths later when the values of `ConfigToml` are used. I only found two instances of this: `experimental_instructions_file` and `experimental_compact_prompt_file`. Incidentally, when these were specified as relative paths, they were resolved against `cwd` rather than `config.toml`'s parent, which seems wrong to me. I changed the behavior so they are resolved against the parent folder of the `config.toml` being parsed, which we get "for free" due to the introduction of `AbsolutePathBufGuard` in openai/codex#7796. While it is not great to change the behavior of a released feature, these fields are prefixed with `experimental_`, which I interpret to mean we have the liberty to change the contract. For reference: - `experimental_instructions_file` was introduced in openai/codex#1803 - `experimental_compact_prompt_file` was introduced in openai/codex#5959
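The resolution rule can be sketched like this (the function name is illustrative, not the real `AbsolutePathBuf` API):

```rust
use std::path::{Path, PathBuf};

// Relative values found in config.toml are resolved against the file's
// parent directory, not against the process cwd.
fn resolve_config_path(config_toml: &Path, value: &Path) -> PathBuf {
    if value.is_absolute() {
        value.to_path_buf()
    } else {
        config_toml
            .parent()
            .unwrap_or(Path::new(""))
            .join(value)
    }
}
```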
### Summary * Make `app_server.list_models` non-blocking so consumers (i.e. the extension) can manage the flow themselves. * Force config to use remote models and therefore fetch the codex-auto model list.
- Load models from static file as a fallback - Make API users use this file directly - Add tests to make sure updates to the file always serialize
- Batch read ACL creation for online/offline sandbox user - creates a new ACL helper process that is long-lived and runs in the background - uses a mutex so that only one helper process is running at a time.
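A minimal sketch of the single-helper guarantee (illustrative names; the real helper is a separate long-lived process, not an in-process struct):

```rust
use std::sync::{Mutex, OnceLock};

// Batched read-ACL requests are funneled through one long-lived helper;
// the global mutex guarantees only one helper is active at a time.
struct AclHelper {
    batched_paths: Vec<String>,
}

impl AclHelper {
    fn grant_read(&mut self, path: &str) {
        self.batched_paths.push(path.to_string());
    }
}

fn acl_helper() -> &'static Mutex<AclHelper> {
    static HELPER: OnceLock<Mutex<AclHelper>> = OnceLock::new();
    HELPER.get_or_init(|| {
        Mutex::new(AclHelper {
            batched_paths: Vec::new(),
        })
    })
}
```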
# External (non-OpenAI) Pull Request Requirements Before opening this Pull Request, please read the dedicated "Contributing" markdown file or your PR may be closed: https://github.com/openai/codex/blob/main/docs/contributing.md If your PR conforms to our contribution guidelines, replace this text with a detailed and high quality description of your changes. Include a link to a bug report or enhancement request.
This PR does various types of cleanup before I can proceed with more ambitious changes to config loading. First, I noticed duplicated code across these two methods: https://github.com/openai/codex/blob/774bd9e432fa2e0f4e059e97648cf92216912e19/codex-rs/core/src/config/mod.rs#L314-L324 https://github.com/openai/codex/blob/774bd9e432fa2e0f4e059e97648cf92216912e19/codex-rs/core/src/config/mod.rs#L334-L344 This has now been consolidated in `load_config_as_toml_with_cli_overrides()`. Further, I noticed that `Config::load_with_cli_overrides()` took two similar arguments: https://github.com/openai/codex/blob/774bd9e432fa2e0f4e059e97648cf92216912e19/codex-rs/core/src/config/mod.rs#L308-L311 The difference between `cli_overrides` and `overrides` was not immediately obvious to me. At first glance, it appears that one should be able to be expressed in terms of the other, but it turns out that some fields of `ConfigOverrides` (such as `cwd` and `codex_linux_sandbox_exe`) are, by design, not configurable via a `.toml` file or a command-line `--config` flag. That said, I discovered that many callers of `Config::load_with_cli_overrides()` were passing `ConfigOverrides::default()` for `overrides`, so I created two separate methods: - `Config::load_with_cli_overrides(cli_overrides: Vec<(String, TomlValue)>)` - `Config::load_with_cli_overrides_and_harness_overrides(cli_overrides: Vec<(String, TomlValue)>, harness_overrides: ConfigOverrides)` The latter has a long name, as it is _not_ what should be used in the common case, so the extra typing is designed to draw attention to this fact. I tried to update the existing callsites to use the shorter name, where possible. Further, in the cases where `ConfigOverrides` is used, usually only a limited subset of fields are actually set, so I updated the declarations to leverage `..Default::default()` where possible.
1. Remove PUBLIC skills and introduce SYSTEM skills embedded in the binary and installed into $CODEX_HOME/skills/.system at startup. 2. Skills are now always enabled (feature flag removed). 3. Update skills/list to accept forceReload and plumb it through (not used by clients yet).
Instead of failing to start Codex, clearly call out that N skills did not load and provide warnings so that the user may fix them. <img width="3548" height="874" alt="image" src="https://github.com/user-attachments/assets/6ce041b2-1373-4007-a6dd-0194e58fafe4" />
Introduce `ConfigBuilder` as an alternative to our existing `Config` constructors. I noticed that the existing constructors, `Config::load_with_cli_overrides()` and `Config::load_with_cli_overrides_and_harness_overrides()`, did not take `codex_home` as a parameter, which can be a problem. Historically, when Codex was purely a CLI, we wanted to be extra sure that the creation of `codex_home` was always done via `find_codex_home()`, so we did not expose `codex_home` as a parameter when creating `Config` in business logic. But in integration tests, `codex_home` nearly always needs to be configured (as a temp directory), which is why callers would have to go through `Config::load_from_base_config_with_overrides()` instead. Now that the Codex harness also functions as an app server, which could conceivably load multiple threads where `codex_home` is parameterized differently in each one, I think it makes sense to make this configurable. Going to a builder pattern makes it more flexible to ensure an arbitrary permutation of options can be set when constructing a `Config` while using the appropriate defaults for the options that aren't set explicitly. Ultimately, I think this should make it possible for us to make `Config::load_from_base_config_with_overrides()` private because all integration tests should be able to leverage `ConfigBuilder` instead. Though there could be edge cases, so I'll pursue that migration after we get through the current config overhaul. --- [//]: # (BEGIN SAPLING FOOTER) Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/8235). * #8237 * __->__ #8235
This pull request updates the ChatGPT login description in the onboarding authentication widgets to clarify which plans include usage. The description now lists "Business" rather than "Team" and adds "Education" plans in addition to the previously mentioned plans. I have read the CLA Document and I hereby sign the CLAs. --------- Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
1. Reintroduce feature flags for skills; 2. UI tweaks (truncate descriptions, better validation error display).
Over in `config_loader/macos.rs`, we were doing this complicated `mod` thing to expose one version of `load_managed_admin_config_layer()` for Mac: https://github.com/openai/codex/blob/580c59aa9af61cb4bffb5b204bd16a5dcc4bc911/codex-rs/core/src/config_loader/macos.rs#L4-L5 While exposing a trivial implementation for non-Mac: https://github.com/openai/codex/blob/580c59aa9af61cb4bffb5b204bd16a5dcc4bc911/codex-rs/core/src/config_loader/macos.rs#L110-L117 That was being used like this: https://github.com/openai/codex/blob/580c59aa9af61cb4bffb5b204bd16a5dcc4bc911/codex-rs/core/src/config_loader/layer_io.rs#L47-L48 This PR simplifies that callsite in `layer_io.rs` to just be: ```rust #[cfg(not(target_os = "macos"))] let managed_preferences = None; ``` And updates `config_loader/mod.rs` so we only pull in `macos.rs` on Mac: ```rust #[cfg(target_os = "macos")] mod macos; ``` This simplifies `macos.rs` considerably, though it looks like a big change because everything gets unindented and reformatted because we can drop the whole `mod native` thing now. --- [//]: # (BEGIN SAPLING FOOTER) Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/8248). * #8251 * #8249 * __->__ #8248
This is some minor API cleanup that will make it easier to use `AbsolutePathBuf` in more places in a subsequent PR.
This pull request makes a small update to the session picker documentation for `codex resume`. The main change clarifies how to view the original working directory (CWD) for sessions and when the Git branch is shown. - The session picker now displays the recorded Git branch when available, and instructions are added for showing the original working directory by using the `--all` flag, which also disables CWD filtering and adds a `CWD` column.
Welcome caribou <img width="1536" height="1024" alt="image" src="https://github.com/user-attachments/assets/2a67b21f-40cf-4518-aee4-691af331ab50" />
Add a name to Beta features <img width="906" height="153" alt="Screenshot 2025-12-18 at 16 42 49" src="https://github.com/user-attachments/assets/d56f3519-0613-4d9a-ad4d-38b1a7eb125a" />
## Summary - add a shared git-ref resolver and use it for `codex cloud exec` and TUI task submission - expose a new `--branch` flag to override the git ref passed to cloud tasks - cover the git-ref resolution behavior with new async unit tests and supporting dev dependencies ## Testing - cargo test -p codex-cloud-tasks ------ [Codex Task](https://chatgpt.com/codex/tasks/task_i_692decc6cbec8332953470ef063e11ab) --------- Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> Co-authored-by: Jeremy Rose <172423086+nornagon-openai@users.noreply.github.com> Co-authored-by: Jeremy Rose <nornagon@openai.com>
This is a significant change to how layers of configuration are applied. In particular, the `ConfigLayerStack` now has two important fields: - `layers: Vec<ConfigLayerEntry>` - `requirements: ConfigRequirements` We merge `TomlValue`s across the layers, but they are subject to `ConfigRequirements` before creating a `Config`. How I would review this PR: - start with `codex-rs/app-server-protocol/src/protocol/v2.rs` and note the new variants added to the `ConfigLayerSource` enum: `LegacyManagedConfigTomlFromFile` and `LegacyManagedConfigTomlFromMdm` - note that `ConfigLayerSource` now has a `precedence()` method and implements `PartialOrd` - `codex-rs/core/src/config_loader/layer_io.rs` is responsible for loading "admin" preferences from `/etc/codex/managed_config.toml` and MDM. Because `/etc/codex/managed_config.toml` is now deprecated in favor of `/etc/codex/requirements.toml` and `/etc/codex/config.toml`, we now include some extra information on the `LoadedConfigLayers` returned in `layer_io.rs`. - `codex-rs/core/src/config_loader/mod.rs` has major changes to `load_config_layers_state()`, which is what produces `ConfigLayerStack`. The docstring has the new specification and describes the various layers that will be loaded and the precedence order. - It uses the information from `LoaderOverrides` "twice," both in the spirit of legacy support: - We use one instance to derive an instance of `ConfigRequirements`. Currently, the only field in `managed_config.toml` that contributes to `ConfigRequirements` is `approval_policy`. This PR introduces `Constrained::allow_only()` to support this. - We use a clone of `LoaderOverrides` to derive `ConfigLayerSource::LegacyManagedConfigTomlFromFile` and `ConfigLayerSource::LegacyManagedConfigTomlFromMdm` layers, as appropriate. As before, this ends up being a "best effort" at enterprise controls, but its enforcement is not guaranteed the way it is for `ConfigRequirements`.
- Now we only create a "user" layer if `$CODEX_HOME/config.toml` exists. (Previously, a user layer was always created for `ConfigLayerStack`.) - Similarly, we only add a "session flags" layer if there are CLI overrides. - `config_loader/state.rs` contains the updated implementation for `ConfigLayerStack`. Note the public API is largely the same as before, but the implementation is quite different. We leverage the fact that `ConfigLayerSource` is now `PartialOrd` to ensure layers are in the correct order. - A `Config` constructed via `ConfigBuilder.build()` will use `load_config_layers_state()` to create the `ConfigLayerStack` and use the associated `ConfigRequirements` when constructing the `Config` object. - That said, a `Config` constructed via `Config::load_from_base_config_with_overrides()` does _not_ yet use `ConfigBuilder`, so it creates a `ConfigRequirements::default()` instead of loading a proper `ConfigRequirements`. I will fix this in a subsequent PR. Then the following files are mostly test changes: ``` codex-rs/app-server/tests/suite/v2/config_rpc.rs codex-rs/core/src/config/service.rs codex-rs/core/src/config_loader/tests.rs ``` Again, because we do not always include "user" and "session flags" layers when the contents are empty, `ConfigLayerStack` sometimes has fewer layers than before (and the precedence order changed slightly), which is the main reason integration tests changed.
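The ordering mechanism can be sketched as follows (the variant names are illustrative and the real enum uses an explicit `precedence()` method rather than derived `Ord`):

```rust
// Layers sort by the precedence of their source; deriving Ord on the enum
// makes the declaration order the precedence order, so later merges win.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, Copy)]
enum LayerSource {
    SystemDefaults,
    User,
    SessionFlags,
}

struct LayerEntry {
    source: LayerSource,
    // the layer's TomlValue is omitted in this sketch
}

fn sort_layers(mut layers: Vec<LayerEntry>) -> Vec<LayerEntry> {
    // Lowest-precedence layers first; merging proceeds in this order.
    layers.sort_by_key(|l| l.source);
    layers
}
```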
before: <img width="795" height="150" alt="Screenshot 2025-12-18 at 10 48 01 AM" src="https://github.com/user-attachments/assets/6f4d8856-b4c2-4e2a-b60a-b86f82b956a0" /> after: <img width="795" height="150" alt="Screenshot 2025-12-18 at 10 48 39 AM" src="https://github.com/user-attachments/assets/dd0d167a-5d09-4bb7-9d36-95a2eb1aaa83" />
Add a dmg target that bundles the codex and codex-responses-api-proxy binaries for macOS. This target is signed and notarized. Verified by triggering a build here: https://github.com/openai/codex/actions/runs/20318136302/job/58367155205. Downloaded the artifact and verified that the dmg is signed and notarized, and that the codex binary it contains works as expected.
…er (#8275) When granting read access to the sandbox user, grant the codex/command-runner exe directory first so commands can run before the entire read-ACL process has finished.
QoL improvement so we don't accidentally add these dirs while we prototype bazel things
**Before:**
```
Error loading configuration: value `Never` is not in the allowed set [OnRequest]
```
**After:**
```
Error loading configuration: invalid value for `approval_policy`: `Never` is not in the
allowed set [OnRequest] (set by MDM com.openai.codex:requirements_toml_base64)
```
Done by introducing a new struct `ConfigRequirementsWithSources`, onto
which we now `merge_unset_fields`. This also introduces a pairing of a
requirement value with its `RequirementSource` (inspired by
`ConfigLayerSource`):
```rust
pub struct Sourced<T> {
    pub value: T,
    pub source: RequirementSource,
}
```
Fix flakiness of CI tests: https://github.com/openai/codex/actions/runs/20350530276/job/58473691443?pr=8282 This PR does two things: 1. test with the responses API instead of the chat completions API in thread_resume tests; 2. add a new responses API fixture that mocks out arbitrary numbers of responses API calls (including no calls) and returns the same repeated response. Tested by CI
This test looks flaky on Windows:
```
FAIL [ 0.034s] (1442/2802) codex-otel::tests suite::otlp_http_loopback::otlp_http_exporter_sends_metrics_to_collector
stdout ───
running 1 test
test suite::otlp_http_loopback::otlp_http_exporter_sends_metrics_to_collector ... FAILED
failures:
failures:
suite::otlp_http_loopback::otlp_http_exporter_sends_metrics_to_collector
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 14 filtered out; finished in 0.02s
stderr ───
Error: ProviderShutdown { source: InternalFailure("[InternalFailure(\"Failed to shutdown\")]") }
────────────
Summary [ 175.360s] 2802 tests run: 2801 passed, 1 failed, 15 skipped
FAIL [ 0.034s] (1442/2802) codex-otel::tests suite::otlp_http_loopback::otlp_http_exporter_sends_metrics_to_collector
```
Automated update of models.json. Co-authored-by: aibrahim-oai <219906144+aibrahim-oai@users.noreply.github.com>
This updates core shell environment policy handling to match Windows' case-insensitive variable names and adds a Windows-only regression test, so `Path`/`TEMP` are no longer dropped when `inherit=core`.
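The matching rule can be sketched like this (the function name is illustrative, not the real policy API):

```rust
use std::collections::HashMap;

// Compare variable names case-insensitively, as Windows does, so `Path`
// and `TEMP` still match a core-variable list that spells them `PATH`
// and `temp`.
fn filter_core_env(
    env: &HashMap<String, String>,
    core_vars: &[&str],
) -> HashMap<String, String> {
    env.iter()
        .filter(|(name, _)| {
            core_vars.iter().any(|c| c.eq_ignore_ascii_case(name))
        })
        .map(|(k, v)| (k.clone(), v.clone()))
        .collect()
}
```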
…913) When a user has a managed_config that doesn't specify read-only, Codex fails to launch.
I didn't know this existed because it's not listed in the hints.
Historically we started with a `CodexAuth` that knew how to refresh its own tokens, and then added an `AuthManager` that did a different kind of refresh (re-reading from disk). I don't think it makes sense for both `CodexAuth` and `AuthManager` to be mutable and contain behaviors. Move all refresh logic into `AuthManager` and keep `CodexAuth` as a data object.
…ssues (#8932) I have seen this test flake out sometimes when running the macOS build using Bazel in CI: openai/codex#8875. Perhaps Bazel runs with greater parallelism, inducing a heavier load, causing an issue?
…t_rule() test (#8931) Because the path to `git` is used to construct `elicitations_to_accept`, we need to ensure that we resolve which `git` to use the same way our Bash process will: https://github.com/openai/codex/blob/c9c65606852c0cda9d983b4917359a0826a4b7f0/codex-rs/exec-server/tests/suite/accept_elicitation.rs#L59-L69 This fixes an issue when running the test on macOS using Bazel (openai/codex#8875) where the login shell chose `/opt/homebrew/bin/git` whereas the non-login shell chose `/usr/bin/git`.
Fix flakiness of CI test: https://github.com/openai/codex/actions/runs/20350530276/job/58473691434?pr=8282 This PR does two things: 1. move the flaky test to use the responses API instead of the chat completions API 2. make mcp_process agnostic to the order in which responses/notifications/requests come in, by buffering messages that have not yet been read
## Summary - add thread/conversation fork endpoints to the protocol (v1 + v2) - implement fork handling in app-server using thread manager and config overrides - add fork coverage in app-server tests and document `thread/fork` usage
… use in-place matcher (#8858)
We are deprecating chat completions. Move all app server tests from chat completions to responses.
When authentication fails, first attempt to reload the auth from file and then attempt to refresh it.
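The recovery order can be sketched like this (types and names are illustrative, not the real `AuthManager` API; another process may already have written fresh tokens to disk, so the cheap reload is tried before the network refresh):

```rust
#[derive(Clone, Debug, PartialEq)]
struct Auth {
    token: String,
}

// On an auth failure: first re-read auth from disk, and only refresh over
// the network if the on-disk copy is unchanged from what we already had.
fn recover_auth(
    current: &Auth,
    reload_from_disk: impl Fn() -> Option<Auth>,
    refresh_remote: impl Fn() -> Option<Auth>,
) -> Option<Auth> {
    if let Some(on_disk) = reload_from_disk() {
        if on_disk != *current {
            return Some(on_disk); // someone else already refreshed it
        }
    }
    refresh_remote()
}
```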
This seems to be necessary to get the Bazel builds on ARM Linux to go green on openai/codex#8875. I don't feel great about timeout-whack-a-mole, but we're still learning here...
Elevated Sandbox NUX: * prompt for elevated sandbox setup when agent mode is selected (via /approvals or at startup) * prompt for degraded sandbox if elevated setup is declined or fails * introduce /elevate-sandbox command to upgrade from degraded experience.
Handle null tool arguments in the MCP resource handler so optional resource tools accept null without failing, preserving normal JSON parsing for non-null payloads and improving robustness when models emit null; this avoids spurious argument parse errors for list/read MCP resource calls.
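The normalization step can be sketched like this (a simplified string-level version; the real handler presumably works on parsed JSON values):

```rust
// Treat a missing or JSON-null `arguments` payload as an empty object so
// resource tools with optional arguments do not fail to parse, while
// passing non-null payloads through to normal JSON parsing untouched.
fn normalize_tool_arguments(raw: Option<&str>) -> String {
    match raw.map(str::trim) {
        None | Some("") | Some("null") => "{}".to_string(),
        Some(other) => other.to_string(),
    }
}
```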
- Enforce a 5s timeout around the remote models refresh to avoid hanging /models calls.
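The timeout pattern, sketched with std primitives (the real code is presumably async with `tokio::time::timeout`; names here are illustrative):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run the remote refresh off-thread and give up after a deadline so the
// /models call never hangs; on timeout, fall back to an empty list (the
// caller would substitute its cached/static model list).
fn fetch_models_with_timeout(
    fetch: impl FnOnce() -> Vec<String> + Send + 'static,
    timeout: Duration,
) -> Vec<String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(fetch());
    });
    rx.recv_timeout(timeout).unwrap_or_default()
}
```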
As explained in openai/codex#8945 and openai/codex#8472, there are legitimate cases where users expect processes spawned by Codex to inherit environment variables such as `LD_LIBRARY_PATH` and `DYLD_LIBRARY_PATH`, where failing to do so can cause significant performance issues. This PR removes the use of `codex_process_hardening::pre_main_hardening()` in Codex CLI (which was added not in response to a known security issue, but because it seemed like a prudent thing to do from a security perspective: openai/codex#4521), but we will continue to use it in `codex-responses-api-proxy`. At some point, we probably want to introduce a slightly different version of `codex_process_hardening::pre_main_hardening()` in Codex CLI that excludes said environment variables from the Codex process itself, but continues to propagate them to subprocesses.
- Add conversation/thread fork endpoints in the protocol and app server so clients can branch a session into a new thread. (#8866)
- Expose requirements via `requirement/list` so clients can read `requirements.toml` and adjust agent-mode UX. (#8800)
- Introduce metrics capabilities with additional counters for observability. (#8318, #8910)
- Add elevated sandbox onboarding with prompts for upgrade/degraded mode plus the `/elevate-sandbox` command. (#8789)
- Allow explicit skill invocations through v2 API user input. (#8864)

## Bug Fixes

- `/review <instructions>` in TUI/TUI2 now launches the review flow instead of sending plain text. (#8823)
- Patch approval “allow this session” now sticks for previously approved files. (#8451)
- Model upgrade prompt now appears even if the current model is hidden from the picker. (#8802)
- Windows paste handling now supports non-ASCII multiline input reliably. (#8774)
- Git apply path parsing now handles quoted/escaped paths and `/dev/null` correctly to avoid misclassified changes. (#8824)
- Codex CLI subprocesses again inherit env vars like `LD_LIBRARY_PATH`/`DYLD_LIBRARY_PATH` to avoid runtime issues. (#8951)

## Documentation

- App-server README now documents skills support and usage. (#8853)
- Skill-creator docs clarify YAML frontmatter formatting and quoting rules.
(#8610)

## Changelog

Full Changelog: openai/codex@rust-v0.79.0...rust-v0.80.0

- #8734 fix: do not propose to add multiline commands to execpolicy @tibo-openai
- #8802 Enable model upgrade popup even when selected model is no longer in picker @charley-oai
- #8805 chore: stabilize core tool parallelism test @tibo-openai
- #8820 chore: silent just fmt @jif-oai
- #8824 fix: parse git apply paths correctly @tibo-openai
- #8823 fix: handle /review arguments in TUI @tibo-openai
- #8822 chore: rename unified exec sessions @jif-oai
- #8825 fix: handle early codex exec exit @tibo-openai
- #8830 chore: unify conversation with thread name @jif-oai
- #8840 Move tests below auth manager @pakrym-oai
- #8845 fix: upgrade lru crate to 0.16.3 @bolinfest
- #8763 Merge Modelfamily into modelinfo @aibrahim-oai
- #8842 remove unnecessary todos @aibrahim-oai
- #8846 Stop using AuthManager as the source of codex_home @pakrym-oai
- #8844 Fix app-server `write_models_cache` to treat models with less priority number as higher priority. @aibrahim-oai
- #8850 chore: drop useless feature flags @jif-oai
- #8848 chore: drop some deprecated @jif-oai
- #8853 [chore] update app server doc with skills @celia-oai
- #8451 fix: implement 'Allow this session' for apply_patch approvals @owenlin0
- #8856 Override truncation policy at model info level @aibrahim-oai
- #8849 Simplify error managment in `run_turn` @aibrahim-oai
- #8767 Add feature for optional request compression @cconger
- #8610 Clarify YAML frontmatter formatting in skill-creator @darlingm
- #8847 Warn in /model if BASE_URL set @gt-oai
- #8801 Support symlink for skills discovery. @xl-openai
- #8800 Feat: appServer.requirementList for requirement.toml @shijie-oai
- #8861 fix: update resource path resolution logic so it works with Bazel @bolinfest
- #8868 fix: use tokio for I/O in an async function @bolinfest
- #8867 add footer note to TUI @iceweasel-oai
- #8879 feat: introduce find_resource! macro that works with Cargo or Bazel @bolinfest
- #8864 Support UserInput::Skill in V2 API. @xl-openai
- #8876 add ability to disable input temporarily in the TUI. @iceweasel-oai
- #8884 fix: make the find_resource! macro responsible for the absolutize() call @bolinfest
- #8774 fix: windows can now paste non-ascii multiline text @dylan-hurd-oai
- #8855 chore: add list thread ids on manager @jif-oai
- #8318 feat: metrics capabilities @jif-oai
- #8826 fix: stabilize list_dir pagination order @tibo-openai
- #8892 chore: drop metrics exporter config @jif-oai
- #8896 chore: align error limit comment @tibo-openai
- #8899 fix: include project instructions in /review subagent @tibo-openai
- #8894 chore: add small debug client @jif-oai
- #8888 fix: leverage find_resource! macro in load_sse_fixture_with_id @bolinfest
- #8691 Avoid setpgid for inherited stdio on macOS @seeekr
- #8887 fix: leverage codex_utils_cargo_bin() in codex-rs/core/tests/suite @bolinfest
- #8907 chore: drop useless interaction_input @jif-oai
- #8903 nit: drop unused function call error @jif-oai
- #8910 feat: add a few metrics @jif-oai
- #8911 gitignore bazel-* @zbarsky-openai
- #8843 config requirements: improve requirement error messages @gt-oai
- #8914 fix: reduce duplicate include_str!() calls @bolinfest
- #8902 feat: add list loaded threads to app server @jif-oai
- #8870 [fix] app server flaky thread/resume tests @celia-oai
- #8916 clean: all history cloning @jif-oai
- #8915 otel test: retry WouldBlock errors @gt-oai
- #8792 Update models.json @github-actions
- #8897 fix: preserve core env vars on Windows @tibo-openai
- #8913 Add `read-only` when backfilling requirements from managed_config @gt-oai
- #8926 add tooltip hint for shell commands (!) @fps7806
- #8857 Immutable CodexAuth @pakrym-oai
- #8927 nit: parse_arguments @jif-oai
- #8932 fix: increase timeout for tests that have been flaking with timeout issues @bolinfest
- #8931 fix: correct login shell mismatch in the accept_elicitation_for_prompt_rule() test @bolinfest
- #8874 [fix] app server flaky send_messages test @celia-oai
- #8866 feat: fork conversation/thread @apanasenko-oai
- #8858 remove `get_responses_requests` and `get_responses_request_bodies` to use in-place matcher @aibrahim-oai
- #8939 [chore] move app server tests from chat completion to responses @celia-oai
- #8880 Attempt to reload auth as a step in 401 recovery @pakrym-oai
- #8946 fix: increase timeout for wait_for_event() for Bazel @bolinfest
- #8789 Elevated sandbox NUX @iceweasel-oai
- #8917 fix: treat null MCP resource args as empty @tibo-openai
- #8942 Add 5s timeout to models list call + integration test @aibrahim-oai
- #8951 fix: remove existing process hardening from Codex CLI @bolinfest
## Upstream Release v0.80.0 Analysis

### Summary of Changes

This release brings several significant features and bug fixes from upstream:

New Features
Critical Bug Fixes
Documentation
Vital Work Items for Nori
Priority: Items 1, 3, and 6 should be validated first as they touch core functionality (performance, security, and skills) that are central to nori's value proposition.
## Upstream Sync

This PR syncs changes from upstream release rust-v0.80.0.

### Summary

rust-v0.80.0

### Workflow Sanitization

The following upstream workflows had their triggers replaced with `workflow_dispatch`:

- cargo-deny.yml
- ci.yml
- cla.yml
- close-stale-contributor-prs.yml
- codespell.yml
- issue-deduplicator.yml
- issue-labeler.yml
- rust-release-prepare.yml
- rust-release.yml
- sdk.yml
- shell-tool-mcp-ci.yml
- shell-tool-mcp.yml

### Merge Instructions

```
git checkout dev
git merge sync/upstream-v0.80.0 --no-ff  # Resolve conflicts if any
cd codex-rs && cargo test
cargo insta review
```

### After Merge

- Delete the `sync/upstream-v0.80.0` branch