
Conversation


@tangledbytes tangledbytes commented Nov 12, 2025

Describe the Problem

This PR replaces GLACIER_DA with DEEP_ARCHIVE and changes the way we override the storage class without overwriting the storage class HTTP header.

Issues: Fixed #xxx / Gap #xxx

Testing Instructions:

  • Doc added/updated
  • Tests added

Summary by CodeRabbit

  • Bug Fixes

    • Broadened Glacier handling so restore/read flows consider multiple Glacier-related storage classes.
    • Removed the pre-redirect automatic storage-class override step.
  • Refactor

    • Renamed storage class option from GLACIER_DA to DEEP_ARCHIVE and exposed a grouped Glacier storage-classes list.
  • SDK

    • Public type updated: DEEP_ARCHIVE replaces GLACIER_DA as a valid storage-class option.

@coderabbitai

coderabbitai bot commented Nov 12, 2025

Walkthrough

Removed the pre-redirect storage-class override call and deleted the override implementation; renamed STORAGE_CLASS_GLACIER_DA → STORAGE_CLASS_DEEP_ARCHIVE and introduced GLACIER_STORAGE_CLASSES; updated parsing, runtime checks, and TypeScript storage-class type to include DEEP_ARCHIVE and treat multiple glacier classes as glacier.

Changes

Cohort / File(s) Change Summary
S3 request flow
src/endpoint/s3/s3_rest.js
Removed call to s3_utils.override_storage_class(req) from handle_request — request flow now proceeds directly to populate/redirect logic.
S3 utils: rename & API surface
src/endpoint/s3/s3_utils.js
Renamed STORAGE_CLASS_GLACIER_DA → STORAGE_CLASS_DEEP_ARCHIVE; removed the override_storage_class function and its export; added GLACIER_STORAGE_CLASSES and STORAGE_CLASS_DEEP_ARCHIVE exports; adjusted parsing and mapping logic.
SDK types
src/sdk/nb.d.ts
Replaced GLACIER_DA with DEEP_ARCHIVE in StorageClass type definition.
Glacier/restore logic
src/sdk/glacier.js, src/endpoint/s3/ops/s3_get_object.js, src/sdk/namespace_fs.js
Replaced direct equality checks against a single glacier constant with membership checks using GLACIER_STORAGE_CLASSES.includes(...); adjusted restore/expiry/support logic and storage-class derivation accordingly.
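The equality-to-membership change described in the table can be sketched as follows. The constant values come from the PR summary; the wrapper functions are hypothetical illustrations, not NooBaa code.

```javascript
// Illustrative sketch only: constants match the PR summary, the
// wrapper functions are hypothetical.
const STORAGE_CLASS_GLACIER = 'GLACIER';
const STORAGE_CLASS_DEEP_ARCHIVE = 'DEEP_ARCHIVE';
const GLACIER_STORAGE_CLASSES = [STORAGE_CLASS_GLACIER, STORAGE_CLASS_DEEP_ARCHIVE];

// Old behavior: only an exact GLACIER match triggered restore/read gating.
const is_glacier_old = storage_class => storage_class === STORAGE_CLASS_GLACIER;

// New behavior: any glacier-like class is treated as glacier.
const is_glacier_new = storage_class => GLACIER_STORAGE_CLASSES.includes(storage_class);

console.log(is_glacier_old('DEEP_ARCHIVE')); // false
console.log(is_glacier_new('DEEP_ARCHIVE')); // true
```

This is why the change "broadens" the glacier handling: DEEP_ARCHIVE objects now flow through the same restore/expiry paths as GLACIER objects.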

Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
participant Client
participant S3_REST as s3_rest.handle_request
participant S3_UTILS as s3_utils
participant Pop as populate_request_additional_info_or_redirect

rect #E8F3FF
Client->>S3_REST: HTTP request
note right of S3_REST: Old flow called override_storage_class
S3_REST->>S3_UTILS: (removed) override_storage_class(req)
end

rect #E8FFE8
S3_REST->>Pop: populate_request_additional_info_or_redirect(req)
Pop-->>S3_REST: proceed / redirect
S3_REST->>Client: response
end
```


```mermaid
sequenceDiagram
autonumber
participant Caller
participant Component as Glacier-related logic

note over Component: Old: storage_class === STORAGE_CLASS_GLACIER
Caller->>Component: check storage_class
Component-->>Caller: uses GLACIER_STORAGE_CLASSES.includes(storage_class) (new)

note right of Component: This broadens which classes are treated as "glacier" for restore/expiry logic
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Heterogeneous edits: API export changes, type updates, and control-flow removal across multiple modules.
  • Areas to focus on:
    • src/endpoint/s3/s3_utils.js exports and any callers relying on removed override_storage_class.
    • src/sdk/nb.d.ts — ensure consumers of the SDK accept DEEP_ARCHIVE.
    • GLACIER_STORAGE_CLASSES usage sites (namespace_fs.js, glacier.js, s3_get_object.js) for consistent behavior and edge cases.

Possibly related PRs

Suggested reviewers

  • guymguym
  • dannyzaken
  • jackyalbo
  • aayushchouhan09

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 66.67%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed: The title clearly and specifically summarizes the main changes, replacing GLACIER_DA with DEEP_ARCHIVE and updating the storage class override mechanism, directly matching the PR objectives.
✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
    • Create PR with unit tests
    • Post copyable unit tests in a comment


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
src/endpoint/s3/s3_utils.js (1)

24-24: Consider renaming the constant to match its new value.

The constant STORAGE_CLASS_GLACIER_DA now holds the value 'DEEP_ARCHIVE', creating a semantic mismatch. While this maintains export compatibility, it may confuse developers.

Consider this refactor to improve clarity:

```diff
-const STORAGE_CLASS_GLACIER_DA = 'DEEP_ARCHIVE'; // "S3 Deep Archive Storage Class"
+const STORAGE_CLASS_DEEP_ARCHIVE = 'DEEP_ARCHIVE'; // "S3 Deep Archive Storage Class"
```

And update the export on line 832:

```diff
-exports.STORAGE_CLASS_GLACIER_DA = STORAGE_CLASS_GLACIER_DA;
+exports.STORAGE_CLASS_DEEP_ARCHIVE = STORAGE_CLASS_DEEP_ARCHIVE;
```

Alternatively, if maintaining the old export name is critical for API compatibility, you could keep both:

```diff
+const STORAGE_CLASS_DEEP_ARCHIVE = 'DEEP_ARCHIVE'; // "S3 Deep Archive Storage Class"
-const STORAGE_CLASS_GLACIER_DA = 'DEEP_ARCHIVE'; // "S3 Deep Archive Storage Class"
+const STORAGE_CLASS_GLACIER_DA = STORAGE_CLASS_DEEP_ARCHIVE; // Deprecated, use STORAGE_CLASS_DEEP_ARCHIVE
```
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 36c21ce and d58d998.

📒 Files selected for processing (3)
  • src/endpoint/s3/s3_rest.js (0 hunks)
  • src/endpoint/s3/s3_utils.js (2 hunks)
  • src/sdk/nb.d.ts (1 hunks)
💤 Files with no reviewable changes (1)
  • src/endpoint/s3/s3_rest.js
🧰 Additional context used
🧬 Code graph analysis (1)
src/endpoint/s3/s3_utils.js (3)
src/endpoint/s3/ops/s3_put_object.js (1)
  • storage_class (23-23)
src/endpoint/s3/ops/s3_post_object_uploads.js (1)
  • storage_class (16-16)
config.js (1)
  • config (7-7)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build Noobaa Image
  • GitHub Check: run-package-lock-validation
  • GitHub Check: run-jest-unit-tests
🔇 Additional comments (3)
src/sdk/nb.d.ts (1)

20-20: Type definition updated to reflect DEEP_ARCHIVE storage class.

The change aligns with the runtime constant value update in s3_utils.js (line 24). TypeScript will now correctly validate 'DEEP_ARCHIVE' as a valid StorageClass value.

src/endpoint/s3/s3_utils.js (2)

384-387: Inline override logic correctly implements the new mechanism.

The override logic has been successfully moved inline from the removed override_storage_class function. When NSFS_GLACIER_FORCE_STORAGE_CLASS is enabled and storage class is missing or STANDARD, the function now returns GLACIER, achieving the PR objective of not overwriting the storage class HTTP header (by handling it during parsing instead).
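A minimal sketch of the inlined behavior described above. The NSFS_GLACIER_FORCE_STORAGE_CLASS flag name comes from this PR; the simplified function body and config shape here are illustrative, not the actual s3_utils implementation.

```javascript
// Sketch only: the config flag name is from the PR; the surrounding
// function is a simplified illustration of parse_storage_class.
const config = { NSFS_GLACIER_FORCE_STORAGE_CLASS: true };

const STORAGE_CLASS_STANDARD = 'STANDARD';
const STORAGE_CLASS_GLACIER = 'GLACIER';
const STORAGE_CLASS_DEEP_ARCHIVE = 'DEEP_ARCHIVE';

function parse_storage_class(storage_class) {
    if (!storage_class || storage_class === STORAGE_CLASS_STANDARD) {
        // The override now happens during parsing, so the original
        // x-amz-storage-class header on the request is never rewritten.
        if (config.NSFS_GLACIER_FORCE_STORAGE_CLASS) return STORAGE_CLASS_GLACIER;
        return STORAGE_CLASS_STANDARD;
    }
    if (storage_class === STORAGE_CLASS_GLACIER) return STORAGE_CLASS_GLACIER;
    if (storage_class === STORAGE_CLASS_DEEP_ARCHIVE) return STORAGE_CLASS_DEEP_ARCHIVE;
    throw new Error(`No such s3 storage class ${storage_class}`);
}

console.log(parse_storage_class(undefined)); // 'GLACIER' while the force flag is on
```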


383-393: Verified: no breaking change or data migration needed.

The review comment incorrectly assumes a breaking change that does not exist. The storage_class_enum schema only includes ['STANDARD', 'GLACIER', 'GLACIER_IR', 'Glacier', 'DEEP_ARCHIVE', 'INTELLIGENT_TIERING', 'ONEZONE_IA', 'STANDARD_IA']—'GLACIER_DA' was never a valid stored value. STORAGE_CLASS_GLACIER_DA has always been defined as 'DEEP_ARCHIVE', and no 'GLACIER_DA' string literals exist in the codebase. The parse_storage_class function correctly validates inputs against the defined constants and rejects unknown values. No data migration or backward compatibility handling is required.

Likely an incorrect or invalid review comment.

Signed-off-by: Utkarsh Srivastava <srivastavautkarsh8097@gmail.com>

drop support for GLACIER_IR and fix migration and restore path

Signed-off-by: Utkarsh Srivastava <srivastavautkarsh8097@gmail.com>
@tangledbytes tangledbytes force-pushed the utkarsh/fix/glacier_da-storage-class branch from d58d998 to 65c6c42 on November 13, 2025 07:36
@pull-request-size pull-request-size bot added size/M and removed size/S labels Nov 13, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
src/endpoint/s3/ops/s3_get_object.js (1)

44-50: Broadened Glacier gate via GLACIER_STORAGE_CLASSES.

Good change. Minor hardening: normalize via parse_storage_class to respect force flags uniformly.

```diff
-    if (s3_utils.GLACIER_STORAGE_CLASSES.includes(object_md.storage_class)) {
+    if (s3_utils.GLACIER_STORAGE_CLASSES.includes(
+        s3_utils.parse_storage_class(object_md.storage_class)
+    )) {
```
src/sdk/glacier.js (1)

333-335: Gate now covers all Glacier-like classes; fix stale comment.

Logic LGTM. Please update the note to "Returns undefined if storage_class is not in GLACIER_STORAGE_CLASSES."

```diff
- * NOTE: Returns undefined if `user.storage_class` attribute is not
- * `GLACIER`
+ * NOTE: Returns undefined if storage_class is not one of GLACIER_STORAGE_CLASSES
```

Also applies to: 342-345

src/sdk/namespace_fs.js (1)

1366-1369: Auto-enqueue migrate WAL when Glacier-like class is uploaded.

Looks right. Consider guarding with NSFS_GLACIER_LOGS_ENABLED check here (optional; currently handled inside append_to_migrate_wal).

src/endpoint/s3/s3_utils.js (2)

24-25: Introduce DEEP_ARCHIVE and GLACIER_STORAGE_CLASSES.

Good centralization. Consider Object.freeze(GLACIER_STORAGE_CLASSES) to prevent accidental mutation.

```diff
-const GLACIER_STORAGE_CLASSES = [
+const GLACIER_STORAGE_CLASSES = Object.freeze([
     STORAGE_CLASS_GLACIER,
     STORAGE_CLASS_DEEP_ARCHIVE,
-];
+]);
```

Also applies to: 47-52


389-397: parse_storage_class: normalize input for robustness.

Optional: trim and uppercase the header before comparisons to avoid case and whitespace pitfalls.

```diff
 function parse_storage_class(storage_class) {
+    if (typeof storage_class === 'string') storage_class = storage_class.trim();
+    // AWS sends uppercase; normalizing defensively
+    if (typeof storage_class === 'string') storage_class = storage_class.toUpperCase();
```
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d58d998 and 65c6c42.

📒 Files selected for processing (6)
  • src/endpoint/s3/ops/s3_get_object.js (1 hunks)
  • src/endpoint/s3/s3_rest.js (0 hunks)
  • src/endpoint/s3/s3_utils.js (5 hunks)
  • src/sdk/glacier.js (1 hunks)
  • src/sdk/namespace_fs.js (7 hunks)
  • src/sdk/nb.d.ts (1 hunks)
💤 Files with no reviewable changes (1)
  • src/endpoint/s3/s3_rest.js
🧰 Additional context used
🧬 Code graph analysis (2)
src/endpoint/s3/s3_utils.js (2)
src/endpoint/s3/ops/s3_put_object.js (1)
  • storage_class (23-23)
src/endpoint/s3/ops/s3_post_object_uploads.js (1)
  • storage_class (16-16)
src/sdk/namespace_fs.js (2)
src/test/unit_tests/nsfs/test_nsfs_glacier_backend.js (4)
  • s3_utils (13-13)
  • params (223-230)
  • params (539-546)
  • params (570-578)
src/endpoint/s3/s3_utils.js (1)
  • storage_class (322-322)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build Noobaa Image
  • GitHub Check: run-jest-unit-tests
  • GitHub Check: run-package-lock-validation
🔇 Additional comments (7)
src/sdk/namespace_fs.js (5)

1080-1086: Read gate aligned with GLACIER_STORAGE_CLASSES.

Correct and consistent with restore semantics.


1310-1321: Copy-from Glacier objects requires restore and forces fallback.

Good parity with S3 behavior. No issues.


2300-2301: Restore reply now echoes actual storage_class from xattr.

Good improvement for accuracy.


3616-3617: Force-expire-on-get covers Glacier-like classes.

Matches new model. LGTM.


3544-3547: GLACIER_IR exclusion from NSFS is intentional by design.

The GLACIER_STORAGE_CLASSES constant is specifically defined for NSFS glacier support and contains only STORAGE_CLASS_GLACIER and STORAGE_CLASS_DEEP_ARCHIVE; STORAGE_CLASS_GLACIER_IR is intentionally excluded. This is consistent across the codebase: in bucketspace_fs.js, only STORAGE_CLASS_GLACIER (not GLACIER_IR or DEEP_ARCHIVE) is advertised as a supported storage class when NSFS_GLACIER_ENABLED is configured. The exclusion is not an oversight but a deliberate design choice limiting NSFS glacier support to specific storage classes.

src/endpoint/s3/s3_utils.js (1)

838-838: Verify documentation updates for new public API exports.

The exports of STORAGE_CLASS_DEEP_ARCHIVE and GLACIER_STORAGE_CLASSES at lines 838 and 882 expose new public API surface. These constants are actively used internally by SDK modules (glacier.js, namespace_fs.js, s3_get_object.js). Please confirm that:

  1. Any relevant developer or SDK documentation has been updated to document these exports
  2. If external SDK consumers depend on these constants, they have been notified
src/sdk/nb.d.ts (1)

20-20: StorageClass refactoring verified — no issues found.

The type definition and s3_utils constants are correctly aligned:

  • StorageClass union includes all four classes: STANDARD, GLACIER, GLACIER_IR, DEEP_ARCHIVE
  • s3_utils exports STORAGE_CLASS_DEEP_ARCHIVE with proper JSDoc type annotation
  • GLACIER_STORAGE_CLASSES array correctly includes both GLACIER and DEEP_ARCHIVE
  • Zero GLACIER_DA references remain in the codebase
  • Type consistency maintained throughout


@guymguym guymguym left a comment


minor comments. LGTM!

```js
const OBJECT_ATTRIBUTES_UNSUPPORTED = Object.freeze(['Checksum', 'ObjectParts']);

/** @type {nb.StorageClass[]} */
const GLACIER_STORAGE_CLASSES = [
```

Just a small comment worth adding here, I think: that this list is the classes that require restore-object, which is why GLACIER_IR is not included.
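The annotation asked for above might look like this. A sketch only: the constant and values match the PR, while the comment wording is a proposal.

```javascript
// These classes require a RestoreObject call before object data can be
// read, which is why GLACIER_IR (instant retrieval) is deliberately
// not included in the list.
const GLACIER_STORAGE_CLASSES = Object.freeze([
    'GLACIER',
    'DEEP_ARCHIVE',
]);

console.log(GLACIER_STORAGE_CLASSES.includes('GLACIER_IR')); // false
```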

```diff
-    if (!storage_class) return STORAGE_CLASS_STANDARD;
-    if (storage_class === STORAGE_CLASS_STANDARD) return STORAGE_CLASS_STANDARD;
+    if (!storage_class || storage_class === STORAGE_CLASS_STANDARD) {
+        if (config.NSFS_GLACIER_FORCE_STORAGE_CLASS) return STORAGE_CLASS_GLACIER;
```

I think we can get more flexibility if the config specifies the forced storage class name (string), and then this function can be written to validate it too

```js
function parse_storage_class(storage_class) {
    if (config.NSFS_GLACIER_FORCE_STORAGE_CLASS) {
        storage_class = config.NSFS_GLACIER_FORCE_STORAGE_CLASS;
    }
    if (!storage_class) return STORAGE_CLASS_STANDARD;
    if (storage_class === STORAGE_CLASS_STANDARD) return STORAGE_CLASS_STANDARD;
    if (storage_class === STORAGE_CLASS_GLACIER) return STORAGE_CLASS_GLACIER;
    if (storage_class === STORAGE_CLASS_DEEP_ARCHIVE) return STORAGE_CLASS_DEEP_ARCHIVE;
    if (storage_class === STORAGE_CLASS_GLACIER_IR) return STORAGE_CLASS_GLACIER_IR;
    throw new Error(`No such s3 storage class ${storage_class}`);
}
```
