feat: the CLOAK privacy solution #1737
Walkthrough

Bumps the project version to v4.6.3 and threads Validium mode, a version byte, and an optional decryption key through the coordinator, prover tasks, the libzkp FFI, the Rust proving/verifier crates, the rollup watcher/relayer, and tests; adds L1-message fetching and switches several L2 clients from ethclient to rpc.Client.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant API as Coordinator API
    participant Config as Config
    participant Verifier as Verifier
    participant Utils as Utils
    participant LibZKP as LibZKP
    API->>Config: read cfg.L2.ValidiumMode
    API->>Verifier: NewVerifier(cfg.Verifier, validiumMode)
    Verifier->>Utils: Version(forkName, validiumMode)
    Utils-->>Verifier: version (uint8)
    Verifier->>LibZKP: init(ver_n)
    LibZKP-->>Verifier: ready
    Verifier-->>API: initialized
```
```mermaid
sequenceDiagram
    participant Prover as ProverTask
    participant Config as Config
    participant Sequencer as Sequencer (cfg)
    participant LibZKP as LibZKP
    Prover->>Config: validiumMode()
    alt validiumMode == true
        Prover->>Sequencer: request Sequencer.DecryptionKey (hex)
        Sequencer-->>Prover: hex key
        Prover->>Prover: decode hex -> decryptionKey bytes
    end
    Prover->>LibZKP: GenerateUniversalTask(..., decryptionKey)
    LibZKP-->>Prover: task result
```
```mermaid
sequenceDiagram
    participant Watcher as L2Watcher
    participant RPC as rpc.Client
    participant Chain as L2 Node
    participant DB as DB
    Watcher->>RPC: scroll_getL1MessagesInBlock(block)
    RPC-->>Watcher: encrypted L1 messages
    Watcher->>Watcher: validate counts & queue indexes
    Watcher->>DB: store block with replaced txs
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 2
🧹 Nitpick comments (3)
crates/libzkp/src/tasks/batch.rs (3)
83-84: Clarify the documentation for the `version` field.

The comment describes this as "the version of the chunks in the batch," but the field actually represents the version for the entire batch task (encoding both the domain and the STF version). Consider updating the comment to reflect this more accurately.

```diff
-    /// The version of the chunks in the batch, as per [`Version`].
+    /// The version byte encoding domain and STF version for this batch, as per [`Version`].
     pub version: u8,
```
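The walkthrough and this comment describe a single byte that encodes both a domain and an STF version. One plausible way to pack two small enums into one byte is nibble packing; this is purely an illustrative sketch, not the repo's actual `Version` encoding in `coordinator/internal/utils/version.go`:

```go
package main

import "fmt"

// Hypothetical nibble packing: high nibble = domain, low nibble = STF version.
// This only illustrates "one byte encodes both domain and STF version"; the
// real encoding used by the repo's Version helper may differ.
const (
	DomainScroll   = 0x0
	DomainValidium = 0x1
)

func packVersion(domain, stfVersion uint8) uint8 {
	return domain<<4 | stfVersion&0x0f
}

func unpackVersion(v uint8) (domain, stfVersion uint8) {
	return v >> 4, v & 0x0f
}

func main() {
	v := packVersion(DomainValidium, 1)
	d, s := unpackVersion(v)
	fmt.Printf("0x%02x domain=%d stf=%d\n", v, d, s) // 0x11 domain=1 stf=1
}
```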
220-236: Consider error propagation instead of `unreachable!` for unsupported version combinations.

The `unreachable!` at line 234 will panic if an unexpected (domain, stf_version) combination is encountered. For library code used in production, consider returning a `Result` with a descriptive error instead, allowing the caller to handle it gracefully.

```diff
-impl BatchProvingTask {
-    fn build_guest_input(&self) -> BatchWitness {
+impl BatchProvingTask {
+    fn build_guest_input(&self) -> Result<BatchWitness> {
         let version = Version::from(self.version);
         // ... existing code ...
         let reference_header = match (version.domain, version.stf_version) {
             (Domain::Scroll, STFVersion::V6) => {
                 ReferenceHeader::V6(*self.batch_header.must_v6_header())
             }
             (Domain::Scroll, STFVersion::V7) => {
                 ReferenceHeader::V7(*self.batch_header.must_v7_header())
             }
             (Domain::Scroll, STFVersion::V8) => {
                 ReferenceHeader::V8(*self.batch_header.must_v8_header())
             }
             (Domain::Validium, STFVersion::V1) => {
                 ReferenceHeader::Validium(*self.batch_header.must_validium_header())
             }
-            (domain, stf_version) => {
-                unreachable!("unsupported domain={domain:?},stf-version={stf_version:?}")
-            }
+            (domain, stf_version) => {
+                return Err(eyre::eyre!(
+                    "unsupported domain={domain:?},stf-version={stf_version:?}"
+                ))
+            }
         };
-        // ... rest of function returns BatchWitness
+        Ok(BatchWitness {
+            // ... fields ...
+        })
     }
```

Note: this change would require updating callers to handle the `Result`.
256-258: Validate consistency between `version.fork` and `self.fork_name`.

Line 257 uses `version.fork` (derived from `self.version`) while the struct also has `self.fork_name` (line 100). These should represent the same fork but come from different sources. Consider adding a validation check to ensure they're consistent.

Add validation after line 131:

```rust
let version = Version::from(self.version);

// Validate consistency between version-derived fork and explicit fork_name
let expected_fork_name = format!("{:?}", version.fork).to_lowercase();
let actual_fork_name = self.fork_name.to_lowercase();
assert_eq!(
    expected_fork_name, actual_fork_name,
    "fork name mismatch: version.fork={:?}, task.fork_name={}",
    version.fork, self.fork_name
);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- `crates/libzkp/src/tasks.rs` (1 hunks)
- `crates/libzkp/src/tasks/batch.rs` (4 hunks)
- `crates/libzkp/src/tasks/bundle.rs` (3 hunks)
- `crates/libzkp/src/tasks/chunk.rs` (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/libzkp/src/tasks/bundle.rs
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-16T02:09:51.657Z
Learnt from: noel2004
Repo: scroll-tech/scroll PR: 1736
File: crates/libzkp/src/verifier/universal.rs:35-45
Timestamp: 2025-09-16T02:09:51.657Z
Learning: The verify_stark_proof method in scroll-zkvm-verifier returns Result<(), Error> indicating success/failure, not Result<bool, Error>. When it succeeds, verification passed; when it fails, it returns an error.
Applied to files:
crates/libzkp/src/tasks.rs
🧬 Code graph analysis (3)
crates/libzkp/src/tasks.rs (2)
- crates/libzkp/src/utils.rs (1): `panic_catch` (43-53)
- crates/libzkp/src/proofs.rs (1): `metadata` (188-196)
crates/libzkp/src/tasks/batch.rs (5)
- common/types/message/message.go (1): `ChunkInfo` (132-149)
- coordinator/internal/utils/version.go (1): `Version` (16-38)
- crates/libzkp/src/tasks/bundle.rs (1): `precheck_and_build_metadata` (60-72)
- crates/libzkp/src/tasks/batch/utils.rs (4): `to_blob` (27-41), `blob_to_kzg_commitment` (44-48), `get_versioned_hash` (54-58), `get_kzg_proof` (66-74)
- crates/libzkp/src/tasks.rs (1): `check_aggregation_proofs` (24-36)
crates/libzkp/src/tasks/chunk.rs (3)
- common/types/message/message.go (1): `ChunkInfo` (132-149)
- crates/libzkp/src/tasks/chunk_interpreter.rs (1): `try_from_with_interpret` (27-31)
- crates/libzkp/src/verifier/universal.rs (1): `new` (19-26)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
🔇 Additional comments (7)
crates/libzkp/src/tasks.rs (2)
17-17: LGTM: Import additions support version-based validation.

The addition of `Version` and `MultiVersionPublicInputs` to the imports is appropriate and necessary for the refactored `check_aggregation_proofs` function.
24-36: No issues found: call sites and trait integration verified.

All concerns have been addressed:

- Call sites updated correctly: both `batch.rs` (line 272) and `bundle.rs` (line 66) correctly pass the new signature, a metadata slice and a `Version` parameter.
- Trait method signature confirmed: the `validate()` method is called as `w[1].validate(&w[0], version)`, matching the expected signature from the external `scroll_zkvm_types` crate.
- Function is actively used: it is not dead code; it's called in two locations within the tasks module.
crates/libzkp/src/tasks/batch.rs (5)
22-27: LGTM: Clean wrapper struct for Validium headers.

The flattened serialization approach provides good compatibility with the existing header format while adding the batch_hash field.
35-77: LGTM: Validium variant properly integrated.

The new Validium variant follows the existing pattern. The placement at the top of the enum is appropriate given the untagged deserialization behavior.

Note: the accessor methods use `panic!()` for type mismatches, which is consistent with the existing codebase style but aggressive for library code. This is an existing pattern, not introduced by this change.
266-278: LGTM: Properly updated for version-based validation.

The changes correctly pass the version to `check_aggregation_proofs`, aligning with the version-based validation approach introduced across the codebase.
239-253: Block hash zeroing is intentional design, but the "patch" label warrants clarification.

The code shows intentional, domain-specific handling:

- Scroll domain (versions V6/V7/V8): explicitly zeros `prev_blockhash` and `post_blockhash`
- Validium domain (version V1): preserves the original block hashes
The conditional logic is properly implemented with matching domain-specific structures (different ReferenceHeader types, BatchHeader formats, and KZG handling). However, the inline comment labels this as a "patch," which suggests it may be a temporary workaround rather than permanent design.
Clarify:
- Is the "patch" label still accurate, or is this now permanent design?
- Document why Scroll domain specifically requires zeroed block hashes (e.g., circuit requirements, proof protocol constraints)
- Confirm this zeroing does not compromise verification or security properties for Scroll
The code itself is correct; this is about documenting the business logic rationale.
133-218: Fork/header pairing validation is correct; consider error handling for unreachable cases.

The refactored logic correctly validates fork-to-header-type correspondence: V6 headers require EuclidV1 (asserted), V7_8 headers require EuclidV2 or Feynman (matched), and Validium domains properly bypass blob DA. The assertions validate internal invariants.

However, the `unreachable!` macros at lines 167-171 and 174 assume these mismatches cannot occur. If there's any possibility of untrusted input or logic errors producing unexpected fork/header combinations, consider returning a `Result` instead of panicking. This would allow graceful failure rather than process termination in production.
Actionable comments posted: 0
🧹 Nitpick comments (1)
coordinator/internal/logic/verifier/mock.go (1)
11-12: Consider naming the ignored parameter for clarity.

The second parameter is intentionally ignored in this mock implementation, but naming it (e.g., `useValidiumMode`) would improve readability and make the relationship to the real verifier implementation more explicit. Additionally, update the function comment to document the new parameter.

Apply this diff to improve clarity:

```diff
-// NewVerifier Sets up a mock verifier.
-func NewVerifier(cfg *config.VerifierConfig, _ bool) (*Verifier, error) {
+// NewVerifier Sets up a mock verifier.
+// The useValidiumMode parameter is ignored in this mock implementation.
+func NewVerifier(cfg *config.VerifierConfig, useValidiumMode bool) (*Verifier, error) {
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `coordinator/internal/logic/verifier/mock.go` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
coordinator/internal/logic/verifier/mock.go (3)
- coordinator/internal/logic/verifier/verifier.go (1): `NewVerifier` (77-105)
- coordinator/internal/config/config.go (1): `VerifierConfig` (75-79)
- coordinator/internal/logic/verifier/types.go (1): `Verifier` (11-17)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: check
🔇 Additional comments (1)
coordinator/internal/logic/verifier/mock.go (1)
22-40: LGTM!

The mock verification methods remain unchanged and continue to provide appropriate test doubles for the real verifier. The simple logic (returning success except for explicitly invalid proofs) is suitable for mock testing purposes.
Actionable comments posted: 2
🧹 Nitpick comments (2)
build/dockerfiles/coordinator-proxy.Dockerfile (1)
20-22: Consider minimizing installed utilities for production readiness.

The utilities (vim, netcat-openbsd, net-tools, curl, jq) are useful for debugging but add unnecessary bloat to a production image. Consider creating separate debug and production variants, or removing tools that operators don't actively need.
.github/workflows/docker.yml (1)
399-401: Consider adding multi-platform build support.

Most other jobs in this workflow specify `platforms: linux/amd64,linux/arm64` (see gas_oracle, rollup_relayer, blob_uploader, etc.). The coordinator-proxy job omits this. Verify whether single-platform deployment is intentional (like coordinator-api) or if multi-platform support should be added for consistency.

If multi-platform support is desired, add it to the build configuration:

```diff
       - name: Build and push
         uses: docker/build-push-action@v4
         env:
           ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
           REPOSITORY: coordinator-proxy
           IMAGE_TAG: ${{ github.ref_name }}
         with:
           context: .
           file: ./build/dockerfiles/coordinator-proxy.Dockerfile
+          platforms: linux/amd64,linux/arm64
           push: true
           tags: |
             scrolltech/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
             ${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- `.github/workflows/docker.yml` (1 hunks)
- `build/dockerfiles/coordinator-proxy.Dockerfile` (1 hunks)
- `build/dockerfiles/coordinator-proxy.Dockerfile.dockerignore` (1 hunks)
- `coordinator/internal/utils/codec_validium.go` (1 hunks)
- `crates/libzkp_c/src/lib.rs` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- coordinator/internal/utils/codec_validium.go
🧰 Additional context used
🧬 Code graph analysis (1)
crates/libzkp_c/src/lib.rs (4)
- crates/libzkp_c/src/utils.rs (1): `c_char_to_str` (3-6)
- crates/l2geth/src/rpc_client.rs (1): `get_client` (81-87)
- crates/l2geth/src/lib.rs (1): `get_client` (14-19)
- crates/libzkp/src/lib.rs (1): `checkout_chunk_task` (40-49)
🪛 actionlint (1.7.8)
.github/workflows/docker.yml
370-370: the runner of "docker/setup-qemu-action@v2" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
372-372: the runner of "docker/setup-buildx-action@v2" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
374-374: the runner of "docker/login-action@v2" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
393-393: the runner of "docker/build-push-action@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: check
- GitHub Check: tests
🔇 Additional comments (4)
build/dockerfiles/coordinator-proxy.Dockerfile.dockerignore (1)
1-8: LGTM!

The .dockerignore patterns are appropriate and follow Docker best practices, keeping the build context lean.
build/dockerfiles/coordinator-proxy.Dockerfile (1)
1-26: LGTM!

The multi-stage build is well-structured. The CGO_LDFLAGS propagation to the final image is correct and necessary for runtime compatibility. The version check on line 24 provides basic validation of the compiled binary.
crates/libzkp_c/src/lib.rs (2)
155-156: LGTM: Function signature follows FFI conventions.

The addition of the `decryption_key` and `decryption_key_len` parameters follows the same pattern as `expected_vk`/`expected_vk_len`, maintaining consistency with the existing FFI design.
176-176: LGTM: Function call correctly updated.

The call to `libzkp::checkout_chunk_task` has been properly updated to include the `decryption_key` parameter, matching the updated function signature shown in the relevant code snippets.
Actionable comments posted: 1
♻️ Duplicate comments (1)
rollup/internal/controller/watcher/l2_watcher.go (1)
132-140: Critical: Don't assume L1 message txs occupy the first slots.

The code counts L1 message transactions by iterating through all transactions (lines 107-112) but then assumes the first `count` transactions are L1 messages. This is incorrect: user transactions can precede L1 message transactions in a block. Accessing `blockTxs[ii].AsL1MessageTx()` when `blockTxs[ii]` is not an L1 message will cause a panic or incorrect behavior.

Apply this diff to collect the actual L1 message indices first:

```diff
-	for ii := 0; ii < count; ii++ {
-		// sanity check
-		if blockTxs[ii].AsL1MessageTx().QueueIndex != txs[ii].AsL1MessageTx().QueueIndex {
-			return fmt.Errorf("L1 message queue index mismatch at index %d: expected %d, got %d", ii, blockTxs[ii].AsL1MessageTx().QueueIndex, txs[ii].AsL1MessageTx().QueueIndex)
-		}
-
-		log.Info("Replacing L1 message tx in validium mode", "index", ii, "queueIndex", txs[ii].AsL1MessageTx().QueueIndex, "decryptedTxHash", blockTxs[ii].Hash().Hex(), "originalTxHash", txs[ii].Hash().Hex())
-		blockTxs[ii] = txs[ii]
-	}
+	l1Indices := make([]int, 0, count)
+	for idx, tx := range blockTxs {
+		if tx.IsL1MessageTx() {
+			l1Indices = append(l1Indices, idx)
+		}
+	}
+	if len(l1Indices) != len(txs) {
+		return fmt.Errorf("L1 message count mismatch: expected %d, got %d", len(l1Indices), len(txs))
+	}
+	for i, idx := range l1Indices {
+		if blockTxs[idx].AsL1MessageTx().QueueIndex != txs[i].AsL1MessageTx().QueueIndex {
+			return fmt.Errorf("L1 message queue index mismatch at position %d: expected %d, got %d", idx, blockTxs[idx].AsL1MessageTx().QueueIndex, txs[i].AsL1MessageTx().QueueIndex)
+		}
+		log.Info("Replacing L1 message tx in validium mode", "position", idx, "queueIndex", txs[i].AsL1MessageTx().QueueIndex, "decryptedTxHash", blockTxs[idx].Hash().Hex(), "originalTxHash", txs[i].Hash().Hex())
+		blockTxs[idx] = txs[i]
+	}
```
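The index-collection approach recommended here can be exercised in isolation; a simplified Go model with a stand-in transaction type (the names are illustrative, not the repo's types):

```go
package main

import "fmt"

// tx is a stand-in for a block transaction: only whether it is an L1
// message and its queue index matter for this sketch.
type tx struct {
	isL1       bool
	queueIndex uint64
}

// l1MessagePositions returns the positions of L1 message txs in block order,
// without assuming they occupy the first slots.
func l1MessagePositions(blockTxs []tx) []int {
	var idxs []int
	for i, t := range blockTxs {
		if t.isL1 {
			idxs = append(idxs, i)
		}
	}
	return idxs
}

func main() {
	// A user tx precedes the L1 messages: the case the review flags.
	blockTxs := []tx{{false, 0}, {true, 10}, {false, 0}, {true, 11}}
	fmt.Println(l1MessagePositions(blockTxs)) // [1 3]
}
```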
🧹 Nitpick comments (1)
rollup/internal/controller/watcher/l2_watcher_test.go (1)
52-74: Consider using dynamic block selection.

The test uses a hardcoded block hash (`0xc80cf12883341827d71c08f734ba9a9d6da7e59eb16921d26e6706887e552c74`) that may not exist on the target network specified by `RPC_ENDPOINT_URL`. Consider fetching a recent block dynamically, or documenting which network this hash corresponds to.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- `common/testcontainers/testcontainers.go` (2 hunks)
- `rollup/cmd/permissionless_batches/app/app.go` (2 hunks)
- `rollup/cmd/rollup_relayer/app/app.go` (5 hunks)
- `rollup/internal/controller/watcher/l2_watcher.go` (5 hunks)
- `rollup/internal/controller/watcher/l2_watcher_test.go` (3 hunks)
- `rollup/internal/controller/watcher/watcher_test.go` (3 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-10-18T06:49:24.796Z
Learnt from: Thegaram
Repo: scroll-tech/scroll PR: 1746
File: rollup/internal/controller/sender/sender.go:637-640
Timestamp: 2025-10-18T06:49:24.796Z
Learning: In the file `rollup/internal/controller/sender/sender.go`, the resubmission logic in `createReplacingTransaction` does not convert V0 blob sidecars to V1 when resubmitting transactions after the Fusaka upgrade. This is an accepted edge case because it's unlikely to occur, geth is expected to handle it gracefully, and manual recovery is available if needed.
Applied to files:
- rollup/cmd/rollup_relayer/app/app.go
- rollup/internal/controller/watcher/l2_watcher.go
📚 Learning: 2025-07-29T16:38:24.647Z
Learnt from: colinlyguo
Repo: scroll-tech/scroll PR: 1714
File: rollup/internal/controller/relayer/l2_relayer.go:1548-1555
Timestamp: 2025-07-29T16:38:24.647Z
Learning: In rollup/internal/controller/relayer/l2_relayer.go, the validateBatchFields function should error out when GetBatchByHash fails to find a parent batch. This is intentional behavior - missing parent batches represent genuine error conditions that should halt batch submission processing. Genesis batch handling occurs separately from normal batch validation flow.
Applied to files:
- rollup/cmd/rollup_relayer/app/app.go
- rollup/cmd/permissionless_batches/app/app.go
🧬 Code graph analysis (3)
rollup/internal/controller/watcher/l2_watcher_test.go (1)
- rollup/internal/controller/watcher/l2_watcher.go (2): `NewL2WatcherClient` (45-64), `L2WatcherClient` (23-42)

rollup/cmd/rollup_relayer/app/app.go (5)
- rollup/internal/config/l2.go (1): `L2Config` (10-29)
- rollup/internal/controller/relayer/l2_relayer.go (1): `NewLayer2Relayer` (121-206)
- rollup/internal/config/relayer.go (1): `RelayerConfig` (63-88)
- rollup/internal/controller/watcher/l2_watcher.go (1): `NewL2WatcherClient` (45-64)
- rollup/internal/utils/confirmation.go (1): `GetLatestConfirmedBlockNumber` (18-56)

rollup/cmd/permissionless_batches/app/app.go (4)
- rollup/internal/config/l2.go (1): `L2Config` (10-29)
- rollup/internal/controller/watcher/l2_watcher.go (1): `NewL2WatcherClient` (45-64)
- rollup/internal/config/config.go (1): `Config` (20-25)
- rollup/internal/config/relayer.go (1): `RelayerConfig` (63-88)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: check
- GitHub Check: tests
🔇 Additional comments (9)
rollup/cmd/rollup_relayer/app/app.go (5)
15-15: LGTM: Import addition is correct.

The `rpc` package import is necessary for the new `rpc.Dial` pattern used at line 73.
73-77: LGTM: Dual client pattern correctly implemented.

The switch from `ethclient.Dial` to `rpc.Dial` plus an `ethclient.NewClient` wrapper is necessary because different downstream APIs have different client type requirements:

- `NewL2WatcherClient` requires the raw `*rpc.Client`
- `NewLayer2Relayer` and `GetLatestConfirmedBlockNumber` require `*ethclient.Client`

This pattern correctly satisfies both requirements.
105-105: LGTM: Correct client type passed to NewLayer2Relayer.

The change to pass `l2ethClient` (`*ethclient.Client`) instead of `l2client` correctly matches the function signature, which expects `*ethclient.Client`.
119-119: LGTM: Watcher initialization updated correctly for Validium mode.

The changes correctly:

- pass the raw `*rpc.Client` (`l2client`), which matches the expected function signature
- add the `validiumMode` boolean parameter from configuration, properly positioned in the parameter list

This aligns with the CLOAK/Validium feature additions.
149-149: LGTM: Correct client type passed to GetLatestConfirmedBlockNumber.

The change to pass `l2ethClient` correctly satisfies the `ethClient` interface expected by `GetLatestConfirmedBlockNumber`.

rollup/internal/controller/watcher/watcher_test.go (1)
67-69: LGTM!

The refactor to obtain an RPC client first and then wrap it in an ethclient is clean and aligns with the broader RPC-based client handling changes across the PR.
rollup/cmd/permissionless_batches/app/app.go (1)
94-100: LGTM!

The switch to RPC-based client creation and the addition of the `ValidiumMode` parameter align correctly with the configuration structure and the broader changes in this PR.

rollup/internal/controller/watcher/l2_watcher_test.go (1)
24-24: LGTM!

The updated constructor calls correctly pass the `validiumMode` parameter and use the RPC-based client, aligning with the broader refactor.

Also applies to: 46-49
rollup/internal/controller/watcher/l2_watcher.go (1)
44-64: LGTM!

The constructor correctly accepts an RPC client, wraps it in an ethclient, stores the RPC client reference for raw calls, and captures the `validiumMode` flag for conditional logic.
Actionable comments posted: 3
🧹 Nitpick comments (1)
coordinator/Makefile (1)
40-40: Inconsistent path resolution: `$(CURDIR)` used only in `localsetup`, not other build targets.

Lines 40 and 42 correctly switch to `$(CURDIR)` for robust path resolution, but lines 29, 32, 35, 49, 52 (and the test target at line 18) still use `$(PWD)`. `$(CURDIR)` is more reliable in scenarios involving symlinks or nested make invocations. Consider applying this consistently across all build targets:

```diff
 coordinator_api: $(LIBZKP_PATH) ## Builds the Coordinator api instance.
-	go build -ldflags "-X scroll-tech/common/version.ZkVersion=${ZK_VERSION}" -o $(PWD)/build/bin/coordinator_api ./cmd/api
+	go build -ldflags "-X scroll-tech/common/version.ZkVersion=${ZK_VERSION}" -o $(CURDIR)/build/bin/coordinator_api ./cmd/api

 coordinator_cron:
-	go build -ldflags "-X scroll-tech/common/version.ZkVersion=${ZK_VERSION}" -o $(PWD)/build/bin/coordinator_cron ./cmd/cron
+	go build -ldflags "-X scroll-tech/common/version.ZkVersion=${ZK_VERSION}" -o $(CURDIR)/build/bin/coordinator_cron ./cmd/cron

 coordinator_tool:
-	go build -ldflags "-X scroll-tech/common/version.ZkVersion=${ZK_VERSION}" -o $(PWD)/build/bin/coordinator_tool ./cmd/tool
+	go build -ldflags "-X scroll-tech/common/version.ZkVersion=${ZK_VERSION}" -o $(CURDIR)/build/bin/coordinator_tool ./cmd/tool
```

Similarly, update the mock targets (lines 49, 52), the test target (line 18), and the lint target (line 61).
Also applies to: 42-42
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- `coordinator/Makefile` (1 hunks)
- `tests/prover-e2e/Makefile` (2 hunks)
- `tests/prover-e2e/README.md` (1 hunks)
- `tests/prover-e2e/cloak-xen/genesis.json` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/prover-e2e/README.md
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
🔇 Additional comments (4)
coordinator/Makefile (1)
40-40: Verify the `-fL` flags are appropriate for config copying in local setup.

The `-f` (force overwrite) and `-L` (follow symlinks) flags are reasonable for a setup workflow, but ensure this behavior aligns with your local development practices. If `config.json` may be a symlink to an external config, or if overwriting should be guarded, consider the implications.
28-45: Verify GOOSE migration directory and dual invocation logic.

Line 44 invokes `${GOOSE_CMD} up` and then line 45 invokes `GOOSE_MIGRATION_DIR=conf ${GOOSE_CMD} up-to 100`. This runs two separate migration commands:

- one with the default migration directory (the current directory, or the `GOOSE_MIGRATION_DIR` env default)
- one with migrations from the `conf` directory, up to version 100

Confirm this dual invocation is intentional. If migrations were moved to `conf/`, consider consolidating to a single command:

```diff
 	${GOOSE_CMD} up
-	GOOSE_MIGRATION_DIR=conf ${GOOSE_CMD} up-to 100
+	GOOSE_MIGRATION_DIR=conf ${GOOSE_CMD} up
```

Otherwise, document why two separate migrations are needed.
52-56: `conf/` is a symbolic link, not a concrete directory with subdirectories.

The Makefile documents this on lines 17-18: `conf` is a target that instructs users to link either `sepolia` or `cloak-xen` as `conf`. Once this symlink is created (e.g., `ln -s sepolia conf`), the `conf/config.json` path resolves correctly. The config files exist at `sepolia/config.json` and `cloak-xen/config.json` and contain valid configurations. The codec flags (7 for Euclid, 8 for default) are properly passed as CLI arguments, not expected in the config file. The setup is intentional and correct.

Likely an incorrect or invalid review comment.
tests/prover-e2e/cloak-xen/genesis.json (1)
1-123: Confirm the genesis state root and L1 configuration values for the validium test environment.

Verification confirms that the genesis.json file is properly configured for validium-mode testing:

- ✓ config.json explicitly sets `"validium_mode": true`, confirming this is the intended test environment
- ✓ `useZktrie` is correctly set to `false` for validium mode
- ✓ The JSON structure is valid and all required genesis fields are present
- ✓ Pre-allocated system contracts and accounts are properly configured

However, the following values require manual verification:

- `genesisStateRoot` (`0x08d535cc60f40af5dd3b31e0998d7567c2d568b224bed2ba26070aeb078d1339`) and `missingHeaderFieldsSHA256` (`0x9062e2fa1200dca63bee1d18d429572f134f5f0c98cb4852f62fc394e33cf6e6`) are not validated against test execution code. Confirm these hashes match the expected values computed from the account allocations.
- The L1 configuration addresses (`l1MessageQueueV2Address`: `0xb5D7c8Ec8C7A72a0ed9782cB67Cc68b3915389a1`, `scrollChainAddress`: `0x283Eff377d7f38955c2aD1414DAE94e9A9b3270F`) should match the L1 test environment setup, if applicable for this validium test.

Ensure these values are consistent with the test harness and any L1 deployment configuration before merging.
Actionable comments posted: 0
♻️ Duplicate comments (2)
crates/libzkp/src/tasks/batch.rs (1)
347-348: Uncomment the assertion if the method is available.

The commented-out assertion should be uncommented to verify that the computed batch hash matches the provided one, as `BatchHeaderValidium` should have a `batch_hash()` method available.

Apply this diff:

```diff
-    // Sanity: computed batch hash equals the provided one (if method available)
-    // assert_eq!(v.header.batch_hash(), expected_batch_hash);
+    // Sanity: computed batch hash equals the provided one
+    assert_eq!(v.header.batch_hash(), expected_batch_hash);
```

common/types/message/message.go (1)
116-128: Add an `UnmarshalJSON` method to maintain round-trip compatibility.

The `RawBytes` type has a custom `MarshalJSON` that outputs a numeric array, but lacks the corresponding `UnmarshalJSON`. This breaks round-trip marshaling in Go and will cause failures when the coordinator or other Go components try to unmarshal these payloads.

A detailed implementation was suggested in a previous review. Please implement an `UnmarshalJSON` method that accepts both the new numeric-array format and legacy base64 strings for backward compatibility.
🧹 Nitpick comments (5)
coordinator/internal/logic/provertask/batch_prover_task.go (3)
260-260: Remove commented code.

The commented-out variable declaration serves no purpose and should be removed to keep the codebase clean.

Apply this diff:

```diff
-    // var chunkInfos []*message.ChunkInfo
```
331-336: Document and validate the hardcoded byte offsets for KZG data extraction.

The hardcoded offsets (64:112, 112:160) used to extract `KzgCommitment` and `KzgProof` from `BlobDataProof` are fragile and difficult to maintain. While the comment describes the memory layout, these magic numbers should be defined as named constants.

Consider defining constants at the package level:

```go
const (
	BlobDataProofKzgCommitmentOffset = 64
	BlobDataProofKzgCommitmentEnd    = 112
	BlobDataProofKzgProofOffset      = 112
	BlobDataProofKzgProofEnd         = 160
)
```

Then use them:

```diff
-taskDetail.KzgProof = &message.Byte48{Big: hexutil.Big(*new(big.Int).SetBytes(dbBatch.BlobDataProof[112:160]))}
-taskDetail.KzgCommitment = &message.Byte48{Big: hexutil.Big(*new(big.Int).SetBytes(dbBatch.BlobDataProof[64:112]))}
+taskDetail.KzgCommitment = &message.Byte48{Big: hexutil.Big(*new(big.Int).SetBytes(dbBatch.BlobDataProof[BlobDataProofKzgCommitmentOffset:BlobDataProofKzgCommitmentEnd]))}
+taskDetail.KzgProof = &message.Byte48{Big: hexutil.Big(*new(big.Int).SetBytes(dbBatch.BlobDataProof[BlobDataProofKzgProofOffset:BlobDataProofKzgProofEnd]))}
```
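Since slicing a short `BlobDataProof` would panic at runtime, a length guard is also worth considering. A hedged sketch of the idea (the helper name and error-return style are illustrative, not the coordinator's actual API):

```go
package main

import "fmt"

// The KZG commitment and proof are assumed to occupy bytes [64:112] and
// [112:160] of the blob data proof, per the review's described layout.
const blobDataProofMinLen = 160

// extractKzg returns the commitment and proof slices, erroring instead of
// panicking when the input is truncated.
func extractKzg(blobDataProof []byte) (commitment, proof []byte, err error) {
	if len(blobDataProof) < blobDataProofMinLen {
		return nil, nil, fmt.Errorf("blob data proof too short: got %d bytes, need %d",
			len(blobDataProof), blobDataProofMinLen)
	}
	return blobDataProof[64:112], blobDataProof[112:160], nil
}

func main() {
	buf := make([]byte, 160)
	c, p, err := extractKzg(buf)
	fmt.Println(len(c), len(p), err) // 48 48 <nil>

	_, _, err = extractKzg(buf[:100])
	fmt.Println(err != nil) // true
}
```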
274-282: Fix variable naming inconsistency.

The variable `taskBytesWithchunkProofs` has inconsistent casing. The 'c' in "chunk" should be capitalized for consistency.

Apply this diff:

```diff
-	taskBytesWithchunkProofs, err := json.Marshal(taskDetail)
+	taskBytesWithChunkProofs, err := json.Marshal(taskDetail)
 	if err != nil {
 		return nil, fmt.Errorf("failed to marshal chunk proofs, taskID:%s err:%w", task.TaskID, err)
 	}
 	taskMsg := &coordinatorType.GetTaskSchema{
 		TaskID:       task.TaskID,
 		TaskType:     int(message.ProofTypeBatch),
-		TaskData:     string(taskBytesWithchunkProofs),
+		TaskData:     string(taskBytesWithChunkProofs),
 		HardForkName: hardForkName,
 	}
```

crates/libzkp/src/tasks/batch.rs (1)
219-226: Improve panic message for better debugging.

The panic message "unexpected header type" could be more informative. Consider including the actual header type to aid debugging.

Apply this diff:

```diff
 match &self.batch_header {
     BatchHeaderV::Validium(h) => assert_eq!(
         h.header.batch_hash(),
         h.batch_hash,
         "calculated batch hash match which from coordinator"
     ),
-    _ => panic!("unexpected header type"),
+    _ => panic!("unexpected header type: expected Validium, found {}", self.batch_header),
 }
```

common/types/message/message.go (1)
138-138: Remove commented-out deprecated field.

Based on previous review discussions confirming this field is deprecated with no references, the commented-out `TxBytes` line should be removed to keep the codebase clean.

Apply this diff:

```diff
 	IsPadding   bool        `json:"is_padding"`
-	// TxBytes []byte `json:"tx_bytes"`
 	TxBytesHash common.Hash `json:"tx_data_digest"`
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- common/types/message/message.go (4 hunks)
- coordinator/internal/logic/provertask/batch_prover_task.go (2 hunks)
- crates/libzkp/src/lib.rs (2 hunks)
- crates/libzkp/src/tasks/batch.rs (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/libzkp/src/lib.rs
🧰 Additional context used
🧬 Code graph analysis (2)
crates/libzkp/src/tasks/batch.rs (6)
- common/types/message/message.go (1): `ChunkInfo` (131-148)
- coordinator/internal/utils/version.go (1): `Version` (16-40)
- crates/libzkp/src/tasks/bundle.rs (1): `precheck_and_build_metadata` (60-72)
- crates/libzkp/src/tasks/chunk.rs (1): `precheck_and_build_metadata` (185-190)
- crates/libzkp/src/tasks/batch/utils.rs (4): `to_blob` (27-41), `blob_to_kzg_commitment` (44-48), `get_versioned_hash` (54-58), `get_kzg_proof` (66-74)
- crates/libzkp/src/tasks.rs (1): `check_aggregation_proofs` (24-36)
coordinator/internal/logic/provertask/batch_prover_task.go (6)
- common/types/message/message.go (3): `OpenVMChunkProof` (180-189), `BatchTaskDetail` (94-105), `Byte48` (51-53)
- coordinator/internal/types/get_task.go (1): `GetTaskSchema` (12-19)
- rollup/internal/orm/batch.go (2): `Batch` (25-75), `Batch` (83-85)
- coordinator/internal/orm/batch.go (2): `Batch` (18-72), `Batch` (80-82)
- coordinator/internal/utils/version.go (1): `Version` (16-40)
- coordinator/internal/utils/codec_validium.go (2): `CodecVersion` (10-10), `FromVersion` (38-40)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
🔇 Additional comments (10)
coordinator/internal/logic/provertask/batch_prover_task.go (2)
312-317: Verify the early return behavior for unsupported codec versions.

When the codec version is outside the expected range (V3, V4, V6, V7, V8), the function returns a partially initialized `taskDetail` (with only `Version`, `ChunkProofs`, and `ForkName` populated, but missing `BatchHeader` and KZG-related fields). A past review questioned why this doesn't error.

Please confirm whether returning a partial `taskDetail` is the intended behavior or if this should return an error instead.
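If erroring is preferred, the fallthrough could be made explicit with a version check up front. A sketch under the assumption that the codec version is a small integer enum (the type and constant names here are illustrative, not the coordinator's actual identifiers):

```go
package main

import "fmt"

// CodecVersion stands in for the coordinator's real codec-version type.
type CodecVersion int

const (
	CodecV3 CodecVersion = 3
	CodecV4 CodecVersion = 4
	CodecV6 CodecVersion = 6
	CodecV7 CodecVersion = 7
	CodecV8 CodecVersion = 8
)

// checkCodecVersion rejects versions outside the supported set instead of
// letting the caller continue with a partially initialized task detail.
func checkCodecVersion(v CodecVersion) error {
	switch v {
	case CodecV3, CodecV4, CodecV6, CodecV7, CodecV8:
		return nil
	default:
		return fmt.Errorf("unsupported codec version: %d", v)
	}
}

func main() {
	fmt.Println(checkCodecVersion(CodecV7) == nil) // true
	fmt.Println(checkCodecVersion(5))              // unsupported codec version: 5
}
```

Failing fast here turns a silent partial payload into a diagnosable error at the task-assembly boundary.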
299-299: Verification complete: `version()` method is correctly implemented.

The `bp.version()` method on `BatchProverTask` is inherited from `BaseProverTask` and correctly delegates to `utils.Version(hardForkName, b.validiumMode())` at line 72 of coordinator/internal/logic/provertask/prover_task.go. The `validiumMode()` helper properly retrieves the ValidiumMode setting from config. Version byte calculation is consistent across the codebase.

crates/libzkp/src/tasks/batch.rs (3)
23-87: LGTM! Good use of Display for error messages.

The `BatchHeaderValidiumWithHash` struct and the `Validium` variant addition to `BatchHeaderV` are well-structured. The `Display` implementation is particularly useful for providing informative error messages in the `unreachable!` macros throughout the code.
140-274: LGTM! Version-driven logic is well-structured.

The version-driven approach to handling Validium vs non-Validium paths is well-implemented with appropriate assertions and checks. The reference header construction based on `(domain, stf_version)` tuples provides good type safety, and the block hash zeroing for the Scroll domain is correctly implemented.
276-288: LGTM! Version parameter correctly threaded through.

The `precheck_and_build_metadata` method correctly passes the version to `check_aggregation_proofs`, which is consistent with the version-driven validation approach introduced in this PR.

common/types/message/message.go (5)
41-48: LGTM! Version field additions support version-driven processing.

The `Version` and `PostMsgQueueHash` fields added to `ChunkTaskDetail` are well-placed and consistent with the version-driven approach introduced across the codebase.
94-105: LGTM! Pointer types for optional KZG fields are appropriate.

The changes to `BatchTaskDetail` are well-structured:

- The `Version` field supports version-driven processing
- KZG fields changed to pointers (`*Byte48`) with `omitempty` correctly model their optional nature
- `ChallengeDigest` as `interface{}` provides flexibility for different hash types
144-148: LGTM! New fields support block hash tracking and encryption.

The additions of `PrevBlockhash`, `PostBlockhash`, and `EncryptionKey` to `ChunkInfo` align with:
- The block hash handling logic in the Rust code (zeroing for Scroll domain)
- The CLOAK privacy solution mentioned in the PR title
200-211: LGTM! EncryptionKey field appropriately added.

The `EncryptionKey` field addition to `OpenVMBatchInfo` is correctly implemented. As clarified in past reviews, this holds the public encryption key (not the secret decryption key), which is appropriate for these public data structures.
261-272: LGTM! EncryptionKey consistently propagated.

The `EncryptionKey` field is correctly added to `OpenVMBundleInfo`, maintaining consistency with the similar addition to `OpenVMBatchInfo` and supporting the CLOAK privacy feature.
Purpose or design rationale of this PR
Describe your change. Make sure to answer these three questions: What does this PR do? Why does it do it? How does it do it?
PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
Deployment tag versioning
Has `tag` in common/version.go been updated, or have you added the `bump-version` label to this PR?

Breaking change label
Does this PR have the `breaking-change` label?

Summary by CodeRabbit
New Features
Configuration Updates
Chores