feat(L1 follower and rollup verifier): support CodecV7 in L1 follower mode and rollup verifier #1105
Conversation
Walkthrough

This pull request introduces versioning and enhanced error handling for committed batch metadata within the rollup event processing system. It adds new fields and types (e.g., Version, ChunkBlockRanges, LastL1MessageQueueHash) to distinguish between legacy (V0) and new (V7) metadata. The changes update encoding/decoding routines and test cases accordingly and modify commit batch functions (V1 and V7) to better handle blob hashes and block timestamps. Additionally, dependency versions have been updated, and rollup synchronization logic now validates batches with additional parent metadata.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller
    participant RCM as ReadCommittedBatchMeta
    participant DecV7 as Decoder (V7)
    participant DecV0 as Decoder (V0)
    Caller->>RCM: Request batch metadata read
    RCM->>DecV7: Attempt V7 decode
    alt V7 decode successful
        DecV7-->>RCM: Return V7 metadata
    else V7 decode fails
        RCM->>DecV0: Attempt V0 decode
        DecV0-->>RCM: Return V0 metadata or error
    end
    RCM-->>Caller: Return metadata (or error)
```
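The decode flow above can be sketched as follows. This is a minimal, hypothetical illustration of the try-new-then-legacy pattern: the struct names mirror those mentioned in the walkthrough, but the real code lives in `core/rawdb` and uses RLP encoding; JSON is used here only to keep the sketch self-contained.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the stored metadata formats (illustrative only).
type committedBatchMetaV7 struct {
	Version                uint8  `json:"version"`
	LastL1MessageQueueHash string `json:"lastL1MessageQueueHash"`
}

type committedBatchMetaV0 struct {
	Version uint8 `json:"version"`
}

// readCommittedBatchMeta mirrors the sequence diagram: attempt the new
// format first and accept it only when the version field is >= 7;
// otherwise fall back to decoding as legacy V0.
func readCommittedBatchMeta(data []byte) (version uint8, queueHash string, err error) {
	var v7 committedBatchMetaV7
	if err := json.Unmarshal(data, &v7); err == nil && v7.Version >= 7 {
		return v7.Version, v7.LastL1MessageQueueHash, nil
	}
	var v0 committedBatchMetaV0
	if err := json.Unmarshal(data, &v0); err != nil {
		return 0, "", fmt.Errorf("failed to decode committed batch metadata: %w", err)
	}
	return v0.Version, "", nil
}

func main() {
	v, h, _ := readCommittedBatchMeta([]byte(`{"version":7,"lastL1MessageQueueHash":"0xabc"}`))
	fmt.Printf("%d %q\n", v, h) // prints: 7 "0xabc"
	v, h, _ = readCommittedBatchMeta([]byte(`{"version":4}`))
	fmt.Printf("%d %q\n", v, h) // prints: 4 ""
}
```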
```mermaid
sequenceDiagram
    participant ES as Rollup Event Stream
    participant CBS as CalldataBlobSource
    participant Helper as getAndAppendCommitBatchDA
    ES->>CBS: New rollup event arrives
    CBS->>CBS: Check commit transaction hash
    alt Transaction changed
        CBS->>Helper: Process and append previous commit events
        Helper-->>CBS: Return aggregated commit data
    else Same transaction
        CBS->>CBS: Append commit event to current batch
    end
    CBS-->>ES: Return processed batch data
```
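The buffering shown in this diagram amounts to grouping consecutive commit events by transaction hash. Below is a minimal sketch of that grouping logic; `commitEvent` and its fields are hypothetical stand-ins, not the repo's actual types.

```go
package main

import "fmt"

// commitEvent is a pared-down, illustrative stand-in for the rollup
// commit events handled by CalldataBlobSource.
type commitEvent struct {
	BatchIndex uint64
	TxHash     string
}

// groupByTransaction buffers consecutive events that share a transaction
// hash and flushes the buffered group as soon as an event from a
// different transaction arrives, mirroring the diagram's alt branches.
func groupByTransaction(events []commitEvent) [][]commitEvent {
	var groups [][]commitEvent
	var pending []commitEvent
	for _, ev := range events {
		if len(pending) > 0 && pending[0].TxHash != ev.TxHash {
			groups = append(groups, pending)
			pending = nil
		}
		pending = append(pending, ev)
	}
	if len(pending) > 0 {
		groups = append(groups, pending)
	}
	return groups
}

func main() {
	events := []commitEvent{
		{1, "0xaa"}, {2, "0xaa"}, {3, "0xaa"}, // one tx commits three batches
		{4, "0xbb"}, // a new tx starts a new group
	}
	for _, g := range groupByTransaction(events) {
		fmt.Println(len(g), g[0].TxHash)
	}
	// prints:
	// 3 0xaa
	// 1 0xbb
}
```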
LGTM.
Actionable comments posted: 0
🧹 Nitpick comments (1)
`rollup/rollup_sync_service/rollup_sync_service_test.go` (1)

**661-808**: Well-structured test coverage for CodecV7.

The test function provides comprehensive coverage for CodecV7 batch validation, including:
- Single and multiple batch validation
- Message queue hash progression
- Parent batch hash linking
- Block number continuity
Consider adding test cases for:
- Error scenarios (invalid parent hash, incorrect message queue hash)
- Edge cases (empty blocks, maximum message count)
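A table-driven layout is a natural fit for the suggested error cases. The sketch below is purely illustrative: `batchInput` and `validateBatch` are hypothetical stand-ins for the repo's actual validation API, used only to show the shape such tests could take.

```go
package main

import "fmt"

// batchInput is a hypothetical, minimal input to a batch validator.
type batchInput struct {
	ParentHash       string
	MessageQueueHash string
}

// validateBatch checks the two fields the suggested error cases target:
// parent batch hash linking and message queue hash progression.
func validateBatch(in batchInput, wantParent, wantQueue string) error {
	if in.ParentHash != wantParent {
		return fmt.Errorf("invalid parent hash: %s", in.ParentHash)
	}
	if in.MessageQueueHash != wantQueue {
		return fmt.Errorf("incorrect message queue hash: %s", in.MessageQueueHash)
	}
	return nil
}

func main() {
	cases := []struct {
		name    string
		in      batchInput
		wantErr bool
	}{
		{"valid batch", batchInput{"0xp", "0xq"}, false},
		{"invalid parent hash", batchInput{"0xbad", "0xq"}, true},
		{"incorrect message queue hash", batchInput{"0xp", "0xbad"}, true},
	}
	for _, tc := range cases {
		err := validateBatch(tc.in, "0xp", "0xq")
		fmt.Printf("%s: gotErr=%v wantErr=%v\n", tc.name, err != nil, tc.wantErr)
	}
}
```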
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)

- `go.sum` is excluded by `!**/*.sum`

📒 Files selected for processing (3)

- `go.mod` (1 hunks)
- `rollup/rollup_sync_service/rollup_sync_service_test.go` (15 hunks)
- `rollup/rollup_sync_service/testdata/blockTrace_06.json` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- go.mod
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test
🔇 Additional comments (10)
rollup/rollup_sync_service/rollup_sync_service_test.go (2)
**256-257**: LGTM! Consistent updates to test metadata.

The changes consistently update the `CommittedBatchMeta` instances across all test functions to use only the `Version` field, removing the `BlobVersionedHashes` field. This aligns with the PR objectives of supporting CodecV7.

Also applies to: 283-285, 320-322, 347-349, 383-385, 410-412, 447-449, 474-476, 505-507, 532-534, 558-560, 584-586, 620-622, 624-626, 628-630, 632-634
**819-822**: LGTM! Clean helper function implementation.

The helper function is concise and effectively serves its purpose in the CodecV7 tests.
rollup/rollup_sync_service/testdata/blockTrace_06.json (8)
**1-9**: Coinbase Object Structure – Looks Consistent

The Coinbase section defines the expected fields (address, nonce, balance, keccakCodeHash, poseidonCodeHash, codeSize) using a mix of numeric and hex string formats. Verify that the numeric fields (like nonce and codeSize) match the expected types in downstream processing.

**10-28**: Header Data Validation – Formatting and Null Field Check

The header provides key block metadata (parentHash, miner, stateRoot, transactionRoot, etc.) using consistent hexadecimal representations. Note that "baseFeePerGas" is null; please confirm that this is intentional (e.g. reflecting a pre-EIP-1559 state or a specific test scenario).

**29-31**: Row Consumption Field – Empty Array as Expected

The "row_consumption" field is represented as an empty array. This appears correct for test data with no row consumption output. Ensure that consuming code can correctly handle an empty list.

**32-63**: Transactions Array – Consistency and Duplicate txHash Review

Two transactions are provided with detailed fields (type, nonce, txHash, gas, gasPrice, from, to, chainId, value, data, isCreate, v, r, s). Note that both transactions share the same "txHash" value. Verify that duplicate txHash entries are intentional in this test scenario and that downstream logic is prepared to handle such cases if they occur.

**65-118**: StorageTrace Block – Proofs and Case Sensitivity Considerations

The "storageTrace" object correctly lists "rootBefore", "rootAfter", and a "proofs" dictionary keyed by addresses. The proofs arrays appear comprehensive. Observe that some address keys use mixed-case formatting (e.g. "0x1a258d17bF244C4dF02d40343a7626A9D321e105"); ensure that the system's address comparison logic accounts for potential case variations.

**120-168**: StorageProofs Structure – Detailed and Well-Formed

The "storageProofs" section provides nested proofs per address and includes extra nested data (e.g. an "extraData" object with a "proofList"). The structure is detailed and appears to conform to the test data schema. Double-check that all nested keys and arrays match the expected format for the rollup verifier's processing routines.

**169-868**: ExecutionResults – Comprehensive Trace with Detailed StructLogs

This section captures the execution outcome including gas usage, failure indicator, sender and receiver details, post-execution account states, the contract bytecode, and an extensive "structLogs" array of EVM opcodes. In the structLogs, note that one entry (around line 835) registers an "error" with "out of gas" – confirm that this is an intentional negative test scenario. Overall, the diverse opcode logs provide a rich trace; however, given the volume, verify that the testing framework and log analyzers are optimized to handle such verbose data.

**869-870**: Withdraw Trie Root – Zeroed Value Verification

The "withdraw_trie_root" field is set to a string of zeros. Ensure that a zeroed root is the desired output in this test case (e.g. indicating an absence of withdrawal data) and is consistent with other parts of the system's expectations for this field.
Actionable comments posted: 0
🧹 Nitpick comments (2)
core/rawdb/accessors_rollup_event.go (2)
**196-207**: Enhance version validation.

While the version check exists, consider adding an upper bound check to ensure the version is not higher than the maximum supported version.

```diff
 if encoding.CodecVersion(cbm7.Version) < encoding.CodecV7 {
 	return nil, fmt.Errorf("unexpected committed batch metadata version: batch index %d, version %d", batchIndex, cbm7.Version)
 }
+if encoding.CodecVersion(cbm7.Version) > encoding.MaxCodecVersion {
+	return nil, fmt.Errorf("unsupported committed batch metadata version: batch index %d, version %d", batchIndex, cbm7.Version)
+}
```
**209-219**: Consider adding version validation for V0.

Similar to V7 validation, consider adding version validation for V0 to ensure data integrity.

```diff
 if err = rlp.Decode(bytes.NewReader(data), cbm0); err != nil {
 	return nil, fmt.Errorf("failed to decode committed batch metadata: batch index %d, err: %w", batchIndex, err)
 }
+if encoding.CodecVersion(cbm0.Version) >= encoding.CodecV7 {
+	return nil, fmt.Errorf("unexpected version for V0 format: batch index %d, version %d", batchIndex, cbm0.Version)
+}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
core/rawdb/accessors_rollup_event.go
(3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test
🔇 Additional comments (4)
core/rawdb/accessors_rollup_event.go (4)
**23-29**: LGTM! Well-structured metadata versioning.

The `CommittedBatchMeta` struct is well-designed with clear versioning support and backward compatibility considerations. The comment for the new field clearly indicates its version requirement.

**31-36**: LGTM! Good backward compatibility.

The `committedBatchMetaV0` struct maintains backward compatibility while clearly documenting the unused field with a helpful comment.

**162-184**: LGTM! Clean version-based encoding.

The function elegantly handles different versions of metadata while maintaining backward compatibility. The error handling is robust and the version check is properly implemented.

**187-194**: LGTM! Improved error handling.

Good improvement in error handling by returning errors instead of using `log.Crit`. This allows better error recovery in the calling code.
6d5af23
Actionable comments posted: 0
🧹 Nitpick comments (1)
rollup/da_syncer/da/calldata_blob_source.go (1)
**234-259**: Consider using `default` for CodecV7 and above.

Based on a previous review comment, using `default` instead of an explicit case for V7 would make it easier to handle future ZK bug fixes that might require a new verifier version without changing the batch format.

Apply this diff to improve future maintainability:

```diff
-	case 7:
+	default: // CodecVersion 7 and above
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
rollup/da_syncer/da/calldata_blob_source.go
(3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test
🔇 Additional comments (5)
rollup/da_syncer/da/calldata_blob_source.go (5)
**11-11**: LGTM!

The addition of the `common` package import is necessary for using the `Hash` type in the new implementation.

**99-121**: LGTM! Well-structured implementation for batch processing.

The implementation effectively tracks and processes commit events from the same transaction together, with clear comments explaining the logic. The helper function `getAndAppendCommitBatchDA` provides a clean way to process batched events.

**134-144**: LGTM! Robust handling of commit events.

The code correctly identifies and processes commit events from different transactions while maintaining the order of events. The implementation aligns with the comment from the previous review about handling multiple commit batch events belonging to the same transaction.

**145-186**: LGTM! Consistent cleanup of pending commit events.

The code ensures that any pending commit events are processed before handling revert or finalize events, maintaining data consistency.

**191-233**: LGTM! Thorough validation of commit events.

The implementation includes comprehensive validation of commit events, ensuring:

- Non-empty commit events list
- Matching transaction hashes
- Matching block numbers and hashes
- Sequential batch indices
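The four checks above can be sketched as a single validation pass. This is an illustrative reconstruction, not the repo's actual code: `commitEvent` and its fields are hypothetical stand-ins.

```go
package main

import "fmt"

// commitEvent is a minimal, illustrative stand-in for the rollup commit
// events validated before building batch DA entries.
type commitEvent struct {
	BatchIndex  uint64
	TxHash      string
	BlockNumber uint64
	BlockHash   string
}

// validateCommitEvents performs the four checks listed above: non-empty
// list; every event sharing the first event's transaction hash, block
// number and block hash; and batch indices increasing by exactly one.
func validateCommitEvents(events []commitEvent) error {
	if len(events) == 0 {
		return fmt.Errorf("commit events list is empty")
	}
	first := events[0]
	for i, ev := range events[1:] {
		if ev.TxHash != first.TxHash {
			return fmt.Errorf("tx hash mismatch at event %d", i+1)
		}
		if ev.BlockNumber != first.BlockNumber || ev.BlockHash != first.BlockHash {
			return fmt.Errorf("block mismatch at event %d", i+1)
		}
		if ev.BatchIndex != first.BatchIndex+uint64(i)+1 {
			return fmt.Errorf("non-sequential batch index at event %d", i+1)
		}
	}
	return nil
}

func main() {
	ok := []commitEvent{{10, "0xaa", 100, "0xb1"}, {11, "0xaa", 100, "0xb1"}}
	bad := []commitEvent{{10, "0xaa", 100, "0xb1"}, {12, "0xaa", 100, "0xb1"}}
	fmt.Println(validateCommitEvents(ok))         // prints: <nil>
	fmt.Println(validateCommitEvents(bad) != nil) // prints: true
}
```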
ca408bc
1. Purpose or design rationale of this PR

This PR introduces `CodecV7` for the `EuclidV2` hardfork to the L1 follower mode and rollup verifier. Both components reuse the same underlying `CalldataBlobSource`. Additionally, the rollup verifier is modified to support the new batch structure when verifying batches locally vs received from L1.

2. PR title

Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:

3. Deployment tag versioning

Has the version in `params/version.go` been updated?

4. Breaking change label

Does this PR have the `breaking-change` label?

Summary by CodeRabbit
New Features
Refactor
Tests
Chore