Move block context to blob #649
Conversation
```
# Conflicts:
#	bindings/bin/rollup_deployed.hex
#	bindings/bindings/rollup.go
#	bindings/bindings/rollup_more.go
#	node/derivation/derivation.go
#	oracle/oracle/batch.go
```
Refactor derivation
Co-authored-by: vincent <vincent.ch.cn@gmail.com>
Co-authored-by: chengwenxi <22697326+chengwenxi@users.noreply.github.com>
Walkthrough

This pull request standardizes dependencies and updates submodules to reference newer commits. It revises multiple smart contract files by fixing Solidity version pragmas to =0.8.24, and alters the batch processing logic in both contracts and node/executor components to use new parameters (lastBlockNumber and numL1Messages) instead of the deprecated blockContexts. In addition, the changes improve batch header encoding via new codec libraries, adjust deployment configuration and Docker base images, and modify prover and tx-submitter modules by refining error handling, metric registration, and utility functions.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Executor
    participant BatchCache
    participant L2Client
    participant RollupContract
    Executor->>BatchCache: Retrieve current block bytes & txs
    BatchCache-->>Executor: Return batch data (including lastBlockNumber, numL1Messages)
    Executor->>L2Client: Call commitBatch(lastBlockNumber, numL1Messages, signatures)
    L2Client-->>Executor: Batch commit confirmation
    Executor->>RollupContract: Update state & metrics
```
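To make the new flow concrete, here is a minimal Go sketch of the executor-side commit call after this change. The `LastBlockNumber` and `NumL1Messages` field names follow the diffs quoted later in this review; the surrounding types are simplified assumptions, not the project's actual API.

```go
package executor

import (
	"context"
	"fmt"
	"time"
)

// RollupBatch mirrors the reworked batch shape: an explicit last block
// number and L1 message count instead of per-block contexts.
// (Simplified stand-in for illustration only.)
type RollupBatch struct {
	Index           uint64
	LastBlockNumber uint64
	NumL1Messages   uint16
	Sidecar         []byte // blob sidecar now carrying the block contexts
}

// L2Client is a hypothetical stand-in for the real client interface.
type L2Client interface {
	CommitBatch(ctx context.Context, batch *RollupBatch) error
}

// CommitSealed submits a sealed batch with a bounded context, echoing
// the cancellable-context suggestion made later in this review.
func CommitSealed(c L2Client, batch *RollupBatch) error {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := c.CommitBatch(ctx, batch); err != nil {
		return fmt.Errorf("commit batch %d: %w", batch.Index, err)
	}
	return nil
}
```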
Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (1.62.2)

```
level=warning msg="[runner] Can't run linter goanalysis_metalinter: buildir: failed to load package zstd: could not load export data: no export data for "morph-l2/node/zstd""
```
Actionable comments posted: 13
🔭 Outside diff range comments (4)
contracts/contracts/mock/TestUpgrade.sol (1)
51-53: ⚠️ Potential issue: Fix incorrect variable update.

The function `setOtherVersion` updates `version` instead of `otherVersion`. Apply this diff to fix the bug:

```diff
 function setOtherVersion(uint256 _version) public {
-    version = _version;
+    otherVersion = _version;
 }
```

prover/crates/core/src/batch.rs (1)
81-82: ⚠️ Potential issue: Critical: Unwrapping Optional values could cause panic.

The `unwrap()` calls on `withdraw_root` and `sequencer_root` will panic if these values are `None`. Consider:

- Using `expect()` with descriptive messages
- Handling the `None` case gracefully
- Adding documentation to clarify that these fields must be set before calling this method

Apply this diff to improve error handling:

```diff
-hasher.update(self.withdraw_root.unwrap().as_slice());
-hasher.update(self.sequencer_root.unwrap().as_slice());
+hasher.update(self.withdraw_root.expect("withdraw_root must be set before computing hash").as_slice());
+hasher.update(self.sequencer_root.expect("sequencer_root must be set before computing hash").as_slice());
```

tx-submitter/services/rollup.go (1)
728-730: ⚠️ Potential issue: Critical: Update gas estimation to use new batch structure.

The gas estimation logic still uses the removed `BlockContexts` field, which will cause runtime errors. This needs to be updated to use the new batch structure. Apply this diff to fix the gas estimation:

```diff
- msgcnt := utils.ParseL1MessageCnt(batch.BlockContexts)
+ msgcnt := batch.NumL1Messages
  gas = r.RoughRollupGasEstimate(msgcnt)
  log.Info("rough estimate rollup tx gas", "gas", gas, "msgcnt", msgcnt)
```

bindings/bindings/l2staking.go (1)
60-75: 💡 Verification agent
❓ Verification inconclusive
Verify error handling in deployment function.
The DeployL2Staking function has good error handling but should verify the contract deployment transaction receipt.
🏁 Script executed:
```bash
#!/bin/bash
# Description: Check for deployment verification patterns in other contracts
ast-grep --pattern 'if tx := $_ err != nil { $$$ receipt, err := bind.WaitMined($_, tx) $$$ }'
```

Length of output: 102
Action Required: Add transaction receipt verification in the DeployL2Staking function.
While the function handles ABI retrieval and deployment errors adequately, it does not verify that the deployment transaction has actually been mined. To ensure robust error handling, please add a verification step, such as calling `bind.WaitMined` on the deployment transaction, to confirm that the contract has been successfully deployed.

- Location: `bindings/bindings/l2staking.go`, within the `DeployL2Staking` function.
- Suggested Change: After a successful call to `bind.DeployContract` and prior to returning the contract instance, add a check like:

```go
receipt, err := bind.WaitMined(backend, tx)
if err != nil {
    return common.Address{}, nil, nil, err
}
// Optionally: verify receipt.Status to confirm successful mining.
```

Implementing this step will help ensure the deployed contract has been confirmed on-chain.
🧹 Nitpick comments (33)
node/core/batch_commit.go (1)
70-85: Use a cancellable context
Replacing `context.Background()` in the call to `CommitBatch` with a passed-in or cancellable context can improve flexibility and allow graceful cancellations or timeouts.

```diff
- if err = e.l2Client.CommitBatch(context.Background(), &eth.RollupBatch{...
+ ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
+ defer cancel()
+ if err = e.l2Client.CommitBatch(ctx, &eth.RollupBatch{...
```

node/core/batch_seal.go (2)
28-33: Log fallback to old batch version
When the upgraded path is skipped due to an oversized payload (line 78), the code falls back to the old version without logging this decision. For transparency, add a log statement indicating the fallback to version 0:

```diff
+ e.logger.Info("Upgraded payload size too large, falling back to version 0.")
```
16-20: Finalize batch usage
Sealing a batch when the cache is empty understandably returns an error. Ensure external callers handle this condition gracefully to avoid silent failures in any scheduling or state machine logic.

node/types/batch_header.go (1)
31-49: Confirm length vs. version checks
The `validate()` method ensures batch headers match the expected length of each version. If future expansions require more dynamic handling of lengths, consider version-specific decoders for added flexibility; a possible dispatch shape is sketched below.
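One hedged way to realize such version-specific decoding in Go, assuming a layout where the version is the leading byte and the V0/V1 lengths are fixed constants (the real field layout lives in the codec libraries, so treat the lengths here as assumptions):

```go
package types

import "fmt"

const (
	batchHeaderV0Length = 249 // assumed fixed length for version 0
	batchHeaderV1Length = 257 // assumed fixed length for version 1
)

// decodeBatchHeaderVersion dispatches on the leading version byte so
// each version owns its own length check and decoder.
func decodeBatchHeaderVersion(b []byte) (uint8, error) {
	if len(b) == 0 {
		return 0, fmt.Errorf("empty batch header")
	}
	switch b[0] {
	case 0:
		if len(b) != batchHeaderV0Length {
			return 0, fmt.Errorf("v0 header: want %d bytes, got %d", batchHeaderV0Length, len(b))
		}
	case 1:
		if len(b) != batchHeaderV1Length {
			return 0, fmt.Errorf("v1 header: want %d bytes, got %d", batchHeaderV1Length, len(b))
		}
	default:
		return 0, fmt.Errorf("unsupported batch header version %d", b[0])
	}
	return b[0], nil
}
```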
gas-oracle/app/src/da_scalar/calculate.rs (1)

83-109: Caution with underflow and range checks.
- `let block_count = last_block_num - first_block_num + 1;` can underflow if `last_block_num < first_block_num`.
- `origin_batch.len() < 60 * block_count as usize` is properly checked, but consider explicit handling if `block_count` becomes unexpectedly large or zero.
- Logging and returning `None` upon mismatches is valid; ensure upstream callers handle `None` gracefully.

node/core/batch.go (2)
105-109: Error handling is solid, but consider more context.
When retrieving `totalL1MessagePopped` fails, an error is logged and returned. Providing additional context about the parent batch or block index might ease debugging.
158-159: Ignoring error from `BatchIndex()`.
Using `index, _ := e.batchingCache.parentBatchHeader.BatchIndex()` suppresses potential errors. If this index can actually fail, consider handling or logging the error to avoid silent fallback to the zero value.

prover/bin/shadow-prove/src/shadow_rollup.rs (7)
23-23: New `l2_provider` field added to `BatchSyncer`
This extension to handle a Layer 2 provider makes sense to support full batch synchronization across L1 and L2. Ensure that all necessary error handling is present when making calls via this new provider.

80-81: Comment referencing `BatchHeaderV1` additions
You added a comment describing new fields such as `lastBlockNumber`. This is a good addition for clarity. Recommend referencing the actual struct or spec as needed for easier maintenance.

192-192: Shifting the `batch_index` extraction
Skipping to the second-to-last log might be fragile if certain logs are out of order or incomplete. Confirm the ordering assumption.

205-209: Refactored call to `batch_blocks_inspect`
Using `match` to check for `None` is good, but add logs or more context if you suspect frequent errors. Would you like a script to scan for all `None` returns from this function for correlation or debugging?

215-215: Limit check on blocks
Rejecting large block ranges is sensible for performance reasons. However, consider logging how large the block range was to aid investigation.

225-236: Populating `batch_info` and retrieving next transaction hash
Bundling lines 225-236 streamlines the retrieval of `batch_index`, `start_block`, `end_block`, and the final TX hash. No immediate issues seen. Consider adding logs for debugging.

279-323: Refactor `batch_blocks_inspect` to also gather L2 transaction counts
This large chunk integrates L2 calls for block inspection.
- Check if partial errors on any block might cause partial data.
- If the provider is busy or times out, ensure you handle the error gracefully.
node/derivation/derivation.go (2)
100-102: Error handling when loading `beforeMoveBlockCtxABI`
The code returns an error immediately if the ABI load fails. This is correct, but consider more explicit user logs if the ABI file or resource is missing.

364-366: Handling legacy commit batch in newly extended `UnPackData`
Ensure the logic difference between `legacyRollupABI` commitBatch vs. `beforeMoveBlockCtxABI` or `rollupABI` is correct. Also confirm that older skipping logic (like `skippedL1MessageBitmap`) is still relevant if needed.

contracts/contracts/test/Rollup.t.sol (3)
109-117: Revised commit with `batchDataInput`
Lines 109-117 incorporate the enlarged `BatchDataInput`, referencing `lastBlockNumber`, `numL1Messages`, etc. Validate none of these fields remain uninitialized or cause future reverts.

181-189: Second batch references the updated batch header
Lines 181-189 show constructing the new `batchDataInput`. Check you always pass the correct "parent batch header" to maintain consistency.

330-330: `batch header length too small` revert (another check)
Repetitive checks might be consolidated if you desire simpler code, but it's not a big issue.

oracle/oracle/batch.go (1)
13-13: Use explicit local alias or revert to standard naming convention.
Replacing the alias `nodetypes` with a simpler or more descriptive import alias (or none) can improve clarity, especially since the file already imports multiple packages.

contracts/contracts/l1/rollup/Rollup.sol (4)
255-255: Use of `unchecked` block.
Adding subsequent messages within an unchecked block is ok if all arithmetic is guaranteed safe. Confirm that any potential overflow from `batchDataInput.numL1Messages` is validated elsewhere.

262-266: Dynamic header length adjustment for version 1.
The `_headerLength` logic sets up flexible loading. Ensure that the system gracefully handles or logs unexpected lengths from external calls.

480-480: Restrict `proveState` call to active stakers.
This is an important security improvement. Ensure that legitimate non-staker roles (like watchers or challengers) are not unintentionally blocked from relevant calls.

741-742: Refactor `_commitBatch` signature for versioning.
- Clear parameter naming (`_lastBlockNumber`, `_numL1Messages`) is beneficial.
- If future versions add further data, consider an extensible approach or a separate function to avoid large param expansions.
Also applies to: 747-747
node/types/batch.go (2)
128-131: Potential high memory usage when building a combined payload
`TxsPayloadV2` appends `blockContexts` and `txsPayload` into a single slice. Consider checking the length or possible memory constraints if large data sets are appended.
157-170: Ensure repeated compression calls won't degrade performance
`WillExceedCompressedSizeLimit` repeatedly combines payloads and compresses them as new data arrives. This can become a performance bottleneck under heavy load. Consider caching or incremental compression strategies if needed; one possible shape is sketched below.
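A sketch of the caching idea in Go: remember the last compressed size and skip recompression while even fully incompressible new bytes could not push the payload over the limit. The type and field names are hypothetical, and the skip condition is a heuristic, not an exact bound.

```go
package types

// compressedSizeCache avoids recompressing the whole payload on every
// append by reusing the last measurement until enough new raw bytes
// have accumulated. compressFn stands in for the real zstd call.
type compressedSizeCache struct {
	lastCompressed int
	rawSinceCheck  int
	compressFn     func([]byte) ([]byte, error)
}

// WillExceed reports whether payload+next would compress past limit,
// skipping compression while it provably cannot be exceeded.
func (c *compressedSizeCache) WillExceed(payload, next []byte, limit int) (bool, error) {
	c.rawSinceCheck += len(next)
	// Even if the new bytes were completely incompressible we would
	// still be under the limit, so skip the expensive compression.
	if c.lastCompressed+c.rawSinceCheck < limit {
		return false, nil
	}
	combined := append(append([]byte{}, payload...), next...)
	out, err := c.compressFn(combined)
	if err != nil {
		return false, err
	}
	c.lastCompressed = len(out)
	c.rawSinceCheck = 0
	return c.lastCompressed > limit, nil
}
```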
prover/bin/shadow-prove/src/lib.rs (1)

12-13: Revisit the block range design
Replacing `Vec<u64>` with `start_block` and `end_block` is simpler, but ensure you don't need the flexibility of multiple non-sequential blocks. If partial or disjoint ranges are anticipated, reconsider the original approach.

prover/crates/morph-executor/client/src/lib.rs (2)
14-26: Optimize memory allocation for block context.
The block context building could be more efficient:
- Pre-calculate exact capacity
- Use a single allocation
```diff
- let mut block_ctx: Vec<u8> = Vec::with_capacity(60);
+ const BLOCK_CTX_SIZE: usize = 8 + 8 + 32 + 8 + 2 + 2; // 60 bytes
+ let mut block_ctx: Vec<u8> = vec![0u8; BLOCK_CTX_SIZE];
```
40-40: Add detailed error message for batch data mismatch.
The assertion should provide more context about the mismatch.

```diff
- assert_eq!(batch_data, batch_from_trace, "blob data mismatch!");
+ assert_eq!(
+     batch_data,
+     batch_from_trace,
+     "Blob data mismatch! Expected length: {}, actual length: {}",
+     batch_data.len(),
+     batch_from_trace.len()
+ );
```

node/types/batch_test.go (1)
62-71: Consider adding more test cases for BatchHeaderV1.
While the basic functionality is tested, consider adding edge cases such as the following (sketched after this list):
- Maximum block number
- Zero block number
- Invalid version transitions
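A sketch of what those cases could look like, assuming the accessor behavior described elsewhere in this review (`LastBlockNumber()` erroring on a version 0 header). The constructors and field names are placeholders for the real types in this package, not the actual API.

```go
package types

import (
	"math"
	"testing"
)

// Hypothetical test sketch: BatchHeaderV1/BatchHeaderV0 and their
// accessors are placeholders standing in for the real types.
func TestBatchHeaderV1EdgeCases(t *testing.T) {
	// Maximum block number should round-trip without truncation.
	h := BatchHeaderV1{LastBlockNumber: math.MaxUint64}
	if n, err := h.Bytes().LastBlockNumber(); err != nil || n != math.MaxUint64 {
		t.Fatalf("max block number round-trip failed: n=%d err=%v", n, err)
	}

	// Zero block number is a legal, if unusual, value.
	h = BatchHeaderV1{LastBlockNumber: 0}
	if n, err := h.Bytes().LastBlockNumber(); err != nil || n != 0 {
		t.Fatalf("zero block number round-trip failed: n=%d err=%v", n, err)
	}

	// A version-0 header must refuse the version-1-only accessor.
	v0 := BatchHeaderV0{}
	if _, err := v0.Bytes().LastBlockNumber(); err == nil {
		t.Fatal("expected error when reading LastBlockNumber from a v0 header")
	}
}
```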
prover/crates/morph-executor/host/src/lib.rs (1)
58-68: Consider pre-allocating tx_bytes vector.
For better performance, consider pre-allocating the tx_bytes vector based on the number of transactions.

```diff
-let mut tx_bytes: Vec<u8> = vec![];
+let total_txs: usize = block_trace.iter().map(|trace| trace.transactions.len()).sum();
+let mut tx_bytes: Vec<u8> = Vec::with_capacity(total_txs * 256); // Assuming average tx size
```

prover/bin/host/src/lib.rs (1)
148-148: Consider parameterizing the hardcoded value.
The hardcoded value `1` passed to `BlobVerifier::verify` might be specific to test data. Consider:
- Making it a test parameter
- Adding a comment explaining its significance
- Using a named constant for better readability
```diff
-let (versioned_hash, batch_data) = BlobVerifier::verify(&blob_info, 1).unwrap();
+const TEST_NUM_BLOCKS: usize = 1; // Number of blocks in test blob
+let (versioned_hash, batch_data) = BlobVerifier::verify(&blob_info, TEST_NUM_BLOCKS).unwrap();
```

tx-submitter/services/rollup.go (1)
1154-1169: Consider extracting fee calculation logic into a helper function.
The fee calculation logic is duplicated between blob and non-blob transactions. Consider extracting this into a helper function to improve maintainability and reduce duplication.
Consider refactoring like this:
```diff
+func calculateTxFee(tx *ethtypes.Transaction) uint64 {
+	if tx.Type() == ethtypes.BlobTxType {
+		// blob fee
+		fee := tx.BlobGasFeeCap().Uint64() * tx.BlobGas()
+		// tx fee
+		fee += tx.GasPrice().Uint64() * tx.Gas()
+		return fee
+	}
+	return tx.GasPrice().Uint64() * tx.Gas()
+}
+
 func sendTx(client iface.Client, txFeeLimit uint64, tx *ethtypes.Transaction) error {
 	// fee limit
 	if txFeeLimit > 0 {
-		var fee uint64
-		// calc tx gas fee
-		if tx.Type() == ethtypes.BlobTxType {
-			// blob fee
-			fee = tx.BlobGasFeeCap().Uint64() * tx.BlobGas()
-			// tx fee
-			fee += tx.GasPrice().Uint64() * tx.Gas()
-		} else {
-			fee = tx.GasPrice().Uint64() * tx.Gas()
-		}
+		fee := calculateTxFee(tx)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (8)
- `bindings/go.sum` is excluded by `!**/*.sum`
- `contracts/go.sum` is excluded by `!**/*.sum`
- `node/go.sum` is excluded by `!**/*.sum`
- `ops/l2-genesis/go.sum` is excluded by `!**/*.sum`
- `ops/tools/go.sum` is excluded by `!**/*.sum`
- `oracle/go.sum` is excluded by `!**/*.sum`
- `prover/bin/shadow-prove/Cargo.lock` is excluded by `!**/*.lock`
- `tx-submitter/go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (96)
- `.gitmodules` (1 hunks)
- `Makefile` (1 hunks)
- `bindings/bin/l2staking_deployed.hex` (1 hunks)
- `bindings/bin/rollup_deployed.hex` (1 hunks)
- `bindings/bindings/l2staking.go` (1 hunks)
- `bindings/bindings/l2staking_more.go` (1 hunks)
- `bindings/bindings/rollup_more.go` (1 hunks)
- `bindings/go.mod` (1 hunks)
- `contracts/contracts/l1/rollup/IL1MessageQueue.sol` (1 hunks)
- `contracts/contracts/l1/rollup/IL1MessageQueueWithGasPriceOracle.sol` (1 hunks)
- `contracts/contracts/l1/rollup/IRollup.sol` (1 hunks)
- `contracts/contracts/l1/rollup/Rollup.sol` (6 hunks)
- `contracts/contracts/l2/staking/IDistribute.sol` (1 hunks)
- `contracts/contracts/l2/system/IMorphToken.sol` (1 hunks)
- `contracts/contracts/l2/system/MorphToken.sol` (1 hunks)
- `contracts/contracts/l2/system/WrappedEther.sol` (1 hunks)
- `contracts/contracts/libraries/ICrossDomainMessenger.sol` (1 hunks)
- `contracts/contracts/libraries/callbacks/IERC677Receiver.sol` (1 hunks)
- `contracts/contracts/libraries/callbacks/IGatewayCallback.sol` (1 hunks)
- `contracts/contracts/libraries/callbacks/IMessageDropCallback.sol` (1 hunks)
- `contracts/contracts/libraries/codec/BatchCodecV0.sol` (0 hunks)
- `contracts/contracts/libraries/codec/BatchHeaderCodecV0.sol` (1 hunks)
- `contracts/contracts/libraries/codec/BatchHeaderCodecV1.sol` (1 hunks)
- `contracts/contracts/libraries/common/IWhitelist.sol` (1 hunks)
- `contracts/contracts/libraries/token/IMorphERC1155Extension.sol` (1 hunks)
- `contracts/contracts/libraries/token/IMorphERC20Extension.sol` (1 hunks)
- `contracts/contracts/libraries/token/IMorphERC721Extension.sol` (1 hunks)
- `contracts/contracts/libraries/verifier/IRollupVerifier.sol` (1 hunks)
- `contracts/contracts/libraries/verifier/ISP1Verifier.sol` (1 hunks)
- `contracts/contracts/libraries/verifier/IZkEvmVerifier.sol` (1 hunks)
- `contracts/contracts/libraries/verifier/PlonkVerifier.sol` (1 hunks)
- `contracts/contracts/libraries/verifier/RollupVerifier.sol` (1 hunks)
- `contracts/contracts/libraries/verifier/SP1VerifierPlonk.sol` (1 hunks)
- `contracts/contracts/mock/TestUpgrade.sol` (1 hunks)
- `contracts/contracts/test/Rollup.t.sol` (17 hunks)
- `contracts/contracts/test/ZkEvmVerifierV1.t.sol` (1 hunks)
- `contracts/deploy/016-RollupInit.ts` (1 hunks)
- `contracts/go.mod` (1 hunks)
- `contracts/hardhat.config.ts` (1 hunks)
- `contracts/src/deploy-config/holesky.ts` (1 hunks)
- `contracts/src/deploy-config/l1.ts` (1 hunks)
- `contracts/src/deploy-config/qanetl1.ts` (1 hunks)
- `contracts/src/deploy-config/sepolia.ts` (1 hunks)
- `contracts/src/deploy-config/testnetl1.ts` (1 hunks)
- `gas-oracle/app/src/abi/Rollup.json` (1 hunks)
- `gas-oracle/app/src/da_scalar/calculate.rs` (2 hunks)
- `gas-oracle/app/src/da_scalar/l1_scalar.rs` (7 hunks)
- `go-ethereum` (1 hunks)
- `node/core/batch.go` (13 hunks)
- `node/core/batch_commit.go` (1 hunks)
- `node/core/batch_seal.go` (1 hunks)
- `node/db/keys.go` (2 hunks)
- `node/db/store.go` (0 hunks)
- `node/derivation/batch_info.go` (3 hunks)
- `node/derivation/derivation.go` (6 hunks)
- `node/derivation/derivation_test.go` (2 hunks)
- `node/go.mod` (1 hunks)
- `node/types/batch.go` (4 hunks)
- `node/types/batch_header.go` (1 hunks)
- `node/types/batch_test.go` (2 hunks)
- `node/types/blob.go` (2 hunks)
- `node/types/consensus_message.go` (2 hunks)
- `node/types/retryable_client.go` (4 hunks)
- `ops/devnet-morph/devnet/__init__.py` (1 hunks)
- `ops/docker/Dockerfile.l2-node` (2 hunks)
- `ops/docker/Dockerfile.l2-node-1` (1 hunks)
- `ops/docker/Dockerfile.l2-node-4` (1 hunks)
- `ops/docker/Dockerfile.staking-oracle` (1 hunks)
- `ops/docker/Dockerfile.submitter` (2 hunks)
- `ops/l2-genesis/cmd/genesis/cmd.go` (1 hunks)
- `ops/l2-genesis/go.mod` (1 hunks)
- `ops/l2-genesis/morph-chain-ops/genesis/layer_two_test.go` (1 hunks)
- `ops/tools/batchparse/main.go` (1 hunks)
- `ops/tools/go.mod` (1 hunks)
- `ops/tools/multisend/main.go` (1 hunks)
- `oracle/go.mod` (1 hunks)
- `oracle/oracle/batch.go` (2 hunks)
- `prover/bin/challenge/src/abi/Rollup.json` (1 hunks)
- `prover/bin/challenge/src/handler.rs` (10 hunks)
- `prover/bin/challenge/src/main.rs` (1 hunks)
- `prover/bin/challenge/src/metrics.rs` (0 hunks)
- `prover/bin/host/src/lib.rs` (1 hunks)
- `prover/bin/server/src/queue.rs` (1 hunks)
- `prover/bin/shadow-prove/Cargo.toml` (1 hunks)
- `prover/bin/shadow-prove/abi/Rollup.json` (1 hunks)
- `prover/bin/shadow-prove/src/lib.rs` (1 hunks)
- `prover/bin/shadow-prove/src/main.rs` (4 hunks)
- `prover/bin/shadow-prove/src/shadow_prove.rs` (4 hunks)
- `prover/bin/shadow-prove/src/shadow_rollup.rs` (14 hunks)
- `prover/crates/core/src/batch.rs` (1 hunks)
- `prover/crates/morph-executor/client/src/lib.rs` (2 hunks)
- `prover/crates/morph-executor/client/src/verifier/blob_verifier.rs` (1 hunks)
- `prover/crates/morph-executor/host/src/lib.rs` (3 hunks)
- `tx-submitter/go.mod` (1 hunks)
- `tx-submitter/services/rollup.go` (1 hunks)
- `tx-submitter/utils/utils.go` (2 hunks)
💤 Files with no reviewable changes (3)
- node/db/store.go
- prover/bin/challenge/src/metrics.rs
- contracts/contracts/libraries/codec/BatchCodecV0.sol
✅ Files skipped from review due to trivial changes (31)
- contracts/contracts/libraries/callbacks/IMessageDropCallback.sol
- contracts/contracts/l2/staking/IDistribute.sol
- .gitmodules
- contracts/contracts/libraries/callbacks/IGatewayCallback.sol
- contracts/contracts/l2/system/IMorphToken.sol
- ops/docker/Dockerfile.l2-node-1
- contracts/contracts/libraries/callbacks/IERC677Receiver.sol
- contracts/contracts/libraries/codec/BatchHeaderCodecV0.sol
- contracts/contracts/libraries/token/IMorphERC20Extension.sol
- contracts/contracts/libraries/verifier/IRollupVerifier.sol
- ops/docker/Dockerfile.l2-node
- ops/l2-genesis/morph-chain-ops/genesis/layer_two_test.go
- contracts/contracts/libraries/token/IMorphERC721Extension.sol
- ops/docker/Dockerfile.staking-oracle
- contracts/contracts/libraries/verifier/ISP1Verifier.sol
- ops/docker/Dockerfile.submitter
- contracts/contracts/libraries/verifier/PlonkVerifier.sol
- ops/tools/multisend/main.go
- contracts/contracts/libraries/common/IWhitelist.sol
- ops/docker/Dockerfile.l2-node-4
- contracts/contracts/l2/system/MorphToken.sol
- go-ethereum
- contracts/contracts/l1/rollup/IL1MessageQueueWithGasPriceOracle.sol
- bindings/bindings/l2staking_more.go
- ops/tools/batchparse/main.go
- contracts/contracts/libraries/verifier/SP1VerifierPlonk.sol
- contracts/contracts/libraries/verifier/IZkEvmVerifier.sol
- ops/tools/go.mod
- contracts/contracts/libraries/token/IMorphERC1155Extension.sol
- contracts/contracts/libraries/verifier/RollupVerifier.sol
- bindings/bindings/rollup_more.go
🧰 Additional context used
🪛 golangci-lint (1.62.2)
node/derivation/batch_info.go
17-17: could not import morph-l2/node/zstd (-: build constraints exclude all Go files in zstd)
(typecheck)
🔇 Additional comments (138)
contracts/src/deploy-config/testnetl1.ts (1)
16-16: Verify the new program verification key.
The program verification key has been updated consistently across all deployment configurations (testnet, qanet, mainnet, and holesky). Please verify:
- The origin and validity of the new key.
- Whether this change is related to the architectural change of moving block context to blob.

Could you provide documentation or context about why this verification key needed to be updated? This will help future maintainers understand the change.

contracts/src/deploy-config/qanetl1.ts (1)

19-19: LGTM! Key matches other environments.
The program verification key matches the value used in other deployment configurations.

contracts/src/deploy-config/l1.ts (1)

20-20: LGTM! Key matches other environments.
The program verification key matches the value used in other deployment configurations.

contracts/src/deploy-config/holesky.ts (1)

17-17: LGTM! Key matches other environments.
The program verification key matches the value used in other deployment configurations.
contracts/src/deploy-config/sepolia.ts (1)
21-21: Details
❓ Verification inconclusive

Verify the consistency and authenticity of the new program verification key.

The `programVkey` has been updated to a new value. Since this is a critical cryptographic key used in program verification, we should ensure:
- The new key is consistent across all deployment configurations
- The key was properly generated and is valid for its intended use

Run the following script to verify the consistency of the `programVkey` across all deployment configurations. Could you also confirm:
- The source and generation process of this new verification key
- Whether this key has been properly tested in a non-production environment

🏁 Script executed:

```bash
#!/bin/bash
# Description: Check if programVkey is consistent across all deployment configurations

# Search for programVkey in all deployment config files
echo "Checking programVkey consistency across deployment configs..."
rg --type ts "programVkey.*0x0067523b285111b45b465b927d56449351e78d802252ccbf4bcf5eae588b7356" contracts/src/deploy-config/
```

Length of output: 885
Ensure consistency and proper key generation for the program verification key.
The updated hexadecimal key is now consistently used across all deployment configurations, which is confirmed by the script output. However, please confirm the following:
- The source/generation process of the new verification key.
- That the key has been successfully tested in a non-production environment.
prover/bin/challenge/src/main.rs (2)
51-51: LGTM! Comment updated to reflect block context.
The terminology change from "chunks len." to "blocks len." aligns with the PR objective of moving block context to blob.
48-57: Details
❓ Verification inconclusive

Verify the impact of removing txn_len metric.

The txn_len metric has been removed from registration. Please ensure that no monitoring systems or dashboards depend on this metric.

🏁 Script executed:

```bash
#!/bin/bash
# Description: Search for any remaining references to txn_len metric in the codebase

# Search for txn_len in Rust files
rg -t rust 'txn_len'

# Search for txn_len in monitoring configurations
rg -g '*.{yaml,yml,json}' 'txn_len'
```

Length of output: 544
Action Required: Verify Monitoring Dependencies for Removed txn_len Metric
The txn_len metric has been removed from registration in `prover/bin/challenge/src/main.rs` as intended. Our search confirmed that there are no lingering references to `txn_len` in the main code or configuration files. However, note that the similar metric `shadow_txn_len` still exists in the `prover/bin/shadow-prove` module. Please ensure that any dashboards or monitoring systems previously relying on `txn_len` are either updated or confirmed not to be affected.

node/core/batch_commit.go (2)
16-22: Validate no pending batch scenario
The early return is clear and prevents unnecessary processing when no batch is sealed. This is good for performance.
97-113: Clarify concurrency assumptions
`commitSealedBatch` resets shared state in `batchingCache`. Ensure this method is called under proper locking or single-threaded usage to avoid race conditions in multi-threaded scenarios.
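A minimal sketch of the locking discipline this asks for, assuming `batchingCache` gains (or is guarded by) a mutex; all names here are illustrative, not the module's actual API:

```go
package core

import "sync"

// batchingCache is a simplified stand-in for the executor's cache.
type batchingCache struct {
	mu           sync.Mutex
	sealedHeader []byte
	sealedTxs    [][]byte
}

// commitSealedBatch resets shared state under the lock so concurrent
// sealers and committers cannot observe a half-reset cache.
func (c *batchingCache) commitSealedBatch(commit func(header []byte, txs [][]byte) error) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if err := commit(c.sealedHeader, c.sealedTxs); err != nil {
		return err
	}
	c.sealedHeader = nil
	c.sealedTxs = nil
	return nil
}
```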
node/core/batch_seal.go (1)

46-52: Check sidecar usage
Caching the sealed header and sidecar is a good approach. Consider adding test coverage to confirm that the sidecar is accurately retrieved and consumed downstream without stale or missing data.

node/types/batch_header.go (1)
139-148: Guard against misuse of LastBlockNumber
Accessing `LastBlockNumber()` in version 0 triggers an error, which is correct. Consider logging or handling this error upstream to avoid confusion, especially if version checks are overlooked in calling code.
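Upstream handling could look like this sketch, branching on the header version before ever calling the v1-only accessor (accessor names follow this review; the interface and error values are assumptions):

```go
package derivation

import (
	"errors"
	"fmt"
)

// batchHeader is a hypothetical accessor interface over header bytes.
type batchHeader interface {
	Version() (uint8, error)
	LastBlockNumber() (uint64, error) // errors on version 0
}

func lastBlockNumberChecked(h batchHeader) (uint64, error) {
	v, err := h.Version()
	if err != nil {
		return 0, err
	}
	if v == 0 {
		// v0 headers carry no last block number; make that explicit
		// instead of surfacing a confusing accessor error.
		return 0, errors.New("last block number unavailable for v0 header")
	}
	n, err := h.LastBlockNumber()
	if err != nil {
		return 0, fmt.Errorf("read last block number: %w", err)
	}
	return n, nil
}
```

node/derivation/batch_info.go (3)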
60-62: Struct field additions look consistent.
New fields (`root`, `withdrawalRoot`, and `parentTotalL1MessagePopped`) in `BatchInfo` appear to align with the overall refactor. Ensure any downstream usage is updated and tested to avoid inconsistencies.

83-93: Good validation of parent batch header fields.
Fetching `BatchIndex()` and `TotalL1MessagePopped()` from `parentBatchHeader` ensures the correct state. The error handling is appropriate. Just confirm all references to these batch header fields across the codebase remain consistent with new logic (e.g., verifying no stale references to removed fields).

118-123: Potential off-by-one or length mismatch.
You decode a fixed 60-byte chunk to populate `startBlock` and compute `blockCount = batch.LastBlockNumber - startBlock.Number + 1`. Ensure there's no scenario where the batch's data is truncated, or the block range is zero/negative. If an unexpected length arises, consider more explicit boundary checks or user-friendly error logs.
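A guarded version of that decode step might look like this sketch (the 60-byte context and field names follow this review; the byte layout is an assumption for illustration):

```go
package derivation

import (
	"encoding/binary"
	"fmt"
)

const blockContextLen = 60

// firstBlockAndCount decodes the first block context and derives the
// block count, rejecting truncated data and inverted ranges up front.
func firstBlockAndCount(data []byte, lastBlockNumber uint64) (first, count uint64, err error) {
	if len(data) < blockContextLen {
		return 0, 0, fmt.Errorf("batch data truncated: %d < %d", len(data), blockContextLen)
	}
	// Assumed layout: block number in the first 8 bytes of the context.
	first = binary.BigEndian.Uint64(data[:8])
	if lastBlockNumber < first {
		return 0, 0, fmt.Errorf("invalid range: last %d < first %d", lastBlockNumber, first)
	}
	return first, lastBlockNumber - first + 1, nil
}
```

gas-oracle/app/src/da_scalar/calculate.rs (2)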
19-20: Returning a vector of byte vectors allows for clearer batching logic.
Refactoring `extract_tx_payload` to `Result<Vec<Vec<u8>>, ScalarError>` clarifies that multiple payloads can be collected and processed individually. Ensure callers are updated to handle multiple batches instead of a single concatenated payload.

66-73: Verify each origin batch is needed.
Pushing the entire `origin_batch` into `batch_bytes` is valid when all provided blobs must be processed downstream. If some blobs can be optional or filtered out, confirm the logic here. Otherwise this is good for full-batch extraction.

node/core/batch.go (5)
20-20: Updated pointers to byte-based batch headers.
Changing `parentBatchHeader` and `sealedBatchHeader` to `*types.BatchHeaderBytes` aligns with the new approach. Ensure all call sites that expected the older `BatchHeader` struct are updated to decode or read bytes as needed.
Also applies to: 31-31

83-96: Graceful fallback to genesis batch header.
Handling an empty `parentBatchHeaderBytes` by generating a `GenesisBatchHeader` makes the initial batch determination clear. Just confirm that the genesis logic aligns with your chain's actual first block state so no unexpected mismatches happen.

297-321: Migration to `BatchHeaderV0`.
Returning `BatchHeaderV0` in `GenesisBatchHeader` and populating fields like `DataHash`, `BlobVersionedHash`, etc., enforces consistency in versioned logic. Verify all references to older batch headers are updated to read from `BatchHeaderV0` or converted appropriately to avoid mixing versions.

337-338: Copying 32 bytes is correct but ensure signer validation.
Storing `blsData.Signer` into a `[32]byte` is appropriate, but also confirm that external data always provides a 32-byte signer key: `copy(signer[:], blsData.Signer)` silently truncates or zero-pads a wrong-length value rather than failing. A validating sketch follows below.
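Since `copy` never fails on a length mismatch, an explicit check is the safer guard; a sketch with assumed names:

```go
package core

import "fmt"

// signerFromBLS validates the signer length before copying into the
// fixed-size array, instead of letting copy silently truncate.
func signerFromBLS(raw []byte) ([32]byte, error) {
	var signer [32]byte
	if len(raw) != len(signer) {
		return signer, fmt.Errorf("bls signer: want %d bytes, got %d", len(signer), len(raw))
	}
	copy(signer[:], raw)
	return signer, nil
}
```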
351-353: Batch upgrade condition is straightforward.
Checking `blockTime >= e.UpgradeBatchTime` is a clear approach. Ensure this logic lines up with any rollout plan so that partially upgraded batches aren't inadvertently accepted.

prover/bin/shadow-prove/src/shadow_rollup.rs (12)
3-3: New import for `Transaction`
This import likely provides an abstraction for transaction objects. No apparent issues.

37-38: Constructor updates
Lines 37-38 initialize the new `l2_provider` parameter. This is consistent with the struct extension in line 23. Check for potential misconfiguration if `l2_provider` is empty or points to an invalid endpoint.

41-41: Constructing `l1_rollup`
The cloned provider pattern is correct for concurrency. Verify that subsequent function calls do not block or race on the same provider.

44-44: Returning `Self` with new field
Simple addition of the `l2_provider` to the self-struct instantiation. Looks correct.

60-60: Passing `&self.l2_provider` to `get_committed_batch`
Exposing the L2 provider to the batch retrieval logic is coherent with the new feature. Verify the underlying calls do not break existing tests.

95-96: New parameters in `BatchInfo`: `start_block` and `end_block`
Storing the block range in `BatchInfo` is helpful for more precise batch data. Verify that future code expansions (e.g., partial batch commits) also integrate these fields properly.

186-189: Adding stricter log count check from `< 2` to `< 3`
Increasing the minimum log requirement could reduce false positives in batch detection, but also might skip valid edge cases. Confirm that no corner case logs are now incorrectly filtered out.

195-195: Conversion to `u64`
Ensure no overflow for large on-chain integer values. For extremely large batch indexes, a `u64` might be insufficient.

211-211: Checking `blocks` ordering
This error path ensures the batch block range is increasing. This prevents further calculations from going negative, which is correct.

263-263: Accessing `tx.input()`
Line 263 ensures `data = tx.input()`. If `data.is_empty()`, you return early. This is correct for empty commits.

364-366: `#[tokio::test]` for `test_sync_batch`
These lines finalize the function's environment variables for L2 and enhance coverage by using the new `l2_provider`. Looks coherent.

373-375: Injecting `l2_provider` into `BatchSyncer` constructor in test
Testing the new L2-bound code is essential. Confirm that you mock or differentiate test RPC from production.

node/derivation/derivation.go (2)
51-53: New fields: `rollupABI`, `legacyRollupABI`, `beforeMoveBlockCtxABI`
These fields handle different ABIs. Ensure the code path is well tested across the legacy vs. new vs. "before move block context" scenarios.

338-338: New condition for `beforeMoveBlockCtxABI.Methods["commitBatch"]`
The `commitBatch` detection for the "before-move-block-ctx" scenario ensures older flows remain intact. Check that the fallback logic (further `else if`s) is always reachable.

contracts/contracts/test/Rollup.t.sol (18)
8-9: Added imports for `BatchHeaderCodecV0` and `BatchHeaderCodecV1`
This introduction references two codec libraries. It is a clean approach for versioned batch header encoding/decoding. Ensure updates in future versions do not break existing tests.

78-78: Constructing `batchHeader1` with `BatchHeaderCodecV0.BATCH_HEADER_LENGTH`
Line 78 references `BatchHeaderCodecV0`. Good that you use the library's constant to avoid mismatched sizes.

83-84: Setting `mstore` fields for batch header
You're populating indexed memory offsets. Confirm each offset is correct per the spec, especially for slightly offset fields (like `l1MessagePopped`, `totalL1MessagePopped`).

95-96: Introducing `lastBlockNumber` and `numL1Messages`
These new typed parameters improve clarity over raw bytes. Ensure they match the contract's updated `BatchDataInput` struct.

140-140: Comment references version 1 with multiple blocks
This explanatory comment helps new developers track the differences between version 0 and 1. Keep it updated if any new logic is introduced.

145-147: Constructing `batchHeader2` with `BatchHeaderCodecV1.BATCH_HEADER_LENGTH`
Similar to line 78, referencing the library constant for version 1 is consistent. Keep both references aligned with real library code.

164-164: Writing `lastBlockNumber` at offset 249
This offset must remain consistent with the new codec. Mistakes in indexing cause silent errors in the decoding. Thorough tests or invariants recommended.

179-179: Emitting `CommitBatch(2, ...)`
The event signature has changed for the new batch format. Confirm external listeners still parse it correctly.

275-275: Expect revert usage
Line 275 is an example of thorough negative testing. Keep these revert checks in sync with the final contract logic.

283-283: `invalid version` revert test
Ensuring the contract reverts for version 2 (unsupported) is crucial. Good coverage.

289-289: `batch header length too small` revert
Multiple checks for invalid lengths are good for defensive coding.

300-300: `incorrect batch index` revert
Verifying the mismatch between the passed index and the header's internal index is an important safety check.

313-313: `incorrect parent batch hash` revert
Continuing the same pattern of strict checks is consistent.

324-324: `incorrect previous state root` revert
Ensures the new batch can't arbitrarily skip or replace a known state root.

337-337: Committing a batch with one chunk
Tests a success path for a single-chunk scenario. Good coverage.

345-345: `batch already committed` revert
This guards double-commit attempts. Make sure the contract also forbids partial overrides in unexpected ways.

399-399: Committing another batch
Line 399 modifies the second call with updated new fields. This ensures multi-batch scenarios are tested thoroughly.

405-413: Reverting batches
Lines 405-413 revolve around verifying that only unfinalized batches can be reverted, must be from the end, etc. This thorough revert logic helps ensure finalization invariants.

oracle/oracle/batch.go (1)
158-165: Successfully removed `SkippedL1MessageBitmap`.
Omitting the `SkippedL1MessageBitmap` field simplifies the struct. Ensure analogous references throughout the system are also removed to avoid runtime errors.

bindings/bin/rollup_deployed.hex (1)
1-2: Hex bytecode updated.The entire file has been replaced with new contract bytecode. Verify that the deployed contract matches the correct build of your updated Solidity sources.
You can do this by generating and comparing the on-chain bytecode or using a reproducible build pipeline for consistency checks.
contracts/contracts/l1/rollup/Rollup.sol (6)
7-7: Good introduction of `BatchHeaderCodecV1`.
The new import indicates support for updated batch headers. Confirm that older references to `BatchHeaderCodecV0` do not inadvertently process version 1 data.

223-223: Version check ensures backward compatibility.
Allowing both version 0 and 1 is helpful. Confirm all code paths are tested for both versions.

248-253: Storing the data hash for new batch logic.
- `_commitBatch` references `batchDataInput.lastBlockNumber` and `batchDataInput.numL1Messages`. This is a clear, more streamlined approach than the previous context-based approach.
- Confirm your system logs and debugging information still provide enough block context if needed.

285-287: Conditionally storing the last block number for version >= 1.
This is a good approach to maintain backward compatibility. Confirm older decoding logic does not expect the field for version 0.

299-299: Additional parameter `batchDataInput.lastBlockNumber`.
The new field is appended to `batchDataStore`. Check for usage in subsequent finalization or revert logic to confirm that side effects are properly integrated.

723-732: Flexible batch version loading.
Code cleanly defers to `BatchHeaderCodecV0.loadAndValidate` or `BatchHeaderCodecV1.loadAndValidate`. This modular approach is sound.

node/types/batch.go (4)
19-21: Validate the usage of duplicated ABIs
You've introduced `BeforeMoveBlockCtxABI` here while the file already references other ABIs (e.g., `LegacyRollupMetaData`). Please confirm that references to each ABI are consistent and that none are inadvertently overshadowed.

27-27: Confirm that `uint16` suffices for `l1TxNum`
If there's any possibility of exceeding 65,535 L1 transactions in a single batch, consider increasing this field to `uint32` (or larger) to avoid overflow.

92-122: Check correctness of `DataHashV2` and `calculateHash`
This revised hashing logic incorporates last block height, `l1TxNum`, and all L1 transaction hashes. Please ensure:
- The last 60-byte extraction is consistent with block context size.
- The block height parsing function is tested and validated.
- Potential edge cases (e.g., empty contexts) are properly handled.
147-155: Consider adding boundary checks in `combinePayloads`
While this is a straightforward concatenation approach, large or malformed inputs could risk memory strain. Confirm validity of `newBlockContext` and `newTxPayload` lengths before combining; a guarded sketch follows below.
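A bounded version of the concatenation could look like this sketch (parameter names follow this review; the size cap is an assumed placeholder to tune against the real blob limit):

```go
package types

import "fmt"

const maxCombinedPayload = 1 << 20 // assumed 1 MiB cap

func combinePayloadsChecked(blockContexts, txsPayload, newBlockContext, newTxPayload []byte) ([]byte, error) {
	total := len(blockContexts) + len(txsPayload) + len(newBlockContext) + len(newTxPayload)
	if total > maxCombinedPayload {
		return nil, fmt.Errorf("combined payload %d exceeds cap %d", total, maxCombinedPayload)
	}
	out := make([]byte, 0, total) // single allocation, no incremental growth
	out = append(out, blockContexts...)
	out = append(out, newBlockContext...)
	out = append(out, txsPayload...)
	out = append(out, newTxPayload...)
	return out, nil
}
```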
bindings/bin/l2staking_deployed.hex (1)

1-2: Verify the updated hex code for correctness and consistency
This file's contents have been fully replaced with new contract bytecode. Confirm that the deployed bytecode aligns with the expected compiled output, and that no extraneous changes or corruption occurred.

node/db/keys.go (2)

8-8: LGTM!
The prefix variable follows the established naming convention and uses a unique, descriptive value.

25-28: LGTM!
The function follows the established pattern, is well-documented, and promotes code reuse by leveraging the existing `encodeEnqueueIndex` function.

contracts/contracts/mock/TestUpgrade.sol (3)

3-3: LGTM!
Locking to a specific Solidity version (0.8.24) improves build reproducibility and security.

37-40: Verify storage layout compatibility.
The intentional change in storage variable order could affect storage layout compatibility during upgrades. Ensure this is the intended test scenario.

42-42: LGTM!
The new variable `otherVersion` is appropriately initialized and complements the existing version tracking.

contracts/contracts/l2/system/WrappedEther.sol (1)

3-3: LGTM!
Locking to a specific Solidity version (0.8.24) improves build reproducibility and security.

contracts/contracts/test/ZkEvmVerifierV1.t.sol (3)

36-36: LGTM!
Removing the `view` modifier is correct as the function modifies state through `verifyPlonk`.

42-42: LGTM!
The new name `testRevert_InValidProof` better reflects the test's behavior of expecting a revert.

43-43: LGTM!
Adding `vm.expectRevert()` makes the test's expectations explicit and improves test clarity.

contracts/contracts/libraries/ICrossDomainMessenger.sol (2)
62-78: LGTM! Well-documented interface extension.
The new overloaded `sendMessage` function with a `refundAddress` parameter maintains backward compatibility while adding new functionality. The documentation clearly explains all parameters.
3-3: Details
❓ Verification inconclusive
Verify the Solidity version compatibility.
The version has been pinned to exactly 0.8.24. Let's verify this is the latest stable version and check for any known security advisories.
🌐 Web query:
What is the latest stable version of Solidity and are there any known security issues with version 0.8.24?

💡 Result:
The latest stable version of Solidity is v0.8.28, released on October 9, 2024[1][7]. This version introduces full support for transient storage state variables of value types and includes optimizations for memory usage and JSON interface improvements[1][7].
For Solidity v0.8.24 (released January 26, 2024), there are no critical security vulnerabilities explicitly listed in the provided sources[1][3][5][6]. However:
Key Points About v0.8.24:
1️⃣ Cancun Network Upgrade Support
Added Yul-level support for EIP-1153 (transient storage), EIP-4844 (blob transactions), and new opcodes like `MCOPY` [1][5].
2️⃣ Bug Fixes
- Fixed shifted native source locations in Yul AST debug info[1].
- Addressed SELFDESTRUCT behavior changes (EIP-6780)[1].
3️⃣ Security Considerations
While no direct vulnerabilities are mentioned for v0.8.24 itself:
- The Solidity team recommends always using the latest compiler version for security fixes[3][6].
- Earlier versions had bugs like incorrect Keccak caching (fixed in v0.8.3) and memory corruption issues (fixed in v0.7.x)[2][9].
Recommendation
Upgrade to v0.8.28 for:
- Full transient storage support including reference types (new in v0.8.x series)
- Reduced memory footprint during compilation
- Latest security patches[1][7]
For projects requiring v0.8.x compatibility, ensure you use at least v0.8.25, which includes Dencun hard-fork readiness and gas optimizations via the `MCOPY` opcode [1][4].

Citations:
- 1: https://github.com/ethereum/solidity/releases
- 2: https://blog.oxor.io/exploring-the-bugs-and-features-of-solidity-compiler-versions-a-guide-for-smart-contract-fe04e852ea64
- 3: https://docs.soliditylang.org/zh-cn/v0.8.24/security-considerations.html
- 4: https://soliditylang.org/blog/category/releases/
- 5: https://forum.soliditylang.org/t/solidity-0-8-24-is-out/2112
- 6: https://docs.soliditylang.org/zh-cn/v0.8.24/
- 7: https://soliditylang.org/blog/2024/10/09/solidity-0.8.28-release-announcement/
- 8: https://github.com/kadenzipfel/smart-contract-vulnerabilities/blob/master/vulnerabilities/weak-sources-randomness.md
- 9: https://docs.soliditylang.org/en/latest/bugs.html
- 10: https://soliditylang.org
Solidity Version Verification & Upgrade Consideration
- The `pragma solidity =0.8.24;` statement is confirmed to use a version that does not present any known critical security vulnerabilities.
- Note that the latest stable release is v0.8.28, which includes further optimizations, bug fixes, and additional support (such as enhanced transient storage features).
- If backward compatibility is a priority or if there’s a specific need to remain on v0.8.24, the current setup is acceptable. Otherwise, please review whether updating to v0.8.28 might offer added benefits for the project.
contracts/contracts/libraries/codec/BatchHeaderCodecV1.sol (1)
33-42: Details
❓ Verification inconclusive
Verify memory bounds in assembly block.
The assembly block correctly manages memory pointer but should include additional safety checks:
- Ensure memory expansion doesn't exceed available memory
- Validate memory pointer alignment
🏁 Script executed:
```bash
#!/bin/bash
# Search for similar memory management patterns in the codebase
ast-grep --pattern 'assembly { $ptr := mload(0x40) $$$ mstore(0x40, add($ptr, $_)) }'
```

Length of output: 93
Action: Ensure correct memory pointer alignment in the assembly block.
The inline assembly block in `BatchHeaderCodecV1.sol` uses the standard pattern of fetching the free memory pointer via `mload(0x40)` and updating it with the copied data's length. However, since the batch header length (expected to be a fixed 257 bytes) is not a multiple of 32, simply adding the length without rounding up could leave the new free memory pointer misaligned. This misalignment may lead to issues in subsequent memory allocations if those routines assume 32-byte alignment.

- Issue Identified:
  - The free memory pointer update (`mstore(0x40, add(batchPtr, length))`) does not round the pointer to a multiple of 32.
  - This could cause problematic alignment if later operations expect a 32-byte aligned pointer.
- Recommendation:
  - Consider updating the memory pointer using an expression that rounds up to the nearest multiple of 32 (for example, something like `mstore(0x40, add(batchPtr, and(add(length, 31), not(31))))`).
- Additional Note:
  - While the EVM handles memory expansion automatically, ensuring alignment is crucial for consistency with Solidity's memory management conventions.
- While the EVM handles memory expansion automatically, ensuring alignment is crucial for consistency with Solidity’s memory management conventions.
Please manually verify that the intended consuming parts of your codebase either account for or are unaffected by this potential misalignment.
prover/crates/morph-executor/client/src/verifier/blob_verifier.rs (1)
13-16: LGTM! Function signature updated correctly.
The addition of the `num_blocks` parameter aligns with the changes in batch processing logic.
13-61: LGTM! Comprehensive test coverage for BatchHeaderV0.
The test thoroughly validates all fields of BatchHeaderV0 including version, batch index, L1 message counts, various hashes, and state roots.
74-82: Details
❓ Verification inconclusive
Verify ABI compatibility across versions.
The test ensures method IDs differ across versions, which is crucial for contract upgrades. However, consider adding assertions to verify specific expected method ID values.
🏁 Script executed:

```bash
#!/bin/bash
# Description: Verify method IDs across different versions of the Rollup contract

# Search for commitBatch function signatures
rg -A 2 'function commitBatch\('
```

Length of output: 2096
Enhance ABI Test with Expected Method ID Values
The current test in `node/types/batch_test.go` verifies that the `commitBatch` method IDs differ across ABI versions, which is a crucial check for contract upgrades. To further strengthen the test, please add assertions that compare each method ID against its expected constant (e.g., the 4-byte selector computed from the Solidity function signature). This extra validation will ensure that any inadvertent changes to the ABI are caught immediately.
- Compute the expected method IDs using a Keccak256 hash on the appropriate `commitBatch` signatures.
- Update the test to compare the computed IDs from `LegacyRollupMetaData`, `BeforeMoveBlockCtxABI`, and `bindings.RollupMetaData` against these expected values.
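Computing the selectors is a one-liner with go-ethereum's crypto package. The signature strings below are placeholders, since the real tuple layouts must be taken from the actual ABIs; the test only asserts the versions differ, which is what the existing check does, plus it logs the values for pinning later.

```go
package types

import (
	"encoding/hex"
	"testing"

	"github.com/ethereum/go-ethereum/crypto"
)

// selector returns the 4-byte method ID for a Solidity signature.
func selector(sig string) string {
	return hex.EncodeToString(crypto.Keccak256([]byte(sig))[:4])
}

func TestCommitBatchSelectorsDiffer(t *testing.T) {
	// Hypothetical signatures standing in for the ABI versions;
	// replace with the real tuple layouts before pinning constants.
	legacy := selector("commitBatch(bytes,uint256)")
	current := selector("commitBatch(bytes,uint64,uint16)")
	if legacy == current {
		t.Fatalf("expected distinct selectors, both were %s", legacy)
	}
	t.Logf("legacy=%s current=%s", legacy, current)
}
```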
contracts/hardhat.config.ts (2)

13-22: LGTM! Well-organized task imports.
The task imports are logically grouped and follow a clear pattern for deployment, testing, and upgrade operations.
107-118: Details
❓ Verification inconclusive
Verify Solidity compiler configuration.
The compiler is set to version 0.8.24 with Cancun EVM and optimizations enabled. Ensure this aligns with your deployment requirements.
🌐 Web query:
What is the latest stable version of Solidity compiler and does it support the Cancun EVM?

💡 Result:
The latest stable version of the Solidity compiler is v0.8.28, released on October 8, 2024[1][4]. This version includes full support for transient storage state variables of value types and optimizations for compilation via IR[1][4].
Regarding Cancun EVM support:
- Cancun compatibility was first introduced in Solidity v0.8.24[2][5], with subsequent versions refining this support.
- As of Solidity v0.8.25 (released March 2024), Cancun became the default EVM version when compiling contracts[4][8].
- Current versions (including v0.8.28) maintain full Cancun support through:
- Native integration of EIP-1153 (transient storage)
  - Built-in `mcopy()` opcode support
  - Automatic bytecode optimization for `MCOPY` operations [1][5][8]

To verify Cancun support in your environment:

```
solc --evm-version cancun your_contract.sol
```

The compiler will automatically select Cancun features unless overridden [5][8]. Note that some Layer 2 solutions like Immutable zkEVM currently cap support at v0.8.25 while aligning with Ethereum's Cancun fork [10].
Citations:
- 1: https://github.com/ethereum/solidity/releases
- 2: https://forum.soliditylang.org/t/solidity-0-8-24-is-out/2112
- 3: https://docs.soliditylang.org/en/latest/contracts.html
- 4: https://soliditylang.org/blog/category/releases/
- 5: https://soliditylang.org/blog/2024/01/26/solidity-0.8.24-release-announcement/
- 6: https://forum.openzeppelin.com/t/going-to-production-with-solidity-0-8-0-or-0-7-6/7153
- 7: https://soliditylang.org
- 8: https://book.getfoundry.sh/reference/config/solidity-compiler
- 9: https://docs.soliditylang.org/en/latest/installing-solidity.html
- 10: https://docs.immutable.com/platform/zkevm/chain-differences/
Compiler Version & EVM Setting Verification
- The current configuration sets Solidity to v0.8.24 with Cancun EVM and optimizations enabled, which is valid since Cancun support was introduced in v0.8.24.
- Note that the latest stable version is v0.8.28, offering additional improvements—consider whether an upgrade may better suit your deployment needs.
- Please verify that using v0.8.24 aligns with your project requirements, or update accordingly if benefiting from the newer version is desired.
ops/l2-genesis/cmd/genesis/cmd.go (1)
118-123: Details
❓ Verification inconclusive
Verify backward compatibility of batch header serialization.
The change from `Encode()` to `Bytes()` might affect systems expecting the old format. Ensure all consumers of the genesis batch header are updated accordingly.

🏁 Script executed:

```bash
#!/bin/bash
# Description: Search for other uses of genesis batch header encoding

# Search for Encode() or Bytes() method calls on batch headers
rg -A 2 '(Encode|Bytes)\(\).*batch.*header'
```

Length of output: 45
Below is an updated review comment. Since the initial script produced no output (which might indicate that no other usages of the old serialization method were found), please verify manually that the new `Bytes()` output is identical in structure to what `Encode()` used to produce. Additionally, please review the implementation of the `BatchHeader` methods to ensure that the change does not affect consumers who might expect the legacy format.
- File: `ops/l2-genesis/cmd/genesis/cmd.go` (lines 118-123)
- Action Required:
  - Confirm that the switch from `Encode()` to `Bytes()` does not alter the serialized format.
  - Manually inspect the definition of the `BatchHeader` type (and its `Bytes()` and `Encode()` methods, if available) to check for differences.
  - Ensure that downstream systems or consumers relying on the old serialization method are either updated or compatible with the new output.
Please verify these points before merging.
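The manual check can be pinned down with a tiny equivalence test while both methods exist; a sketch, assuming the legacy `Encode()` is still callable and that `buildGenesisBatchHeader` is a placeholder for however the genesis header is constructed:

```go
package genesis

import (
	"bytes"
	"testing"
)

func TestGenesisHeaderSerializationUnchanged(t *testing.T) {
	header := buildGenesisBatchHeader(t) // hypothetical constructor
	if !bytes.Equal(header.Encode(), header.Bytes()) {
		t.Fatal("Bytes() output differs from legacy Encode() output")
	}
}
```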
tx-submitter/utils/utils.go (2)
41-42: LGTM! Simplified batch index extraction.The change improves efficiency by directly accessing the batch index instead of decoding the entire batch header.
124-127: LGTM! Consistent with earlier changes.The modification maintains consistency with the earlier changes in batch index extraction.
contracts/contracts/l1/rollup/IL1MessageQueue.sol (1)
3-3: LGTM! Fixed Solidity version for better stability.Locking to a specific version (0.8.24) instead of using a range (^0.8.16) ensures consistent behavior across deployments.
contracts/deploy/016-RollupInit.ts (1)
38-38: Details
❓ Verification inconclusive
Verify the impact of version upgrade.
The version change from 0 to 1 aligns with the new batch data structure changes. However, ensure that:
- All components are compatible with version 1
- Migration path exists for handling version 0 data
🏁 Script executed:
```bash
#!/bin/bash
# Search for version-specific handling in the codebase
rg -A 5 "version.*0.*|version.*1.*" --type ts --type sol
```

Length of output: 86
Action: Verify version upgrade compatibility and migration handling
Please ensure the following regarding the version upgrade from 0 to 1:
- Confirm that all components, both in TypeScript (e.g., deployment scripts, interfacing modules) and in Solidity contracts, are fully compatible with `version = 1`.
- Verify that a proper migration path exists for handling legacy (version 0) data. Automated searches for version-specific logic were inconclusive (note the error with Solidity file type filtering), so please perform a manual check or update the search commands (e.g., using glob patterns for Solidity files).
node/types/consensus_message.go (3)
150-156: LGTM! Well-structured block deserialization.
The function provides a clean abstraction for block deserialization with proper error handling.

158-164: LGTM! Clean height extraction.
The function reuses WrappedBlockFromBytes for consistency and provides proper error propagation.

166-171: LGTM! Robust validation for block context.
The function includes proper length validation and clear error messaging.
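Taken together, the three helpers suggest a decode path like the following sketch. The signatures are inferred from the descriptions above rather than copied from the module, and the placement of the block number in the first 8 bytes is an assumption for illustration.

```go
package types

import (
	"encoding/binary"
	"fmt"
)

const blockContextSize = 60 // per-block context length used throughout this PR

// validateBlockContext mirrors the length check described above.
func validateBlockContext(ctx []byte) error {
	if len(ctx) != blockContextSize {
		return fmt.Errorf("block context: want %d bytes, got %d", blockContextSize, len(ctx))
	}
	return nil
}

// blockNumberFromContext shows the fixed-offset decoding such a
// context enables once its length is validated.
func blockNumberFromContext(ctx []byte) (uint64, error) {
	if err := validateBlockContext(ctx); err != nil {
		return 0, err
	}
	return binary.BigEndian.Uint64(ctx[:8]), nil
}
```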
node/types/retryable_client.go (2)
26-26: LGTM!The new error constant follows the existing naming convention and has a clear, descriptive message.
225-234: LGTM!The changes effectively simplify the error handling logic while maintaining clear intent through the comment. The inverted condition makes the code more maintainable by defaulting to retry for all errors except the specific case.
prover/bin/server/src/queue.rs (1)
120-120: LGTM!
The change correctly uses `blocks.len()` as the number of blocks parameter, maintaining consistency with the updated `BlobVerifier::verify` method signature.

contracts/contracts/l1/rollup/IRollup.sol (2)
11-12: LGTM! Clear and concise parameter documentation.
The documentation for the new parameters `lastBlockNumber` and `numL1Messages` is clear and accurately describes their purpose.

19-20: LGTM! Appropriate type choices for new parameters.
The type choices are well-considered:
- `uint64` for `lastBlockNumber` provides sufficient range for block numbers
- `uint16` for `numL1Messages` is a reasonable size for counting L1 messages per batch

prover/bin/shadow-prove/src/shadow_prove.rs (4)
96-96: LGTM! Clear block length calculation.
The calculation `blocks_len = end_block - start_block + 1` is correct and more straightforward than using array length.

108-108: LGTM! Updated log message for clarity.
The log message now correctly references `start_block` instead of the old blocks array.

123-124: LGTM! Direct use of block range in ProveRequest.
The `ProveRequest` initialization now directly uses `start_block` and `end_block`, improving clarity and reliability.

157-157: LGTM! Proper type casting in waiting time calculation.
The explicit type cast of `blocks_len` to `usize` in the `max_waiting_time` calculation ensures type safety.

prover/bin/shadow-prove/src/main.rs (5)
37-37: LGTM! Added L2 RPC configuration.
Added environment variable for L2 RPC endpoint, enabling Layer 2 network interaction.

45-49: LGTM! Clear provider initialization.
Renamed the provider to `l1_provider` and added `l2_provider`, improving clarity of each provider's purpose.

62-63: LGTM! Updated BatchSyncer initialization.
BatchSyncer now correctly uses both L1 and L2 providers for cross-layer synchronization.

235-235: LGTM! Updated module path.
Corrected the module path for the `batch_header_inspect` function.

269-269: LGTM! Updated BatchInfo initialization.
BatchInfo now uses an explicit block range with `start_block` and `end_block` instead of a block array.

node/derivation/derivation_test.go (4)
14-16: LGTM! Well-structured test data.
Added `beforeMoveBctxData` test data alongside existing test cases, maintaining consistent formatting.

24-25: LGTM! Added new ABI support.
Added `beforeMoveBlockCtxABI` retrieval with proper error handling.
27-29: LGTM! Updated Derivation struct initialization.The Derivation struct now includes all necessary ABIs for testing different scenarios.
42-48: LGTM! Clear test case implementation.Updated variable names to clearly indicate their purpose in the before-move-block-context scenario.
node/types/blob.go (1)
113-122: LGTM!The function correctly handles empty input and provides clear error messages. The use of zstd compression is appropriate for batch data.
ops/devnet-morph/devnet/__init__.py (1)
231-233: LGTM!
Adding the `--no-recreate` flag is a good optimization that prevents unnecessary container recreation during deployment.
1-8: LGTM! The import statement is correctly updated to use the renamed function.
230-235: LGTM! The changes improve error handling with better logging and use a more descriptive function name.
349-394: LGTM! The changes enhance the function by:
- Adding validation for empty input
- Improving error messages
- Returning both total size and transaction count
400-440: LGTM! The test case provides good coverage for the new blob data handling functionality, verifying both the total size and the transaction count calculations.
prover/bin/challenge/src/handler.rs (3)
407-422: LGTM! The struct is simplified by replacing the `blocks_info` vector with direct `start_block` and `end_block` fields, making it more efficient and easier to understand.
424-484: LGTM! The function is improved by:
- Using `batch_index` to compute the block range
- Utilizing `last_block_number` and `num_l1_messages` from the batch data
- Providing better error handling
572-573: LGTM! The error message formatting is improved for better consistency.
tx-submitter/services/rollup.go (2)
694-701: LGTM! The batch data structure changes improve type safety. The replacement of `BlockContexts` with explicit `LastBlockNumber` and `NumL1Messages` fields makes the data structure clearer and more type-safe.
715-735: LGTM! Comprehensive error handling and logging. The code demonstrates good practices with:
- Descriptive error messages with context
- Structured logging with appropriate log levels
- Graceful fallback to rough gas estimation when needed
bindings/bindings/l2staking.go (7)
1-3: LGTM! Important warning about auto-generated code. The warning comment clearly indicates this is auto-generated code that should not be manually edited.
47-50: LGTM! Contract metadata properly defined. The L2StakingMetaData struct contains the contract's ABI and bytecode, with deprecated variables properly marked.
78-97: LGTM! Clean separation of contract interfaces. The L2Staking struct properly separates the read-only (Caller), write-only (Transactor), and event-filtering (Filterer) interfaces.
1507-1517: LGTM! Consistent event iterator implementation. The event iterators (CommissionUpdated, Delegated, Initialized, etc.) follow a consistent pattern with proper error handling and cleanup.
Also applies to: 1573-1583, 1809-1819
2802-2954: LGTM! Comprehensive event filtering capabilities. The event filtering system provides flexible options for monitoring contract events with proper subscription cleanup.
181-217: ❓ Verification inconclusive
Verify transaction receipt handling in Call/Transfer/Transact methods.
The base contract methods should verify transaction receipts and handle revert reasons.
🏁 Script executed:
```bash
#!/bin/bash
# Description: Check for receipt verification in other contracts
ast-grep --pattern 'receipt, err := bind.WaitMined($_, tx)
if err != nil {
    return nil, fmt.Errorf("failed to get receipt: %v", err)
}
if receipt.Status == 0 {
    return nil, fmt.Errorf("transaction reverted")'
```

Length of output: 212
Action Required: Manually Confirm Transaction Receipt Verification in Transaction Methods
The current implementation in `bindings/bindings/l2staking.go` (lines 181–217) simply passes the call through to the underlying contract methods without any explicit handling of transaction receipts. In our codebase, methods like `Call`, `Transfer`, and `Transact` should verify the transaction receipt (using logic such as checking `receipt.Status` after a call to `bind.WaitMined`) and correctly handle revert reasons.
- Verify that after invoking these methods, the receipt is properly fetched and checked for a success status.
- If receipt verification is performed elsewhere or intentionally omitted, please document the reasoning.
- Consider including receipt verification logic similar to:
```go
receipt, err := bind.WaitMined(ctx, tx)
if err != nil {
    return nil, fmt.Errorf("failed to get receipt: %v", err)
}
if receipt.Status == 0 {
    return nil, fmt.Errorf("transaction reverted")
}
```

Please manually verify that the transaction methods propagate or incorporate receipt validation as intended.
1171-1189: ❓ Verification inconclusive
Verify input validation in state-changing methods.
The contract methods that modify state (AddStaker, DelegateStake, Initialize etc.) should validate inputs before sending transactions.
Also applies to: 1254-1273, 1317-1336
🏁 Script executed:
```bash
#!/bin/bash
# Description: Check for input validation patterns
ast-grep --pattern 'if $_ == nil || $_.Sign() <= 0 {
    return nil, fmt.Errorf("invalid amount")
}'
```

Length of output: 100
Action Required: Verify and Enhance Input Validation in State-Changing Methods
The current implementation for the state-changing methods (e.g., `AddStaker`, and similarly for the methods at lines 1254–1273 and 1317–1336) shows no detected input validation (e.g., checks for nil values or non-positive amounts) before sending transactions. Please verify one of the following:

If input validation is expected on the client side:
Ensure that the methods validate critical inputs (such as a non-nil, correctly signed `_nonce` or other fields in `TypesStakerInfo`) before invoking transaction calls. An example pattern to consider is:

```go
if _nonce == nil || _nonce.Sign() <= 0 {
    return nil, fmt.Errorf("invalid _nonce")
}
```

If the validations are handled within the contract logic:
Add a comment clarifying that input integrity is enforced on-chain, so additional client-side checks are redundant.

Please update the code or provide documentation to confirm that input validation is appropriately addressed.
prover/bin/shadow-prove/Cargo.toml (1)
13-18: ❓ Verification inconclusive
Verify compatibility with the major version upgrade of alloy.
The upgrade from alloy 0.3 to 0.8 is a major version bump that could introduce breaking changes.
🌐 Web query:
What are the breaking changes between alloy 0.3 and 0.8?

💡 Result:
The breaking changes between Alloy versions 0.3 and 0.8 primarily involve API updates and structural modifications to improve functionality and compliance with newer standards:

Transaction Handling
- Typed Transaction Requirement: Version 0.8 mandates EIP-2718-compliant typed transactions[1][2], replacing older transaction formats[1][2].

Consensus Layer Changes
- Header Method Removal: Consensus-related header methods were deprecated[1][2].
- Forkchoice Versioning: Introduced explicit forkchoice version tracking[1][2].

RPC Modifications
- Block Bounds Relaxation: RPC `Block` type bounds were expanded for flexibility[1][2].
- TransactionResponse Simplification: Removed the duplicate `to` method in `TransactionResponse`[1][2].

Type System Updates
- BlockHashOrNumber Restructuring: Swapped alias/struct naming conventions for clarity[1][2].
- Recovery Traits: Added `Recovered` and `Recoverable` traits for transaction handling[1][2].

New Features with Compatibility Impact
- EIP-7691 Implementation: Added support for the new Ethereum improvement proposal[1][2].
- reth Pooled TX Integration: Ported transaction pool types from the reth implementation[1][2].

Build/Dependency Changes
- Arbitrary Trait Support: Added `arbitrary` crate integration for fuzzing[1][2].
- Serde Optionality: Made serde dependencies optional in some modules[1][2].

These changes require codebase adjustments when upgrading from 0.x versions below 0.7.x to 0.8.x[1][2]. Developers should review transaction handling patterns and consensus-layer integrations particularly closely.
Citations:
- 1: https://www.nrc.gov/reading-rm/doc-collections/nuregs/staff/sr1823/sr1823.pdf
- 2: https://en.wikipedia.org/wiki/Aluminium%E2%80%93magnesium%E2%80%93silicon_alloys
- 3: https://github.com/alloy-rs/alloy/blob/main/CHANGELOG.md
- 4: https://haynesintl.com/wp-content/uploads/2024/08/a-guide-to-the-metallurgical-corrosion-and-wear-characteristics.pdf
- 5: https://github.com/appcelerator/alloy/blob/master/CHANGELOG.md
- 6: https://www.spacematdb.com/spacemat/manudatasheets/TITANIUM%20ALLOY%20GUIDE.pdf
- 7: https://docs.giantswarm.io/changes/highlights/
- 8: https://www.azom.com/article.aspx?ArticleID=9345
- 9: https://nickelinstitute.org/media/8dac7de8840bc88/11005_modern-carburized-nickel-alloy-steels.pdf
- 10: https://www.specialmetals.com/documents/technical-bulletins/inconel/inconel-alloy-718.pdf
Action Required: Confirm Alloy v0.8 Integration Adjustments
The dependency upgrade from Alloy 0.3 to 0.8 introduces several breaking changes—most notably in transaction handling, consensus header handling, RPC modifications, and type system updates. Please review the following:
- Transaction Handling: Verify that your code now properly implements EIP-2718-compliant typed transactions.
- Consensus Layer: Ensure deprecated header methods are replaced and that any forkchoice version tracking is correctly integrated.
- RPC Modifications: Check that adjustments like the expanded block type bounds and the simplified `TransactionResponse` are correctly handled.
- Type System and New Features: Confirm that any changes in block hash handling, the addition of recovery traits, and new features (e.g., EIP-7691 support) are accommodated in your implementations.
Review all modules that use the Alloy crate features (e.g., "contract", "signer-local", "rpc-types-eth", and "json-rpc") to guarantee compatibility with these API updates. Detailed breaking changes can be found in the Alloy changelogs.
contracts/go.mod (1)
5-5: ❓ Verification inconclusive
Verify the stability of pinned dependency versions.
The updates to tendermint and go-ethereum are pinned to specific commits rather than release tags. Let's verify these commits:
Also applies to: 9-9
🏁 Script executed:
```bash
#!/bin/bash
# Description: Check commit messages and changes for the updated dependencies

# Check tendermint commit
gh api repos/morph-l2/tendermint/commits/c84dfe5c8533 --jq '.commit.message'

# Check go-ethereum commit
gh api repos/morph-l2/go-ethereum/commits/3b9465a03a8c --jq '.commit.message'
```

Length of output: 268
Verify Stability of Pinned Dependency Versions
The dependency update in `contracts/go.mod` pins specific commit hashes instead of stable release tags. We verified the commit messages:
Tendermint (commit: c84dfe5c8533)
→ Commit message: “skip proposing block if an error is encountered”
This indicates a targeted change addressing a specific error handling scenario.

Go-ethereum (commit: 3b9465a03a8c)
→ Commit message: “Merge remote-tracking branch 'origin' into feature/block_context”
This merge commit from a feature branch may warrant extra scrutiny to ensure it’s sufficiently stable for production.

Please confirm that these pinned commits have been reviewed and tested in your context. If possible, consider pinning to official release tags to guarantee long-term stability and easier maintenance.
bindings/go.mod (1)
5-5: LGTM! Dependency versions are consistent. The updated versions match those in contracts/go.mod, maintaining consistency across modules.
Also applies to: 7-7
ops/l2-genesis/go.mod (1)
5-5: LGTM! Dependency versions are consistent. The updated versions match those in other modules, maintaining consistency across the project.
Also applies to: 9-9
tx-submitter/go.mod (1)
5-5: LGTM! Dependency versions are consistent. The updated versions match those in other modules, maintaining consistency across the project.
Also applies to: 10-10
oracle/go.mod (1)
5-5: ❓ Verification inconclusive
Verify the stability of pre-release dependencies.
The updated dependencies point to specific commits rather than stable releases:
- tendermint: Using pre-release version v0.3.2-0.20250115141431
- go-ethereum: Using commit from December 26, 2024
Run the following script to check for any reported issues with these versions:
Also applies to: 10-10
🏁 Script executed:
```bash
#!/bin/bash
# Description: Check for issues in the specified versions of tendermint and go-ethereum

# Check tendermint repository
echo "Checking tendermint repository..."
gh api repos/morph-l2/tendermint/commits/v0.3.2-0.20250115141431-c84dfe5c8533/check-runs

# Check go-ethereum repository
echo "Checking go-ethereum repository..."
gh api repos/morph-l2/go-ethereum/commits/v1.10.14-0.20241226065029-3b9465a03a8c/check-runs
```

Length of output: 876
Verify the stability of pre-release dependencies.
The current dependency replacements in `oracle/go.mod` reference pseudo-versions for both tendermint and go-ethereum, which point to specific commits rather than stable releases. Notably, the GitHub API check failed with errors indicating that these pseudo-version strings cannot be directly resolved to commit SHAs. This is expected behavior with pseudo-versions, so please:
- Confirm that using these pseudo-versions (tendermint: `v0.3.2-0.20250115141431-c84dfe5c8533`, go-ethereum: `v1.10.14-0.20241226065029-3b9465a03a8c`) is intentional.
- Verify their stability and compatibility through alternative methods (e.g., reviewing upstream changes or running comprehensive tests), as the API check isn’t conclusive for pseudo-versions.
Makefile (1)
3-4: LGTM! Version updates are consistent with go.mod changes. The target versions in the Makefile align with the dependency updates in the go.mod files, ensuring consistent versioning across modules.
node/go.mod (1)
5-5: LGTM! Version updates match other modules. The dependency updates are consistent with the changes in oracle/go.mod and the Makefile target versions.
Also applies to: 13-13
gas-oracle/app/src/abi/Rollup.json (1)
612-619: LGTM! Improved type safety in BatchDataInput struct. The change from opaque `blockContexts` bytes to explicit `lastBlockNumber` and `numL1Messages` fields improves type safety and makes the interface more explicit. This aligns with the PR objective of moving block context to the blob.
612-619: LGTM! ABI changes match gas oracle implementation.The BatchDataInput struct changes are identical to those in gas-oracle/app/src/abi/Rollup.json, maintaining consistency across components.
prover/bin/challenge/src/abi/Rollup.json (1)
612-619: LGTM! The parameter types are well-chosen for their purposes.The changes replace the
blockContextsbytes parameter with two specific parameters:
lastBlockNumberasuint64: Appropriate for block numbersnumL1Messagesasuint16: Provides a reasonable upper limit for L1 messages per batch
```go
if err = e.l2Client.CommitBatch(context.Background(), &eth.RollupBatch{
    Version:                  uint(version),
    Index:                    parentBatchIndex + 1,
    Hash:                     hash,
    ParentBatchHeader:        *e.batchingCache.parentBatchHeader,
    CurrentSequencerSetBytes: sequencerSetBytes,
    BlockContexts:            blockContexts,
    PrevStateRoot:            e.batchingCache.prevStateRoot,
    PostStateRoot:            e.batchingCache.postStateRoot,
    WithdrawRoot:             e.batchingCache.withdrawRoot,
    Sidecar:                  e.batchingCache.sealedSidecar,
    LastBlockNumber:          e.batchingCache.lastPackedBlockHeight,
    NumL1Messages:            uint16(l1MessagePopped),
}, batchSigs); err != nil {
    return fmt.Errorf("failed to commit batch to L2 client: %w", err)
}
```
Potential overflow when casting to uint16
Casting `l1MessagePopped` to `uint16` (line 82) may lead to truncation if the actual value exceeds 65535. If large values are possible, consider a larger type or add overflow checks to avoid silent data loss.

```diff
- NumL1Messages: uint16(l1MessagePopped),
+ if l1MessagePopped > math.MaxUint16 {
+     return fmt.Errorf("l1MessagePopped overflow: %d", l1MessagePopped)
+ }
+ NumL1Messages: uint16(l1MessagePopped),
```

Committable suggestion skipped: line range outside the PR's diff.
```go
blockContexts := make([]*BlockContext, int(blockCount))
for i := 0; i < int(blockCount); i++ {
```
🛠️ Refactor suggestion
Integer overflow concern when creating slice.
`blockCount` is a `uint64`, but it is cast to `int` in `make([]*BlockContext, int(blockCount))`. In extremely large batch scenarios, this can overflow on 32-bit systems or if `blockCount` exceeds `MaxInt`. Consider handling it more safely:

```go
if blockCount > (1<<31 - 1) {
    return fmt.Errorf("blockCount too large: %d", blockCount)
}
```

```diff
-blockContexts := make([]*BlockContext, int(blockCount))
+blockContexts := make([]*BlockContext, int64(blockCount)) // or handle more robustly
```

Also applies to: 180-186
```solidity
mstore(add(batchHeader2, 0x20), shl(248, 1)) // version
mstore(add(batchHeader2, add(0x20, 1)), shl(192, 2)) // batchIndex = 2
mstore(add(batchHeader2, add(0x20, 9)), shl(192, 264)) // l1MessagePopped = 264
mstore(add(batchHeader2, add(0x20, 17)), shl(192, 265)) // totalL1MessagePopped = 265
mstore(add(batchHeader2, add(0x20, 25)), 0x81e7e4ffa69f7496f9377a1e4140fbb16fcc81d8086b44a1316fc77cc2b9a63b) // dataHash
mstore(add(batchHeader2, add(0x20, 25)), 0x688bd49eddb8f52296974c2a243b10b91b305edd68f46190df21a83c13a6e2ec) // dataHash
```
🛠️ Refactor suggestion
Complex `mstore` definitions
Manually writing memory offsets can be error-prone. Note, for instance, that the two dataHash writes above target the same offset (0x20 + 25), presumably the old and new values from the diff. If the library supports direct struct-based construction, prefer it. Otherwise, consider verifying each offset carefully.
```go
parentBatchHeader := nodetypes.BatchHeaderBytes(batch.ParentBatchHeader)
parentVersion, err := parentBatchHeader.Version()
if err != nil {
    return fmt.Errorf("decode parent batch version error:%v", err)
}
if batch.Version == 1 && parentVersion == 0 {
    parentBatchIndex, err := parentBatchHeader.BatchIndex()
    if err != nil {
        return fmt.Errorf("decode parent batch index error:%v", err)
    }
    parentBatch, err := o.l2Client.GetRollupBatchByIndex(o.ctx, parentBatchIndex)
    if err != nil {
        return fmt.Errorf("get parent batch error:%v", err)
    }
    batch.Sidecar = parentBatch.Sidecar
```
🛠️ Refactor suggestion
Validate parent batch logic and error handling.
- Retrieving the parent batch data based on version is a good step.
- Verify that `batch.Sidecar` is always relevant and won't overshadow required data from the current batch, or become stale if the parent data changes.
- Ensure that calls to `GetRollupBatchByIndex` handle both existing and non-existing parent indices correctly.

As a follow-up, consider testing edge cases (a sketch of the second one follows below):
- If `parentBatchIndex` references an invalid or finalized batch.
- If a mismatch occurs between `batch.Version` and `parentVersion`.

Do you want me to generate a script to check references to all calls of `GetRollupBatchByIndex`?
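To make the version-mismatch edge case concrete, here is a hedged Go sketch of a guard on the version pair before inheriting the parent sidecar. The accepted pairs are an assumption for illustration; only the v0 → v1 case actually appears in the snippet above:

```go
package main

import "fmt"

// validateVersionPair rejects unexpected (batch, parent) version combinations
// before any sidecar is inherited from the parent batch.
func validateVersionPair(batchVersion, parentVersion uint64) error {
    switch {
    case batchVersion == parentVersion:
        return nil // same codec, nothing special to do
    case batchVersion == 1 && parentVersion == 0:
        return nil // upgrade boundary handled by the snippet above
    default:
        return fmt.Errorf("unexpected version pair: batch=%d parent=%d", batchVersion, parentVersion)
    }
}

func main() {
    fmt.Println(validateVersionPair(1, 0)) // <nil>
    fmt.Println(validateVersionPair(0, 1)) // unexpected version pair: batch=0 parent=1
}
```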
```go
for _, txHash := range l1TxHashes {
    cks.l1TxHashes = append(cks.l1TxHashes, txHash.Bytes()...)
}
cks.l1TxNum += uint16(len(l1TxHashes))
```
🛠️ Refactor suggestion
Validate potential overflow in l1TxNum accumulation
Adding `uint16(len(l1TxHashes))` could overflow if the batch is already close to the limit. If large L1 tx arrays are possible, guard the addition or switch to a larger integer type.
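A minimal sketch of such a guard, assuming the `l1TxNum` counter semantics from the snippet above (the helper name and error message are illustrative, not the repo's API):

```go
package main

import (
    "fmt"
    "math"
)

// addL1TxCount adds n new L1 transactions to the running uint16 counter,
// failing instead of silently wrapping around past 65535.
func addL1TxCount(current uint16, n int) (uint16, error) {
    if n < 0 || n > math.MaxUint16-int(current) {
        return current, fmt.Errorf("l1TxNum overflow: have %d, adding %d", current, n)
    }
    return current + uint16(n), nil
}

func main() {
    count, err := addL1TxCount(65530, 10) // would exceed math.MaxUint16
    fmt.Println(count, err)

    count, err = addL1TxCount(100, 10)
    fmt.Println(count, err) // 110 <nil>
}
```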
```rust
let num_blocks = input.l2_traces.len();
let (versioned_hash, batch_data) = BlobVerifier::verify(&input.blob_info, num_blocks).unwrap();
```
🛠️ Refactor suggestion
Add error handling for empty traces.
The `num_blocks` calculation should handle empty traces gracefully.
```diff
- let num_blocks = input.l2_traces.len();
+ let num_blocks = input.l2_traces.len();
+ if num_blocks == 0 {
+     return Err(anyhow::anyhow!("No L2 traces provided"));
+ }
```
```rust
pub fn get_blob_data(block_trace: &Vec<BlockTrace>) -> [u8; BLOB_DATA_SIZE] {
    // collect txns
    let num_blocks = block_trace.len();
    let mut batch_from_trace: Vec<u8> = Vec::with_capacity(num_blocks * 60);
    let mut tx_bytes: Vec<u8> = vec![];
    for trace in block_trace {
        // BlockContext
        // https://github.com/morph-l2/morph/blob/main/contracts/contracts/libraries/codec/BatchCodecV0.sol
        let mut block_ctx: Vec<u8> = Vec::with_capacity(60);
        block_ctx.extend_from_slice(&trace.number().to_be_bytes());
        block_ctx.extend_from_slice(&trace.timestamp().to::<u64>().to_be_bytes());
        block_ctx
            .extend_from_slice(&trace.base_fee_per_gas().unwrap_or_default().to_be_bytes::<32>());
        block_ctx.extend_from_slice(&trace.gas_limit().to::<u64>().to_be_bytes());
        block_ctx.extend_from_slice(&(trace.transactions.len() as u16).to_be_bytes());
        block_ctx.extend_from_slice(&(trace.num_l1_txs() as u16).to_be_bytes());
        batch_from_trace.extend(block_ctx);
```
🛠️ Refactor suggestion
Enhance error handling for block context processing.
While the block context collection is well-structured, consider adding error handling for:
- Invalid block numbers
- Timestamp conversion failures
- Gas limit overflow scenarios
```diff
 block_ctx.extend_from_slice(&trace.number().to_be_bytes());
-block_ctx.extend_from_slice(&trace.timestamp().to::<u64>().to_be_bytes());
+block_ctx.extend_from_slice(&trace.timestamp().to::<u64>().ok_or_else(||
+    anyhow!("Invalid timestamp conversion"))?.to_be_bytes());
```

Committable suggestion skipped: line range outside the PR's diff.
```rust
data_hasher.update(&traces.last().unwrap().number().to_be_bytes());
let num_l1_txs: u16 = traces.iter().map(|x| x.num_l1_txs()).sum::<u64>() as u16;
data_hasher.update(&num_l1_txs.to_be_bytes());
```
💡 Verification agent
🧩 Analysis chain
Verify potential overflow in L1 transactions count.
The conversion of the total L1 transaction count to `u16` could overflow if the sum exceeds 65535. Consider using a larger integer type or adding a validation check.
🏁 Script executed:
```bash
#!/bin/bash
# Description: Check if any batch has more than 65535 L1 transactions.
ast-grep --pattern 'fn num_l1_txs($$$) -> u64 {
  $$$
}'
```

Length of output: 1124
Attention: Validate L1 Transaction Count Conversion
In prover/crates/core/src/batch.rs at lines 34–36, the sum of L1 transactions (computed as a `u64`) is being cast to a `u16`. This conversion can silently truncate the value if the total exceeds 65,535, potentially causing an overflow. Please consider one of the following to mitigate this risk:
- Use a larger integer type: Replace `u16` with a larger type (such as `u32`, or retain `u64`) to safely handle broader ranges.
- Add a validation check: Before casting, verify that the computed sum does not exceed `u16::MAX` to prevent overflow.
```go
func DecodeTxsFromBytes(txsBytes []byte) (eth.Transactions, error) {
    //if isEmptyBlob(blob) {
    //	return eth.Transactions{}, nil
    //}
    //data, err := RetrieveBlobBytes(blob)
    //if err != nil {
    //	return nil, err
    //}
    //batchBytes, err := zstd.DecompressBatchBytes(data)
    //if err != nil {
    //	return nil, err
    //}
    reader := bytes.NewReader(txsBytes)
    txs := make(eth.Transactions, 0)
    for {
        var (
            firstByte   byte
            fullTxBytes []byte
            innerTx     eth.TxData
            err         error
        )
        if err = binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
            // if the blob byte array is completely consumed, then break the loop
            if err == io.EOF {
                break
            }
            return nil, err
        }
        // zero byte is found after valid tx bytes, break the loop
        if firstByte == 0 {
            break
        }

        switch firstByte {
        case eth.AccessListTxType:
            if err := binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
                return nil, err
            }
            innerTx = new(eth.AccessListTx)
        case eth.DynamicFeeTxType:
            if err := binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
                return nil, err
            }
            innerTx = new(eth.DynamicFeeTx)
        default:
            if firstByte <= 0xf7 { // legacy tx first byte must be greater than 0xf7(247)
                return nil, fmt.Errorf("not supported tx type: %d", firstByte)
            }
            innerTx = new(eth.LegacyTx)
        }

        // we support the tx types of LegacyTxType/AccessListTxType/DynamicFeeTxType
        //if firstByte == eth.AccessListTxType || firstByte == eth.DynamicFeeTxType {
        //	// the firstByte here is used to indicate tx type, so skip it
        //	if err := binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
        //		return nil, err
        //	}
        //} else if firstByte <= 0xf7 { // legacy tx first byte must be greater than 0xf7(247)
        //	return nil, fmt.Errorf("not supported tx type: %d", firstByte)
        //}
        fullTxBytes, err = extractInnerTxFullBytes(firstByte, reader)
        if err != nil {
            return nil, err
        }
        if err = rlp.DecodeBytes(fullTxBytes, innerTx); err != nil {
            return nil, err
        }
        txs = append(txs, eth.NewTx(innerTx))
    }
    return txs, nil
}
```
🛠️ Refactor suggestion
Extract common transaction decoding logic and remove commented code.

The function has two issues:
1. It contains unnecessary commented-out code for blob handling.
2. It duplicates the transaction decoding logic from `DecodeTxsFromBlob`.

First, remove the commented-out code:

```diff
-//if isEmptyBlob(blob) {
-//	return eth.Transactions{}, nil
-//}
-//data, err := RetrieveBlobBytes(blob)
-//if err != nil {
-//	return nil, err
-//}
-//batchBytes, err := zstd.DecompressBatchBytes(data)
-//if err != nil {
-//	return nil, err
-//}
```

Then extract the common transaction decoding logic into a shared helper function:

```go
func decodeTxsFromReader(reader *bytes.Reader) (eth.Transactions, error) {
    txs := make(eth.Transactions, 0)
    for {
        var (
            firstByte   byte
            fullTxBytes []byte
            innerTx     eth.TxData
            err         error
        )
        if err = binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
            if err == io.EOF {
                break
            }
            return nil, err
        }
        if firstByte == 0 {
            break
        }
        switch firstByte {
        case eth.AccessListTxType:
            if err := binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
                return nil, err
            }
            innerTx = new(eth.AccessListTx)
        case eth.DynamicFeeTxType:
            if err := binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
                return nil, err
            }
            innerTx = new(eth.DynamicFeeTx)
        default:
            if firstByte <= 0xf7 {
                return nil, fmt.Errorf("not supported tx type: %d", firstByte)
            }
            innerTx = new(eth.LegacyTx)
        }
        fullTxBytes, err = extractInnerTxFullBytes(firstByte, reader)
        if err != nil {
            return nil, err
        }
        if err = rlp.DecodeBytes(fullTxBytes, innerTx); err != nil {
            return nil, err
        }
        txs = append(txs, eth.NewTx(innerTx))
    }
    return txs, nil
}
```

Then update the functions to use the shared helper:

```go
func DecodeTxsFromBytes(txsBytes []byte) (eth.Transactions, error) {
    reader := bytes.NewReader(txsBytes)
    return decodeTxsFromReader(reader)
}

func DecodeTxsFromBlob(blob *kzg4844.Blob) (eth.Transactions, error) {
    if isEmptyBlob(blob) {
        return eth.Transactions{}, nil
    }
    data, err := RetrieveBlobBytes(blob)
    if err != nil {
        return nil, err
    }
    batchBytes, err := zstd.DecompressBatchBytes(data)
    if err != nil {
        return nil, err
    }
    reader := bytes.NewReader(batchBytes)
    return decodeTxsFromReader(reader)
}
```
```go
func DecodeBlocksFromBlob(blob *kzg4844.Blob) (eth.Transactions, error) {
    if isEmptyBlob(blob) {
        return eth.Transactions{}, nil
    }
    data, err := RetrieveBlobBytes(blob)
    if err != nil {
        return nil, err
    }
    batchBytes, err := zstd.DecompressBatchBytes(data)
    if err != nil {
        return nil, err
    }
    reader := bytes.NewReader(batchBytes)
    txs := make(eth.Transactions, 0)
    for {
        var (
            firstByte   byte
            fullTxBytes []byte
            innerTx     eth.TxData
            err         error
        )
        if err = binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
            // if the blob byte array is completely consumed, then break the loop
            if err == io.EOF {
                break
            }
            return nil, err
        }
        // zero byte is found after valid tx bytes, break the loop
        if firstByte == 0 {
            break
        }

        switch firstByte {
        case eth.AccessListTxType:
            if err := binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
                return nil, err
            }
            innerTx = new(eth.AccessListTx)
        case eth.DynamicFeeTxType:
            if err := binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
                return nil, err
            }
            innerTx = new(eth.DynamicFeeTx)
        default:
            if firstByte <= 0xf7 { // legacy tx first byte must be greater than 0xf7(247)
                return nil, fmt.Errorf("not supported tx type: %d", firstByte)
            }
            innerTx = new(eth.LegacyTx)
        }

        // we support the tx types of LegacyTxType/AccessListTxType/DynamicFeeTxType
        //if firstByte == eth.AccessListTxType || firstByte == eth.DynamicFeeTxType {
        //	// the firstByte here is used to indicate tx type, so skip it
        //	if err := binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
        //		return nil, err
        //	}
        //} else if firstByte <= 0xf7 { // legacy tx first byte must be greater than 0xf7(247)
        //	return nil, fmt.Errorf("not supported tx type: %d", firstByte)
        //}
        fullTxBytes, err = extractInnerTxFullBytes(firstByte, reader)
        if err != nil {
            return nil, err
        }
        if err = rlp.DecodeBytes(fullTxBytes, innerTx); err != nil {
            return nil, err
        }
        txs = append(txs, eth.NewTx(innerTx))
    }
    return txs, nil
}
```
🛠️ Refactor suggestion
Remove duplicate function.
This function is an exact duplicate of `DecodeTxsFromBlob`. Consider removing it to avoid code duplication and potential maintenance issues.
Apply this diff to remove the duplicate function and update any callers to use `DecodeTxsFromBlob` instead:
```diff
-func DecodeBlocksFromBlob(blob *kzg4844.Blob) (eth.Transactions, error) {
- if isEmptyBlob(blob) {
- return eth.Transactions{}, nil
- }
- data, err := RetrieveBlobBytes(blob)
- if err != nil {
- return nil, err
- }
- batchBytes, err := zstd.DecompressBatchBytes(data)
- if err != nil {
- return nil, err
- }
- reader := bytes.NewReader(batchBytes)
- txs := make(eth.Transactions, 0)
- for {
- var (
- firstByte byte
- fullTxBytes []byte
- innerTx eth.TxData
- err error
- )
- if err = binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
- // if the blob byte array is completely consumed, then break the loop
- if err == io.EOF {
- break
- }
- return nil, err
- }
- // zero byte is found after valid tx bytes, break the loop
- if firstByte == 0 {
- break
- }
-
- switch firstByte {
- case eth.AccessListTxType:
- if err := binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
- return nil, err
- }
- innerTx = new(eth.AccessListTx)
- case eth.DynamicFeeTxType:
- if err := binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
- return nil, err
- }
- innerTx = new(eth.DynamicFeeTx)
- default:
- if firstByte <= 0xf7 { // legacy tx first byte must be greater than 0xf7(247)
- return nil, fmt.Errorf("not supported tx type: %d", firstByte)
- }
- innerTx = new(eth.LegacyTx)
- }
-
- // we support the tx types of LegacyTxType/AccessListTxType/DynamicFeeTxType
- //if firstByte == eth.AccessListTxType || firstByte == eth.DynamicFeeTxType {
- // // the firstByte here is used to indicate tx type, so skip it
- // if err := binary.Read(reader, binary.BigEndian, &firstByte); err != nil {
- // return nil, err
- // }
- //} else if firstByte <= 0xf7 { // legacy tx first byte must be greater than 0xf7(247)
- // return nil, fmt.Errorf("not supported tx type: %d", firstByte)
- //}
- fullTxBytes, err = extractInnerTxFullBytes(firstByte, reader)
- if err != nil {
- return nil, err
- }
- if err = rlp.DecodeBytes(fullTxBytes, innerTx); err != nil {
- return nil, err
- }
- txs = append(txs, eth.NewTx(innerTx))
- }
- return txs, nil
-}
```
Co-authored-by: chengwenxi <22697326+chengwenxi@users.noreply.github.com>
Co-authored-by: FletcherMan <fanciture@163.com>
Co-authored-by: curryxbo <curryx1992@gmail.com>
Co-authored-by: kukoomomo <kukoomomo@gmail.com>
Co-authored-by: Kathy <22675649+anylots@users.noreply.github.com>
Summary by CodeRabbit
New Features
Refactor
Chores
Tests