perf(relayer): add sequencer submission strategy with blob‐fee history and target price #1659
Conversation
""" WalkthroughThe changes introduce a configurable, strategy-driven fee submission mechanism for Layer2 batch relaying, including new fee gating logic based on historical L1 blob fees and backlog thresholds. Configuration and ORM logic are updated to support querying batch counts and blob fee histories. The Layer2Relayer is extended with new fields and methods to implement these features. Changes
Sequence Diagram(s)

sequenceDiagram
    participant Relayer
    participant BatchORM
    participant L1BlockORM
    Relayer->>BatchORM: GetFailedAndPendingBatchesCount()
    BatchORM-->>Relayer: Return backlog count
    alt Backlog < BacklogMax
        Relayer->>L1BlockORM: GetBlobFeesInRange(lastFetchedBlock, latestBlock)
        L1BlockORM-->>Relayer: Return blob fee history
        Relayer->>Relayer: calculateTargetPrice(strategy, firstTime, feeHistory)
        alt CurrentFee > TargetPrice and OldestBatch < TimeoutWindow
            Relayer->>Relayer: Skip batch submission
        else
            Relayer->>Relayer: Submit batches
        end
    else Backlog >= BacklogMax
        Relayer->>Relayer: Force submit oldest batches
    end
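In code terms, the flow in the diagram above corresponds roughly to the following sketch. Method and config names are taken from the diagram and PR description; the helper shouldSubmitNow and the exact signatures are illustrative only, not the PR's actual code.

```go
// Sketch only: mirrors the decision flow in the diagram above.
func (r *Layer2Relayer) shouldSubmitNow(oldest time.Time) bool {
	backlog, err := r.batchOrm.GetFailedAndPendingBatchesCount(r.ctx)
	if err != nil || backlog >= r.cfg.BatchSubmission.BacklogMax {
		return true // on error or a full backlog, force submission
	}
	hist, err := r.fetchBlobFeeHistory(uint64(r.cfg.BatchSubmission.TimeoutSec))
	if err != nil || len(hist) == 0 {
		return true // no fee data: do not block submission
	}
	target := calculateTargetPrice(r.batchStrategy, oldest, hist)
	current := hist[len(hist)-1]
	withinWindow := time.Since(oldest) < time.Duration(r.cfg.BatchSubmission.TimeoutSec)*time.Second
	// Skip only while the current blob fee exceeds the target and the deadline has not passed.
	return !(current.Cmp(target) > 0 && withinWindow)
}
```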
Force-pushed 019d1de to 946d9dc
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.
Actionable comments posted: 2
🧹 Nitpick comments (2)
rollup/internal/controller/relayer/l2_relayer.go (1)
1188-1216
: Potential slice-building inefficiency & range-query correctness in fetchBlobFeeHistory.
Minor, non-blocking observations:
• blocksAgo := windowSec / 12 silently floors – for a window < 12 s the query span becomes 0. A guard could return early to avoid a full-table scan on very small windows.
• The filters map uses two separate number >= ? / <= ? entries; GORM will treat duplicate keys oddly. Prefer a single Where("number BETWEEN ? AND ?", start, latest) or two chained Where calls for clarity.
• You allocate hist := make([]*big.Int, len(recs)); if the result is large this double-stores all fees (uint64 in the record plus a big.Int object). Consider returning []uint64 unless arbitrary precision is strictly required.
No functional bug here, but tightening these points will improve robustness. A sketch of the BETWEEN-style range query follows below.
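A minimal sketch of that suggestion, assuming a GORM-backed L1Block model with number and blob_base_fee columns (the actual model and column names in this repo may differ):

```go
package orm

import (
	"context"
	"fmt"

	"gorm.io/gorm"
)

// L1Block mirrors the existing ORM model; only the fields used here are shown.
type L1Block struct {
	Number      uint64 `gorm:"column:number"`
	BlobBaseFee uint64 `gorm:"column:blob_base_fee"`
}

// GetBlobFeesInRange fetches blob base fees for blocks in [start, end] with a
// single BETWEEN predicate and returns plain uint64 values.
func GetBlobFeesInRange(ctx context.Context, db *gorm.DB, start, end uint64) ([]uint64, error) {
	if end < start {
		// guard: empty or inverted window, nothing to scan
		return nil, nil
	}
	var fees []uint64
	err := db.WithContext(ctx).
		Model(&L1Block{}).
		Where("number BETWEEN ? AND ?", start, end).
		Order("number ASC").
		Pluck("blob_base_fee", &fees).Error
	if err != nil {
		return nil, fmt.Errorf("GetBlobFeesInRange: %w", err)
	}
	return fees, nil
}
```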
rollup/conf/config.json (1)
57-58
: Timeout raised to 2 h – verify dependent services & dashboards. Doubling the submission window changes alerting expectations:
• Grafana/Prometheus alerts that fired after 5 min of no-submission must be updated.
• Any automation relying on fast batch availability (e.g. testnets) may need tuning.
Please audit downstream tooling before merging.
🛑 Comments failed to post (2)
rollup/internal/controller/relayer/l2_relayer.go (2)
290-336: 🛠️ Refactor suggestion
Unbounded query + hard-coded backlog limits – consider configurability & safety.
GetFailedAndPendingBatches(r.ctx, 0) now issues an unlimited query. In a pathological backlog this could fetch tens of thousands of rows into memory in one go.
• Either keep a large but bounded limit (e.g. backlogMax+cfg.BatchSubmission.MaxBatches) or use a paginated cursor.
• backlogMax := 75 is an in-code constant while the comment points at cfg.BatchSubmission.BacklogMax. Promote it to configuration so operators can tune it without recompilation (a bounded-query sketch follows below).
Failing to address these may cause memory spikes and reduce observability in production.
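A possible shape for that, as a sketch only: the BacklogMax and MaxBatches field names are assumptions for illustration; only GetFailedAndPendingBatches(ctx, limit) is taken from the comment above.

```go
// Hypothetical config shape if backlogMax is promoted out of the code.
type BatchSubmission struct {
	TimeoutSec int `json:"timeout_sec"`
	BacklogMax int `json:"backlog_max"` // replaces the in-code backlogMax := 75
	MaxBatches int `json:"max_batches"` // per-round submission cap
}

// Bound the fetch instead of passing 0 (unlimited):
//   limit := cfg.BatchSubmission.BacklogMax + cfg.BatchSubmission.MaxBatches
//   dbBatches, err := r.batchOrm.GetFailedAndPendingBatches(r.ctx, limit)
```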
🧰 Tools
🪛 golangci-lint (1.64.8)
313-313: strat.WindowSec undefined (type StrategyParams has no field or method WindowSec)
(typecheck)
327-327: strat.WindowSec undefined (type StrategyParams has no field or method WindowSec)
(typecheck)
🪛 GitHub Actions: Rollup
[error] 313-313: Build error: 'strat.WindowSec' undefined (type StrategyParams has no field or method WindowSec)
68-82:
⚠️ Potential issue: Compilation breaks – WindowSec field missing from StrategyParams.
ProcessPendingBatches, fetchBlobFeeHistory, and calculateTargetPrice all reference strat.WindowSec, yet the struct declared here does not define that field.
The build currently fails (typecheck: strat.WindowSec undefined).
Add the field (and initialise it in bestParams) or stop referencing it and pass the window size explicitly.

type StrategyParams struct {
-	BaselineType  string  // "pct_min" or "ewma"
-	BaselineParam float64 // percentile (0–1) or α for EWMA
-	Gamma         float64 // relaxation γ
-	Beta          float64 // relaxation β
-	RelaxType     string  // "exponential" or "sigmoid"
+	WindowSec     uint64  // sampling window (seconds)
+	BaselineType  string  // "pct_min" or "ewma"
+	BaselineParam float64 // percentile (0–1) or α for EWMA
+	Gamma         float64 // relaxation γ
+	Beta          float64 // relaxation β
+	RelaxType     string  // "exponential" or "sigmoid"
}

var bestParams = map[uint64]StrategyParams{
-	2 * 3600:  {BaselineType: "pct_min", BaselineParam: 0.10, Gamma: 0.4, Beta: 8, RelaxType: "exponential"},
-	5 * 3600:  {BaselineType: "pct_min", BaselineParam: 0.30, Gamma: 0.6, Beta: 20, RelaxType: "sigmoid"},
-	12 * 3600: {BaselineType: "pct_min", BaselineParam: 0.50, Gamma: 0.5, Beta: 20, RelaxType: "sigmoid"},
+	2 * 3600:  {WindowSec: 2 * 3600, BaselineType: "pct_min", BaselineParam: 0.10, Gamma: 0.4, Beta: 8, RelaxType: "exponential"},
+	5 * 3600:  {WindowSec: 5 * 3600, BaselineType: "pct_min", BaselineParam: 0.30, Gamma: 0.6, Beta: 20, RelaxType: "sigmoid"},
+	12 * 3600: {WindowSec: 12 * 3600, BaselineType: "pct_min", BaselineParam: 0.50, Gamma: 0.5, Beta: 20, RelaxType: "sigmoid"},
}

This restores type-checking, keeps a single source of truth for the window size, and allows downstream code to compile.
Be sure to run go test ./... or the CI pipeline after applying the patch.
Will push new unit tests at
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## develop #1659 +/- ##
===========================================
- Coverage 40.68% 40.52% -0.16%
===========================================
Files 225 225
Lines 18419 18562 +143
===========================================
+ Hits 7493 7523 +30
- Misses 10195 10301 +106
- Partials 731 738 +7
Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Force-pushed 57e1f9a to 4f84e9e
Actionable comments posted: 2
🧹 Nitpick comments (6)
rollup/internal/controller/relayer/l2_relayer.go (6)
97-104
: Consider adding validation for strategy parameters
The StrategyParams struct holds important configuration that impacts batch submission behavior, but there's no validation to ensure the parameters are within reasonable bounds.
Consider adding a validation method to check that parameters like BaselineParam (0–1), Gamma, and Beta are within expected ranges:
type StrategyParams struct { BaselineType BaselineType // "pct_min" or "ewma" BaselineParam float64 // percentile (0–1) or α for EWMA Gamma float64 // relaxation γ Beta float64 // relaxation β RelaxType RelaxType // Exponential or Sigmoid + + // Validate checks if the parameters are within valid ranges + func (s StrategyParams) Validate() error { + if s.BaselineType == PctMin && (s.BaselineParam < 0 || s.BaselineParam > 1) { + return fmt.Errorf("BaselineParam must be between 0 and 1 for PctMin, got %f", s.BaselineParam) + } + if s.BaselineType == EWMA && (s.BaselineParam <= 0 || s.BaselineParam >= 1) { + return fmt.Errorf("BaselineParam (alpha) must be between 0 and 1 for EWMA, got %f", s.BaselineParam) + } + if s.Gamma < 0 { + return fmt.Errorf("Gamma must be non-negative, got %f", s.Gamma) + } + if s.Beta < 0 { + return fmt.Errorf("Beta must be non-negative, got %f", s.Beta) + } + return nil + } }
106-111
: Hard-coded strategy parameters could be made configurable
The strategy parameters for different time windows are hard-coded in the bestParams map, which might limit flexibility.
Consider making these settings configurable through configuration files or environment variables, allowing for easier tuning without code changes:
-// bestParams maps your 2h/5h/12h windows to their best rules.
-var bestParams = map[uint64]StrategyParams{
-	2 * 3600:  {BaselineType: PctMin, BaselineParam: 0.10, Gamma: 0.4, Beta: 8, RelaxType: Exponential},
-	5 * 3600:  {BaselineType: PctMin, BaselineParam: 0.30, Gamma: 0.6, Beta: 20, RelaxType: Sigmoid},
-	12 * 3600: {BaselineType: PctMin, BaselineParam: 0.50, Gamma: 0.5, Beta: 20, RelaxType: Sigmoid},
-}

Add to the configuration structure instead, and load default values if not specified.
192-211
: Add error logging for initialization failures
The strategy selection and fee-history initialization are crucial for the relayer's operation, but errors during these steps could be more informative.
Consider adding more detailed logging before returning errors to aid in troubleshooting:
// pick and validate our submission strategy windowSec := uint64(cfg.BatchSubmission.TimeoutSec) strategy, ok := bestParams[windowSec] if !ok { + log.Error("Unsupported batch submission timeout", "windowSec", windowSec, "supported", maps.Keys(bestParams)) return nil, fmt.Errorf( "unsupported BatchSubmission.TimeoutSec: %d (must be one of %v)", windowSec, maps.Keys(bestParams), ) } layer2Relayer.batchStrategy = strategy latest, err := layer2Relayer.l1BlockOrm.GetLatestL1BlockHeight(ctx) if err != nil { + log.Error("Failed to get latest L1 block height", "err", err) return nil, fmt.Errorf("failed to get latest L1 block height: %v", err) } layer2Relayer.lastFetchedBlock = latest - uint64(layer2Relayer.cfg.BatchSubmission.TimeoutSec)/12 // start ~window seconds ago if _, err = layer2Relayer.fetchBlobFeeHistory(uint64(layer2Relayer.cfg.BatchSubmission.TimeoutSec)); err != nil { + log.Error("Initial blob fee load failed", "err", err) return nil, fmt.Errorf("initial blob‐fee load failed: %w", err) }
351-360
: Consider refactoring skipSubmitByFee logic into a separate function
The fee-skipping logic in ProcessPendingBatches could be more readable and maintainable if refactored.
Consider extracting the decision logic into a more descriptive method:
- if backlogCount <= backlogMax { - oldest := dbBatches[0].CreatedAt - if skip, msg := r.skipSubmitByFee(oldest); skip { - log.Debug(msg) - return - } - // if !skip, we fall through and submit immediately - } + if backlogCount <= backlogMax && r.shouldDelaySubmissionForBetterFee(dbBatches[0].CreatedAt) { + return + } + // If backlog exceeds max, or we shouldn't delay, we fall through and submit immediately // Add a new method: +func (r *Layer2Relayer) shouldDelaySubmissionForBetterFee(oldestBatchTime time.Time) bool { + skip, msg := r.skipSubmitByFee(oldestBatchTime) + if skip { + log.Debug(msg) + return true + } + return false +}
1209-1238
: Optimize blob fee history storage and retrieval
The current implementation rebuilds the fee history on each call, which could be inefficient for frequent calls with large window sizes.
Consider implementing a time-based cache or a more efficient data structure like a circular buffer:
func (r *Layer2Relayer) fetchBlobFeeHistory(windowSec uint64) ([]*big.Int, error) {
	latest, err := r.l1BlockOrm.GetLatestL1BlockHeight(r.ctx)
	if err != nil {
		return nil, fmt.Errorf("GetLatestL1BlockHeight: %w", err)
	}
	from := r.lastFetchedBlock + 1
	// if new blocks
	if from <= latest {
		raw, err := r.l1BlockOrm.GetBlobFeesInRange(r.ctx, from, latest)
		if err != nil {
			return nil, fmt.Errorf("GetBlobFeesInRange: %w", err)
		}
		// 2) append them
		for _, v := range raw {
			r.feeHistory = append(r.feeHistory, new(big.Int).SetUint64(v))
			r.lastFetchedBlock++
		}
	}
	maxLen := int(windowSec / 12)
	if len(r.feeHistory) > maxLen {
		r.feeHistory = r.feeHistory[len(r.feeHistory)-maxLen:]
	}
	// return a copy
	out := make([]*big.Int, len(r.feeHistory))
	copy(out, r.feeHistory)
	return out, nil
}

Also, consider adding a timestamp or block number to each fee entry to better track the age of each sample.
1289-1317
: Add retry logic for blob fee history retrieval
The skipSubmitByFee method falls back to immediate submission as soon as blob-fee history is unavailable, even though the unavailability could be temporary.
Consider adding retry logic for fetching the blob fee history:
func (r *Layer2Relayer) skipSubmitByFee(oldest time.Time) (bool, string) { windowSec := uint64(r.cfg.BatchSubmission.TimeoutSec) - hist, err := r.fetchBlobFeeHistory(windowSec) + var hist []*big.Int + var err error + + // Try fetching the blob fee history with retries + maxRetries := 3 + for i := 0; i < maxRetries; i++ { + hist, err = r.fetchBlobFeeHistory(windowSec) + if err == nil && len(hist) > 0 { + break + } + + if i < maxRetries-1 { + log.Debug("Retrying blob fee history fetch", "attempt", i+1, "err", err) + time.Sleep(time.Second) + } + } + if err != nil || len(hist) == 0 { return false, fmt.Sprintf( "blob‐fee history unavailable or empty; fallback to immediate batch submission – err=%v, history_length=%d", err, len(hist), ) }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (3)
rollup/internal/controller/relayer/l2_relayer.go (8 hunks)
rollup/internal/orm/batch.go (1 hunks)
rollup/internal/orm/l1_block.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- rollup/internal/orm/l1_block.go
- rollup/internal/orm/batch.go
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: tests
🔇 Additional comments (5)
rollup/internal/controller/relayer/l2_relayer.go (5)
38-47
: Well-structured enum implementation for relaxation functions
Good use of iota for enum values with proper documentation. This follows Go best practices for defining constants.
49-60
: Clear enum implementation for baseline types
The enum is well-documented and follows Go best practices using iota.
92-95
: Good field additions to support the fee strategy
These new fields appropriately store the needed state for the fee-based submission strategy.
156-157
: LGTM: Added L1Block ORM initialization
The L1Block ORM is properly initialized to allow for fetching blob fee data.
1240-1287
:❓ Verification inconclusive
Add error checking for float64 conversions
The calculateTargetPrice function converts between big.Int and float64 without checking for potential loss of precision or overflow. Check for potential precision issues in the float64 conversions:
Consider adding error handling for these conversions and potentially using big.Float for the entire calculation to maintain precision.
🏁 Script executed:
#!/bin/bash
# Check if there are any extremely large blob fee values in the codebase that might cause precision issues
rg -A 2 -B 2 "BlobBaseFee"
rg -A 2 -B 2 "blobFee"

Length of output: 14319
Verify float64 conversion precision in calculateTargetPrice
The calculateTargetPrice function in rollup/internal/controller/relayer/l2_relayer.go converts *big.Int → *big.Float → float64 (ignoring the Accuracy return) and later back to *big.Int via Int(nil). In edge cases (values > 2⁵³ or requiring sub-gwei precision) this can silently round or overflow.
Please verify that:
- Blob-fee history values (in Gwei) always stay below the 2⁵³ threshold so conversion is exact.
- You're comfortable discarding the Accuracy flag returned by Float64().
If higher precision is needed, consider:
- Checking the Accuracy value from Float64() and handling “Below”/“Above” cases (a sketch follows below).
- Performing the entire relaxation and scaling using big.Float, only calling Int(nil) on the final result.
Location for review:
- rollup/internal/controller/relayer/l2_relayer.go → calculateTargetPrice (lines 1240–1287)
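One possible way to act on the Accuracy flag rather than discarding it — a minimal, standalone sketch (the helper name and error handling are illustrative, not from the PR):

```go
package main

import (
	"fmt"
	"math/big"
)

// toGweiFloat converts a wei-denominated fee to a float64 gwei value while
// surfacing the precision flag instead of silently discarding it.
func toGweiFloat(wei *big.Int) (float64, error) {
	gwei := new(big.Float).Quo(new(big.Float).SetInt(wei), big.NewFloat(1e9))
	f, acc := gwei.Float64()
	if acc != big.Exact {
		// Below/Above means the float64 result was rounded; callers can decide
		// whether that matters or switch to a big.Float-only pipeline.
		return f, fmt.Errorf("float64 conversion inexact (%v) for %s wei", acc, wei)
	}
	return f, nil
}

func main() {
	fee := new(big.Int).SetUint64(1_234_567_890) // ~1.23 gwei
	f, err := toGweiFloat(fee)
	fmt.Println(f, err)
}
```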
Force-pushed 4f84e9e to 201007c
Force-pushed 201007c to d5af194
Actionable comments posted: 1
♻️ Duplicate comments (1)
rollup/internal/controller/relayer/l2_relayer.go (1)
1272-1280
: ⚠️ Potential issue: Add bounds checking for relaxation formulas.
The exponential and sigmoid formulas could lead to extremely large values without proper bounds checking.
Add protection against potential numerical issues:
switch strategy.RelaxType { case Exponential: - adjusted = baseline * (1 + strategy.Gamma*math.Exp(strategy.Beta*(frac-1))) + exponent := strategy.Beta * (frac - 1) + // Prevent extreme values that could cause overflow + if exponent > 20 { + exponent = 20 + } + adjusted = baseline * (1 + strategy.Gamma*math.Exp(exponent)) case Sigmoid: - adjusted = baseline * (1 + strategy.Gamma/(1+math.Exp(-strategy.Beta*(frac-0.5)))) + sigmoid := strategy.Gamma / (1 + math.Exp(-strategy.Beta*(frac-0.5))) + // Ensure the adjustment stays within reasonable bounds + if sigmoid > 10 { + sigmoid = 10 + } + adjusted = baseline * (1 + sigmoid) default: adjusted = baseline }
🧹 Nitpick comments (6)
rollup/internal/controller/relayer/l2_relayer.go (6)
107-111
: Consider adding validation for strategy parameters.
While the map of predefined parameters looks good, consider adding validation logic to ensure the parameters are within reasonable bounds (e.g., BaselineParam should be between 0 and 1 for percentiles).
var bestParams = map[uint64]StrategyParams{ 2 * 3600: {BaselineType: PctMin, BaselineParam: 0.10, Gamma: 0.4, Beta: 8, RelaxType: Exponential}, 5 * 3600: {BaselineType: PctMin, BaselineParam: 0.30, Gamma: 0.6, Beta: 20, RelaxType: Sigmoid}, 12 * 3600: {BaselineType: PctMin, BaselineParam: 0.50, Gamma: 0.5, Beta: 20, RelaxType: Sigmoid}, } + +// validateStrategyParams ensures that strategy parameters are within reasonable bounds +func validateStrategyParams(s StrategyParams) error { + if s.BaselineType == PctMin && (s.BaselineParam < 0 || s.BaselineParam > 1) { + return fmt.Errorf("PctMin baseline parameter must be between 0 and 1, got %f", s.BaselineParam) + } + if s.BaselineType == EWMA && (s.BaselineParam <= 0 || s.BaselineParam >= 1) { + return fmt.Errorf("EWMA alpha parameter must be between 0 and 1 exclusive, got %f", s.BaselineParam) + } + return nil +}
344-359
: Hardcoded backlog threshold should be configurable.
The backlog threshold uses the configuration value r.cfg.BatchSubmission.BacklogMax, which is good, but consider documenting in the comments what this threshold represents and how it affects the batch submission behavior.

- // if backlog outgrow max size, force‐submit enough oldest batches
+ // If backlog exceeds the maximum allowed size (BacklogMax), we force-submit
+ // batches regardless of the current blob fee, to prevent excessive batch buildup.
+ // This ensures the system can recover from prolonged high-fee periods.
  backlogCount, err := r.batchOrm.GetFailedAndPendingBatchesCount(r.ctx)
  if err != nil {
  	log.Error("Failed to fetch pending L2 batches", "err", err)
  	return
  }
1243-1248
: Consider optimizing big.Float to float64 conversion.
The conversion from
big.Int
tofloat64
viabig.Float
is correct but could be optimized for large batches.data := make([]float64, n) + divider := big.NewFloat(1e9) for i, v := range history { - f, _ := new(big.Float).Quo(new(big.Float).SetInt(v), big.NewFloat(1e9)).Float64() + f, _ := new(big.Float).Quo(new(big.Float).SetInt(v), divider).Float64() data[i] = f }
1306-1311
: Add metric for submission skipping due to fees.It would be valuable to track how often batch submissions are being skipped due to high fees. This would help monitor the effectiveness of the fee-based submission strategy.
// if current fee > target and still inside the timeout window, skip if current.Cmp(target) > 0 && time.Since(oldest) < time.Duration(windowSec)*time.Second { + r.metrics.rollupL2RelayerBatchSubmissionSkippedForFee.Inc() return true, fmt.Sprintf( "blob‐fee above target & window not yet passed; current=%s target=%s age=%s", current.String(), target.String(), time.Since(oldest), ) }
1180-1180
: Remove debug print statement.The
fmt.Println
statement should be removed or replaced with proper logging.- fmt.Println("packing finalizeBundlePostEuclidV2NoProof", len(dbBatch.BatchHeader), dbBatch.CodecVersion, dbBatch.BatchHeader, new(big.Int).SetUint64(endChunk.TotalL1MessagesPoppedBefore+endChunk.TotalL1MessagesPoppedInChunk), common.HexToHash(dbBatch.StateRoot), common.HexToHash(dbBatch.WithdrawRoot)) + log.Debug("Packing finalizeBundlePostEuclidV2NoProof", + "header_length", len(dbBatch.BatchHeader), + "codec_version", dbBatch.CodecVersion, + "messages_popped", endChunk.TotalL1MessagesPoppedBefore+endChunk.TotalL1MessagesPoppedInChunk, + "state_root", dbBatch.StateRoot, + "withdraw_root", dbBatch.WithdrawRoot)
1207-1236
:❓ Verification inconclusive
Consider optimizing blob fee history retrieval.
The current implementation queries the database frequently to update the fee history. This could become a performance bottleneck as the system scales. Consider:
- Using a time-based cache with periodic updates instead of querying on every check
- Adding batch size limits to prevent fetching too many records at once
- Implementing a more efficient data structure like a circular buffer for the fee history
Let's check how frequently this method might be called:
🏁 Script executed:
#!/bin/bash
# Find where fetchBlobFeeHistory is called
rg -A 2 "fetchBlobFeeHistory" --no-ignore

Length of output: 1170
I’ll pull the exact call sites and surrounding context to see how often and where this is invoked.
🏁 Script executed:
#!/bin/bash
# Locate all calls and show 5 lines of context around them
rg -n "fetchBlobFeeHistory" rollup/internal/controller/relayer/l2_relayer.go
rg -C 5 "fetchBlobFeeHistory" rollup/internal/controller/relayer/l2_relayer.go

Length of output: 1992
Optimize blob fee history retrieval
The fetchBlobFeeHistory method (rollup/internal/controller/relayer/l2_relayer.go:1207–1236) is invoked twice:
- During relayer initialization (line 208)
- Inside skipSubmitByFee (around line 1293) on each batch-submission check
Each call performs two DB queries (latest block height + fee range), which may become a bottleneck as submissions scale. Consider:
• Caching the recent base-fee history and refreshing on a timer rather than querying every check
• Imposing a hard limit on fetched rows or time window to bound each query
• Switching to a fixed-size ring (circular buffer) in memory to cap history growth and reduce slicing overhead (see the sketch below)
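A fixed-size ring buffer along the lines of the last bullet, as a standalone sketch (the type and method names are illustrative, not from the PR):

```go
// feeRing keeps the most recent n blob-fee samples without reslicing; n must be > 0.
type feeRing struct {
	buf  []uint64
	next int
	full bool
}

func newFeeRing(n int) *feeRing { return &feeRing{buf: make([]uint64, n)} }

// push overwrites the oldest slot once the buffer has wrapped around.
func (r *feeRing) push(v uint64) {
	r.buf[r.next] = v
	r.next = (r.next + 1) % len(r.buf)
	if r.next == 0 {
		r.full = true
	}
}

// snapshot returns the samples in insertion order, oldest first.
func (r *feeRing) snapshot() []uint64 {
	if !r.full {
		return append([]uint64(nil), r.buf[:r.next]...)
	}
	out := make([]uint64, 0, len(r.buf))
	out = append(out, r.buf[r.next:]...)
	out = append(out, r.buf[:r.next]...)
	return out
}
```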
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (5)
rollup/conf/config.json (1 hunks)
rollup/internal/config/relayer.go (1 hunks)
rollup/internal/controller/relayer/l2_relayer.go (8 hunks)
rollup/internal/orm/batch.go (1 hunks)
rollup/internal/orm/l1_block.go (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- rollup/internal/config/relayer.go
🚧 Files skipped from review as they are similar to previous changes (3)
- rollup/conf/config.json
- rollup/internal/orm/l1_block.go
- rollup/internal/orm/batch.go
🧰 Additional context used
🧬 Code Graph Analysis (1)
rollup/internal/controller/relayer/l2_relayer.go (2)
rollup/internal/orm/l1_block.go (3)
  L1Block (14-31)
  L1Block (39-41)
  NewL1Block (34-36)
rollup/internal/config/relayer.go (1)
  BatchSubmission (34-43)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
🔇 Additional comments (6)
rollup/internal/controller/relayer/l2_relayer.go (6)
38-47
: Good use of Golang iota for enumeration.
The enum implementation for RelaxType using iota is clean and follows Go best practices for creating type-safe enumeration constants.
49-60
: Good documentation of enum types.
The explanation comments for each baseline type clearly communicate the purpose and behavior of these constants. This improves code readability and maintainability.
97-104
: Clear structure definition for strategy parameters.
The StrategyParams struct provides a well-organized way to encapsulate the fee submission strategy configuration with appropriate field types.
192-210
: Good strategy parameter validation in constructor.
The constructor properly validates that the configured timeout window is one of the supported values and returns a helpful error message if not. This is a good practice to fail fast on invalid configuration.
1287-1315
: Good fallback behavior in skipSubmitByFee.
The method properly handles error cases by falling back to immediate submission if fee history is unavailable or empty. This ensures the system remains operational even when fee data is missing.
1252-1268
: Robust implementation of baseline calculation algorithms.
The implementation of the percentile minimum (PctMin) and exponentially weighted moving average (EWMA) calculations is well structured and follows standard algorithmic approaches.
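For reference, the EWMA baseline mentioned here is the standard recurrence e_i = α·x_i + (1−α)·e_{i−1}; a tiny standalone sketch over gwei samples as float64 (not the PR's exact code):

```go
// ewma returns the exponentially weighted moving average of samples,
// seeded with the first sample: e_i = alpha*x_i + (1-alpha)*e_{i-1}.
func ewma(samples []float64, alpha float64) float64 {
	if len(samples) == 0 {
		return 0
	}
	e := samples[0]
	for _, x := range samples[1:] {
		e = alpha*x + (1-alpha)*e
	}
	return e
}
```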
Actionable comments posted: 0
♻️ Duplicate comments (2)
rollup/internal/controller/relayer/l2_relayer.go (2)
1229-1276
: ⚠️ Potential issue: Add bounds checking for relaxation formulas
The exponential and sigmoid formulas could lead to extremely large values without proper bounds checking.
Add protection against potential numerical issues:
switch strategy.RelaxType { case Exponential: - adjusted = baseline * (1 + strategy.Gamma*math.Exp(strategy.Beta*(frac-1))) + exponent := strategy.Beta * (frac - 1) + // Prevent extreme values that could cause overflow + if exponent > 20 { + exponent = 20 + } + adjusted = baseline * (1 + strategy.Gamma*math.Exp(exponent)) case Sigmoid: - adjusted = baseline * (1 + strategy.Gamma/(1+math.Exp(-strategy.Beta*(frac-0.5)))) + sigmoid := strategy.Gamma / (1 + math.Exp(-strategy.Beta*(frac-0.5))) + // Ensure the adjustment stays within reasonable bounds + if sigmoid > 10 { + sigmoid = 10 + } + adjusted = baseline * (1 + sigmoid) default: adjusted = baseline }
1195-1227
: 🛠️ Refactor suggestion: Add minimum history length check
The fee history window could be too small to make meaningful decisions if there aren't enough samples.
maxLen := int(windowSec / secondsPerBlock) if len(r.feeHistory) > maxLen { r.feeHistory = r.feeHistory[len(r.feeHistory)-maxLen:] } + + // Ensure we have sufficient data for meaningful analysis + if len(r.feeHistory) < int(math.Max(3, float64(maxLen)*0.1)) { + log.Warn("Insufficient blob fee history samples for reliable decision making", + "samples", len(r.feeHistory), + "desired", maxLen) + }
🧹 Nitpick comments (2)
rollup/internal/controller/relayer/l2_relayer.go (2)
108-112
: Consider extracting best parameters to configuration
While the bestParams map is a good approach for providing default strategies, consider making these values configurable through the configuration file rather than hardcoding them in the source code.
This would allow tuning the parameters without requiring a code change and redeployment.
1278-1306
: Consider caching calculated target prices
The target price calculation is potentially expensive, especially with larger fee histories. Consider caching the result for a short period to reduce computational overhead when skipSubmitByFee is called frequently.
Add a cache for the target price with a short expiration to avoid recalculating too frequently:
type targetPriceCache struct { price *big.Int oldest time.Time calculated time.Time expiry time.Duration } // Cache the target price calculation result if time.Since(r.targetCache.calculated) < r.targetCache.expiry && r.targetCache.oldest == oldest { target = r.targetCache.price } else { target = calculateTargetPrice(windowSec, r.batchStrategy, oldest, hist) r.targetCache = targetPriceCache{ price: target, oldest: oldest, calculated: time.Now(), expiry: 5 * time.Second, // Configurable cache duration } }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
rollup/internal/controller/relayer/l2_relayer.go (8 hunks)
rollup/internal/orm/batch.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- rollup/internal/orm/batch.go
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: tests
- GitHub Check: tests
🔇 Additional comments (10)
rollup/internal/controller/relayer/l2_relayer.go (10)
37-46
: Good use of iota for enum definitionUsing iota for RelaxType enum values follows Go convention and makes the code more maintainable.
48-48
: Use constant instead of magic numberGood use of a named constant for secondsPerBlock rather than hardcoding the value throughout the code.
50-61
: Well-documented enum definitionClear documentation for the BaselineType enum that explains the purpose of each option.
93-96
: New fields added to support fee-based submission strategyThe new fields properly track the state needed for implementing the fee-based batch submission strategy.
98-106
: Well-structured strategy parametersThe StrategyParams struct is well organized with appropriate fields for fee baseline calculation and relaxation functions.
157-157
: New ORM field initializationThe l1BlockOrm field is properly initialized in the constructor.
193-195
: Proper initialization of fee historyThe code correctly preloads the blob fee history during initialization and returns an error if it fails.
330-334
: Backlog check for forcing submissionGood approach to check the backlog count for determining whether to force-submit batches. This prevents excessive queue buildup.
339-353
: Well-structured batch submission decision logicThe code clearly separates the conditions for deciding when to submit batches:
- Check backlog against the maximum threshold
- Check if the oldest batch exceeds the timeout window
- Use price-based strategy if neither of the above conditions are met
This provides a balanced approach between cost optimization and ensuring timely submissions.
1297-1302
: Good debugging information in error messageThe error message provides useful debugging information including the current fee, target fee, and time since the oldest batch, which will help with troubleshooting.
Actionable comments posted: 0
♻️ Duplicate comments (1)
rollup/internal/controller/relayer/l2_relayer.go (1)
1229-1276
: 🛠️ Refactor suggestion: Implement target price calculation with baseline and relaxation.
The implementation correctly:
- Handles empty history array
- Applies either percentile or EWMA baseline calculation
- Applies relaxation curves (exponential or sigmoid)
- Converts between Gwei and Wei for calculations
Consider adding bounds checking to prevent potential numerical instability with the exponential and sigmoid functions.
switch strategy.RelaxType { case Exponential: - adjusted = baseline * (1 + strategy.Gamma*math.Exp(strategy.Beta*(frac-1))) + exponent := strategy.Beta * (frac - 1) + // Prevent extreme values that could cause overflow + if exponent > 20 { + exponent = 20 + } + adjusted = baseline * (1 + strategy.Gamma*math.Exp(exponent)) case Sigmoid: - adjusted = baseline * (1 + strategy.Gamma/(1+math.Exp(-strategy.Beta*(frac-0.5)))) + sigmoidTerm := strategy.Gamma / (1 + math.Exp(-strategy.Beta*(frac-0.5))) + // Ensure the adjustment stays within reasonable bounds + if sigmoidTerm > 10 { + sigmoidTerm = 10 + } + adjusted = baseline * (1 + sigmoidTerm) default: adjusted = baseline }
🧹 Nitpick comments (1)
rollup/internal/controller/relayer/l2_relayer.go (1)
1195-1227
: Implement fetchBlobFeeHistory to maintain sliding window of blob fees.
The method:
- Correctly bootstraps on first call
- Fetches only new blocks since last call
- Maintains a sliding window of appropriate size
You might consider adding a minimum history length check to ensure enough samples for reliable decision-making.
maxLen := int(windowSec / secondsPerBlock) if len(r.feeHistory) > maxLen { r.feeHistory = r.feeHistory[len(r.feeHistory)-maxLen:] } + + // Ensure we have sufficient data for meaningful analysis + if len(r.feeHistory) < int(math.Max(3, float64(maxLen)*0.1)) { + log.Debug("Limited blob fee history samples for decision making", + "samples", len(r.feeHistory), + "desired", maxLen) + }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (1)
rollup/internal/controller/relayer/l2_relayer.go (8 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
rollup/internal/controller/relayer/l2_relayer.go (2)
rollup/internal/orm/l1_block.go (3)
L1Block
(14-31)L1Block
(39-41)NewL1Block
(34-36)rollup/internal/config/relayer.go (1)
BatchSubmission
(34-43)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
🔇 Additional comments (11)
rollup/internal/controller/relayer/l2_relayer.go (11)
37-46
: Good job defining the RelaxType enumeration clearly.The enum is well-documented with descriptive comments for each relaxation function type.
48-48
: Good addition of secondsPerBlock constant.Extracting this as a constant improves readability and maintainability.
50-61
: BaselineType enumeration is well-defined.Clear structure with descriptive comments explaining each baseline type's purpose.
76-76
: Adding l1BlockOrm field is appropriate.This field is necessary to fetch blob fee data for the new strategy.
93-96
: New Layer2Relayer fields support the fee history strategy.The fields are well named and appropriately scoped for the new functionality.
98-105
: StrategyParams struct is well-designed.The struct provides a clear structure for fee-submission rules with appropriate fields for each parameter.
107-112
: The bestParams map provides good defaults for different windows.Presets for 2h, 5h, and 12h windows allow for flexible configuration based on timeout duration.
157-157
: Properly initializing l1BlockOrm in the Layer2Relayer constructor.The ORM field is initialized correctly using the provided database connection.
193-195
: Initial blob-fee history load is a good practice.Loading initial fee history at startup ensures the relayer has data available for the first batch submission decision.
329-352
: Check batches backlog and implement fee-based submission strategy.The code correctly implements the logic to:
- Check backlog count against configurable threshold
- Enforce submission when the oldest batch exceeds the timeout
- Skip submission when current blob fees are above target and timeout hasn't elapsed
There's a seamless integration with the existing batch submission logic.
1278-1306
: Implement skipSubmitByFee for batch submission control.The function:
- Correctly fetches blob fee history
- Calculates target price based on strategy
- Compares current fee to target price
- Allows submission when either fee conditions are favorable or timeout window has elapsed
The error reporting is detailed and will help with debugging submission decisions.
Would be good to add a high-level comment to ProcessPendingBatches (a possible sketch follows the list below):
We submit batches if:
- the first batch is too old -> forceSubmit
- backlogCount > r.cfg.BatchSubmission.BacklogMax -> forceSubmit
- we have at least min count AND price is good
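A possible shape for that doc comment, as a sketch only; the exact wording and config field names would follow the final code:

```go
// ProcessPendingBatches submits pending batches to L1. A batch commit is sent when
// any of the following holds:
//   - the oldest pending batch is older than BatchSubmission.TimeoutSec (force submit),
//   - backlogCount > cfg.BatchSubmission.BacklogMax (force submit to drain the backlog),
//   - at least the minimum number of batches is pending AND the current blob fee is at
//     or below the target price derived from the recent fee history.
func (r *Layer2Relayer) ProcessPendingBatches() {
	// ...
}
```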
I committed a small refactor as I wanted to try it locally first
I'm okay with merging this once all comments except #1659 (comment) are resolved. Other things we should consider in the next version:
…eeHistory from constructor
Actionable comments posted: 0
♻️ Duplicate comments (2)
rollup/internal/controller/relayer/l2_relayer.go (2)
1228-1232
: Consider adding minimum history length check.
The fee history window could be too small to make meaningful decisions if there aren't enough samples.
maxLen := int(windowSec / secondsPerBlock) if len(r.feeHistory) > maxLen { r.feeHistory = r.feeHistory[len(r.feeHistory)-maxLen:] } + + // Ensure we have sufficient data for meaningful analysis + if len(r.feeHistory) < int(math.Max(3, float64(maxLen)*0.1)) { + log.Warn("Insufficient blob fee history samples for reliable decision making", + "samples", len(r.feeHistory), + "desired", maxLen) + }
1237-1301
: 🛠️ Refactor suggestion: Add bounds checking for relaxation formulas.
The exponential and sigmoid formulas could lead to extremely large values without proper bounds checking.
Add protection against potential numerical issues:
switch strategy.RelaxType { case Exponential: - adjusted = baseline * (1 + strategy.Gamma*math.Exp(strategy.Beta*(frac-1))) + exponent := strategy.Beta * (frac - 1) + // Prevent extreme values that could cause overflow + if exponent > 20 { + exponent = 20 + } + adjusted = baseline * (1 + strategy.Gamma*math.Exp(exponent)) case Sigmoid: - adjusted = baseline * (1 + strategy.Gamma/(1+math.Exp(-strategy.Beta*(frac-0.5)))) + sigmoid := strategy.Gamma / (1 + math.Exp(-strategy.Beta*(frac-0.5))) + // Ensure the adjustment stays within reasonable bounds + if sigmoid > 10 { + sigmoid = 10 + } + adjusted = baseline * (1 + sigmoid) default: adjusted = baseline }
🧹 Nitpick comments (2)
rollup/internal/controller/relayer/l2_relayer.go (2)
1203-1235
: Consider caching intermediate results in fee history fetching.
The fetchBlobFeeHistory method queries the database for new blocks on each call. While you do maintain a sliding window, there's potential for optimization by caching the results of expensive calculations derived from this history.
Consider implementing a time-based or update-based cache refreshing strategy to avoid redundant calculations, especially since this method might be called frequently:
func (r *Layer2Relayer) fetchBlobFeeHistory(windowSec uint64) ([]*big.Int, error) { + // Only update if it's been more than X seconds since last update + if r.lastFeeHistoryUpdate != nil && time.Since(*r.lastFeeHistoryUpdate) < 5*time.Second { + return r.feeHistory, nil + } + latest, err := r.l1BlockOrm.GetLatestL1BlockHeight(r.ctx) if err != nil { return nil, fmt.Errorf("GetLatestL1BlockHeight: %w", err) } // bootstrap on first call if r.lastFetchedBlock == 0 { // start window r.lastFetchedBlock = latest - windowSec/secondsPerBlock } from := r.lastFetchedBlock + 1 //if new blocks if from <= latest { raw, err := r.l1BlockOrm.GetBlobFeesInRange(r.ctx, from, latest) if err != nil { return nil, fmt.Errorf("GetBlobFeesInRange: %w", err) } // append them for _, v := range raw { r.feeHistory = append(r.feeHistory, new(big.Int).SetUint64(v)) r.lastFetchedBlock++ } + // Update timestamp of last update + now := time.Now() + r.lastFeeHistoryUpdate = &now } maxLen := int(windowSec / secondsPerBlock) if len(r.feeHistory) > maxLen { r.feeHistory = r.feeHistory[len(r.feeHistory)-maxLen:] } return r.feeHistory, nil }
1238-1250
: PctMin implementation could be more efficient.
The PctMin calculation creates a full copy of the history array and sorts it on every call, which could be inefficient for large history windows.
Consider optimizing this by using a selection algorithm (like quickselect) to find the k-th smallest element without full sorting:
case PctMin: - // make a copy, sort by big.Int.Cmp, then pick the percentile element - sorted := make([]*big.Int, n) - copy(sorted, history) - sort.Slice(sorted, func(i, j int) bool { - return sorted[i].Cmp(sorted[j]) < 0 - }) - idx := int(strategy.BaselineParam * float64(n-1)) - if idx < 0 { - idx = 0 - } - baseline, _ = new(big.Float). - Quo(new(big.Float).SetInt(sorted[idx]), big.NewFloat(1e9)). - Float64() + // Use a more efficient selection algorithm for finding the percentile element + idx := int(strategy.BaselineParam * float64(n-1)) + if idx < 0 { + idx = 0 + } + + // For small arrays, simple sort is fine + if n <= 100 { + sorted := make([]*big.Int, n) + copy(sorted, history) + sort.Slice(sorted, func(i, j int) bool { + return sorted[i].Cmp(sorted[j]) < 0 + }) + baseline, _ = new(big.Float). + Quo(new(big.Float).SetInt(sorted[idx]), big.NewFloat(1e9)). + Float64() + } else { + // For larger arrays, use selection algorithm to find k-th smallest + // Implementation of quickselect or similar algorithm + percentileValue := findKthSmallest(history, idx) + baseline, _ = new(big.Float). + Quo(new(big.Float).SetInt(percentileValue), big.NewFloat(1e9)). + Float64() + }This optimization would be helpful for the long windows (like 12h) where the history could contain thousands of elements.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (1)
rollup/internal/controller/relayer/l2_relayer.go (8 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
rollup/internal/controller/relayer/l2_relayer.go (2)
rollup/internal/orm/l1_block.go (3)
L1Block
(14-31)L1Block
(39-41)NewL1Block
(34-36)rollup/internal/config/relayer.go (1)
BatchSubmission
(34-43)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: tests
🔇 Additional comments (12)
rollup/internal/controller/relayer/l2_relayer.go (12)
37-46
: Good implementation of the relaxation function enum.The enum implementation using iota provides a clean way to define different relaxation functions, with clear documentation of each type's purpose. This makes the code more maintainable and follows Go best practices.
48-48
: Good use of a named constant for block time.Defining
secondsPerBlock
as a constant improves code readability and maintainability. This value is used throughout the calculation logic, making it easy to update if the network parameters change.
50-61
: Well-defined baseline type enumeration.The
BaselineType
enumeration clearly defines the supported baseline calculation methods with good documentation. Using Go's iota for this enum is the correct approach.
93-96
: Good storage structure for fee history data.The added fields provide proper state management for the fee submission strategy, with clear separation of concerns between storing history data and strategy parameters.
98-105
: Well-designed strategy parameter structure.The
StrategyParams
struct provides a clean way to encapsulate the different parameters needed for the fee submission strategy. This approach makes it easy to create and manage different strategy configurations.
108-112
: Appreciate the pre-configured strategy parameters for different time windows.The use of a map to store optimal parameters for different time windows is a good approach. These parameters appear to be derived from trace tests as mentioned in the PR description, which is a data-driven way to optimize the strategy.
157-157
: Good integration of L1Block ORM.The addition of the L1BlockOrm initialization in the constructor enables querying L1 block data including blob fees, which is essential for the new submission strategy.
318-322
: Clear documentation of the extended batch submission policy.The updated comments accurately describe the multiple conditions under which batches are submitted, making the implementation intent clear.
330-336
: Good implementation of backlog management.The code properly fetches the backlog count and handles potential database errors. This ensures the relayer can make informed decisions based on the backlog size.
339-349
: Good implementation of force submission conditions.The code defines two clear conditions for forcing batch submission:
- When the oldest batch exceeds the timeout window
- When the backlog exceeds the configured maximum
This ensures the system doesn't indefinitely delay batch submissions during sustained high fee periods.
350-361
: Good implementation of fee-based submission gating.The code properly calls
skipSubmitByFee
to determine if the current blob fee is favorable for submission. The error handling is well-designed, with appropriate logging and a fallback to immediate submission if the fee check fails.
1303-1331
: Well-implemented fee-based submission decision logic.The
skipSubmitByFee
function properly implements the strategy described in the PR objectives. It fetches the latest blob fee history, calculates a target price based on the batch age, and makes a decision based on the current fee compared to the target.The error messages are informative, including the actual values that influenced the decision.
Actionable comments posted: 0
♻️ Duplicate comments (1)
rollup/internal/controller/relayer/l2_relayer.go (1)
1237-1301
: 🛠️ Refactor suggestionAdd bounds checking for relaxation formulas to prevent extreme values.
The exponential and sigmoid relaxation formulas could produce extremely large values for certain parameter combinations, potentially causing numerical instability.
Add protection against potential numerical issues:
switch strategy.RelaxType { case Exponential: - adjusted = baseline * (1 + strategy.Gamma*math.Exp(strategy.Beta*(frac-1))) + exponent := strategy.Beta * (frac - 1) + // Prevent extreme values that could cause overflow + if exponent > 20 { + exponent = 20 + } + adjusted = baseline * (1 + strategy.Gamma*math.Exp(exponent)) case Sigmoid: - adjusted = baseline * (1 + strategy.Gamma/(1+math.Exp(-strategy.Beta*(frac-0.5)))) + sigmoid := strategy.Gamma / (1 + math.Exp(-strategy.Beta*(frac-0.5))) + // Ensure the adjustment stays within reasonable bounds + if sigmoid > 10 { + sigmoid = 10 + } + adjusted = baseline * (1 + sigmoid) default: adjusted = baseline }Additionally, be aware of potential precision loss in the float64 conversions. This is likely negligible for the expected fee ranges but could become an issue if blob fees grow to extreme values.
🧹 Nitpick comments (1)
rollup/internal/controller/relayer/l2_relayer.go (1)
1203-1235
: Consider adding a minimum history check for reliable fee analysis.The function maintains a sliding window of blob fees, but there's no check to ensure there's sufficient data for making reliable fee decisions. If the window is too small or empty, the analysis may not be representative.
Add a minimum history length check and warning:
maxLen := int(windowSec / secondsPerBlock) if len(r.feeHistory) > maxLen { r.feeHistory = r.feeHistory[len(r.feeHistory)-maxLen:] } + + // Ensure we have sufficient data for meaningful analysis + minSamples := int(math.Max(3, float64(maxLen)*0.1)) + if len(r.feeHistory) < minSamples { + log.Warn("Insufficient blob fee history samples for reliable decision making", + "samples", len(r.feeHistory), + "desired", minSamples) + }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (1)
rollup/internal/controller/relayer/l2_relayer.go (9 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
rollup/internal/controller/relayer/l2_relayer.go (2)
rollup/internal/orm/l1_block.go (3)
L1Block
(14-31)L1Block
(39-41)NewL1Block
(34-36)rollup/internal/config/relayer.go (1)
BatchSubmission
(34-43)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
🔇 Additional comments (10)
rollup/internal/controller/relayer/l2_relayer.go (10)
37-61
: Well-structured enums for the relaxation and baseline types.The enums are properly defined using Go's iota with clear documentation for each type. This provides a solid foundation for the fee submission strategy implementation.
63-96
: LGTM for the Layer2Relayer struct extensions.The additions to the Layer2Relayer struct are well-organized and properly support the new blob fee tracking functionality with clear purpose for each field.
98-105
: Good design for the StrategyParams struct.The StrategyParams struct provides a clean way to encapsulate all the parameters needed for the fee submission strategy, making it easily configurable.
107-112
: Well-tuned preset parameters for different time windows.The bestParams map provides sensible defaults for various timeout windows based on trace testing as mentioned in the PR description.
156-158
: LGTM for ORM initialization.The l1BlockOrm is properly initialized in the constructor to provide access to the L1 block data.
168-171
: Clear strategy initialization based on configured timeout.The batchStrategy is properly initialized from the bestParams map based on the configured timeout, ensuring that the optimal strategy is applied.
318-322
: Clear documentation of the batch submission conditions.The updated documentation clearly explains the conditions under which batches are submitted, making the code more maintainable.
331-348
: Well-structured backlog handling logic.The code properly checks the backlog count and forces submission if it exceeds the configured maximum, helping to prevent excessive delays in batch processing.
350-361
: Good implementation of fee-based submission gating.The code properly integrates the fee-based submission strategy by calling skipSubmitByFee and handling the result appropriately, including detailed logging when skipping submission.
1303-1331
: Well-designed fee skipping logic with good error reporting.The skipSubmitByFee function provides a clear implementation of the fee-based gating mechanism with descriptive error messages when skipping submission, making debugging easier.
Actionable comments posted: 2
♻️ Duplicate comments (2)
rollup/internal/controller/relayer/l2_relayer.go (2)
1227-1231
: Consider adding minimum history length check.The fee history window could be too small to make meaningful decisions if there aren't enough samples.
Add a check for minimum history length:
maxLen := int(windowSec / secondsPerBlock) if len(r.feeHistory) > maxLen { r.feeHistory = r.feeHistory[len(r.feeHistory)-maxLen:] } + + // Ensure we have sufficient data for meaningful analysis + minSamples := int(math.Max(3, float64(maxLen)*0.1)) + if len(r.feeHistory) < minSamples { + log.Warn("Insufficient blob fee history samples for reliable decision making", + "samples", len(r.feeHistory), + "desired", maxLen, + "minimum", minSamples) + }
1235-1299
: 🛠️ Refactor suggestionAdd bounds checking for relaxation formulas.
The exponential and sigmoid formulas could lead to extremely large values without proper bounds checking.
Add protection against potential numerical issues:
switch strategy.RelaxType { case Exponential: - adjusted = baseline * (1 + strategy.Gamma*math.Exp(strategy.Beta*(frac-1))) + exponent := strategy.Beta * (frac - 1) + // Prevent extreme values that could cause overflow + if exponent > 20 { + exponent = 20 + } + adjusted = baseline * (1 + strategy.Gamma*math.Exp(exponent)) case Sigmoid: - adjusted = baseline * (1 + strategy.Gamma/(1+math.Exp(-strategy.Beta*(frac-0.5)))) + sigmoid := strategy.Gamma / (1 + math.Exp(-strategy.Beta*(frac-0.5))) + // Ensure the adjustment stays within reasonable bounds + if sigmoid > 10 { + sigmoid = 10 + } + adjusted = baseline * (1 + sigmoid) default: adjusted = baseline }
🧹 Nitpick comments (1)
rollup/internal/controller/relayer/l2_relayer.go (1)
1258-1275
: Consider optimizing the EWMA calculation.The EWMA calculation creates many temporary
big.Float
objects, which could be optimized.Reuse float objects where possible:
case EWMA: one := big.NewFloat(1) alpha := big.NewFloat(strategy.BaselineParam) oneMinusAlpha := new(big.Float).Sub(one, alpha) + curr := new(big.Float) + term1 := new(big.Float) + term2 := new(big.Float) // start from first history point ewma := new(big.Float). Quo(new(big.Float).SetInt(history[0]), big.NewFloat(1e9)) for i := 1; i < n; i++ { - curr := new(big.Float). + curr. Quo(new(big.Float).SetInt(history[i]), big.NewFloat(1e9)) - term1 := new(big.Float).Mul(alpha, curr) - term2 := new(big.Float).Mul(oneMinusAlpha, ewma) - ewma = new(big.Float).Add(term1, term2) + term1.Mul(alpha, curr) + term2.Mul(oneMinusAlpha, ewma) + ewma.Add(term1, term2) } baseline, _ = ewma.Float64()
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (3)
common/version/version.go (1 hunks)
rollup/internal/controller/relayer/l2_relayer.go (9 hunks)
rollup/internal/orm/batch.go (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- common/version/version.go
🚧 Files skipped from review as they are similar to previous changes (1)
- rollup/internal/orm/batch.go
🔇 Additional comments (10)
rollup/internal/controller/relayer/l2_relayer.go (10)
37-46
: Good use of enumeration for relaxation functions.The enum
RelaxType
is well structured with clear comments explaining the purpose of each constant. Using iota for auto-incrementing enum values is a Go best practice.
48-48
: Good use of a constant for seconds per block.Extracting the value to a named constant improves readability and maintenance.
50-61
: Well-documented enum for baseline types.The
BaselineType
enum is clearly defined with detailed comments that explain each strategy, making the code more maintainable.
93-96
: Good structure additions to Layer2Relayer.The new fields are well-named and have clear purposes:
lastFetchedBlock
tracks progressfeeHistory
stores the sliding window of blob feesbatchStrategy
holds the configured submission strategy parameters
98-105
: Well-designed strategy parameters struct.The
StrategyParams
struct encapsulates all the parameters needed for fee-based submission strategies with clear field names and appropriate types.
107-112
: Strategy tuning parameters are well-organized.Using a map to associate different window durations with their optimal strategy parameters is a clean approach. The values appear to be derived from empirical testing, which is good for performance optimization.
157-169
: Constructor initialization looks good.The initialization of the new fields in the constructor is appropriate, with the
batchStrategy
selected based on the configured timeout window.
331-348
: Backlog and timeout-based forcing is well implemented.The code properly checks for two conditions to force submission:
- If the oldest batch exceeds the timeout window
- If the backlog count exceeds the configured maximum
This ensures that batches are eventually processed even if fee conditions remain unfavorable.
350-361
: Good fee-gating implementation with error handling.The fee-gating logic with
skipSubmitByFee
is well integrated, with appropriate error handling and logging. If the fee check fails, it falls back to immediate submission, which is a safe approach.
1301-1329
: Well-structured skipSubmitByFee implementation.The method clearly handles the logic for determining when to skip batch submission based on fee conditions. The error message provides useful information for debugging and monitoring.
Purpose or design rationale of this PR
Implements a new sequencer submission strategy in the L2 relayer that decides when to publish batches based on recent blob‐fee conditions rather than immediately. It consists of:
fetchBlobFeeHistory(windowSec uint64) – pulls the last windowSec seconds of blob-fee samples from the l1_block table.
calculateTargetPrice(strat StrategyParams, firstTime time.Time, history []*big.Int) – computes a dynamic “target” blob fee by applying either a percentile or EWMA baseline, then a relaxation curve (exponential or sigmoid) that increases toward the window deadline. Estimates per windowSec were obtained from tests performed on traces.
Integration into ProcessPendingBatches – before sending a commit transaction, the relayer compares the most recent blob fee to the target and only proceeds if the price is at or below the target, or if the configured time window has elapsed. A sketch of this target-price calculation follows the list below.
Resulting behavior:
Cost optimization: by delaying submission until blob fees are favorable, the protocol can reduce L1 data‐fee costs.
Flexibility & safety: the relaxation curve ensures that batches aren’t starved indefinitely and will be submitted by the deadline even if prices remain high.
Deployment tag versioning
Has tag in common/version.go been updated, or have you added the bump-version label to this PR?
Breaking change label
Does this PR have the breaking-change label?