perf(relayer): add sequencer submission strategy with blob‐fee history and target price #1659
Conversation
Walkthrough

The changes introduce a configurable, strategy-driven fee submission mechanism for Layer2 batch relaying, including new fee-gating logic based on historical L1 blob fees and backlog thresholds. Configuration and ORM logic are updated to support querying batch counts and blob fee histories. The Layer2Relayer is extended with new fields and methods to implement these features.
Sequence Diagram(s)

sequenceDiagram
participant Relayer
participant L1BlockORM
participant BatchORM
Relayer->>BatchORM: GetFailedAndPendingBatchesCount()
BatchORM-->>Relayer: Return backlog count
alt Backlog < BacklogMax
Relayer->>L1BlockORM: GetBlobFeesInRange(lastFetchedBlock, latestBlock)
L1BlockORM-->>Relayer: Return blob fee history
Relayer->>Relayer: calculateTargetPrice(strategy, firstTime, feeHistory)
alt CurrentFee > TargetPrice and OldestBatch < TimeoutWindow
Relayer->>Relayer: Skip batch submission
else
Relayer->>Relayer: Submit batches
end
else Backlog >= BacklogMax
Relayer->>Relayer: Force submit oldest batches
end
Commits compared: 019d1de → 946d9dc
Actionable comments posted: 2
🧹 Nitpick comments (2)
rollup/internal/controller/relayer/l2_relayer.go (1)
1188-1216: Potential slice-building inefficiency & range-query correctness in `fetchBlobFeeHistory`.
Minor, non-blocking observations:
- `blocksAgo := windowSec / 12` silently floors; for a window < 12 s the query span becomes 0. A guard could return early to avoid a full-table scan on very small windows.
- The `filters` map uses two separate `number >= ?` / `number <= ?` entries; GORM will treat duplicate keys oddly. Prefer a single `Where("number BETWEEN ? AND ?", start, latest)` or two chained `Where`s for clarity.
- You allocate `hist := make([]*big.Int, len(recs))`; if the result is large this double-stores all fees (`uint64` in the record plus a `big.Int` object). Consider returning `[]uint64` unless arbitrary precision is strictly required.

No functional bug here, but tightening these points will improve robustness.
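The floor guard in the first bullet can be sketched as a small helper. `blockSpan` and the fixed 12 s slot time are illustrative assumptions, not the PR's actual API:

```go
package main

import "fmt"

// blockSpan converts a sampling window in seconds into a count of L1 blocks,
// assuming a fixed 12 s slot time. It guards the windowSec/12 floor: for any
// window shorter than one slot the span would silently become 0, turning the
// number-range filter into a degenerate query.
func blockSpan(windowSec uint64) (uint64, bool) {
	const slotSec = 12
	span := windowSec / slotSec
	if span == 0 {
		return 0, false // caller should skip the history query entirely
	}
	return span, true
}

func main() {
	for _, w := range []uint64{6, 12, 7200} {
		span, ok := blockSpan(w)
		fmt.Printf("window=%ds span=%d ok=%v\n", w, span, ok)
	}
}
```

A caller that gets `ok == false` can fall back to immediate submission instead of issuing an unbounded query.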
rollup/conf/config.json (1)
57-58: Timeout raised to 2 h – verify dependent services & dashboards.

Doubling the submission window changes alerting expectations:
- Grafana/Prometheus alerts that fired after 5 min of no submission must be updated.
- Any automation relying on fast batch availability (e.g. testnets) may need tuning.

Please audit downstream tooling before merging.
🛑 Comments failed to post (2)
rollup/internal/controller/relayer/l2_relayer.go (2)
290-336: 🛠️ Refactor suggestion
Unbounded query + hard-coded backlog limits – consider configurability & safety.
`GetFailedAndPendingBatches(r.ctx, 0)` now issues an unlimited query. In a pathological backlog this could fetch tens of thousands of rows into memory in one go.
- Either keep a large but bounded limit (e.g. `backlogMax + cfg.BatchSubmission.MaxBatches`) or use a paginated cursor.

`backlogMax := 75` is an in-code constant while the comment points at `cfg.BatchSubmission.BacklogMax`.
- Promote it to configuration so operators can tune it without recompilation.

Failing to address these may cause memory spikes and reduce observability in production.
🧰 Tools
🪛 golangci-lint (1.64.8)
313-313: strat.WindowSec undefined (type StrategyParams has no field or method WindowSec)
(typecheck)
327-327: strat.WindowSec undefined (type StrategyParams has no field or method WindowSec)
(typecheck)
🪛 GitHub Actions: Rollup
[error] 313-313: Build error: 'strat.WindowSec' undefined (type StrategyParams has no field or method WindowSec)
68-82: ⚠️ Potential issue

Compilation breaks – `WindowSec` field missing from `StrategyParams`.

`ProcessPendingBatches`, `fetchBlobFeeHistory`, and `calculateTargetPrice` all reference `strat.WindowSec`, yet the struct declared here does not define that field. The build currently fails (`typecheck: strat.WindowSec undefined`).

Add the field (and initialise it in `bestParams`) or stop referencing it and pass the window size explicitly.

```diff
 type StrategyParams struct {
-	BaselineType  string  // "pct_min" or "ewma"
-	BaselineParam float64 // percentile (0–1) or α for EWMA
-	Gamma         float64 // relaxation γ
-	Beta          float64 // relaxation β
-	RelaxType     string  // "exponential" or "sigmoid"
+	WindowSec     uint64  // sampling window (seconds)
+	BaselineType  string  // "pct_min" or "ewma"
+	BaselineParam float64 // percentile (0–1) or α for EWMA
+	Gamma         float64 // relaxation γ
+	Beta          float64 // relaxation β
+	RelaxType     string  // "exponential" or "sigmoid"
 }

 var bestParams = map[uint64]StrategyParams{
-	2 * 3600:  {BaselineType: "pct_min", BaselineParam: 0.10, Gamma: 0.4, Beta: 8, RelaxType: "exponential"},
-	5 * 3600:  {BaselineType: "pct_min", BaselineParam: 0.30, Gamma: 0.6, Beta: 20, RelaxType: "sigmoid"},
-	12 * 3600: {BaselineType: "pct_min", BaselineParam: 0.50, Gamma: 0.5, Beta: 20, RelaxType: "sigmoid"},
+	2 * 3600:  {WindowSec: 2 * 3600, BaselineType: "pct_min", BaselineParam: 0.10, Gamma: 0.4, Beta: 8, RelaxType: "exponential"},
+	5 * 3600:  {WindowSec: 5 * 3600, BaselineType: "pct_min", BaselineParam: 0.30, Gamma: 0.6, Beta: 20, RelaxType: "sigmoid"},
+	12 * 3600: {WindowSec: 12 * 3600, BaselineType: "pct_min", BaselineParam: 0.50, Gamma: 0.5, Beta: 20, RelaxType: "sigmoid"},
 }
```

This restores type-checking, keeps a single source of truth for the window size, and allows downstream code to compile.
Be sure to run `go test ./...` or the CI pipeline after applying the patch.
Will push new unit tests.
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

@@ Coverage Diff @@
## develop #1659 +/- ##
===========================================
- Coverage 40.68% 40.55% -0.13%
===========================================
Files 225 225
Lines 18419 18546 +127
===========================================
+ Hits 7493 7522 +29
- Misses 10195 10286 +91
- Partials 731 738 +7
Flags with carried forward coverage won't be shown.
dbBatches, err := r.batchOrm.GetFailedAndPendingBatches(r.ctx, r.cfg.BatchSubmission.MaxBatches)
allBatches, err := r.batchOrm.GetFailedAndPendingBatches(r.ctx, 0)
Better to limit at `r.cfg.BatchSubmission.MaxBatches`, otherwise we might need to read too many records.
If you just need it to check backlog size, you can add another count query for that.
if len(allBatches) < r.cfg.BatchSubmission.MinBatches || len(allBatches) == 0 {
	log.Debug("Not enough pending batches to submit", "count", len(allBatches), "minBatches", r.cfg.BatchSubmission.MinBatches, "maxBatches", r.cfg.BatchSubmission.MaxBatches)
	return
}
The current batch submission trigger is "either collect `MinBatches` or timeout". Here you change it; what's the rationale for that? Is your point that with such a large window we should never have fewer than `MinBatches` after the timeout?
My understanding is: we don't necessarily want to commit even if we reach `MinBatches`. We still want to wait and commit at the best time, or force the submission after a timeout. The timeout is variable though, influenced by the timeout window (max) and the backlog of batches. Worst case, we commit at an expensive price towards the end of the window.
I removed this to keep the original semantics of "either collect MinBatches or timeout" (adding extra conditions, when not past the timeout, on top of having collected enough batches).
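The gating semantics discussed in this thread ("either collect MinBatches or timeout", plus the fee gate and the backlog force-submit) can be condensed into a single predicate. This is a sketch with hypothetical names and plain-second arguments, not the relayer's actual signature:

```go
package main

import "fmt"

// shouldSubmit condenses the gating from the discussion: always force-submit
// when the backlog exceeds its cap; otherwise require MinBatches, and while
// still inside the timeout window additionally require the current blob fee
// to be at or below the target price.
func shouldSubmit(backlog, backlogMax, pending, minBatches int,
	oldestAgeSec, timeoutSec, currentFee, targetFee uint64) bool {
	if backlog > backlogMax {
		return true // backlog overflow: submit regardless of price
	}
	if pending < minBatches {
		return false // not enough batches collected yet
	}
	if oldestAgeSec >= timeoutSec {
		return true // timed out: submit even at a bad price
	}
	return currentFee <= targetFee // inside the window: wait for a good price
}

func main() {
	fmt.Println(shouldSubmit(10, 75, 5, 4, 3600, 7200, 30, 20))  // fee too high, still waiting
	fmt.Println(shouldSubmit(10, 75, 5, 4, 10800, 7200, 30, 20)) // timed out, forced
}
```

Worst case, as noted above, the predicate only flips to true at the end of the window, committing at an expensive price.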
// return if not hitting target price
if backlogCount <= backlogMax {
	windowSec := uint64(r.cfg.BatchSubmission.TimeoutSec)
	strat, ok := bestParams[windowSec]
Can rename to `strategy` to make the linter happy.
recs, err := r.l1BlockOrm.GetL1Blocks(r.ctx, filters)
if err != nil {
	return nil, fmt.Errorf("GetL1Blocks: %w", err)
}
hist := make([]*big.Int, len(recs))
for i, b := range recs {
	hist[i] = new(big.Int).SetUint64(b.BlobBaseFee)
}
Better to just query the field (`BlobBaseFee`) and not the whole block.
Also, performance-wise this might be problematic (e.g. query 3600 blocks every couple of seconds). You might need to add a db index. Or better, maintain the fee history (or the moving average) in memory instead of querying blocks again every time.
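The "maintain the moving average in memory" alternative suggested here could be a constant-time EWMA updated once per new block, so the relayer never re-reads thousands of rows. The `feeEWMA` type and alpha value are illustrative assumptions:

```go
package main

import "fmt"

// feeEWMA keeps an exponentially weighted moving average of blob base fees
// in memory. observe() is O(1) per new L1 block, replacing the repeated
// range query over ~3600 rows.
type feeEWMA struct {
	alpha   float64 // smoothing factor in (0, 1)
	avg     float64
	started bool
}

func (e *feeEWMA) observe(fee uint64) {
	f := float64(fee)
	if !e.started {
		e.avg, e.started = f, true
		return
	}
	e.avg = e.alpha*f + (1-e.alpha)*e.avg
}

func (e *feeEWMA) value() float64 { return e.avg }

func main() {
	e := &feeEWMA{alpha: 0.2}
	for _, fee := range []uint64{100, 100, 200} {
		e.observe(fee)
	}
	fmt.Printf("ewma=%.1f\n", e.value()) // approximately 120
}
```

On startup the average can be seeded from one backfill query, after which only newly observed blocks feed it.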
backlogCount := len(allBatches)
backlogMax := 75 //r.cfg.BatchSubmission.BacklogMax

if len(allBatches) < r.cfg.BatchSubmission.MinBatches || len(allBatches) == 0 {
- if len(allBatches) < r.cfg.BatchSubmission.MinBatches || len(allBatches) == 0 {
+ if len(allBatches) < r.cfg.BatchSubmission.MinBatches {
I think the second check is not necessary. Also, with Peter's suggestion to do a count query, you can do this check at the very beginning (and keep the `backlogMax` stuff below it for better separation and readability).
frac := age / float64(windowSec)
var adjusted float64
switch strat.RelaxType {
case "exponential":
Better to use constants for this. They can be either string constants or, better, an iota-based Go-style enum.
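A minimal sketch of the iota-based enum suggested here; the names mirror the strings used in the PR, but the exact identifiers are up to the author:

```go
package main

import "fmt"

// RelaxType is an iota-based enum with a String method, replacing the
// "exponential"/"sigmoid" string comparisons.
type RelaxType int

const (
	Exponential RelaxType = iota
	Sigmoid
)

func (t RelaxType) String() string {
	switch t {
	case Exponential:
		return "exponential"
	case Sigmoid:
		return "sigmoid"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(Exponential, Sigmoid) // fmt picks up the Stringer implementation
}
```

With an enum, the compiler (and linters like `exhaustive`) can flag unhandled cases in the `switch`, which string literals cannot.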
// return if not hitting target price
if backlogCount <= backlogMax {
	windowSec := uint64(r.cfg.BatchSubmission.TimeoutSec)
Since this parameter can only change by restart, I think it would be better to assign the strategy in the constructor instead of these clunky few lines being here. Either startup can fail, or set a reasonable default like you do here.
}

// return if not hitting target price
if backlogCount <= backlogMax {
Somehow this code block is very ugly. Maybe you can put the logic into a function and then just return with the error message that you want to log. In that case we can continue and submit immediately (and log the error message). This way we can use the typical Go guard style.
var baseline float64
switch strat.BaselineType {
case "pct_min":
	sort.Float64s(data)
This is probably also relatively expensive after the repeated DB query. Like Peter said, it would be better to keep this in memory. Maybe something like the construction in https://github.com/scroll-tech/go-ethereum/blob/e62c6f08e2cb233ca0e14d01bc3f41f2edaea369/rollup/da_syncer/batch_queue.go#L20, with a sorted heap and a shrinkingmap in memory, could work. Every time you check, you would only need to verify which min/max block numbers you expect to be in the map. If they are not there, you query them from the DB and add them (this way, the state after startup is also handled automatically). If an entry is too old, you delete it from the heap and map.
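A simpler stand-in for the heap + shrinkingmap idea is a fixed-capacity ring buffer, which already removes the per-check DB query; this sketch (hypothetical `feeWindow` type) trades the ordered structure for sorting a snapshot only when a percentile is requested:

```go
package main

import (
	"fmt"
	"sort"
)

// feeWindow keeps the last `capacity` blob fees in memory, one entry per
// block, evicting the oldest on overflow. Inserts are O(1); percentile
// queries sort a copied snapshot, leaving the window itself untouched.
type feeWindow struct {
	buf  []uint64
	next int
	full bool
}

func newFeeWindow(capacity int) *feeWindow {
	return &feeWindow{buf: make([]uint64, capacity)}
}

func (w *feeWindow) push(fee uint64) {
	w.buf[w.next] = fee
	w.next++
	if w.next == len(w.buf) {
		w.next, w.full = 0, true
	}
}

// percentile returns the p-th percentile (p in [0,1]) of the current window.
func (w *feeWindow) percentile(p float64) uint64 {
	n := w.next
	if w.full {
		n = len(w.buf)
	}
	snap := append([]uint64(nil), w.buf[:n]...)
	sort.Slice(snap, func(i, j int) bool { return snap[i] < snap[j] })
	return snap[int(p*float64(n-1))]
}

func main() {
	w := newFeeWindow(4)
	for _, fee := range []uint64{5, 1, 9, 3, 7} { // 5 is evicted when the buffer wraps
		w.push(fee)
	}
	fmt.Println(w.percentile(0.0), w.percentile(1.0))
}
```

The heap + map construction referenced above would make the percentile query O(log n) too, at the cost of more bookkeeping; for a few hundred samples per window the snapshot sort is likely cheap enough.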
Commits compared: 57e1f9a → 4f84e9e
Actionable comments posted: 2
🧹 Nitpick comments (6)
rollup/internal/controller/relayer/l2_relayer.go (6)
97-104: Consider adding validation for strategy parameters

The `StrategyParams` struct holds important configuration that impacts batch submission behavior, but there's no validation to ensure the parameters are within reasonable bounds. Consider adding a validation method to check that parameters like BaselineParam (0–1), Gamma, and Beta are within expected ranges:

```go
type StrategyParams struct {
	BaselineType  BaselineType // "pct_min" or "ewma"
	BaselineParam float64      // percentile (0–1) or α for EWMA
	Gamma         float64      // relaxation γ
	Beta          float64      // relaxation β
	RelaxType     RelaxType    // Exponential or Sigmoid
}

// Validate checks if the parameters are within valid ranges
func (s StrategyParams) Validate() error {
	if s.BaselineType == PctMin && (s.BaselineParam < 0 || s.BaselineParam > 1) {
		return fmt.Errorf("BaselineParam must be between 0 and 1 for PctMin, got %f", s.BaselineParam)
	}
	if s.BaselineType == EWMA && (s.BaselineParam <= 0 || s.BaselineParam >= 1) {
		return fmt.Errorf("BaselineParam (alpha) must be between 0 and 1 for EWMA, got %f", s.BaselineParam)
	}
	if s.Gamma < 0 {
		return fmt.Errorf("Gamma must be non-negative, got %f", s.Gamma)
	}
	if s.Beta < 0 {
		return fmt.Errorf("Beta must be non-negative, got %f", s.Beta)
	}
	return nil
}
```
106-111: Hard-coded strategy parameters could be made configurable

The strategy parameters for different time windows are hard-coded in the `bestParams` map, which might limit flexibility. Consider making these settings configurable through configuration files or environment variables, allowing for easier tuning without code changes:

```diff
-// bestParams maps your 2h/5h/12h windows to their best rules.
-var bestParams = map[uint64]StrategyParams{
-	2 * 3600:  {BaselineType: PctMin, BaselineParam: 0.10, Gamma: 0.4, Beta: 8, RelaxType: Exponential},
-	5 * 3600:  {BaselineType: PctMin, BaselineParam: 0.30, Gamma: 0.6, Beta: 20, RelaxType: Sigmoid},
-	12 * 3600: {BaselineType: PctMin, BaselineParam: 0.50, Gamma: 0.5, Beta: 20, RelaxType: Sigmoid},
-}
```

Add these to the configuration structure instead, and load default values if not specified.
192-211: Add error logging for initialization failures

The strategy selection and fee history initialization are crucial for the relayer's operation, but errors during these steps could be more informative. Consider adding more detailed logging before returning errors to aid in troubleshooting:

```diff
 // pick and validate our submission strategy
 windowSec := uint64(cfg.BatchSubmission.TimeoutSec)
 strategy, ok := bestParams[windowSec]
 if !ok {
+	log.Error("Unsupported batch submission timeout", "windowSec", windowSec, "supported", maps.Keys(bestParams))
 	return nil, fmt.Errorf(
 		"unsupported BatchSubmission.TimeoutSec: %d (must be one of %v)",
 		windowSec, maps.Keys(bestParams),
 	)
 }
 layer2Relayer.batchStrategy = strategy

 latest, err := layer2Relayer.l1BlockOrm.GetLatestL1BlockHeight(ctx)
 if err != nil {
+	log.Error("Failed to get latest L1 block height", "err", err)
 	return nil, fmt.Errorf("failed to get latest L1 block height: %v", err)
 }
 layer2Relayer.lastFetchedBlock = latest - uint64(layer2Relayer.cfg.BatchSubmission.TimeoutSec)/12 // start ~window seconds ago

 if _, err = layer2Relayer.fetchBlobFeeHistory(uint64(layer2Relayer.cfg.BatchSubmission.TimeoutSec)); err != nil {
+	log.Error("Initial blob fee load failed", "err", err)
 	return nil, fmt.Errorf("initial blob‐fee load failed: %w", err)
 }
```
351-360: Consider refactoring skipSubmitByFee logic into a separate function

The fee-skipping logic in ProcessPendingBatches could be more readable and maintainable if refactored. Consider extracting the decision logic into a more descriptive method:

```diff
-	if backlogCount <= backlogMax {
-		oldest := dbBatches[0].CreatedAt
-		if skip, msg := r.skipSubmitByFee(oldest); skip {
-			log.Debug(msg)
-			return
-		}
-		// if !skip, we fall through and submit immediately
-	}
+	if backlogCount <= backlogMax && r.shouldDelaySubmissionForBetterFee(dbBatches[0].CreatedAt) {
+		return
+	}
+	// If backlog exceeds max, or we shouldn't delay, we fall through and submit immediately
```

Add a new method:

```go
func (r *Layer2Relayer) shouldDelaySubmissionForBetterFee(oldestBatchTime time.Time) bool {
	skip, msg := r.skipSubmitByFee(oldestBatchTime)
	if skip {
		log.Debug(msg)
		return true
	}
	return false
}
```
1209-1238: Optimize blob fee history storage and retrieval

The current implementation rebuilds the fee history on each call, which could be inefficient for frequent calls with large window sizes. Consider implementing a time-based cache or a more efficient data structure like a circular buffer:

```go
func (r *Layer2Relayer) fetchBlobFeeHistory(windowSec uint64) ([]*big.Int, error) {
	latest, err := r.l1BlockOrm.GetLatestL1BlockHeight(r.ctx)
	if err != nil {
		return nil, fmt.Errorf("GetLatestL1BlockHeight: %w", err)
	}
	from := r.lastFetchedBlock + 1
	// if new blocks
	if from <= latest {
		raw, err := r.l1BlockOrm.GetBlobFeesInRange(r.ctx, from, latest)
		if err != nil {
			return nil, fmt.Errorf("GetBlobFeesInRange: %w", err)
		}
		// 2) append them
		for _, v := range raw {
			r.feeHistory = append(r.feeHistory, new(big.Int).SetUint64(v))
			r.lastFetchedBlock++
		}
	}
	maxLen := int(windowSec / 12)
	if len(r.feeHistory) > maxLen {
		r.feeHistory = r.feeHistory[len(r.feeHistory)-maxLen:]
	}
	// return a copy
	out := make([]*big.Int, len(r.feeHistory))
	copy(out, r.feeHistory)
	return out, nil
}
```

Also, consider adding a timestamp or block number to each fee entry to better track the age of each sample.
1289-1317: Add retry logic for blob fee history retrieval

The `skipSubmitByFee` method immediately falls back to immediate submission if blob fee history is unavailable, which could be temporary. Consider adding retry logic for fetching the blob fee history:

```diff
 func (r *Layer2Relayer) skipSubmitByFee(oldest time.Time) (bool, string) {
 	windowSec := uint64(r.cfg.BatchSubmission.TimeoutSec)
-	hist, err := r.fetchBlobFeeHistory(windowSec)
+	var hist []*big.Int
+	var err error
+
+	// Try fetching the blob fee history with retries
+	maxRetries := 3
+	for i := 0; i < maxRetries; i++ {
+		hist, err = r.fetchBlobFeeHistory(windowSec)
+		if err == nil && len(hist) > 0 {
+			break
+		}
+
+		if i < maxRetries-1 {
+			log.Debug("Retrying blob fee history fetch", "attempt", i+1, "err", err)
+			time.Sleep(time.Second)
+		}
+	}
+
 	if err != nil || len(hist) == 0 {
 		return false, fmt.Sprintf(
 			"blob‐fee history unavailable or empty; fallback to immediate batch submission – err=%v, history_length=%d",
 			err, len(hist),
 		)
 	}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (3)
- rollup/internal/controller/relayer/l2_relayer.go (8 hunks)
- rollup/internal/orm/batch.go (1 hunks)
- rollup/internal/orm/l1_block.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- rollup/internal/orm/l1_block.go
- rollup/internal/orm/batch.go
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: tests
🔇 Additional comments (5)
rollup/internal/controller/relayer/l2_relayer.go (5)
38-47: Well-structured enum implementation for relaxation functions
Good use of iota for enum values with proper documentation. This follows Go best practices for defining constants.

49-60: Clear enum implementation for baseline types
The enum is well-documented and follows Go best practices using iota.

92-95: Good field additions to support the fee strategy
These new fields appropriately store the needed state for the fee-based submission strategy.

156-157: LGTM: Added L1Block ORM initialization
The L1Block ORM is properly initialized to allow for fetching blob fee data.
1240-1287: ❓ Verification inconclusive

Add error checking for float64 conversions

The `calculateTargetPrice` function converts between `big.Int` and `float64` without checking for potential loss of precision or overflow. Check for potential precision issues in the float64 conversions; consider adding error handling for these conversions and potentially using `big.Float` for the entire calculation to maintain precision.

🏁 Script executed:

```shell
#!/bin/bash
# Check if there are any extremely large blob fee values in the codebase that might cause precision issues
rg -A 2 -B 2 "BlobBaseFee"
rg -A 2 -B 2 "blobFee"
```

Length of output: 14319
Verify float64 conversion precision in calculateTargetPrice
The `calculateTargetPrice` function in rollup/internal/controller/relayer/l2_relayer.go converts `*big.Int` → `*big.Float` → `float64` (ignoring the `Accuracy` return) and later back to `*big.Int` via `Int(nil)`. In edge cases (values > 2⁵³ or requiring sub-gwei precision) this can silently round or overflow.

Please verify that:
- Blob-fee history values (in Gwei) always stay below the 2⁵³ threshold so conversion is exact.
- You're comfortable discarding the `Accuracy` flag returned by `Float64()`.

If higher precision is needed, consider:
- Checking the `Accuracy` value from `Float64()` and handling "Below"/"Above" cases.
- Performing the entire relaxation and scaling using `big.Float`, only calling `Int(nil)` on the final result.

Location for review:
- rollup/internal/controller/relayer/l2_relayer.go → `calculateTargetPrice` (lines 1240–1287)
var adjusted float64
switch strategy.RelaxType {
case Exponential:
	adjusted = baseline * (1 + strategy.Gamma*math.Exp(strategy.Beta*(frac-1)))
case Sigmoid:
	adjusted = baseline * (1 + strategy.Gamma/(1+math.Exp(-strategy.Beta*(frac-0.5))))
default:
	adjusted = baseline
}
🛠️ Refactor suggestion
Add bounds checking for relaxation formulas
The exponential and sigmoid formulas could lead to extremely large values without proper bounds checking.
Add protection against potential numerical issues:
switch strategy.RelaxType {
case Exponential:
- adjusted = baseline * (1 + strategy.Gamma*math.Exp(strategy.Beta*(frac-1)))
+ exponent := strategy.Beta * (frac - 1)
+ // Prevent extreme values that could cause overflow
+ if exponent > 20 {
+ exponent = 20
+ }
+ adjusted = baseline * (1 + strategy.Gamma*math.Exp(exponent))
case Sigmoid:
- adjusted = baseline * (1 + strategy.Gamma/(1+math.Exp(-strategy.Beta*(frac-0.5))))
+ sigmoid := strategy.Gamma / (1 + math.Exp(-strategy.Beta*(frac-0.5)))
+ // Ensure the adjustment stays within reasonable bounds
+ if sigmoid > 10 {
+ sigmoid = 10
+ }
+ adjusted = baseline * (1 + sigmoid)
default:
adjusted = baseline
}
Commits compared: 4f84e9e → 201007c
Commits compared: 201007c → d5af194
Actionable comments posted: 1
♻️ Duplicate comments (1)
rollup/internal/controller/relayer/l2_relayer.go (1)
1272-1280: ⚠️ Potential issue

Add bounds checking for relaxation formulas.

The exponential and sigmoid formulas could lead to extremely large values without proper bounds checking. Add protection against potential numerical issues:

```diff
 switch strategy.RelaxType {
 case Exponential:
-	adjusted = baseline * (1 + strategy.Gamma*math.Exp(strategy.Beta*(frac-1)))
+	exponent := strategy.Beta * (frac - 1)
+	// Prevent extreme values that could cause overflow
+	if exponent > 20 {
+		exponent = 20
+	}
+	adjusted = baseline * (1 + strategy.Gamma*math.Exp(exponent))
 case Sigmoid:
-	adjusted = baseline * (1 + strategy.Gamma/(1+math.Exp(-strategy.Beta*(frac-0.5))))
+	sigmoid := strategy.Gamma / (1 + math.Exp(-strategy.Beta*(frac-0.5)))
+	// Ensure the adjustment stays within reasonable bounds
+	if sigmoid > 10 {
+		sigmoid = 10
+	}
+	adjusted = baseline * (1 + sigmoid)
 default:
 	adjusted = baseline
 }
```
🧹 Nitpick comments (6)
rollup/internal/controller/relayer/l2_relayer.go (6)
107-111: Consider adding validation for strategy parameters.

While the map of predefined parameters looks good, consider adding validation logic to ensure the parameters are within reasonable bounds (e.g., BaselineParam should be between 0 and 1 for percentiles):

```go
var bestParams = map[uint64]StrategyParams{
	2 * 3600:  {BaselineType: PctMin, BaselineParam: 0.10, Gamma: 0.4, Beta: 8, RelaxType: Exponential},
	5 * 3600:  {BaselineType: PctMin, BaselineParam: 0.30, Gamma: 0.6, Beta: 20, RelaxType: Sigmoid},
	12 * 3600: {BaselineType: PctMin, BaselineParam: 0.50, Gamma: 0.5, Beta: 20, RelaxType: Sigmoid},
}

// validateStrategyParams ensures that strategy parameters are within reasonable bounds
func validateStrategyParams(s StrategyParams) error {
	if s.BaselineType == PctMin && (s.BaselineParam < 0 || s.BaselineParam > 1) {
		return fmt.Errorf("PctMin baseline parameter must be between 0 and 1, got %f", s.BaselineParam)
	}
	if s.BaselineType == EWMA && (s.BaselineParam <= 0 || s.BaselineParam >= 1) {
		return fmt.Errorf("EWMA alpha parameter must be between 0 and 1 exclusive, got %f", s.BaselineParam)
	}
	return nil
}
```
344-359: Hardcoded backlog threshold should be configurable.

The backlog threshold uses the configuration value `r.cfg.BatchSubmission.BacklogMax`, which is good, but consider documenting in the comments what this threshold represents and how it affects the batch submission behavior:

```diff
-	// if backlog outgrow max size, force‐submit enough oldest batches
+	// If backlog exceeds the maximum allowed size (BacklogMax), we force-submit
+	// batches regardless of the current blob fee, to prevent excessive batch buildup.
+	// This ensures the system can recover from prolonged high-fee periods.
 	backlogCount, err := r.batchOrm.GetFailedAndPendingBatchesCount(r.ctx)
 	if err != nil {
 		log.Error("Failed to fetch pending L2 batches", "err", err)
 		return
 	}
```
1243-1248: Consider optimizing big.Float to float64 conversion.

The conversion from `big.Int` to `float64` via `big.Float` is correct but could be optimized for large batches by hoisting the constant divisor out of the loop:

```diff
 data := make([]float64, n)
+divider := big.NewFloat(1e9)
 for i, v := range history {
-	f, _ := new(big.Float).Quo(new(big.Float).SetInt(v), big.NewFloat(1e9)).Float64()
+	f, _ := new(big.Float).Quo(new(big.Float).SetInt(v), divider).Float64()
 	data[i] = f
 }
```
1306-1311: Add metric for submission skipping due to fees.

It would be valuable to track how often batch submissions are being skipped due to high fees. This would help monitor the effectiveness of the fee-based submission strategy:

```diff
 // if current fee > target and still inside the timeout window, skip
 if current.Cmp(target) > 0 && time.Since(oldest) < time.Duration(windowSec)*time.Second {
+	r.metrics.rollupL2RelayerBatchSubmissionSkippedForFee.Inc()
 	return true, fmt.Sprintf(
 		"blob‐fee above target & window not yet passed; current=%s target=%s age=%s",
 		current.String(), target.String(), time.Since(oldest),
 	)
 }
```
1180-1180: Remove the debug print statement.

The `fmt.Println` statement should be removed or replaced with proper logging.

```diff
-	fmt.Println("packing finalizeBundlePostEuclidV2NoProof", len(dbBatch.BatchHeader), dbBatch.CodecVersion, dbBatch.BatchHeader, new(big.Int).SetUint64(endChunk.TotalL1MessagesPoppedBefore+endChunk.TotalL1MessagesPoppedInChunk), common.HexToHash(dbBatch.StateRoot), common.HexToHash(dbBatch.WithdrawRoot))
+	log.Debug("Packing finalizeBundlePostEuclidV2NoProof",
+		"header_length", len(dbBatch.BatchHeader),
+		"codec_version", dbBatch.CodecVersion,
+		"messages_popped", endChunk.TotalL1MessagesPoppedBefore+endChunk.TotalL1MessagesPoppedInChunk,
+		"state_root", dbBatch.StateRoot,
+		"withdraw_root", dbBatch.WithdrawRoot)
```
1207-1236: ❓ Verification inconclusive

Consider optimizing blob fee history retrieval.

The current implementation queries the database frequently to update the fee history. This could become a performance bottleneck as the system scales. Consider:

- Using a time-based cache with periodic updates instead of querying on every check
- Adding batch size limits to prevent fetching too many records at once
- Implementing a more efficient data structure like a circular buffer for the fee history

Let's check how frequently this method might be called:

🏁 Script executed:

```shell
#!/bin/bash
# Find where fetchBlobFeeHistory is called
rg -A 2 "fetchBlobFeeHistory" --no-ignore
```

Length of output: 1170
I’ll pull the exact call sites and surrounding context to see how often and where this is invoked.
🏁 Script executed:

```shell
#!/bin/bash
# Locate all calls and show 5 lines of context around them
rg -n "fetchBlobFeeHistory" rollup/internal/controller/relayer/l2_relayer.go
rg -C 5 "fetchBlobFeeHistory" rollup/internal/controller/relayer/l2_relayer.go
```

Length of output: 1992
Optimize blob fee history retrieval

The `fetchBlobFeeHistory` method (rollup/internal/controller/relayer/l2_relayer.go:1207–1236) is invoked twice:

- During relayer initialization (line 208)
- Inside `skipSubmitByFee` (around line 1293) on each batch-submission check

Each call performs two DB queries (latest block height + fee range), which may become a bottleneck as submissions scale. Consider:

- Caching the recent base-fee history and refreshing on a timer rather than querying on every check
- Imposing a hard limit on fetched rows or time window to bound each query
- Switching to a fixed-size ring (circular buffer) in memory to cap history growth and reduce slicing overhead
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (5)

- rollup/conf/config.json (1 hunks)
- rollup/internal/config/relayer.go (1 hunks)
- rollup/internal/controller/relayer/l2_relayer.go (8 hunks)
- rollup/internal/orm/batch.go (1 hunks)
- rollup/internal/orm/l1_block.go (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- rollup/internal/config/relayer.go
🚧 Files skipped from review as they are similar to previous changes (3)
- rollup/conf/config.json
- rollup/internal/orm/l1_block.go
- rollup/internal/orm/batch.go
🧰 Additional context used
🧬 Code Graph Analysis (1)

rollup/internal/controller/relayer/l2_relayer.go (2)

- rollup/internal/orm/l1_block.go (3)
  - L1Block (14-31), L1Block (39-41), NewL1Block (34-36)
- rollup/internal/config/relayer.go (1)
  - BatchSubmission (34-43)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
🔇 Additional comments (6)
rollup/internal/controller/relayer/l2_relayer.go (6)
38-47: Good use of Golang iota for enumeration.

The enum implementation for `RelaxType` using `iota` is clean and follows Go best practices for creating type-safe enumeration constants.

49-60: Good documentation of enum types.

The explanation comments for each baseline type clearly communicate the purpose and behavior of these constants. This improves code readability and maintainability.

97-104: Clear structure definition for strategy parameters.

The `StrategyParams` struct provides a well-organized way to encapsulate the fee submission strategy configuration with appropriate field types.

192-210: Good strategy parameter validation in constructor.

The constructor properly validates that the configured timeout window is one of the supported values and returns a helpful error message if not. This is a good practice to fail fast on invalid configuration.

1287-1315: Good fallback behavior in skipSubmitByFee.

The method properly handles error cases by falling back to immediate submission if fee history is unavailable or empty. This ensures the system remains operational even when fee data is missing.

1252-1268: Robust implementation of baseline calculation algorithms.

The implementation of the percentile minimum (PctMin) and exponentially weighted moving average (EWMA) calculations is well structured and follows standard algorithmic approaches.
```go
	maxLen := int(windowSec / 12)
	if len(r.feeHistory) > maxLen {
		r.feeHistory = r.feeHistory[len(r.feeHistory)-maxLen:]
	}
```
🛠️ Refactor suggestion

Consider adding a minimum history length check.

The fee history window could be too small to make meaningful decisions if there aren't enough samples.

```diff
 	maxLen := int(windowSec / 12)
 	if len(r.feeHistory) > maxLen {
 		r.feeHistory = r.feeHistory[len(r.feeHistory)-maxLen:]
 	}
+
+	// Ensure we have sufficient data for meaningful analysis
+	if len(r.feeHistory) < int(math.Max(3, float64(maxLen)*0.1)) {
+		log.Warn("Insufficient blob fee history samples for reliable decision making",
+			"samples", len(r.feeHistory),
+			"desired", maxLen)
+	}
```
🤖 Prompt for AI Agents

```
In rollup/internal/controller/relayer/l2_relayer.go around lines 1228 to 1231,
add a minimum length check for r.feeHistory before trimming it to maxLen. Ensure
that the feeHistory slice retains at least a certain minimum number of samples
to maintain meaningful data for decision-making. Implement logic to compare
maxLen with this minimum threshold and use the larger value to slice feeHistory
accordingly.
```
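The trimming-plus-minimum-retention logic described above can be sketched in isolation. `secondsPerBlock` and `minSamples` below are illustrative stand-ins, not the relayer's actual constants.

```go
package main

import "fmt"

const (
	secondsPerBlock = 12 // L1 slot time assumed by the window sizing
	minSamples      = 3  // hypothetical floor suggested by the review
)

// trimFeeHistory keeps at most windowSec/secondsPerBlock samples, but never
// trims below minSamples so decisions always see a few data points.
func trimFeeHistory(history []uint64, windowSec uint64) []uint64 {
	maxLen := int(windowSec / secondsPerBlock)
	if maxLen < minSamples {
		maxLen = minSamples
	}
	if len(history) > maxLen {
		history = history[len(history)-maxLen:]
	}
	return history
}

func main() {
	h := []uint64{1, 2, 3, 4, 5, 6}
	fmt.Println(trimFeeHistory(h, 48)) // 48/12 = 4 → keep last 4: [3 4 5 6]
}
```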
rollup/internal/orm/batch.go
Outdated
```diff
 // GetFailedAndPendingBatches retrieves batches with failed or pending status up to the specified limit.
 // The returned batches are sorted in ascending order by their index.
 func (o *Batch) GetFailedAndPendingBatches(ctx context.Context, limit int) ([]*Batch, error) {
-	if limit <= 0 {
+	if limit < 0 {
 		return nil, errors.New("limit must be greater than zero")
```
Revert these changes if not necessary anymore
done.
```go
	}

	// return if not hitting target price
	if backlogCount <= r.cfg.BatchSubmission.BacklogMax {
```
Now this function is much easier to read and reason about. One thing that we might consider, but is probably okay for now: we always do the calculation even if we don't have at least `MinBatches`, only to then exit afterwards because we don't have enough batches.
Good point. I changed the code to do this. While doing that, I also changed it to only set `forceSubmit` to true if the oldest batch by index must be force-submitted, unlike before, where this was checked for all the batches in `dbBatches`. I argue this is okay, as the batch with the smallest index must be the one that is submitted next. What do you think?
```go
		log.Debug(msg)
		return
	}
	// if !skip, we fall through and submit immediately
```
Now we don't log the reason when `!skip`. Instead of passing `msg` as a string, I think it's better to use an error type and just extract the message from it.
```go
	}
}

maxLen := int(windowSec / 12)
```
use a constant for 12
Done.
```go
	}
	layer2Relayer.batchStrategy = strategy

	latest, err := layer2Relayer.l1BlockOrm.GetLatestL1BlockHeight(ctx)
```
Instead of having duplicate code, can't we simply make `fetchBlobFeeHistory` recover itself if there's no previous state? As far as I can see, we only need to check if `layer2Relayer.lastFetchedBlock == 0` and then set it accordingly.
Fair enough. Done.
```go
	}
	// return a copy
	out := make([]*big.Int, len(r.feeHistory))
	copy(out, r.feeHistory)
```
Why copy here?
You are right, this is not needed. I thought it was, because of the sort in `calculateTargetPrice` when using `PctMin`, but the sort already operates on a separate slice.
Actionable comments posted: 0
♻️ Duplicate comments (2)
rollup/internal/controller/relayer/l2_relayer.go (2)
1229-1276: ⚠️ Potential issue

Add bounds checking for relaxation formulas.

The exponential and sigmoid formulas could lead to extremely large values without proper bounds checking. Add protection against potential numerical issues:

```diff
 	switch strategy.RelaxType {
 	case Exponential:
-		adjusted = baseline * (1 + strategy.Gamma*math.Exp(strategy.Beta*(frac-1)))
+		exponent := strategy.Beta * (frac - 1)
+		// Prevent extreme values that could cause overflow
+		if exponent > 20 {
+			exponent = 20
+		}
+		adjusted = baseline * (1 + strategy.Gamma*math.Exp(exponent))
 	case Sigmoid:
-		adjusted = baseline * (1 + strategy.Gamma/(1+math.Exp(-strategy.Beta*(frac-0.5))))
+		sigmoid := strategy.Gamma / (1 + math.Exp(-strategy.Beta*(frac-0.5)))
+		// Ensure the adjustment stays within reasonable bounds
+		if sigmoid > 10 {
+			sigmoid = 10
+		}
+		adjusted = baseline * (1 + sigmoid)
 	default:
 		adjusted = baseline
 	}
```
1195-1227: 🛠️ Refactor suggestion

Add minimum history length check.

The fee history window could be too small to make meaningful decisions if there aren't enough samples.

```diff
 	maxLen := int(windowSec / secondsPerBlock)
 	if len(r.feeHistory) > maxLen {
 		r.feeHistory = r.feeHistory[len(r.feeHistory)-maxLen:]
 	}
+
+	// Ensure we have sufficient data for meaningful analysis
+	if len(r.feeHistory) < int(math.Max(3, float64(maxLen)*0.1)) {
+		log.Warn("Insufficient blob fee history samples for reliable decision making",
+			"samples", len(r.feeHistory),
+			"desired", maxLen)
+	}
```
🧹 Nitpick comments (2)
rollup/internal/controller/relayer/l2_relayer.go (2)
108-112: Consider extracting best parameters to configuration.

While the `bestParams` map is a good approach for providing default strategies, consider making these values configurable through the configuration file rather than hardcoding them in the source code. This would allow tuning the parameters without requiring a code change and redeployment.
1278-1306: Consider caching calculated target prices.

The target price calculation is potentially expensive, especially with larger fee histories. Consider caching the result for a short period to reduce computational overhead when `skipSubmitByFee` is called frequently. Add a cache for the target price with a short expiration to avoid recalculating too often:

```go
type targetPriceCache struct {
	price      *big.Int
	oldest     time.Time
	calculated time.Time
	expiry     time.Duration
}

// Cache the target price calculation result
if time.Since(r.targetCache.calculated) < r.targetCache.expiry && r.targetCache.oldest == oldest {
	target = r.targetCache.price
} else {
	target = calculateTargetPrice(windowSec, r.batchStrategy, oldest, hist)
	r.targetCache = targetPriceCache{
		price:      target,
		oldest:     oldest,
		calculated: time.Now(),
		expiry:     5 * time.Second, // Configurable cache duration
	}
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
- rollup/internal/controller/relayer/l2_relayer.go (8 hunks)
- rollup/internal/orm/batch.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- rollup/internal/orm/batch.go
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: tests
- GitHub Check: tests
🔇 Additional comments (10)
rollup/internal/controller/relayer/l2_relayer.go (10)
37-46: Good use of iota for enum definition.

Using iota for RelaxType enum values follows Go convention and makes the code more maintainable.

48-48: Use constant instead of magic number.

Good use of a named constant for secondsPerBlock rather than hardcoding the value throughout the code.

50-61: Well-documented enum definition.

Clear documentation for the BaselineType enum that explains the purpose of each option.

93-96: New fields added to support fee-based submission strategy.

The new fields properly track the state needed for implementing the fee-based batch submission strategy.

98-106: Well-structured strategy parameters.

The StrategyParams struct is well organized with appropriate fields for fee baseline calculation and relaxation functions.

157-157: New ORM field initialization.

The l1BlockOrm field is properly initialized in the constructor.

193-195: Proper initialization of fee history.

The code correctly preloads the blob fee history during initialization and returns an error if it fails.

330-334: Backlog check for forcing submission.

Good approach to check the backlog count for determining whether to force-submit batches. This prevents excessive queue buildup.

339-353: Well-structured batch submission decision logic.

The code clearly separates the conditions for deciding when to submit batches:

- Check backlog against the maximum threshold
- Check if the oldest batch exceeds the timeout window
- Use price-based strategy if neither of the above conditions are met

This provides a balanced approach between cost optimization and ensuring timely submissions.

1297-1302: Good debugging information in error message.

The error message provides useful debugging information including the current fee, target fee, and time since the oldest batch, which will help with troubleshooting.
Actionable comments posted: 0
♻️ Duplicate comments (1)
rollup/internal/controller/relayer/l2_relayer.go (1)
1229-1276: 🛠️ Refactor suggestion

Implement target price calculation with baseline and relaxation.

The implementation correctly:

- Handles an empty history array
- Applies either percentile or EWMA baseline calculation
- Applies relaxation curves (exponential or sigmoid)
- Converts between Gwei and Wei for calculations

Consider adding bounds checking to prevent potential numerical instability with the exponential and sigmoid functions.

```diff
 	switch strategy.RelaxType {
 	case Exponential:
-		adjusted = baseline * (1 + strategy.Gamma*math.Exp(strategy.Beta*(frac-1)))
+		exponent := strategy.Beta * (frac - 1)
+		// Prevent extreme values that could cause overflow
+		if exponent > 20 {
+			exponent = 20
+		}
+		adjusted = baseline * (1 + strategy.Gamma*math.Exp(exponent))
 	case Sigmoid:
-		adjusted = baseline * (1 + strategy.Gamma/(1+math.Exp(-strategy.Beta*(frac-0.5))))
+		sigmoidTerm := strategy.Gamma / (1 + math.Exp(-strategy.Beta*(frac-0.5)))
+		// Ensure the adjustment stays within reasonable bounds
+		if sigmoidTerm > 10 {
+			sigmoidTerm = 10
+		}
+		adjusted = baseline * (1 + sigmoidTerm)
 	default:
 		adjusted = baseline
 	}
```
🧹 Nitpick comments (1)
rollup/internal/controller/relayer/l2_relayer.go (1)
1195-1227: Implement fetchBlobFeeHistory to maintain a sliding window of blob fees.

The method:

- Correctly bootstraps on first call
- Fetches only new blocks since the last call
- Maintains a sliding window of appropriate size

You might consider adding a minimum history length check to ensure enough samples for reliable decision-making.

```diff
 	maxLen := int(windowSec / secondsPerBlock)
 	if len(r.feeHistory) > maxLen {
 		r.feeHistory = r.feeHistory[len(r.feeHistory)-maxLen:]
 	}
+
+	// Ensure we have sufficient data for meaningful analysis
+	if len(r.feeHistory) < int(math.Max(3, float64(maxLen)*0.1)) {
+		log.Debug("Limited blob fee history samples for decision making",
+			"samples", len(r.feeHistory),
+			"desired", maxLen)
+	}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (1)
- rollup/internal/controller/relayer/l2_relayer.go (8 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
rollup/internal/controller/relayer/l2_relayer.go (2)

- rollup/internal/orm/l1_block.go (3)
  - L1Block (14-31), L1Block (39-41), NewL1Block (34-36)
- rollup/internal/config/relayer.go (1)
  - BatchSubmission (34-43)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
🔇 Additional comments (11)
rollup/internal/controller/relayer/l2_relayer.go (11)
37-46: Good job defining the RelaxType enumeration clearly.

The enum is well-documented with descriptive comments for each relaxation function type.

48-48: Good addition of secondsPerBlock constant.

Extracting this as a constant improves readability and maintainability.

50-61: BaselineType enumeration is well-defined.

Clear structure with descriptive comments explaining each baseline type's purpose.

76-76: Adding l1BlockOrm field is appropriate.

This field is necessary to fetch blob fee data for the new strategy.

93-96: New Layer2Relayer fields support the fee history strategy.

The fields are well named and appropriately scoped for the new functionality.

98-105: StrategyParams struct is well-designed.

The struct provides a clear structure for fee-submission rules with appropriate fields for each parameter.

107-112: The bestParams map provides good defaults for different windows.

Presets for 2h, 5h, and 12h windows allow for flexible configuration based on timeout duration.

157-157: Properly initializing l1BlockOrm in the Layer2Relayer constructor.

The ORM field is initialized correctly using the provided database connection.

193-195: Initial blob-fee history load is a good practice.

Loading initial fee history at startup ensures the relayer has data available for the first batch submission decision.

329-352: Check batches backlog and implement fee-based submission strategy.

The code correctly implements the logic to:

- Check backlog count against a configurable threshold
- Enforce submission when the oldest batch exceeds the timeout
- Skip submission when current blob fees are above target and the timeout hasn't elapsed

The integration with the existing batch submission logic is seamless.

1278-1306: Implement skipSubmitByFee for batch submission control.

The function:

- Correctly fetches blob fee history
- Calculates the target price based on the strategy
- Compares the current fee to the target price
- Allows submission when either fee conditions are favorable or the timeout window has elapsed

The error reporting is detailed and will help with debugging submission decisions.
Purpose or design rationale of this PR
Implements a new sequencer submission strategy in the L2 relayer that decides when to publish batches based on recent blob‐fee conditions rather than immediately. It consists of:
- `fetchBlobFeeHistory(windowSec uint64)` – pulls the last `windowSec` seconds of blob-fee samples from the `l1_block` table.
- `calculateTargetPrice(strat StrategyParams, firstTime time.Time, history []*big.Int)` – computes a dynamic "target" blob fee by applying either a percentile or EWMA baseline, then a relaxation curve (exponential or sigmoid) that increases toward the window deadline. The estimates per `windowSec` were obtained from tests performed on traces.
- Integration into `ProcessPendingBatches` – before sending a commit transaction, the relayer compares the most recent blob fee to the target and only proceeds if the price is at or below the target, or if the configured time window has elapsed.

Resulting behavior:

- Cost optimization: by delaying submission until blob fees are favorable, the protocol can reduce L1 data-fee costs.
- Flexibility & safety: the relaxation curve ensures that batches aren't starved indefinitely and will be submitted by the deadline even if prices remain high.
Deployment tag versioning
Has `tag` in `common/version.go` been updated, or have you added the `bump-version` label to this PR?

Breaking change label

Does this PR have the `breaking-change` label?

Summary by CodeRabbit