Add planner filter #4318

Conversation
Signed-off-by: Albert <[email protected]>
Testing done for these changes: I collected Prometheus metrics for a couple of days without running any compaction. After generating some uncompacted blocks, I saved them so the same compaction could be replayed multiple times. I downloaded the blocks from S3 so the tests could be repeated without deleting the blocks in S3.
Signed-off-by: Albert <[email protected]>
Sorry, but we have begun the process of cutting a new release; please rebase from
Signed-off-by: Albert <[email protected]>
@@ -146,6 +149,7 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
	"If 0, blocks will be deleted straight away. Note that deleting blocks immediately can cause query failures.")
f.DurationVar(&cfg.TenantCleanupDelay, "compactor.tenant-cleanup-delay", 6*time.Hour, "For tenants marked for deletion, this is time between deleting of last block, and doing final cleanup (marker files, debug files) of the tenant.")
f.BoolVar(&cfg.BlockDeletionMarksMigrationEnabled, "compactor.block-deletion-marks-migration-enabled", true, "When enabled, at compactor startup the bucket will be scanned and all found deletion marks inside the block location will be copied to the markers global location too. This option can (and should) be safely disabled as soon as the compactor has successfully run at least once.")
f.BoolVar(&cfg.PlannerFilterEnabled, "compactor.planner-filter-enabled", false, "Filter and plan blocks within PlannerFilter instead of through Thanos planner and grouper.")
I don't think the config option should be that specific (what this CLI flag describes is an internal implementation detail). The whole purpose of the #4272 proposal is to introduce a different sharding strategy for the compactor. To keep it consistent with other Cortex services, the config option could be compactor.sharding-strategy with values default (the current one) and shuffle-sharding (the new one you're working on).
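For illustration only, a registration for the suggested option might look roughly like the sketch below, following the RegisterFlags pattern from the diff above. The field name ShardingStrategy, the constant names, and the Validate helper are assumptions for this sketch, not the actual Cortex implementation.

```go
// Sketch: how a compactor.sharding-strategy flag could be registered,
// mirroring the RegisterFlags pattern shown in the diff above.
package compactor

import (
	"flag"
	"fmt"
)

const (
	shardingStrategyDefault         = "default"
	shardingStrategyShuffleSharding = "shuffle-sharding"
)

type Config struct {
	// ShardingStrategy is a hypothetical field name used for this sketch.
	ShardingStrategy string
}

func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
	f.StringVar(&cfg.ShardingStrategy, "compactor.sharding-strategy", shardingStrategyDefault,
		"The sharding strategy to use. Supported values: default, shuffle-sharding.")
}

// Validate rejects unsupported values; the error wording is illustrative.
func (cfg *Config) Validate() error {
	switch cfg.ShardingStrategy {
	case shardingStrategyDefault, shardingStrategyShuffleSharding:
		return nil
	default:
		return fmt.Errorf("unsupported sharding strategy: %q", cfg.ShardingStrategy)
	}
}
```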
level.Info(c.logger).Log("msg", "Compactor using planner filter")

// Create a new planner filter
f, err := NewPlannerFilter(
I don't think this is the right way to build it. It's not the responsibility of the metadata fetcher to run the planning and filter out blocks belonging to other shards; that's not how the compactor was designed. You should build this feature on the compactor grouper and planner.
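As a rough sketch of what the reviewer suggests (filtering at the planner level rather than in the metadata fetcher), a shard-aware wrapper around a planner could look like the code below. The Planner interface is declared locally as a simplified stand-in for the Thanos planner interface, and isOwnedByShard is a hypothetical ownership check, so this is an illustration of the approach rather than the actual Cortex/Thanos API.

```go
// Sketch: a planner wrapper that drops blocks not owned by this compactor
// shard before delegating to the wrapped planner.
package compactor

import (
	"context"

	"github.com/thanos-io/thanos/pkg/block/metadata"
)

// Planner is a simplified, locally declared stand-in for the planner
// interface the compactor uses.
type Planner interface {
	Plan(ctx context.Context, metasByMinTime []*metadata.Meta) ([]*metadata.Meta, error)
}

// ShardAwarePlanner filters out blocks belonging to other shards and then
// lets the wrapped planner decide what to compact.
type ShardAwarePlanner struct {
	wrapped        Planner
	isOwnedByShard func(meta *metadata.Meta) (bool, error) // hypothetical ownership check
}

func (p *ShardAwarePlanner) Plan(ctx context.Context, metasByMinTime []*metadata.Meta) ([]*metadata.Meta, error) {
	owned := make([]*metadata.Meta, 0, len(metasByMinTime))
	for _, meta := range metasByMinTime {
		ok, err := p.isOwnedByShard(meta)
		if err != nil {
			return nil, err
		}
		if ok {
			owned = append(owned, meta)
		}
	}
	return p.wrapped.Plan(ctx, owned)
}
```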
Signed-off-by: Albert [email protected]
What this PR does:
Implements generation of parallelizable plans for the proposal outlined in #4272. Currently the parallelizable plans are generated, but only the first plan in the list is selected to run.
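As a rough illustration of the idea (not the PR's actual PlannerFilter code), parallelizable plans can be built by bucketing blocks into non-overlapping time ranges so each bucket can be compacted independently. The blockMeta type, the rangeStart helper, and the fixed-range assumption below are simplifications made for this sketch.

```go
// Sketch: bucket blocks into fixed, non-overlapping time ranges so that
// each bucket forms an independent (parallelizable) compaction plan.
package compactor

type blockMeta struct { // hypothetical, minimal block metadata
	ID      string
	MinTime int64 // milliseconds
	MaxTime int64 // milliseconds
}

// rangeStart returns the start of the range of length rangeMs containing t.
func rangeStart(t, rangeMs int64) int64 {
	return (t / rangeMs) * rangeMs
}

// parallelizablePlans groups blocks whose time ranges fall entirely inside
// the same range; each returned group could be compacted in parallel.
func parallelizablePlans(blocks []blockMeta, rangeMs int64) [][]blockMeta {
	buckets := map[int64][]blockMeta{}
	for _, b := range blocks {
		start := rangeStart(b.MinTime, rangeMs)
		if b.MaxTime > start+rangeMs {
			// Block spans multiple ranges; skip it in this simple sketch.
			continue
		}
		buckets[start] = append(buckets[start], b)
	}
	plans := make([][]blockMeta, 0, len(buckets))
	for _, group := range buckets {
		if len(group) > 1 { // only groups with 2+ blocks need compaction
			plans = append(plans, group)
		}
	}
	return plans
}
```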
Checklist
CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]