From ebb1835aba1ad4e36787cc9327210365023ecf02 Mon Sep 17 00:00:00 2001 From: Ben Ye Date: Mon, 27 Mar 2023 14:45:04 -0700 Subject: [PATCH 1/6] prepare 1.15.0-rc release (#5235) Signed-off-by: Ben Ye --- CHANGELOG.md | 8 ++++---- VERSION | 2 +- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 7d95c3d1af6..94e9572f131 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -4,9 +4,9 @@ ## 1.15.0 in progress -* [CHANGE] Storage: Make Max exemplars config per tenant instead of global configuration. #5016 +* [CHANGE] Storage: Make Max exemplars config per tenant instead of global configuration. #5080 #5122 * [CHANGE] Alertmanager: Local file disclosure vulnerability in OpsGenie configuration has been fixed. #5045 -* [CHANGE] Rename oltp_endpoint to otlp_endpoint to match opentelemetry spec and lib name. #5067 +* [CHANGE] Rename oltp_endpoint to otlp_endpoint to match opentelemetry spec and lib name. #5068 * [CHANGE] Distributor/Ingester: Log warn level on push requests when they have status code 4xx. Do not log if status is 429. #5103 * [CHANGE] Tracing: Use the default OTEL trace sampler when `-tracing.otel.exporter-type` is set to `awsxray`. #5141 * [CHANGE] Ingester partial error log line to debug level. #5192 @@ -29,7 +29,7 @@ * [FEATURE] Added `snappy-block` as an option for grpc compression #5215 * [FEATURE] Enable experimental out-of-order samples support. Added 2 new configs `ingester.out_of_order_time_window` and `blocks-storage.tsdb.out_of_order_cap_max`. #4964 * [ENHANCEMENT] Querier: limit series query to only ingesters if `start` param is not specified. #4976 -* [ENHANCEMENT] Query-frontend/scheduler: add a new limit `frontend.max-outstanding-requests-per-tenant` for configuring queue size per tenant. Started deprecating two flags `-query-scheduler.max-outstanding-requests-per-tenant` and `-querier.max-outstanding-requests-per-tenant`, and change their value default to 0. 
Now if both the old flag and new flag are specified, the old flag's queue size will be picked. #5005 +* [ENHANCEMENT] Query-frontend/scheduler: add a new limit `frontend.max-outstanding-requests-per-tenant` for configuring queue size per tenant. Started deprecating two flags `-query-scheduler.max-outstanding-requests-per-tenant` and `-querier.max-outstanding-requests-per-tenant`, and changed their default values to 0. If both the old flag and the new flag are specified, the old flag's queue size is used. #4991 * [ENHANCEMENT] Query-tee: Add `/api/v1/query_exemplars` API endpoint support. #5010 * [ENHANCEMENT] Let blocks_cleaner delete blocks concurrently(default 16 goroutines). #5028 * [ENHANCEMENT] Query Frontend/Query Scheduler: Increase upper bound to 60s for queue duration histogram metric. #5029 @@ -49,7 +49,7 @@ * [BUGFIX] Fixed no compact block got grouped in shuffle sharding grouper. #5055 * [BUGFIX] Fixed ingesters with less tokens stuck in LEAVING. #5061 * [BUGFIX] Tracing: Fix missing object storage span instrumentation. #5074 -* [BUGFIX] Ingester: Ingesters returning empty response for metadata APIs. #5081 +* [BUGFIX] Ingester: Fix Ingesters returning empty response for metadata APIs. #5081 * [BUGFIX] Ingester: Fix panic when querying metadata from blocks that are being deleted. #5119 * [BUGFIX] Ring: Fix case when dynamodb kv reaches the limit of 25 actions per batch call. #5136 * [BUGFIX] Query-frontend: Fix shardable instant queries do not produce sorted results for `sort`, `sort_desc`, `topk`, `bottomk` functions.
#5148, #5170 diff --git a/VERSION b/VERSION index 850e742404b..6a385e1a358 100644 --- a/VERSION +++ b/VERSION @@ -1 +1 @@ -1.14.0 +1.15.0-rc.0 From af49d70cb1fdb2330c17e0998f98e726211d1780 Mon Sep 17 00:00:00 2001 From: Ben Ye Date: Sat, 1 Apr 2023 11:14:02 -0700 Subject: [PATCH 2/6] Cherry-pick fixes to release 1.15 branch (#5241) * Batch Iterator optimization (#5237) * Batch Optimization Signed-off-by: Alan Protasio * Add test back Signed-off-by: Alan Protasio * Testing multiple scrape intervals Signed-off-by: Alan Protasio * no assumption Signed-off-by: Alan Protasio * Using max chunk ts Signed-off-by: Alan Protasio * test with scrape 10 Signed-off-by: Alan Protasio * rename method Signed-off-by: Alan Protasio * comments Signed-off-by: Alan Protasio * using next Signed-off-by: Alan Protasio * change test name Signed-off-by: Alan Protasio * changelog/comments Signed-off-by: Alan Protasio --------- Signed-off-by: Alan Protasio Signed-off-by: Ben Ye * Store Gateway: Convert metrics from summary to histograms (#5239) * Convert following metrics from summary to histogram cortex_bucket_store_series_blocks_queried cortex_bucket_store_series_data_fetched cortex_bucket_store_series_data_size_touched_bytes cortex_bucket_store_series_data_size_fetched_bytes cortex_bucket_store_series_data_touched cortex_bucket_store_series_result_series Signed-off-by: Friedrich Gonzalez * Update changelog Signed-off-by: Friedrich Gonzalez * fix changelog Signed-off-by: Friedrich Gonzalez --------- Signed-off-by: Friedrich Gonzalez Signed-off-by: Ben Ye * update changelog Signed-off-by: Ben Ye * Catch context error in the s3 bucket client (#5240) Signed-off-by: Xiaochao Dong (@damnever) Signed-off-by: Ben Ye * bump RC version Signed-off-by: Ben Ye --------- Signed-off-by: Alan Protasio Signed-off-by: Ben Ye Signed-off-by: Friedrich Gonzalez Signed-off-by: Xiaochao Dong (@damnever) Co-authored-by: Alan Protasio Co-authored-by: Friedrich Gonzalez Co-authored-by: Xiaochao Dong --- CHANGELOG.md
| 3 + VERSION | 2 +- pkg/querier/batch/batch.go | 14 + pkg/querier/batch/batch_test.go | 59 +++- pkg/querier/batch/chunk.go | 4 + pkg/querier/batch/chunk_test.go | 4 +- pkg/querier/batch/merge.go | 8 + pkg/querier/batch/non_overlapping.go | 4 + pkg/storage/bucket/s3/bucket_client.go | 2 +- pkg/storage/bucket/s3/bucket_client_test.go | 21 +- pkg/storegateway/bucket_store_metrics.go | 12 +- pkg/storegateway/bucket_store_metrics_test.go | 285 ++++++++++++++++-- 12 files changed, 372 insertions(+), 46 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 94e9572f131..5c810f9ddf4 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -11,6 +11,7 @@ * [CHANGE] Tracing: Use the default OTEL trace sampler when `-tracing.otel.exporter-type` is set to `awsxray`. #5141 * [CHANGE] Ingester partial error log line to debug level. #5192 * [CHANGE] Change HTTP status code from 503/422 to 499 if a request is canceled. #5220 +* [CHANGE] Store gateways summary metrics have been converted to histograms `cortex_bucket_store_series_blocks_queried`, `cortex_bucket_store_series_data_fetched`, `cortex_bucket_store_series_data_size_touched_bytes`, `cortex_bucket_store_series_data_size_fetched_bytes`, `cortex_bucket_store_series_data_touched`, `cortex_bucket_store_series_result_series` #5239 * [FEATURE] Querier/Query Frontend: support Prometheus /api/v1/status/buildinfo API. #4978 * [FEATURE] Ingester: Add active series to all_user_stats page. #4972 * [FEATURE] Ingester: Added `-blocks-storage.tsdb.head-chunks-write-queue-size` allowing to configure the size of the in-memory queue used before flushing chunks to the disk . #5000 @@ -44,6 +45,7 @@ * [ENHANCEMENT] Distributor: Reuse byte slices when serializing requests from distributors to ingesters. #5193 * [ENHANCEMENT] Query Frontend: Add number of chunks and samples fetched in query stats. #5198 * [ENHANCEMENT] Implement grpc.Compressor.DecompressedSize for snappy to optimize memory allocations. 
#5213 +* [ENHANCEMENT] Querier: Batch Iterator optimization to avoid traversing the chunks multiple times when query range steps do not overlap. #5237 * [BUGFIX] Updated `golang.org/x/net` dependency to fix CVE-2022-27664. #5008 * [BUGFIX] Fix panic when otel and xray tracing is enabled. #5044 * [BUGFIX] Fixed no compact block got grouped in shuffle sharding grouper. #5055 @@ -57,6 +59,7 @@ * [BUGFIX] Compactor: Fix issue that shuffle sharding planner return error if block is under visit by other compactor. #5188 * [BUGFIX] Fix S3 BucketWithRetries upload empty content issue #5217 * [BUGFIX] Query Frontend: Disable `absent`, `absent_over_time` and `scalar` for vertical sharding. #5221 +* [BUGFIX] Catch context error in the s3 bucket client. #5240 ## 1.14.0 2022-12-02 diff --git a/VERSION b/VERSION index 6a385e1a358..460eb7b0ea3 100644 --- a/VERSION +++ b/VERSION @@ -1 +1 @@ -1.15.0-rc.0 +1.15.0-rc.1 diff --git a/pkg/querier/batch/batch.go b/pkg/querier/batch/batch.go index 35092e3a604..022b2bce5b8 100644 --- a/pkg/querier/batch/batch.go +++ b/pkg/querier/batch/batch.go @@ -43,6 +43,9 @@ type iterator interface { // Seek or Next have returned true. AtTime() int64 + // MaxCurrentChunkTime returns the max time on the current chunk. + MaxCurrentChunkTime() int64 + // Batch returns the current batch. Must only be called after Seek or Next // have returned true. Batch() promchunk.Batch @@ -98,6 +101,17 @@ func (a *iteratorAdapter) Seek(t int64) bool { a.curr.Index++ } return true + } else if t <= a.underlying.MaxCurrentChunkTime() { + // Some timestamp inside the current underlying chunk can fulfill the seek, so we + // call Next until we find the sample; this is faster than calling + // `a.underlying.Seek` directly, which would restart the iterator from the beginning of the chunk.
+ // See: https://github.com/cortexproject/cortex/blob/f69452975877c67ac307709e5f60b8d20477764c/pkg/querier/batch/chunk.go#L26-L45 + // https://github.com/cortexproject/cortex/blob/f69452975877c67ac307709e5f60b8d20477764c/pkg/chunk/encoding/prometheus_chunk.go#L90-L95 + for a.Next() { + if t <= a.curr.Timestamps[a.curr.Index] { + return true + } + } } } diff --git a/pkg/querier/batch/batch_test.go b/pkg/querier/batch/batch_test.go index 733827b2c70..7f40afcf796 100644 --- a/pkg/querier/batch/batch_test.go +++ b/pkg/querier/batch/batch_test.go @@ -35,7 +35,7 @@ func BenchmarkNewChunkMergeIterator_CreateAndIterate(b *testing.B) { scenario.duplicationFactor, scenario.enc.String()) - chunks := createChunks(b, scenario.numChunks, scenario.numSamplesPerChunk, scenario.duplicationFactor, scenario.enc) + chunks := createChunks(b, step, scenario.numChunks, scenario.numSamplesPerChunk, scenario.duplicationFactor, scenario.enc) b.Run(name, func(b *testing.B) { b.ReportAllocs() @@ -55,10 +55,59 @@ func BenchmarkNewChunkMergeIterator_CreateAndIterate(b *testing.B) { } } +func BenchmarkNewChunkMergeIterator_Seek(b *testing.B) { + scenarios := []struct { + numChunks int + numSamplesPerChunk int + duplicationFactor int + seekStep time.Duration + scrapeInterval time.Duration + enc promchunk.Encoding + }{ + {numChunks: 1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 30 * time.Second, seekStep: 30 * time.Second / 2, enc: promchunk.PrometheusXorChunk}, + {numChunks: 1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 30 * time.Second, seekStep: 30 * time.Second, enc: promchunk.PrometheusXorChunk}, + {numChunks: 1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 30 * time.Second, seekStep: 30 * time.Second * 2, enc: promchunk.PrometheusXorChunk}, + {numChunks: 1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 30 * time.Second, seekStep: 30 * time.Second * 10, enc: promchunk.PrometheusXorChunk}, + {numChunks: 
1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 30 * time.Second, seekStep: 30 * time.Second * 30, enc: promchunk.PrometheusXorChunk}, + {numChunks: 1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 30 * time.Second, seekStep: 30 * time.Second * 50, enc: promchunk.PrometheusXorChunk}, + {numChunks: 1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 30 * time.Second, seekStep: 30 * time.Second * 100, enc: promchunk.PrometheusXorChunk}, + {numChunks: 1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 30 * time.Second, seekStep: 30 * time.Second * 200, enc: promchunk.PrometheusXorChunk}, + + {numChunks: 1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 10 * time.Second, seekStep: 10 * time.Second / 2, enc: promchunk.PrometheusXorChunk}, + {numChunks: 1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 10 * time.Second, seekStep: 10 * time.Second, enc: promchunk.PrometheusXorChunk}, + {numChunks: 1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 10 * time.Second, seekStep: 10 * time.Second * 2, enc: promchunk.PrometheusXorChunk}, + {numChunks: 1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 10 * time.Second, seekStep: 10 * time.Second * 10, enc: promchunk.PrometheusXorChunk}, + {numChunks: 1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 10 * time.Second, seekStep: 10 * time.Second * 30, enc: promchunk.PrometheusXorChunk}, + {numChunks: 1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 10 * time.Second, seekStep: 10 * time.Second * 50, enc: promchunk.PrometheusXorChunk}, + {numChunks: 1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 10 * time.Second, seekStep: 10 * time.Second * 100, enc: promchunk.PrometheusXorChunk}, + {numChunks: 1000, numSamplesPerChunk: 120, duplicationFactor: 3, scrapeInterval: 10 * time.Second, seekStep: 10 * time.Second * 200, enc: 
promchunk.PrometheusXorChunk}, + } + + for _, scenario := range scenarios { + name := fmt.Sprintf("scrapeInterval %vs seekStep: %vs", + scenario.scrapeInterval.Seconds(), + scenario.seekStep.Seconds()) + + chunks := createChunks(b, scenario.scrapeInterval, scenario.numChunks, scenario.numSamplesPerChunk, scenario.duplicationFactor, scenario.enc) + + b.Run(name, func(b *testing.B) { + b.ReportAllocs() + + for n := 0; n < b.N; n++ { + it := NewChunkMergeIterator(chunks, 0, 0) + i := int64(0) + for it.Seek(i*scenario.seekStep.Milliseconds()) != chunkenc.ValNone { + i++ + } + } + }) + } +} + func TestSeekCorrectlyDealWithSinglePointChunks(t *testing.T) { t.Parallel() - chunkOne := mkChunk(t, model.Time(1*step/time.Millisecond), 1, promchunk.PrometheusXorChunk) - chunkTwo := mkChunk(t, model.Time(10*step/time.Millisecond), 1, promchunk.PrometheusXorChunk) + chunkOne := mkChunk(t, step, model.Time(1*step/time.Millisecond), 1, promchunk.PrometheusXorChunk) + chunkTwo := mkChunk(t, step, model.Time(10*step/time.Millisecond), 1, promchunk.PrometheusXorChunk) chunks := []chunk.Chunk{chunkOne, chunkTwo} sut := NewChunkMergeIterator(chunks, 0, 0) @@ -72,13 +121,13 @@ func TestSeekCorrectlyDealWithSinglePointChunks(t *testing.T) { require.Equal(t, int64(1*time.Second/time.Millisecond), actual) } -func createChunks(b *testing.B, numChunks, numSamplesPerChunk, duplicationFactor int, enc promchunk.Encoding) []chunk.Chunk { +func createChunks(b *testing.B, step time.Duration, numChunks, numSamplesPerChunk, duplicationFactor int, enc promchunk.Encoding) []chunk.Chunk { result := make([]chunk.Chunk, 0, numChunks) for d := 0; d < duplicationFactor; d++ { for c := 0; c < numChunks; c++ { minTime := step * time.Duration(c*numSamplesPerChunk) - result = append(result, mkChunk(b, model.Time(minTime.Milliseconds()), numSamplesPerChunk, enc)) + result = append(result, mkChunk(b, step, model.Time(minTime.Milliseconds()), numSamplesPerChunk, enc)) } } diff --git a/pkg/querier/batch/chunk.go 
b/pkg/querier/batch/chunk.go index 5f45e8e1338..6514c79db24 100644 --- a/pkg/querier/batch/chunk.go +++ b/pkg/querier/batch/chunk.go @@ -21,6 +21,10 @@ func (i *chunkIterator) reset(chunk GenericChunk) { i.batch.Index = 0 } +func (i *chunkIterator) MaxCurrentChunkTime() int64 { + return i.chunk.MaxTime +} + // Seek advances the iterator forward to the value at or after // the given timestamp. func (i *chunkIterator) Seek(t int64, size int) bool { diff --git a/pkg/querier/batch/chunk_test.go b/pkg/querier/batch/chunk_test.go index f23b3427deb..c053f6c1e57 100644 --- a/pkg/querier/batch/chunk_test.go +++ b/pkg/querier/batch/chunk_test.go @@ -44,7 +44,7 @@ func forEncodings(t *testing.T, f func(t *testing.T, enc promchunk.Encoding)) { } } -func mkChunk(t require.TestingT, from model.Time, points int, enc promchunk.Encoding) chunk.Chunk { +func mkChunk(t require.TestingT, step time.Duration, from model.Time, points int, enc promchunk.Encoding) chunk.Chunk { metric := labels.Labels{ {Name: model.MetricNameLabel, Value: "foo"}, } @@ -65,7 +65,7 @@ func mkChunk(t require.TestingT, from model.Time, points int, enc promchunk.Enco } func mkGenericChunk(t require.TestingT, from model.Time, points int, enc promchunk.Encoding) GenericChunk { - ck := mkChunk(t, from, points, enc) + ck := mkChunk(t, step, from, points, enc) return NewGenericChunk(int64(ck.From), int64(ck.Through), ck.Data.NewIterator) } diff --git a/pkg/querier/batch/merge.go b/pkg/querier/batch/merge.go index 7764b37467b..facfb100f1c 100644 --- a/pkg/querier/batch/merge.go +++ b/pkg/querier/batch/merge.go @@ -128,6 +128,14 @@ func (c *mergeIterator) AtTime() int64 { return c.batches[0].Timestamps[0] } +func (c *mergeIterator) MaxCurrentChunkTime() int64 { + if len(c.h) < 1 { + return -1 + } + + return c.h[0].MaxCurrentChunkTime() +} + func (c *mergeIterator) Batch() promchunk.Batch { return c.batches[0] } diff --git a/pkg/querier/batch/non_overlapping.go b/pkg/querier/batch/non_overlapping.go index 
a1c6bf01005..61bfecb1aff 100644 --- a/pkg/querier/batch/non_overlapping.go +++ b/pkg/querier/batch/non_overlapping.go @@ -32,6 +32,10 @@ func (it *nonOverlappingIterator) Seek(t int64, size int) bool { } } +func (it *nonOverlappingIterator) MaxCurrentChunkTime() int64 { + return it.iter.MaxCurrentChunkTime() +} + func (it *nonOverlappingIterator) Next(size int) bool { for { if it.iter.Next(size) { diff --git a/pkg/storage/bucket/s3/bucket_client.go b/pkg/storage/bucket/s3/bucket_client.go index d0625b6deef..ed76d835e5c 100644 --- a/pkg/storage/bucket/s3/bucket_client.go +++ b/pkg/storage/bucket/s3/bucket_client.go @@ -124,7 +124,7 @@ func (b *BucketWithRetries) retry(ctx context.Context, f func() error, operation level.Error(b.logger).Log("msg", "bucket operation fail after retries", "err", lastErr, "operation", operationInfo) return lastErr } - return nil + return retries.Err() } func (b *BucketWithRetries) Name() string { diff --git a/pkg/storage/bucket/s3/bucket_client_test.go b/pkg/storage/bucket/s3/bucket_client_test.go index c62f9107093..bc991b4f8df 100644 --- a/pkg/storage/bucket/s3/bucket_client_test.go +++ b/pkg/storage/bucket/s3/bucket_client_test.go @@ -73,6 +73,25 @@ func TestBucketWithRetries_UploadFailed(t *testing.T) { require.ErrorContains(t, err, "failed upload: ") } +func TestBucketWithRetries_ContextCanceled(t *testing.T) { + t.Parallel() + + m := mockBucket{} + b := BucketWithRetries{ + logger: log.NewNopLogger(), + bucket: &m, + operationRetries: 5, + retryMinBackoff: 10 * time.Millisecond, + retryMaxBackoff: time.Second, + } + + ctx, cancel := context.WithCancel(context.Background()) + cancel() + obj, err := b.GetRange(ctx, "dummy", 0, 10) + require.ErrorIs(t, err, context.Canceled) + require.Nil(t, obj) +} + type fakeReader struct { } @@ -121,7 +140,7 @@ func (m *mockBucket) Get(ctx context.Context, name string) (io.ReadCloser, error // GetRange mocks objstore.Bucket.GetRange() func (m *mockBucket) GetRange(ctx context.Context, name string, 
off, length int64) (io.ReadCloser, error) { - return nil, nil + return io.NopCloser(bytes.NewBuffer(bytes.Repeat([]byte{1}, int(length)))), nil } // Exists mocks objstore.Bucket.Exists() diff --git a/pkg/storegateway/bucket_store_metrics.go b/pkg/storegateway/bucket_store_metrics.go index 567f5c39d32..09f48db971f 100644 --- a/pkg/storegateway/bucket_store_metrics.go +++ b/pkg/storegateway/bucket_store_metrics.go @@ -214,16 +214,16 @@ func (m *BucketStoreMetrics) Collect(out chan<- prometheus.Metric) { data.SendSumOfGaugesPerUser(out, m.blocksLoaded, "thanos_bucket_store_blocks_loaded") - data.SendSumOfSummariesWithLabels(out, m.seriesDataTouched, "thanos_bucket_store_series_data_touched", "data_type") - data.SendSumOfSummariesWithLabels(out, m.seriesDataFetched, "thanos_bucket_store_series_data_fetched", "data_type") - data.SendSumOfSummariesWithLabels(out, m.seriesDataSizeTouched, "thanos_bucket_store_series_data_size_touched_bytes", "data_type") - data.SendSumOfSummariesWithLabels(out, m.seriesDataSizeFetched, "thanos_bucket_store_series_data_size_fetched_bytes", "data_type") - data.SendSumOfSummariesWithLabels(out, m.seriesBlocksQueried, "thanos_bucket_store_series_blocks_queried") + data.SendSumOfHistogramsWithLabels(out, m.seriesDataTouched, "thanos_bucket_store_series_data_touched", "data_type") + data.SendSumOfHistogramsWithLabels(out, m.seriesDataFetched, "thanos_bucket_store_series_data_fetched", "data_type") + data.SendSumOfHistogramsWithLabels(out, m.seriesDataSizeTouched, "thanos_bucket_store_series_data_size_touched_bytes", "data_type") + data.SendSumOfHistogramsWithLabels(out, m.seriesDataSizeFetched, "thanos_bucket_store_series_data_size_fetched_bytes", "data_type") + data.SendSumOfHistogramsWithLabels(out, m.seriesBlocksQueried, "thanos_bucket_store_series_blocks_queried") data.SendSumOfHistograms(out, m.seriesGetAllDuration, "thanos_bucket_store_series_get_all_duration_seconds") data.SendSumOfHistograms(out, m.seriesMergeDuration, 
"thanos_bucket_store_series_merge_duration_seconds") data.SendSumOfCounters(out, m.seriesRefetches, "thanos_bucket_store_series_refetches_total") - data.SendSumOfSummaries(out, m.resultSeriesCount, "thanos_bucket_store_series_result_series") + data.SendSumOfHistograms(out, m.resultSeriesCount, "thanos_bucket_store_series_result_series") data.SendSumOfCounters(out, m.queriesDropped, "thanos_bucket_store_queries_dropped_total") data.SendSumOfCountersWithLabels(out, m.cachedPostingsCompressions, "thanos_bucket_store_cached_postings_compressions_total", "op") diff --git a/pkg/storegateway/bucket_store_metrics_test.go b/pkg/storegateway/bucket_store_metrics_test.go index a400cb8c96e..6cc7eabace1 100644 --- a/pkg/storegateway/bucket_store_metrics_test.go +++ b/pkg/storegateway/bucket_store_metrics_test.go @@ -46,43 +46,246 @@ func TestBucketStoreMetrics(t *testing.T) { cortex_bucket_store_block_drop_failures_total 112595 # HELP cortex_bucket_store_series_blocks_queried Number of blocks in a bucket store that were touched to satisfy a query. 
- # TYPE cortex_bucket_store_series_blocks_queried summary + # TYPE cortex_bucket_store_series_blocks_queried histogram + cortex_bucket_store_series_blocks_queried_bucket{le="1"} 0 + cortex_bucket_store_series_blocks_queried_bucket{le="2"} 0 + cortex_bucket_store_series_blocks_queried_bucket{le="4"} 0 + cortex_bucket_store_series_blocks_queried_bucket{le="8"} 0 + cortex_bucket_store_series_blocks_queried_bucket{le="16"} 0 + cortex_bucket_store_series_blocks_queried_bucket{le="32"} 0 + cortex_bucket_store_series_blocks_queried_bucket{le="64"} 0 + cortex_bucket_store_series_blocks_queried_bucket{le="128"} 0 + cortex_bucket_store_series_blocks_queried_bucket{le="256"} 0 + cortex_bucket_store_series_blocks_queried_bucket{le="512"} 0 + cortex_bucket_store_series_blocks_queried_bucket{le="+Inf"} 9 cortex_bucket_store_series_blocks_queried_sum 1.283583e+06 cortex_bucket_store_series_blocks_queried_count 9 # HELP cortex_bucket_store_series_data_fetched How many items of a data type in a block were fetched for a single series request. 
- # TYPE cortex_bucket_store_series_data_fetched summary + # TYPE cortex_bucket_store_series_data_fetched histogram + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="200"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="400"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="800"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="1600"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="3200"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="6400"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="12800"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="25600"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="51200"} 1 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="102400"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="204800"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="409600"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="819200"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="1.6384e+06"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="3.2768e+06"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-a",le="+Inf"} 3 cortex_bucket_store_series_data_fetched_sum{data_type="fetched-a"} 202671 cortex_bucket_store_series_data_fetched_count{data_type="fetched-a"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="200"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="400"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="800"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="1600"} 0 + 
cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="3200"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="6400"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="12800"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="25600"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="51200"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="102400"} 2 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="204800"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="409600"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="819200"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="1.6384e+06"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="3.2768e+06"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-b",le="+Inf"} 3 cortex_bucket_store_series_data_fetched_sum{data_type="fetched-b"} 225190 cortex_bucket_store_series_data_fetched_count{data_type="fetched-b"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="200"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="400"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="800"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="1600"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="3200"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="6400"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="12800"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="25600"} 0 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="51200"} 0 + 
cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="102400"} 2 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="204800"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="409600"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="819200"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="1.6384e+06"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="3.2768e+06"} 3 + cortex_bucket_store_series_data_fetched_bucket{data_type="fetched-c",le="+Inf"} 3 cortex_bucket_store_series_data_fetched_sum{data_type="fetched-c"} 247709 cortex_bucket_store_series_data_fetched_count{data_type="fetched-c"} 3 # HELP cortex_bucket_store_series_data_size_fetched_bytes Size of all items of a data type in a block were fetched for a single series request. - # TYPE cortex_bucket_store_series_data_size_fetched_bytes summary + # TYPE cortex_bucket_store_series_data_size_fetched_bytes histogram + cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="1024"} 0 + cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="2048"} 0 + cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="4096"} 0 + cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="8192"} 0 + cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="16384"} 0 + cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="32768"} 0 + cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="65536"} 0 + cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="131072"} 2 + cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="262144"} 3 + 
cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="524288"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="1.048576e+06"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="2.097152e+06"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="4.194304e+06"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="8.388608e+06"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="1.6777216e+07"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-a",le="+Inf"} 3
cortex_bucket_store_series_data_size_fetched_bytes_sum{data_type="size-fetched-a"} 337785
cortex_bucket_store_series_data_size_fetched_bytes_count{data_type="size-fetched-a"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="1024"} 0
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="2048"} 0
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="4096"} 0
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="8192"} 0
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="16384"} 0
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="32768"} 0
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="65536"} 0
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="131072"} 2
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="262144"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="524288"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="1.048576e+06"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="2.097152e+06"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="4.194304e+06"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="8.388608e+06"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="1.6777216e+07"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-b",le="+Inf"} 3
cortex_bucket_store_series_data_size_fetched_bytes_sum{data_type="size-fetched-b"} 360304
cortex_bucket_store_series_data_size_fetched_bytes_count{data_type="size-fetched-b"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="1024"} 0
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="2048"} 0
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="4096"} 0
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="8192"} 0
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="16384"} 0
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="32768"} 0
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="65536"} 0
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="131072"} 2
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="262144"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="524288"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="1.048576e+06"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="2.097152e+06"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="4.194304e+06"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="8.388608e+06"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="1.6777216e+07"} 3
+ cortex_bucket_store_series_data_size_fetched_bytes_bucket{data_type="size-fetched-c",le="+Inf"} 3
cortex_bucket_store_series_data_size_fetched_bytes_sum{data_type="size-fetched-c"} 382823
cortex_bucket_store_series_data_size_fetched_bytes_count{data_type="size-fetched-c"} 3
# HELP cortex_bucket_store_series_data_size_touched_bytes Size of all items of a data type in a block were touched for a single series request.
- # TYPE cortex_bucket_store_series_data_size_touched_bytes summary
+ # TYPE cortex_bucket_store_series_data_size_touched_bytes histogram
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="1024"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="2048"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="4096"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="8192"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="16384"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="32768"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="65536"} 1
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="131072"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="262144"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="524288"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="1.048576e+06"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="2.097152e+06"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="4.194304e+06"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="8.388608e+06"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="1.6777216e+07"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-a",le="+Inf"} 3
cortex_bucket_store_series_data_size_touched_bytes_sum{data_type="size-touched-a"} 270228
cortex_bucket_store_series_data_size_touched_bytes_count{data_type="size-touched-a"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="1024"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="2048"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="4096"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="8192"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="16384"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="32768"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="65536"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="131072"} 2
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="262144"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="524288"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="1.048576e+06"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="2.097152e+06"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="4.194304e+06"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="8.388608e+06"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="1.6777216e+07"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-b",le="+Inf"} 3
cortex_bucket_store_series_data_size_touched_bytes_sum{data_type="size-touched-b"} 292747
cortex_bucket_store_series_data_size_touched_bytes_count{data_type="size-touched-b"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="1024"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="2048"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="4096"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="8192"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="16384"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="32768"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="65536"} 0
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="131072"} 2
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="262144"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="524288"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="1.048576e+06"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="2.097152e+06"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="4.194304e+06"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="8.388608e+06"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="1.6777216e+07"} 3
+ cortex_bucket_store_series_data_size_touched_bytes_bucket{data_type="size-touched-c",le="+Inf"} 3
cortex_bucket_store_series_data_size_touched_bytes_sum{data_type="size-touched-c"} 315266
cortex_bucket_store_series_data_size_touched_bytes_count{data_type="size-touched-c"} 3
# HELP cortex_bucket_store_series_data_touched How many items of a data type in a block were touched for a single series request.
- # TYPE cortex_bucket_store_series_data_touched summary
+ # TYPE cortex_bucket_store_series_data_touched histogram
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="200"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="400"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="800"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="1600"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="3200"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="6400"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="12800"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="25600"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="51200"} 2
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="102400"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="204800"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="409600"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="819200"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="1.6384e+06"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="3.2768e+06"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-a",le="+Inf"} 3
cortex_bucket_store_series_data_touched_sum{data_type="touched-a"} 135114
cortex_bucket_store_series_data_touched_count{data_type="touched-a"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="200"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="400"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="800"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="1600"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="3200"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="6400"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="12800"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="25600"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="51200"} 2
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="102400"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="204800"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="409600"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="819200"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="1.6384e+06"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="3.2768e+06"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-b",le="+Inf"} 3
cortex_bucket_store_series_data_touched_sum{data_type="touched-b"} 157633
cortex_bucket_store_series_data_touched_count{data_type="touched-b"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="200"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="400"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="800"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="1600"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="3200"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="6400"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="12800"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="25600"} 0
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="51200"} 1
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="102400"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="204800"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="409600"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="819200"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="1.6384e+06"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="3.2768e+06"} 3
+ cortex_bucket_store_series_data_touched_bucket{data_type="touched-c",le="+Inf"} 3
cortex_bucket_store_series_data_touched_sum{data_type="touched-c"} 180152
cortex_bucket_store_series_data_touched_count{data_type="touched-c"} 3
@@ -131,7 +334,23 @@ func TestBucketStoreMetrics(t *testing.T) {
cortex_bucket_store_series_refetches_total 743127
# HELP cortex_bucket_store_series_result_series Number of series observed in the final result of a query.
- # TYPE cortex_bucket_store_series_result_series summary
+ # TYPE cortex_bucket_store_series_result_series histogram
+ cortex_bucket_store_series_result_series_bucket{le="1"} 0
+ cortex_bucket_store_series_result_series_bucket{le="2"} 0
+ cortex_bucket_store_series_result_series_bucket{le="4"} 0
+ cortex_bucket_store_series_result_series_bucket{le="8"} 0
+ cortex_bucket_store_series_result_series_bucket{le="16"} 0
+ cortex_bucket_store_series_result_series_bucket{le="32"} 0
+ cortex_bucket_store_series_result_series_bucket{le="64"} 0
+ cortex_bucket_store_series_result_series_bucket{le="128"} 0
+ cortex_bucket_store_series_result_series_bucket{le="256"} 0
+ cortex_bucket_store_series_result_series_bucket{le="512"} 0
+ cortex_bucket_store_series_result_series_bucket{le="1024"} 0
+ cortex_bucket_store_series_result_series_bucket{le="2048"} 0
+ cortex_bucket_store_series_result_series_bucket{le="4096"} 0
+ cortex_bucket_store_series_result_series_bucket{le="8192"} 0
+ cortex_bucket_store_series_result_series_bucket{le="16384"} 0
+ cortex_bucket_store_series_result_series_bucket{le="+Inf"} 6
cortex_bucket_store_series_result_series_sum 1.238545e+06
cortex_bucket_store_series_result_series_count 6
@@ -348,15 +567,15 @@ type mockedBucketStoreMetrics struct {
	blockLoadFailures     prometheus.Counter
	blockDrops            prometheus.Counter
	blockDropFailures     prometheus.Counter
-	seriesDataTouched     *prometheus.SummaryVec
-	seriesDataFetched     *prometheus.SummaryVec
-	seriesDataSizeTouched *prometheus.SummaryVec
-	seriesDataSizeFetched *prometheus.SummaryVec
-	seriesBlocksQueried   prometheus.Summary
+	seriesDataTouched     *prometheus.HistogramVec
+	seriesDataFetched     *prometheus.HistogramVec
+	seriesDataSizeTouched *prometheus.HistogramVec
+	seriesDataSizeFetched *prometheus.HistogramVec
+	seriesBlocksQueried   prometheus.Histogram
	seriesGetAllDuration  prometheus.Histogram
	seriesMergeDuration   prometheus.Histogram
	seriesRefetches       prometheus.Counter
-	resultSeriesCount     prometheus.Summary
+	resultSeriesCount     prometheus.Histogram
	chunkSizeBytes        prometheus.Histogram
	queriesDropped        *prometheus.CounterVec
@@ -400,27 +619,32 @@ func newMockedBucketStoreMetrics(reg prometheus.Registerer) *mockedBucketStoreMe
		Help: "Number of currently loaded blocks.",
	})
-	m.seriesDataTouched = promauto.With(reg).NewSummaryVec(prometheus.SummaryOpts{
-		Name: "thanos_bucket_store_series_data_touched",
-		Help: "How many items of a data type in a block were touched for a single series request.",
+	m.seriesDataTouched = promauto.With(reg).NewHistogramVec(prometheus.HistogramOpts{
+		Name:    "thanos_bucket_store_series_data_touched",
+		Help:    "How many items of a data type in a block were touched for a single series request.",
+		Buckets: prometheus.ExponentialBuckets(200, 2, 15),
	}, []string{"data_type"})
-	m.seriesDataFetched = promauto.With(reg).NewSummaryVec(prometheus.SummaryOpts{
-		Name: "thanos_bucket_store_series_data_fetched",
-		Help: "How many items of a data type in a block were fetched for a single series request.",
+	m.seriesDataFetched = promauto.With(reg).NewHistogramVec(prometheus.HistogramOpts{
+		Name:    "thanos_bucket_store_series_data_fetched",
+		Help:    "How many items of a data type in a block were fetched for a single series request.",
+		Buckets: prometheus.ExponentialBuckets(200, 2, 15),
	}, []string{"data_type"})
-	m.seriesDataSizeTouched = promauto.With(reg).NewSummaryVec(prometheus.SummaryOpts{
-		Name: "thanos_bucket_store_series_data_size_touched_bytes",
-		Help: "Size of all items of a data type in a block were touched for a single series request.",
+	m.seriesDataSizeTouched = promauto.With(reg).NewHistogramVec(prometheus.HistogramOpts{
+		Name:    "thanos_bucket_store_series_data_size_touched_bytes",
+		Help:    "Size of all items of a data type in a block were touched for a single series request.",
+		Buckets: prometheus.ExponentialBuckets(1024, 2, 15),
	}, []string{"data_type"})
-	m.seriesDataSizeFetched = promauto.With(reg).NewSummaryVec(prometheus.SummaryOpts{
-		Name: "thanos_bucket_store_series_data_size_fetched_bytes",
-		Help: "Size of all items of a data type in a block were fetched for a single series request.",
+	m.seriesDataSizeFetched = promauto.With(reg).NewHistogramVec(prometheus.HistogramOpts{
+		Name:    "thanos_bucket_store_series_data_size_fetched_bytes",
+		Help:    "Size of all items of a data type in a block were fetched for a single series request.",
+		Buckets: prometheus.ExponentialBuckets(1024, 2, 15),
	}, []string{"data_type"})
-	m.seriesBlocksQueried = promauto.With(reg).NewSummary(prometheus.SummaryOpts{
-		Name: "thanos_bucket_store_series_blocks_queried",
-		Help: "Number of blocks in a bucket store that were touched to satisfy a query.",
+	m.seriesBlocksQueried = promauto.With(reg).NewHistogram(prometheus.HistogramOpts{
+		Name:    "thanos_bucket_store_series_blocks_queried",
+		Help:    "Number of blocks in a bucket store that were touched to satisfy a query.",
+		Buckets: prometheus.ExponentialBuckets(1, 2, 10),
	})
	m.seriesGetAllDuration = promauto.With(reg).NewHistogram(prometheus.HistogramOpts{
		Name: "thanos_bucket_store_series_get_all_duration_seconds",
@@ -432,9 +656,10 @@ func newMockedBucketStoreMetrics(reg prometheus.Registerer) *mockedBucketStoreMe
		Help: "Time it takes to merge sub-results from all queried blocks into a single result.",
		Buckets: []float64{0.001, 0.01, 0.1, 0.3, 0.6, 1, 3, 6, 9, 20, 30, 60, 90, 120},
	})
-	m.resultSeriesCount = promauto.With(reg).NewSummary(prometheus.SummaryOpts{
-		Name: "thanos_bucket_store_series_result_series",
-		Help: "Number of series observed in the final result of a query.",
+	m.resultSeriesCount = promauto.With(reg).NewHistogram(prometheus.HistogramOpts{
+		Name:    "thanos_bucket_store_series_result_series",
+		Help:    "Number of series observed in the final result of a query.",
+		Buckets: prometheus.ExponentialBuckets(1, 2, 15),
	})
	m.chunkSizeBytes = promauto.With(reg).NewHistogram(prometheus.HistogramOpts{

From 66948fda3f2408b7453175d55a382711a549ba70 Mon Sep 17 00:00:00 2001
From: Ben Ye
Date: Tue, 11 Apr 2023 11:59:06 -0700
Subject: [PATCH 3/6] Cherry-pick fixes to release 1.15 for new RC (#5259)

* fix remote read error in query frontend (#5257)

* fix remote read error in query frontend

Signed-off-by: Ben Ye

* fix integration test

Signed-off-by: Ben Ye

* add extra one query

Signed-off-by: Ben Ye

---------

Signed-off-by: Ben Ye

* bump RC version

Signed-off-by: Ben Ye

---------

Signed-off-by: Ben Ye
---
 CHANGELOG.md                       |  1 +
 VERSION                            |  2 +-
 integration/e2ecortex/client.go    | 69 ++++++++++++++++++++++++++++++
 integration/query_frontend_test.go | 30 +++++++++++++
 pkg/frontend/transport/handler.go  | 11 +++--
 5 files changed, 108 insertions(+), 5 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 5c810f9ddf4..299fc57b227 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -60,6 +60,7 @@
 * [BUGFIX] Fix S3 BucketWithRetries upload empty content issue #5217
 * [BUGFIX] Query Frontend: Disable `absent`, `absent_over_time` and `scalar` for vertical sharding. #5221
 * [BUGFIX] Catch context error in the s3 bucket client. #5240
+* [BUGFIX] Fix query frontend remote read empty body. #5257

 ## 1.14.0 2022-12-02
diff --git a/VERSION b/VERSION
index 460eb7b0ea3..98db740001c 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-1.15.0-rc.1
+1.15.0-rc.2
diff --git a/integration/e2ecortex/client.go b/integration/e2ecortex/client.go
index feb643c37bb..adfc99faf4c 100644
--- a/integration/e2ecortex/client.go
+++ b/integration/e2ecortex/client.go
@@ -19,8 +19,11 @@ import (
 	promapi "github.com/prometheus/client_golang/api"
 	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
 	"github.com/prometheus/common/model"
+	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/model/rulefmt"
 	"github.com/prometheus/prometheus/prompb"
+	"github.com/prometheus/prometheus/storage"
+	"github.com/prometheus/prometheus/storage/remote"
 	yaml "gopkg.in/yaml.v3"

 	"github.com/cortexproject/cortex/pkg/ruler"
@@ -153,6 +156,72 @@ func (c *Client) QueryRaw(query string) (*http.Response, []byte, error) {
 	return c.query(addr)
 }

+// RemoteRead runs a remote read query.
+func (c *Client) RemoteRead(matchers []*labels.Matcher, start, end time.Time, step time.Duration) (*prompb.ReadResponse, error) {
+	startMs := start.UnixMilli()
+	endMs := end.UnixMilli()
+	stepMs := step.Milliseconds()
+
+	q, err := remote.ToQuery(startMs, endMs, matchers, &storage.SelectHints{
+		Step:  stepMs,
+		Start: startMs,
+		End:   endMs,
+	})
+	if err != nil {
+		return nil, err
+	}
+
+	req := &prompb.ReadRequest{
+		Queries:               []*prompb.Query{q},
+		AcceptedResponseTypes: []prompb.ReadRequest_ResponseType{prompb.ReadRequest_STREAMED_XOR_CHUNKS},
+	}
+
+	data, err := proto.Marshal(req)
+	if err != nil {
+		return nil, err
+	}
+	compressed := snappy.Encode(nil, data)
+
+	// Call the remote read API endpoint with a timeout.
+	httpReqCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+	defer cancel()
+
+	httpReq, err := http.NewRequestWithContext(httpReqCtx, "POST", "http://"+c.querierAddress+"/prometheus/api/v1/read", bytes.NewReader(compressed))
+	if err != nil {
+		return nil, err
+	}
+	httpReq.Header.Set("X-Scope-OrgID", "user-1")
+	httpReq.Header.Add("Content-Encoding", "snappy")
+	httpReq.Header.Add("Accept-Encoding", "snappy")
+	httpReq.Header.Set("Content-Type", "application/x-protobuf")
+	httpReq.Header.Set("User-Agent", "Prometheus/1.8.2")
+	httpReq.Header.Set("X-Prometheus-Remote-Read-Version", "0.1.0")
+
+	httpResp, err := c.httpClient.Do(httpReq)
+	if err != nil {
+		return nil, err
+	}
+	if httpResp.StatusCode != http.StatusOK {
+		return nil, fmt.Errorf("unexpected status code %d", httpResp.StatusCode)
+	}
+
+	compressed, err = io.ReadAll(httpResp.Body)
+	if err != nil {
+		return nil, err
+	}
+
+	uncompressed, err := snappy.Decode(nil, compressed)
+	if err != nil {
+		return nil, err
+	}
+
+	var resp prompb.ReadResponse
+	if err = proto.Unmarshal(uncompressed, &resp); err != nil {
+		return nil, err
+	}
+	return &resp, nil
+}
+
 func (c *Client) query(addr string) (*http.Response, []byte, error) {
 	ctx, cancel := context.WithTimeout(context.Background(), c.timeout)
 	defer cancel()
diff --git a/integration/query_frontend_test.go b/integration/query_frontend_test.go
index a0a0a702529..6df5d101015 100644
--- a/integration/query_frontend_test.go
+++ b/integration/query_frontend_test.go
@@ -31,6 +31,7 @@ type queryFrontendTestConfig struct {
 	testMissingMetricName bool
 	querySchedulerEnabled bool
 	queryStatsEnabled     bool
+	remoteReadEnabled     bool
 	setup                 func(t *testing.T, s *e2e.Scenario) (configFile string, flags map[string]string)
 }

@@ -194,6 +195,19 @@ func TestQueryFrontendWithVerticalShardingQueryScheduler(t *testing.T) {
 	})
 }

+func TestQueryFrontendRemoteRead(t *testing.T) {
+	runQueryFrontendTest(t, queryFrontendTestConfig{
+		remoteReadEnabled: true,
+		setup: func(t *testing.T, s *e2e.Scenario) (configFile string, flags map[string]string) {
+			require.NoError(t, writeFileToSharedDir(s, cortexConfigFile, []byte(BlocksStorageConfig)))
+
+			minio := e2edb.NewMinio(9000, BlocksStorageFlags()["-blocks-storage.s3.bucket-name"])
+			require.NoError(t, s.StartAndWaitReady(minio))
+			return cortexConfigFile, flags
+		},
+	})
+}
+
 func runQueryFrontendTest(t *testing.T, cfg queryFrontendTestConfig) {
 	const numUsers = 10
 	const numQueriesPerUser = 10
@@ -307,6 +321,18 @@ func runQueryFrontendTest(t *testing.T, cfg queryFrontendTestConfig) {
 			require.Regexp(t, "querier_wall_time;dur=[0-9.]*, response_time;dur=[0-9.]*$", res.Header.Values("Server-Timing")[0])
 		}

+		// No need to repeat the test on remote read for each user.
+		if userID == 0 && cfg.remoteReadEnabled {
+			start := now.Add(-1 * time.Hour)
+			end := now.Add(1 * time.Hour)
+			res, err := c.RemoteRead([]*labels.Matcher{labels.MustNewMatcher(labels.MatchEqual, labels.MetricName, "series_1")}, start, end, time.Second)
+			require.NoError(t, err)
+			require.True(t, len(res.Results) > 0)
+			require.True(t, len(res.Results[0].Timeseries) > 0)
+			require.True(t, len(res.Results[0].Timeseries[0].Samples) > 0)
+			require.True(t, len(res.Results[0].Timeseries[0].Labels) > 0)
+		}
+
 		// In this test we do ensure that the /series start/end time is ignored and Cortex
 		// always returns series in ingesters memory. No need to repeat it for each user.
 		if userID == 0 {
@@ -342,6 +368,10 @@ func runQueryFrontendTest(t *testing.T, cfg queryFrontendTestConfig) {
 		extra++
 	}

+	if cfg.remoteReadEnabled {
+		extra++
+	}
+
 	require.NoError(t, queryFrontend.WaitSumMetrics(e2e.Equals(numUsers*numQueriesPerUser+extra), "cortex_query_frontend_queries_total"))

 	// The number of received request is greater than the query requests because include
diff --git a/pkg/frontend/transport/handler.go b/pkg/frontend/transport/handler.go
index a5174477b00..fdf8ae27c03 100644
--- a/pkg/frontend/transport/handler.go
+++ b/pkg/frontend/transport/handler.go
@@ -139,11 +139,14 @@ func (f *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 	r.Body = io.NopCloser(io.TeeReader(r.Body, &buf))
 	// We parse form here so that we can use buf as body, in order to
 	// prevent https://github.com/cortexproject/cortex/issues/5201.
-	if err := r.ParseForm(); err != nil {
-		writeError(w, err)
-		return
+	// Exclude remote read here as we don't have to buffer its body.
+	if !strings.Contains(r.URL.Path, "api/v1/read") {
+		if err := r.ParseForm(); err != nil {
+			writeError(w, err)
+			return
+		}
+		r.Body = io.NopCloser(&buf)
 	}
-	r.Body = io.NopCloser(&buf)

 	startTime := time.Now()
 	resp, err := f.roundTripper.RoundTrip(r)

From 53710891296fba54cd466e1500bf9ed6e1f99409 Mon Sep 17 00:00:00 2001
From: Ben Ye
Date: Wed, 12 Apr 2023 01:47:08 -0700
Subject: [PATCH 4/6] Fix splitByInterval incorrect error response format
 (#5260) (#5261)

* fix query frontend incorrect error response format

* update changelog

* fix integration test

---------

Signed-off-by: Ben Ye
---
 CHANGELOG.md                                            |  1 +
 integration/query_frontend_test.go                      | 16 +++++++++++++++-
 pkg/frontend/transport/handler.go                       |  6 +++---
 .../tripperware/queryrange/split_by_interval.go         |  7 ++++++-
 4 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 299fc57b227..c37a5c17fc3 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -61,6 +61,7 @@
 * [BUGFIX] Query Frontend: Disable `absent`, `absent_over_time` and `scalar` for vertical sharding. #5221
 * [BUGFIX] Catch context error in the s3 bucket client. #5240
 * [BUGFIX] Fix query frontend remote read empty body. #5257
+* [BUGFIX] Fix query frontend incorrect error response format at `SplitByQuery` middleware. #5260

 ## 1.14.0 2022-12-02
diff --git a/integration/query_frontend_test.go b/integration/query_frontend_test.go
index 6df5d101015..5177eb1973b 100644
--- a/integration/query_frontend_test.go
+++ b/integration/query_frontend_test.go
@@ -14,6 +14,7 @@ import (
 	"testing"
 	"time"

+	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
 	"github.com/prometheus/common/model"
 	"github.com/prometheus/prometheus/model/labels"
 	"github.com/prometheus/prometheus/prompb"
@@ -345,6 +346,19 @@ func runQueryFrontendTest(t *testing.T, cfg queryFrontendTestConfig) {
 			assert.Equal(t, model.LabelSet{labels.MetricName: "series_1"}, result[0])
 		}

+		// No need to repeat the query 400 test for each user.
+		if userID == 0 {
+			start := time.Unix(1595846748, 806*1e6)
+			end := time.Unix(1595846750, 806*1e6)
+
+			_, err := c.QueryRange("up)", start, end, time.Second)
+			require.Error(t, err)
+
+			apiErr, ok := err.(*v1.Error)
+			require.True(t, ok)
+			require.Equal(t, apiErr.Type, v1.ErrBadData)
+		}
+
 		for q := 0; q < numQueriesPerUser; q++ {
 			go func() {
 				defer wg.Done()
@@ -359,7 +373,7 @@ func runQueryFrontendTest(t *testing.T, cfg queryFrontendTestConfig) {
 	wg.Wait()

-	extra := float64(2)
+	extra := float64(3)
 	if cfg.testMissingMetricName {
 		extra++
 	}
diff --git a/pkg/frontend/transport/handler.go b/pkg/frontend/transport/handler.go
index fdf8ae27c03..5259dd8574a 100644
--- a/pkg/frontend/transport/handler.go
+++ b/pkg/frontend/transport/handler.go
@@ -203,12 +203,12 @@ func (f *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 }

 func formatGrafanaStatsFields(r *http.Request) []interface{} {
-	fields := make([]interface{}, 0, 2)
+	fields := make([]interface{}, 0, 4)
 	if dashboardUID := r.Header.Get("X-Dashboard-Uid"); dashboardUID != "" {
-		fields = append(fields, dashboardUID)
+		fields = append(fields, "X-Dashboard-Uid", dashboardUID)
 	}
 	if panelID := r.Header.Get("X-Panel-Id"); panelID != "" {
-		fields = append(fields, panelID)
+		fields = append(fields, "X-Panel-Id", panelID)
 	}
 	return fields
 }
diff --git a/pkg/querier/tripperware/queryrange/split_by_interval.go b/pkg/querier/tripperware/queryrange/split_by_interval.go
index ab7fa2cfc47..2717fa415e6 100644
--- a/pkg/querier/tripperware/queryrange/split_by_interval.go
+++ b/pkg/querier/tripperware/queryrange/split_by_interval.go
@@ -47,7 +47,12 @@ func (s splitByInterval) Do(ctx context.Context, r tripperware.Request) (tripper
 	// to line up the boundaries with step.
 	reqs, err := splitQuery(r, s.interval(r))
 	if err != nil {
-		return nil, err
+		// If the query itself is bad, we don't return error but send the query
+		// to querier to return the expected error message. This is not very efficient
+		// but should be okay for now.
+		// TODO(yeya24): query frontend can reuse the Prometheus API handler and return
+		// expected error message locally without passing it to the querier through network.
+		return s.next.Do(ctx, r)
 	}

 	s.splitByCounter.Add(float64(len(reqs)))

From 92fcee26c83499c449bf598692e9fa7ebf6d9730 Mon Sep 17 00:00:00 2001
From: Ben Ye
Date: Tue, 18 Apr 2023 23:19:04 -0700
Subject: [PATCH 5/6] release 1.15.0 (#5274)

Signed-off-by: Ben Ye
---
 CHANGELOG.md | 2 +-
 VERSION      | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index c37a5c17fc3..6fc8097a8ae 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,7 +2,7 @@

 ## master / unreleased

-## 1.15.0 in progress
+## 1.15.0 2023-04-19

 * [CHANGE] Storage: Make Max exemplars config per tenant instead of global configuration. #5080 #5122
 * [CHANGE] Alertmanager: Local file disclosure vulnerability in OpsGenie configuration has been fixed. #5045
diff --git a/VERSION b/VERSION
index 98db740001c..141f2e805be 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-1.15.0-rc.2
+1.15.0

From d1a118413211feb35ce439cdf82af4fc6719cada Mon Sep 17 00:00:00 2001
From: Ben Ye
Date: Wed, 19 Apr 2023 20:07:28 -0700
Subject: [PATCH 6/6] merge 1.15 into master and resolve changelog conflicts

Signed-off-by: Ben Ye
---
 CHANGELOG.md | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 28527f1a338..44abcbd5ed4 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,11 +1,6 @@
 # Changelog

 ## master / unreleased
-* [CHANGE] Store gateways summary metrics have been converted to histograms `cortex_bucket_store_series_blocks_queried`, `cortex_bucket_store_series_data_fetched`, `cortex_bucket_store_series_data_size_touched_bytes`, `cortex_bucket_store_series_data_size_fetched_bytes`, `cortex_bucket_store_series_data_touched`, `cortex_bucket_store_series_result_series` #5239
-* [ENHANCEMENT] Querier: Batch Iterator optimization to prevent transversing it multiple times query ranges steps does not overlap. #5237
-* [BUGFIX] Catch context error in the s3 bucket client. #5240
-* [BUGFIX] Fix query frontend remote read empty body. #5257
-* [BUGFIX] Fix query frontend incorrect error response format at `SplitByQuery` middleware. #5260
 * [BUGFIX] Ruler: Validate if rule group can be safely converted back to rule group yaml from protobuf message #5265

 ## 1.15.0 2023-04-19
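The summary-to-histogram conversion in patch 2 derives all of its bucket bounds from `prometheus.ExponentialBuckets(start, factor, count)`. As a sanity check on the `le=` values in the test fixture, here is a small stdlib-only sketch; the helper name `exponentialBuckets` is ours, not client_golang's, but it follows the same start-times-factor rule:

```go
package main

import "fmt"

// exponentialBuckets mimics prometheus.ExponentialBuckets(start, factor, count):
// it returns `count` upper bounds, the first equal to start and each subsequent
// one multiplied by factor.
func exponentialBuckets(start, factor float64, count int) []float64 {
	buckets := make([]float64, count)
	for i := range buckets {
		buckets[i] = start
		start *= factor
	}
	return buckets
}

func main() {
	// The byte-size histograms use ExponentialBuckets(1024, 2, 15), which yields
	// the le="1024" through le="1.6777216e+07" bounds seen in the fixture above.
	b := exponentialBuckets(1024, 2, 15)
	fmt.Println(b[0], b[len(b)-1]) // 1024 1.6777216e+07
}
```

The item-count histograms use the same rule with a start of 200, giving the `le="200"` through `le="3.2768e+06"` bounds.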
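The handler change in patch 3 hinges on a body-buffering pattern: `ParseForm` drains the request body, so the body is teed into a buffer first and restored afterwards, leaving it readable for the round tripper. A minimal stdlib sketch of that pattern under assumed names (`bufferForm` is illustrative, not the Cortex code):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// bufferForm tees the request body into a buffer while ParseForm consumes it,
// then restores the body so a downstream round tripper can read it again.
func bufferForm(r *http.Request) (string, error) {
	var buf bytes.Buffer
	r.Body = io.NopCloser(io.TeeReader(r.Body, &buf))
	if err := r.ParseForm(); err != nil {
		return "", err
	}
	r.Body = io.NopCloser(&buf) // body is readable again from the buffered copy
	return r.Form.Get("query"), nil
}

func main() {
	body := "query=up&time=123"
	req, _ := http.NewRequest("POST", "http://example/api/v1/query", strings.NewReader(body))
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	q, err := bufferForm(req)
	replayed, _ := io.ReadAll(req.Body)
	fmt.Println(q, err == nil, string(replayed) == body)
}
```

The patch skips this step for `api/v1/read` because remote read bodies are snappy-compressed protobuf, not form data, and buffering them is unnecessary.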
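The `formatGrafanaStatsFields` fix in patch 4 matters because structured loggers expect alternating key/value pairs; appending a bare value shifts every field after it. A standalone sketch of the corrected pairing, using a hypothetical `grafanaStatsFields` helper outside the Cortex codebase:

```go
package main

import (
	"fmt"
	"net/http"
)

// grafanaStatsFields returns logger fields as key/value pairs, mirroring the
// patch-4 fix: each header value is preceded by its key so downstream
// key/value loggers stay aligned.
func grafanaStatsFields(r *http.Request) []interface{} {
	fields := make([]interface{}, 0, 4) // up to 2 key/value pairs
	if dashboardUID := r.Header.Get("X-Dashboard-Uid"); dashboardUID != "" {
		fields = append(fields, "X-Dashboard-Uid", dashboardUID)
	}
	if panelID := r.Header.Get("X-Panel-Id"); panelID != "" {
		fields = append(fields, "X-Panel-Id", panelID)
	}
	return fields
}

func main() {
	r, _ := http.NewRequest("GET", "http://example/", nil)
	r.Header.Set("X-Dashboard-Uid", "abc123")
	fmt.Println(grafanaStatsFields(r)) // [X-Dashboard-Uid abc123]
}
```

With the original code, a request carrying only `X-Panel-Id` would log that value under the `X-Dashboard-Uid` slot; pairing each value with its key removes that ambiguity.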