14 changes: 14 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,20 @@

## master / unreleased

+* [CHANGE] Updated the Ruler storage config to use Thanos `objstore.Bucket` clients instead of `chunk.ObjectClient` based clients. #2489
+  * Removed `-ruler.storage.azure.download-buffer-size`
+  * Removed `-ruler.storage.azure.upload-buffer-size`
+  * Added `-ruler.storage.azure.endpoint-suffix`
+  * Added `-ruler.storage.azure.max-retries`
+  * Changed `-ruler.storage.gcs.bucketname` > `-ruler.storage.gcs.bucket-name`
+  * Removed `-ruler.storage.gcs.chunk-buffer-size`
+  * Removed `-ruler.storage.gcs.request-timeout`
+  * Added `-ruler.storage.gcs.service-account`
+  * Changed `-ruler.storage.s3.url` > `-ruler.storage.s3.endpoint`; credentials are no longer URL-encoded, and the in-memory mock is no longer an option
+  * Changed `-ruler.storage.s3.buckets` > `-ruler.storage.s3.bucket-name`; multiple buckets can no longer be specified
+  * Added `-ruler.storage.s3.secret-access-key`
+  * Added `-ruler.storage.s3.access-key-id`
+  * Added `-ruler.storage.s3.insecure`
Comment on lines +6 to +18
Contributor:

Is this compatible with 1.x guarantees? It says that the Ruler API is experimental, but the Ruler itself is not.

So will we keep the deprecated flags around for 2 minor releases?

Contributor (Author):

In order to use object storage with the ruler, you need to use the experimental API. Since these configs are directly tied to the experimental API, I think they are safe to change.

* [CHANGE] Added v1 API routes documented in #2327. #2372
  * Added `-http.alertmanager-http-prefix` flag which allows the configuration of the path where the Alertmanager API and UI can be reached. The default is set to `/alertmanager`.
  * Added `-http.prometheus-http-prefix` flag which allows the configuration of the path where the Prometheus API and UI can be reached. The default is set to `/prometheus`.
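
As an illustration of the migration, a minimal ruler storage block using the new S3 options might look like the following sketch (assembled from the flags listed above, and assuming the block lives under `ruler:` as in the full config file; the endpoint, bucket, and credential values are placeholders):

```yaml
ruler:
  storage:
    type: s3
    s3:
      # Endpoint without schema; `insecure` below selects http:// vs https://.
      endpoint: s3.us-east-1.amazonaws.com
      bucket_name: cortex-rules
      access_key_id: EXAMPLE_ACCESS_KEY
      secret_access_key: EXAMPLE_SECRET_KEY
      insecure: false
```

Note that the credentials now live in dedicated fields rather than being URL-encoded into `-ruler.storage.s3.url`, which is what the "no longer URL-encoded" entry above refers to.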
98 changes: 43 additions & 55 deletions docs/configuration/config-file-reference.md
@@ -771,7 +771,7 @@ The `ruler_config` configures the Cortex ruler.
[poll_interval: <duration> | default = 1m]

storage:
-  # Method to use for backend rule storage (configdb, azure, gcs, s3)
+  # Method to use for backend rule storage (configdb, azure, gcs, s3, swift)
  # CLI flag: -ruler.storage.type
  [type: <string> | default = "configdb"]

@@ -781,77 +781,60 @@ storage:
  [configdb: <configstore_config>]

  azure:
-    # Name of the blob container used to store chunks. Defaults to `cortex`.
-    # This container must be created before running cortex.
-    # CLI flag: -ruler.storage.azure.container-name
-    [container_name: <string> | default = "cortex"]
-
-    # The Microsoft Azure account name to be used
+    # Azure storage account name
    # CLI flag: -ruler.storage.azure.account-name
    [account_name: <string> | default = ""]

-    # The Microsoft Azure account key to use.
+    # Azure storage account key
    # CLI flag: -ruler.storage.azure.account-key
    [account_key: <string> | default = ""]

-    # Preallocated buffer size for downloads (default is 512KB)
-    # CLI flag: -ruler.storage.azure.download-buffer-size
-    [download_buffer_size: <int> | default = 512000]
-
-    # Preallocated buffer size for uploads (default is 256KB)
-    # CLI flag: -ruler.storage.azure.upload-buffer-size
-    [upload_buffer_size: <int> | default = 256000]
-
-    # Number of buffers used to upload a chunk (defaults to 1)
-    # CLI flag: -ruler.storage.azure.download-buffer-count
-    [upload_buffer_count: <int> | default = 1]
+    # Azure storage container name
+    # CLI flag: -ruler.storage.azure.container-name
+    [container_name: <string> | default = ""]

-    # Timeout for requests made against azure blob storage. Defaults to 30
-    # seconds.
-    # CLI flag: -ruler.storage.azure.request-timeout
-    [request_timeout: <duration> | default = 30s]
+    # Azure storage endpoint suffix without schema. The account name will be
+    # prefixed to this value to create the FQDN
+    # CLI flag: -ruler.storage.azure.endpoint-suffix
+    [endpoint_suffix: <string> | default = ""]

-    # Number of retries for a request which times out.
+    # Number of retries for recoverable errors
    # CLI flag: -ruler.storage.azure.max-retries
-    [max_retries: <int> | default = 5]
-
-    # Minimum time to wait before retrying a request.
-    # CLI flag: -ruler.storage.azure.min-retry-delay
-    [min_retry_delay: <duration> | default = 10ms]
-
-    # Maximum time to wait before retrying a request.
-    # CLI flag: -ruler.storage.azure.max-retry-delay
-    [max_retry_delay: <duration> | default = 500ms]
+    [max_retries: <int> | default = 20]

  gcs:
-    # Name of the GCS bucket to put chunks in.
-    # CLI flag: -ruler.storage.gcs.bucketname
+    # GCS bucket name
+    # CLI flag: -ruler.storage.gcs.bucket-name
    [bucket_name: <string> | default = ""]

-    # The size of the buffer the GCS client uses for each PUT request. 0 to
-    # disable buffering.
-    # CLI flag: -ruler.storage.gcs.chunk-buffer-size
-    [chunk_buffer_size: <int> | default = 0]
-
-    # The duration after which requests to GCS should time out.
-    # CLI flag: -ruler.storage.gcs.request-timeout
-    [request_timeout: <duration> | default = 0s]
+    # JSON representing either a Google Developers Console
+    # client_credentials.json file or a Google Developers service account key
+    # file. If empty, fallback to Google default logic.
+    # CLI flag: -ruler.storage.gcs.service-account
+    [service_account: <string> | default = ""]

  s3:
-    # S3 endpoint URL with escaped Key and Secret encoded. If only region is
-    # specified as a host, proper endpoint will be deduced. Use
-    # inmemory:///<bucket-name> to use a mock in-memory implementation.
-    # CLI flag: -ruler.storage.s3.url
-    [s3: <url> | default = ]
+    # S3 endpoint without schema
+    # CLI flag: -ruler.storage.s3.endpoint
+    [endpoint: <string> | default = ""]
+
+    # S3 bucket name
+    # CLI flag: -ruler.storage.s3.bucket-name
+    [bucket_name: <string> | default = ""]
+
+    # S3 secret access key
+    # CLI flag: -ruler.storage.s3.secret-access-key
+    [secret_access_key: <string> | default = ""]

-    # Comma-separated list of bucket names to evenly distribute chunks over.
-    # Overrides any buckets specified in the s3.url flag.
-    # CLI flag: -ruler.storage.s3.buckets
-    [bucketnames: <string> | default = ""]
+    # S3 access key ID
+    # CLI flag: -ruler.storage.s3.access-key-id
+    [access_key_id: <string> | default = ""]

-    # Set this to `true` to force the request to use path-style addressing.
-    # CLI flag: -ruler.storage.s3.force-path-style
-    [s3forcepathstyle: <boolean> | default = false]
+    # If enabled, use http:// for the S3 endpoint instead of https://. This
+    # could be useful in local dev/test environments while using an
+    # S3-compatible backend storage, like Minio.
+    # CLI flag: -ruler.storage.s3.insecure
+    [insecure: <boolean> | default = false]

  swift:
    # Openstack authentication URL.
@@ -913,6 +896,11 @@
    # CLI flag: -ruler.storage.swift.container-name
    [container_name: <string> | default = "cortex"]

+  filesystem:
+    # Local filesystem storage directory.
+    # CLI flag: -ruler.storage.filesystem.dir
+    [dir: <string> | default = ""]
+
# file path to store temporary rule files for the prometheus rule managers
# CLI flag: -ruler.rule-path
[rule_path: <string> | default = "/rules"]
6 changes: 3 additions & 3 deletions integration/api_ruler_test.go
@@ -20,7 +20,7 @@ func TestRulerAPI(t *testing.T) {

// Start dependencies.
dynamo := e2edb.NewDynamoDB()
- minio := e2edb.NewMinio(9000, RulerConfigs["-ruler.storage.s3.buckets"])
+ minio := e2edb.NewMinio(9000, bucketName)
require.NoError(t, s.StartAndWaitReady(minio, dynamo))

// Start Cortex components.
@@ -34,9 +34,9 @@

// Create example namespace and rule group to use for tests, using strings that
// require url escaping.
- namespace := "test_encoded_+namespace?"
+ namespace := "test_encoded_+namespace/?"
rg := rulefmt.RuleGroup{
- Name: "test_encoded_+\"+group_name?",
+ Name: "test_encoded_+\"+group_name/?",
Interval: 100,
Rules: []rulefmt.Rule{
{
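
The `/` added to these fixture strings matters because rule namespaces and group names are carried in URL path segments, so they must survive a path-escaping round trip. A standalone sketch of that behaviour using only Go's standard library (illustrative, not part of the PR):

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	namespace := "test_encoded_+namespace/?"

	// PathEscape encodes characters that are special inside a path
	// segment, including the newly added "/" (%2F) and the "?" (%3F).
	escaped := url.PathEscape(namespace)
	fmt.Println(escaped) // test_encoded_+namespace%2F%3F

	// PathUnescape restores the original name exactly.
	decoded, err := url.PathUnescape(escaped)
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded == namespace) // true
}
```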
63 changes: 0 additions & 63 deletions integration/chunks_storage_backends_test.go
@@ -4,7 +4,6 @@ package main

import (
"context"
- "fmt"
"sort"
"testing"
"time"
@@ -24,8 +23,6 @@ import (
"github.com/cortexproject/cortex/pkg/chunk/openstack"
"github.com/cortexproject/cortex/pkg/chunk/storage"
"github.com/cortexproject/cortex/pkg/ingester/client"
- "github.com/cortexproject/cortex/pkg/ruler"
- "github.com/cortexproject/cortex/pkg/ruler/rules"
"github.com/cortexproject/cortex/pkg/util/flagext"
"github.com/cortexproject/cortex/pkg/util/validation"
)
@@ -232,66 +229,6 @@ func TestSwiftChunkStorage(t *testing.T) {

}

- func TestSwiftRuleStorage(t *testing.T) {
- s, err := e2e.NewScenario(networkName)
- require.NoError(t, err)
- defer s.Close()
- swift := e2edb.NewSwiftStorage()
-
- require.NoError(t, s.StartAndWaitReady(swift))
-
- store, err := ruler.NewRuleStorage(ruler.RuleStoreConfig{
- Type: "swift",
- Swift: swiftConfig(swift),
- })
- require.NoError(t, err)
- ctx := context.Background()
-
- // Add 2 rule groups.
- r1 := newRule(userID, "1")
- err = store.SetRuleGroup(ctx, userID, "foo", r1)
- require.NoError(t, err)
-
- r2 := newRule(userID, "2")
- err = store.SetRuleGroup(ctx, userID, "bar", r2)
- require.NoError(t, err)
-
- // Get rules back.
- rls, err := store.ListAllRuleGroups(ctx)
- require.NoError(t, err)
- require.Equal(t, 2, len(rls[userID]))
-
- userRules := rls[userID]
- sort.Slice(userRules, func(i, j int) bool { return userRules[i].Name < userRules[j].Name })
- require.Equal(t, r1, userRules[0])
- require.Equal(t, r2, userRules[1])
-
- // Delete the first rule group.
- err = store.DeleteRuleGroup(ctx, userID, "foo", r1.Name)
- require.NoError(t, err)
-
- // Verify we only have the second rule group.
- rls, err = store.ListAllRuleGroups(ctx)
- require.NoError(t, err)
- require.Equal(t, 1, len(rls[userID]))
- require.Equal(t, r2, rls[userID][0])
- }
-
- func newRule(userID, name string) *rules.RuleGroupDesc {
- return &rules.RuleGroupDesc{
- Name: name + "rule",
- Interval: time.Minute,
- Namespace: name + "namespace",
- Rules: []*rules.RuleDesc{
- {
- Expr: fmt.Sprintf(`{%s="bar"}`, name),
- Record: name + ":bar",
- },
- },
- User: userID,
- }
- }

func swiftConfig(s *e2e.HTTPService) openstack.SwiftConfig {
return openstack.SwiftConfig{
SwiftConfig: thanos.SwiftConfig{
16 changes: 9 additions & 7 deletions integration/configs.go
@@ -52,13 +52,15 @@ var (
}

RulerConfigs = map[string]string{
- "-ruler.enable-sharding": "false",
- "-ruler.poll-interval": "2s",
- "-experimental.ruler.enable-api": "true",
- "-ruler.storage.type": "s3",
- "-ruler.storage.s3.buckets": "cortex-rules",
- "-ruler.storage.s3.force-path-style": "true",
- "-ruler.storage.s3.url": fmt.Sprintf("s3://%s:%s@%s-minio-9000.:9000", e2edb.MinioAccessKey, e2edb.MinioSecretKey, networkName),
+ "-ruler.enable-sharding": "false",
+ "-ruler.poll-interval": "2s",
+ "-experimental.ruler.enable-api": "true",
+ "-ruler.storage.type": "s3",
+ "-ruler.storage.s3.access-key-id": e2edb.MinioAccessKey,
+ "-ruler.storage.s3.secret-access-key": e2edb.MinioSecretKey,
+ "-ruler.storage.s3.bucket-name": bucketName,
+ "-ruler.storage.s3.endpoint": fmt.Sprintf("%s-minio-9000:9000", networkName),
+ "-ruler.storage.s3.insecure": "true",
}

BlocksStorageFlags = map[string]string{
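
The flag maps above are flattened into `-flag=value` command-line arguments by the e2e helpers (`e2e.MergeFlags`/`e2e.BuildArgs`, visible in the next file). A self-contained approximation of that flattening, using a hypothetical `buildArgs` helper rather than the repo's implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// buildArgs flattens a flag map into "-flag=value" arguments, roughly what
// the integration framework does with RulerConfigs before starting cortex.
func buildArgs(flags map[string]string) []string {
	args := make([]string, 0, len(flags))
	for name, value := range flags {
		args = append(args, fmt.Sprintf("%s=%s", name, value))
	}
	sort.Strings(args) // map iteration order is random; sort for a stable command line
	return args
}

func main() {
	rulerConfigs := map[string]string{
		"-ruler.storage.type":           "s3",
		"-ruler.storage.s3.bucket-name": "cortex-rules",
		"-ruler.storage.s3.insecure":    "true",
	}
	fmt.Println(buildArgs(rulerConfigs))
}
```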
2 changes: 1 addition & 1 deletion integration/e2ecortex/services.go
@@ -294,7 +294,7 @@ func NewRuler(name string, flags map[string]string, image string) *CortexService
image,
e2e.NewCommandWithoutEntrypoint("cortex", e2e.BuildArgs(e2e.MergeFlags(map[string]string{
"-target": "ruler",
- "-log.level": "warn",
+ "-log.level": "debug",
}, flags))...),
e2e.NewHTTPReadinessProbe(httpPort, "/ready", 200, 299),
httpPort,
5 changes: 1 addition & 4 deletions pkg/ruler/ruler.go
@@ -95,9 +95,6 @@

// Validate config and returns error on failure
func (cfg *Config) Validate() error {
- if err := cfg.StoreConfig.Validate(); err != nil {
- return errors.Wrap(err, "invalid storage config")
- }
return nil
}

@@ -167,7 +164,7 @@ func NewRuler(cfg Config, engine *promql.Engine, queryable promStorage.Queryable
return nil, err
}

- ruleStore, err := NewRuleStorage(cfg.StoreConfig)
+ ruleStore, err := NewRuleStorage(cfg.StoreConfig, logger)
if err != nil {
return nil, err
}
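
With this change, backend validation happens when the store is constructed instead of eagerly in `Config.Validate()`, and the constructor now receives a logger. A simplified, self-contained sketch of that construction-time-validation pattern (stand-in types and names, not the PR's actual code):

```go
package main

import (
	"fmt"
	"log"
)

// ruleStoreConfig is a trimmed stand-in for the ruler's storage config.
type ruleStoreConfig struct {
	Type string
}

// newRuleStorage rejects unknown backends at construction time, which is
// why the eager StoreConfig.Validate() call above could be dropped.
func newRuleStorage(cfg ruleStoreConfig, logger *log.Logger) (string, error) {
	switch cfg.Type {
	case "configdb", "azure", "gcs", "s3", "swift", "filesystem":
		logger.Printf("initializing %s rule store", cfg.Type)
		return cfg.Type + " rule store", nil
	default:
		return "", fmt.Errorf("unrecognized rule storage mode %q", cfg.Type)
	}
}

func main() {
	store, err := newRuleStorage(ruleStoreConfig{Type: "s3"}, log.Default())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created:", store)
}
```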