Remove support for chunks ingestion #4679

Closed
wants to merge 1 commit into from
132 changes: 9 additions & 123 deletions docs/configuration/config-file-reference.md
@@ -140,9 +140,6 @@ api:
# blocks storage.
[store_gateway: <store_gateway_config>]

# The purger_config configures the purger which takes care of delete requests.
[purger: <purger_config>]

tenant_federation:
# If enabled on all Cortex services, queries can be federated across multiple
# tenants. The tenant IDs involved need to be specified separated by a `|`
@@ -604,8 +601,8 @@ instance_limits:
The `ingester_config` configures the Cortex ingester.

```yaml
# Configures the Write-Ahead Log (WAL) for the Cortex chunks storage. This
# config is ignored when running the Cortex blocks storage.
# Configures the Write-Ahead Log (WAL) for the removed Cortex chunks storage.
# This config is now always ignored.
walconfig:
# Enable writing of ingested data into WAL.
# CLI flag: -ingester.wal-enabled
@@ -2832,9 +2829,9 @@ chunk_tables_provisioning:
The `storage_config` configures where Cortex stores the data (chunks storage engine).

```yaml
# The storage engine to use: chunks (deprecated) or blocks.
# The storage engine to use: blocks is the only supported option today.
# CLI flag: -store.engine
[engine: <string> | default = "chunks"]
[engine: <string> | default = "blocks"]

aws:
dynamodb:
@@ -3375,93 +3372,6 @@ index_queries_cache_config:
# The CLI flags prefix for this block config is: store.index-cache-read
[fifocache: <fifo_cache_config>]

delete_store:
# Store for keeping delete request
# CLI flag: -deletes.store
[store: <string> | default = ""]

# Name of the table which stores delete requests
# CLI flag: -deletes.requests-table-name
[requests_table_name: <string> | default = "delete_requests"]

table_provisioning:
# Enables on demand throughput provisioning for the storage provider (if
# supported). Applies only to tables which are not autoscaled. Supported by
# DynamoDB
# CLI flag: -deletes.table.enable-ondemand-throughput-mode
[enable_ondemand_throughput_mode: <boolean> | default = false]

# Table default write throughput. Supported by DynamoDB
# CLI flag: -deletes.table.write-throughput
[provisioned_write_throughput: <int> | default = 1]

# Table default read throughput. Supported by DynamoDB
# CLI flag: -deletes.table.read-throughput
[provisioned_read_throughput: <int> | default = 300]

write_scale:
# Should we enable autoscale for the table.
# CLI flag: -deletes.table.write-throughput.scale.enabled
[enabled: <boolean> | default = false]

# AWS AutoScaling role ARN
# CLI flag: -deletes.table.write-throughput.scale.role-arn
[role_arn: <string> | default = ""]

# DynamoDB minimum provision capacity.
# CLI flag: -deletes.table.write-throughput.scale.min-capacity
[min_capacity: <int> | default = 3000]

# DynamoDB maximum provision capacity.
# CLI flag: -deletes.table.write-throughput.scale.max-capacity
[max_capacity: <int> | default = 6000]

# DynamoDB minimum seconds between each autoscale up.
# CLI flag: -deletes.table.write-throughput.scale.out-cooldown
[out_cooldown: <int> | default = 1800]

# DynamoDB minimum seconds between each autoscale down.
# CLI flag: -deletes.table.write-throughput.scale.in-cooldown
[in_cooldown: <int> | default = 1800]

# DynamoDB target ratio of consumed capacity to provisioned capacity.
# CLI flag: -deletes.table.write-throughput.scale.target-value
[target: <float> | default = 80]

read_scale:
# Should we enable autoscale for the table.
# CLI flag: -deletes.table.read-throughput.scale.enabled
[enabled: <boolean> | default = false]

# AWS AutoScaling role ARN
# CLI flag: -deletes.table.read-throughput.scale.role-arn
[role_arn: <string> | default = ""]

# DynamoDB minimum provision capacity.
# CLI flag: -deletes.table.read-throughput.scale.min-capacity
[min_capacity: <int> | default = 3000]

# DynamoDB maximum provision capacity.
# CLI flag: -deletes.table.read-throughput.scale.max-capacity
[max_capacity: <int> | default = 6000]

# DynamoDB minimum seconds between each autoscale up.
# CLI flag: -deletes.table.read-throughput.scale.out-cooldown
[out_cooldown: <int> | default = 1800]

# DynamoDB minimum seconds between each autoscale down.
# CLI flag: -deletes.table.read-throughput.scale.in-cooldown
[in_cooldown: <int> | default = 1800]

# DynamoDB target ratio of consumed capacity to provisioned capacity.
# CLI flag: -deletes.table.read-throughput.scale.target-value
[target: <float> | default = 80]

# Tag (of the form key=value) to be added to the tables. Supported by
# DynamoDB
# CLI flag: -deletes.table.tags
[tags: <map of string to string> | default = ]

grpc_store:
# Hostname or IP of the gRPC store instance.
# CLI flag: -grpc-store.server-address
@@ -3473,16 +3383,17 @@ grpc_store:
The `flusher_config` configures the WAL flusher target, used to manually run one-time flushes when scaling down ingesters.

```yaml
# Directory to read WAL from (chunks storage engine only).
# Has no effect: directory to read WAL from (chunks storage engine only).
# CLI flag: -flusher.wal-dir
[wal_dir: <string> | default = "wal"]

# Number of concurrent goroutines flushing to storage (chunks storage engine
# only).
# Has no effect: number of concurrent goroutines flushing to storage (chunks
# storage engine only).
# CLI flag: -flusher.concurrent-flushes
[concurrent_flushes: <int> | default = 50]

# Timeout for individual flush operations (chunks storage engine only).
# Has no effect: timeout for individual flush operations (chunks storage engine
# only).
# CLI flag: -flusher.flush-op-timeout
[flush_op_timeout: <duration> | default = 2m]

@@ -5492,31 +5403,6 @@ sharding_ring:
[sharding_strategy: <string> | default = "default"]
```

### `purger_config`

The `purger_config` configures the purger which takes care of delete requests.

```yaml
# Enable purger to allow deletion of series. Be aware that Delete series feature
# is still experimental
# CLI flag: -purger.enable
[enable: <boolean> | default = false]

# Number of workers executing delete plans in parallel
# CLI flag: -purger.num-workers
[num_workers: <int> | default = 2]

# Name of the object store to use for storing delete plans
# CLI flag: -purger.object-store-type
[object_store_type: <string> | default = ""]

# Allow cancellation of delete request until duration after they are created.
# Data would be deleted only after delete requests have been older than this
# duration. Ideally this should be set to at least 24h.
# CLI flag: -purger.delete-request-cancel-period
[delete_request_cancel_period: <duration> | default = 24h]
```

### `s3_sse_config`

The `s3_sse_config` configures the S3 server-side encryption. The supported CLI flags `<prefix>` used to reference this config block are:
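The full block body is collapsed in this diff. As a rough sketch only — field names and flags taken from the upstream Cortex reference, not from this change — an `s3_sse_config` block typically carries:

```yaml
# Sketch of typical s3_sse_config fields (upstream reference; not part of this diff).

# Enable AWS server-side encryption. Supported values: SSE-KMS, SSE-S3.
# CLI flag: -<prefix>.s3.sse.type
[type: <string> | default = ""]

# KMS key ID used to encrypt objects in S3.
# CLI flag: -<prefix>.s3.sse.kms-key-id
[kms_key_id: <string> | default = ""]

# KMS encryption context used for object encryption, as a JSON-formatted string.
# CLI flag: -<prefix>.s3.sse.kms-encryption-context
[kms_encryption_context: <string> | default = ""]
```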
95 changes: 95 additions & 0 deletions docs/configuration/single-process-config-blocks-local.yaml
@@ -0,0 +1,95 @@

# Configuration for running Cortex in single-process mode.
# This should not be used in production. It is only for getting started
# and development.

# Disable the requirement that every request to Cortex has a
# X-Scope-OrgID header. `fake` will be substituted in instead.
auth_enabled: false

server:
http_listen_port: 9009

# Configure the server to allow messages up to 100MB.
grpc_server_max_recv_msg_size: 104857600
grpc_server_max_send_msg_size: 104857600
grpc_server_max_concurrent_streams: 1000

distributor:
shard_by_all_labels: true
pool:
health_check_ingesters: true

ingester_client:
grpc_client_config:
# Configure the client to allow messages up to 100MB.
max_recv_msg_size: 104857600
max_send_msg_size: 104857600
grpc_compression: gzip

ingester:
lifecycler:
# The address to advertise for this ingester. Will be autodiscovered by
# looking up address on eth0 or en0; can be specified if this fails.
# address: 127.0.0.1

# We want to start immediately and flush on shutdown.
join_after: 0
min_ready_duration: 0s
final_sleep: 0s
num_tokens: 512

# Use an in memory ring store, so we don't need to launch a Consul.
ring:
kvstore:
store: inmemory
replication_factor: 1

storage:
engine: blocks

blocks_storage:
tsdb:
dir: /tmp/cortex/tsdb

bucket_store:
sync_dir: /tmp/cortex/tsdb-sync

# You can choose between local storage and Amazon S3, Google GCS and Azure storage. Each option requires additional configuration
# as shown below. All options can be configured via flags as well which might be handy for secret inputs.
backend: filesystem # s3, gcs, azure or filesystem are valid options
# s3:
# bucket_name: cortex
# endpoint: s3.dualstack.us-east-1.amazonaws.com
# Configure your S3 credentials below.
# secret_access_key: "TODO"
# access_key_id: "TODO"
# gcs:
# bucket_name: cortex
# service_account: # if empty or omitted Cortex will use your default service account as per Google's fallback logic
# azure:
# account_name:
# account_key:
# container_name:
# endpoint_suffix:
# max_retries: # Number of retries for recoverable errors (defaults to 20)
filesystem:
dir: ./data/tsdb

compactor:
data_dir: /tmp/cortex/compactor
sharding_ring:
kvstore:
store: inmemory

frontend_worker:
match_max_concurrent: true

ruler:
enable_api: true
enable_sharding: false

ruler_storage:
backend: local
local:
directory: /tmp/cortex/rules
3 changes: 3 additions & 0 deletions go.mod
@@ -95,3 +95,6 @@ replace github.com/thanos-io/thanos v0.22.0 => github.com/thanos-io/thanos v0.19

// Replace memberlist with Grafana's fork which includes some fixes that haven't been merged upstream yet
replace github.com/hashicorp/memberlist => github.com/grafana/memberlist v0.2.5-0.20211201083710-c7bc8e9df94b

// This commit is now only accessible via SHA if you're not using the Go modules proxy.
replace github.com/efficientgo/tools/core => github.com/efficientgo/tools/core v0.0.0-20210829154005-c7bad8450208
1 change: 0 additions & 1 deletion go.sum
@@ -548,7 +548,6 @@ github.com/edsrzf/mmap-go v1.1.0 h1:6EUwBLQ/Mcr1EYLE4Tn1VdW1A4ckqCQWZBw8Hr0kjpQ=
github.com/edsrzf/mmap-go v1.1.0/go.mod h1:19H/e8pUPLicwkyNgOykDXkJ9F0MHE+Z52B8EIth78Q=
github.com/efficientgo/e2e v0.11.2-0.20211027134903-67d538984a47 h1:k0qDUhOU0KJqKztQYJL1qMBR9nCOntuIRWYwA56Z634=
github.com/efficientgo/e2e v0.11.2-0.20211027134903-67d538984a47/go.mod h1:vDnF4AAEZmO0mvyFIATeDJPFaSRM7ywaOnKd61zaSoE=
github.com/efficientgo/tools/core v0.0.0-20210129205121-421d0828c9a6/go.mod h1:OmVcnJopJL8d3X3sSXTiypGoUSgFq1aDGmlrdi9dn/M=
github.com/efficientgo/tools/core v0.0.0-20210829154005-c7bad8450208 h1:jIALuFymwBqVsF32JhgzVsbCB6QsWvXqhetn8QgyrZ4=
github.com/efficientgo/tools/core v0.0.0-20210829154005-c7bad8450208/go.mod h1:OmVcnJopJL8d3X3sSXTiypGoUSgFq1aDGmlrdi9dn/M=
github.com/efficientgo/tools/extkingpin v0.0.0-20210609125236-d73259166f20 h1:kM/ALyvAnTrwSB+nlKqoKaDnZbInp1YImZvW+gtHwc8=
5 changes: 3 additions & 2 deletions integration/api_endpoints_test.go
@@ -1,3 +1,4 @@
//go:build requires_docker
// +build requires_docker

package integration
@@ -22,7 +23,7 @@ func TestIndexAPIEndpoint(t *testing.T) {
defer s.Close()

// Start Cortex in single binary mode, reading the config from file.
require.NoError(t, copyFileToSharedDir(s, "docs/chunks-storage/single-process-config.yaml", cortexConfigFile))
require.NoError(t, copyFileToSharedDir(s, "docs/configuration/single-process-config-blocks-local.yaml", cortexConfigFile))

cortex1 := e2ecortex.NewSingleBinaryWithConfigFile("cortex-1", cortexConfigFile, nil, "", 9009, 9095)
require.NoError(t, s.StartAndWaitReady(cortex1))
@@ -44,7 +45,7 @@ func TestConfigAPIEndpoint(t *testing.T) {
defer s.Close()

// Start Cortex in single binary mode, reading the config from file.
require.NoError(t, copyFileToSharedDir(s, "docs/chunks-storage/single-process-config.yaml", cortexConfigFile))
require.NoError(t, copyFileToSharedDir(s, "docs/configuration/single-process-config-blocks-local.yaml", cortexConfigFile))

cortex1 := e2ecortex.NewSingleBinaryWithConfigFile("cortex-1", cortexConfigFile, nil, "", 9009, 9095)
require.NoError(t, s.StartAndWaitReady(cortex1))
13 changes: 8 additions & 5 deletions integration/asserts.go
@@ -30,16 +30,19 @@ const (
var (
// Service-specific metrics prefixes which shouldn't be used by any other service.
serviceMetricsPrefixes = map[ServiceType][]string{
Distributor: {},
Ingester: {"!cortex_ingester_client", "cortex_ingester"}, // The metrics prefix cortex_ingester_client may be used by other components so we ignore it.
Querier: {"cortex_querier"},
Distributor: {},
// The metrics prefix cortex_ingester_client may be used by other components so we ignore it.
Ingester: {"!cortex_ingester_client", "cortex_ingester"},
// The metrics prefixes cortex_querier_storegateway and cortex_querier_blocks may be used by other components so we ignore them.
Querier: {"!cortex_querier_storegateway", "!cortex_querier_blocks", "cortex_querier"},
QueryFrontend: {"cortex_frontend", "cortex_query_frontend"},
QueryScheduler: {"cortex_query_scheduler"},
TableManager: {},
AlertManager: {"cortex_alertmanager"},
Ruler: {},
StoreGateway: {"!cortex_storegateway_client", "cortex_storegateway"}, // The metrics prefix cortex_storegateway_client may be used by other components so we ignore it.
Purger: {"cortex_purger"},
// The metrics prefix cortex_storegateway_client may be used by other components so we ignore it.
StoreGateway: {"!cortex_storegateway_client", "cortex_storegateway"},
Purger: {"cortex_purger"},
}

// Blacklisted metrics prefixes across any Cortex service.