* Add metrics for remaining planned compactions
Signed-off-by: Albert <ac1214@users.noreply.github.com>
* fix unit tests
Signed-off-by: Albert <ac1214@users.noreply.github.com>
* Add shuffle sharding for compactor
Signed-off-by: Albert <ac1214@users.noreply.github.com>
* update changelog
Signed-off-by: Albert <ac1214@users.noreply.github.com>
* fix linting
Signed-off-by: Albert <ac1214@users.noreply.github.com>
* Fix build errors
Signed-off-by: Alvin Lin <alvinlin@amazon.com>
* Fix up change log
Signed-off-by: Alvin Lin <alvinlin@amazon.com>
* Fix linting error
Signed-off-by: Alvin Lin <alvinlin@amazon.com>
* Remove use of nolint
Signed-off-by: Alvin Lin <alvinlin@amazon.com>
* Compactor.ownUser now determines whether the user is owned by a compactor via the ring, instead of returning true whenever shuffle sharding is enabled
Signed-off-by: Roy Chiang <roychi@amazon.com>
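To illustrate the ownership check described above, here is a minimal sketch of ring-based user ownership: each compactor hashes its instance ID onto a token ring, and a user is owned by the compactor whose token is the successor of the user's hash. This is a simplified illustration, not the actual Cortex ring API; all names (`ownsUser`, the instance IDs) are hypothetical.

```go
// Sketch of ring-based user ownership (simplified; not the Cortex ring API).
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

func hashKey(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

// ownsUser reports whether instance owns userID, given all instances in the ring.
func ownsUser(instance, userID string, instances []string) bool {
	type token struct {
		value    uint32
		instance string
	}
	tokens := make([]token, 0, len(instances))
	for _, inst := range instances {
		tokens = append(tokens, token{hashKey(inst), inst})
	}
	sort.Slice(tokens, func(i, j int) bool { return tokens[i].value < tokens[j].value })

	// The owner is the first token >= the user's hash, wrapping around the ring.
	uh := hashKey(userID)
	i := sort.Search(len(tokens), func(i int) bool { return tokens[i].value >= uh })
	if i == len(tokens) {
		i = 0
	}
	return tokens[i].instance == instance
}

func main() {
	instances := []string{"compactor-0", "compactor-1", "compactor-2"}
	for _, u := range []string{"tenant-a", "tenant-b", "tenant-c"} {
		owners := 0
		for _, inst := range instances {
			if ownsUser(inst, u, instances) {
				owners++
			}
		}
		fmt.Printf("%s owned by %d compactor(s)\n", u, owners)
	}
}
```

Because exactly one instance holds the successor token, each user resolves to exactly one owner, which is what makes the ring-based check safe for deciding which compactor handles a tenant.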
* fix bug where multiple compactors try to clean up the same tenant at once, which results in a dangling bucket index
Signed-off-by: Roy Chiang <roychi@amazon.com>
* set all remaining compactions in one go, instead of slowly incrementing the count as plans get generated
Signed-off-by: Roy Chiang <roychi@amazon.com>
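The change above swaps per-plan increments for a single gauge update after planning finishes, so readers never observe a partial count. A minimal sketch of that pattern, with a stand-in `gauge` type instead of the real Prometheus client:

```go
// Sketch (not the actual Cortex code): set the remaining-compactions gauge
// once from the complete plan list, instead of incrementing per plan.
package main

import "fmt"

// gauge stands in for a prometheus Gauge; only Set is sketched here.
type gauge struct{ value float64 }

func (g *gauge) Set(v float64) { g.value = v }

type plan struct{ blocks []string }

func main() {
	remaining := &gauge{}
	plans := []plan{
		{blocks: []string{"b1", "b2"}},
		{blocks: []string{"b3"}},
		{blocks: []string{"b4", "b5", "b6"}},
	}
	// Set once, after all plans are generated.
	remaining.Set(float64(len(plans)))
	fmt.Println("remaining planned compactions:", remaining.value)
}
```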
* rename ownUser function for better readability
Signed-off-by: Roy Chiang <roychi@amazon.com>
* address comments
Signed-off-by: Roy Chiang <roychi@amazon.com>
* fixed rebase issues
Signed-off-by: Roy Chiang <roychi@amazon.com>
* fix tests
Signed-off-by: Roy Chiang <roychi@amazon.com>
Co-authored-by: Albert <ac1214@users.noreply.github.com>
Co-authored-by: Alvin Lin <alvinlin@amazon.com>
Co-authored-by: Roy Chiang <roychi@amazon.com>
Co-authored-by: Alan Protasio <approtas@amazon.com>
CHANGELOG.md (1 addition, 1 deletion):

@@ -8,6 +8,7 @@
 * [FEATURE] Ruler: Add `external_labels` option to tag all alerts with a given set of labels.
 * [FEATURE] Compactor: Add `-compactor.skip-blocks-with-out-of-order-chunks-enabled` configuration to mark blocks containing index with out-of-order chunks for no compact instead of halting the compaction
 * [FEATURE] Querier/Query-Frontend: Add `-querier.per-step-stats-enabled` and `-frontend.cache-queryable-samples-stats` configurations to enable query sample statistics
+* [FEATURE] Add shuffle sharding for the compactor #4433

 ## 1.12.0 in progress

@@ -16,7 +17,6 @@
 * [CHANGE] Compactor block deletion mark migration, needed when upgrading from v1.7, is now disabled by default. #4597
 * [CHANGE] The `status_code` label on gRPC client metrics has changed from '200' and '500' to '2xx', '5xx', '4xx', 'cancel' or 'error'. #4601
 * [CHANGE] Memberlist: changed probe interval from `1s` to `5s` and probe timeout from `500ms` to `2s`. #4601
-* [FEATURE] Add shuffle sharding grouper and planner within compactor to allow further work towards parallelizing compaction #4624
 * [ENHANCEMENT] Update Go version to 1.17.8. #4602 #4604 #4658
 * [ENHANCEMENT] Keep track of discarded samples due to bad relabel configuration in `cortex_discarded_samples_total`. #4503
 * [ENHANCEMENT] Ruler: Add `-ruler.disable-rule-group-label` to disable the `rule_group` label on exported metrics. #4571
Shuffle sharding is **disabled by default** and needs to be explicitly enabled in the configuration.

@@ -154,6 +155,18 @@ Cortex ruler can run in three modes:
 Note that when using sharding strategy, each rule group is evaluated by a single ruler only; there is no replication.

+### Compactor shuffle sharding
+
+Cortex compactor can run in three modes:
+
+1. **No sharding at all.** This is the most basic mode of the compactor. It is activated by using `-compactor.sharding-enabled=false` (default). In this mode every compactor will run every compaction.
+2. **Default sharding**, activated by using `-compactor.sharding-enabled=true` and `-compactor.sharding-strategy=default` (default). In this mode compactors register themselves into the ring, and a single tenant belongs to exactly one compactor.
+3. **Shuffle sharding**, activated by using `-compactor.sharding-enabled=true` and `-compactor.sharding-strategy=shuffle-sharding`. Similar to default sharding, but compactions for each tenant can be carried out on multiple compactors (`-compactor.tenant-shard-size`, which can also be set per tenant as `compactor_tenant_shard_size` in overrides).
+
+With shuffle sharding selected as the sharding strategy, a subset of the compactors is used to handle a user based on the shard size.
+
+The idea behind using the shuffle sharding strategy for the compactor is to further enable horizontal scalability and build tolerance for compactions that may take longer than the compaction interval.
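The subset selection described above can be sketched as follows: each tenant is deterministically assigned a fixed-size shard of compactors derived from the tenant ID, mirroring the idea behind `-compactor.tenant-shard-size`. This is an illustrative sketch, not the actual Cortex implementation; the function and variable names are hypothetical.

```go
// Illustrative shuffle-shard subset selection (not the Cortex implementation).
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

func hash(s string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(s))
	return h.Sum64()
}

// shuffleShard returns shardSize compactors for the tenant, chosen by ranking
// all compactors by a per-tenant hash so each tenant gets a stable subset.
func shuffleShard(tenantID string, compactors []string, shardSize int) []string {
	if shardSize <= 0 || shardSize >= len(compactors) {
		// By convention, a shard size of 0 means "use all compactors".
		return append([]string(nil), compactors...)
	}
	ranked := append([]string(nil), compactors...)
	sort.Slice(ranked, func(i, j int) bool {
		return hash(tenantID+ranked[i]) < hash(tenantID+ranked[j])
	})
	return ranked[:shardSize]
}

func main() {
	compactors := []string{"compactor-0", "compactor-1", "compactor-2", "compactor-3", "compactor-4"}
	// The same tenant always maps to the same subset of compactors.
	fmt.Println("tenant-a shard:", shuffleShard("tenant-a", compactors, 2))
}
```

In practice this is what the flags shown above control, e.g. `-compactor.sharding-enabled=true -compactor.sharding-strategy=shuffle-sharding -compactor.tenant-shard-size=2`: each tenant's compactions stay on a small, stable subset of instances while different tenants spread across the whole pool.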
## FAQ
### Does shuffle sharding add additional overhead to the KV store?
f.Var(&cfg.DisabledTenants, "compactor.disabled-tenants", "Comma separated list of tenants that cannot be compacted by this compactor. If specified, and compactor would normally pick given tenant for compaction (via -compactor.enabled-tenants or sharding), it will be ignored instead.")