Add ability to limit the number of segments killed in kill task #14662
Conversation
Adds the ability to limit the number of segments that are killed in a kill task.
Also allows the user to control the maximum ratio of the total task slots available in the cluster that kill tasks may use. If unset, it defaults to null, which behaves as though all available task slots in the cluster can be used, which was the existing behavior.
@@ -128,6 +146,9 @@ public TaskStatus runTask(TaskToolbox toolbox) throws Exception
    }

    // Kill segments
    unusedSegments = maxSegmentsToKill == null
        ? unusedSegments
Instead of pruning out the unused segments up here, have we considered pushing down the limit to the metadata query so we ask only what we want? I think that should considerably reduce the load on the metadata store, reduce the lock duration, etc, for unused segments that aren't killed in a run anyway. What do you think?
I thought it was best to mark all segments in the specified interval as unused. That seemed less expensive than actually deleting the segments from deep storage and the other things involved in killing a segment. I did consider this, though. But with the limit also applied to marking unused, a user might need multiple requests to mark all segments in the interval as unused, whereas if the limit is only applied to kill, then all segments in the interval are marked unused, and if we hit the limit on kill, auto kill will clean up eventually, if enabled. Let me know what you think.
I think the comment was referring to something different: adjusting `RetrieveUnusedSegmentsAction(getDataSource(), getInterval())` so that it can accept a limit and return a smaller set of unused segments.
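A rough sketch of what pushing the limit down might look like at the call site inside the kill task, assuming the action gains a nullable limit parameter (the later diff in this thread adds it as maxSegments, subsequently renamed to limit):

```java
// Sketch only (fragment from within KillUnusedSegmentsTask.runTask):
// ask the metadata store for at most `limit` unused segments,
// rather than retrieving everything in the interval and pruning afterwards.
final List<DataSegment> unusedSegments = toolbox
    .getTaskActionClient()
    .submit(new RetrieveUnusedSegmentsAction(getDataSource(), getInterval(), limit));
```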
yes! sorry, I misunderstood, fixing now
fixed
IMO, kill tasks shouldn't mark stuff unused at all. It's best if there is a time delay between marking things unused and permanently deleting them. It's possible for people to make mistakes with what they delete, and we want the mistakes to be undoable for some period of time.
I think the best thing to do in general is actually to deprecate the `markAsUnused` parameter and focus on workflows like:
- Users delete stuff by marking it unused
- Later, the system deletes it permanently by way of autokill (which runs these `kill` tasks)
Btw, an important part of making this work is including a buffer period between when a segment is marked unused, and when autokill would remove it. This was discussed in #12526 although I am not sure what the current status is.
For this patch, IMO the best approach is to have the Coordinator send a list of segments to the `kill` task, rather than `interval` + `maxSegmentsToKill`. This will make it easier to implement the buffer period we need for autokill later: the Coordinator would be in charge of determining which specific segments are killable, and it would need the `kill` task to respect that specific list. This also side-steps the issue with `markAsUnused`, since the `maxSegmentsToKill` parameter would no longer exist.
Thanks for your input @gianm. This makes sense to me. I guess kill tasks will have another config property added, like `segmentIds`, and if that is set, the kill task will ignore the other config properties like `interval` and `dataSource`?
I guess one downside of this is that it makes the kill task harder to use by end users, as they will now need to know the IDs of the segments that they want to kill, if they want to do so manually. But I guess we will still have the delete-by-interval fallback that exists now, if the user still wishes to delete segments that way?
Added a new task type to kill unused segments by id.
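Purely to illustrate that idea, a spec for such a kill-by-ids task might look roughly like this (hypothetical shape and field names; this variant was not the approach that was ultimately kept):

```json
{
  "type": "kill",
  "dataSource": "wikipedia",
  "segmentIds": [
    "wikipedia_2023-08-01T00:00:00.000Z_2023-08-02T00:00:00.000Z_2023-08-10T00:00:00.000Z"
  ]
}
```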
I thought it was better to keep them separate. I didn't want to allow starvation of either auto kill or auto compaction tasks because of the other.
I believe that the period must be at least greater than the indexer period, which I believe defaults to 30 minutes. There is a validation check for this at the moment. I believe the coordinator kill duty is spawned from the indexer loop, which runs at the 30 minutes I noted. Now that I think more about this, I'm not even sure why we have a separate period config for kill. The coordinator is running the KillUnusedSegments duty at the coordinator indexer period, and we are just using the configured kill period to check that it hasn't run more often than it should. We could also use druid.coordinator.dutyGroups to allow the kill duty to run more frequently, if desired.
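For reference, the coordinator properties being discussed look roughly like this (values shown are illustrative; see the Druid docs for the exact defaults):

```properties
# Cycle on which the coordinator runs its indexing duties (the kill duty is part of this loop).
druid.coordinator.period.indexingPeriod=PT1800S
# Auto kill of unused segments; the kill period is validated to be at least the indexing period.
druid.coordinator.kill.on=true
druid.coordinator.kill.period=P1D
```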
The ratio config is used to calculate the task slot usage allotted to kill tasks out of the total task slots in the cluster (including autoscaling, if configured). So if the ratio is set to 0.10, auto kill can at any moment use at most 10% of the total task slots.
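As a rough illustration of how such a ratio could translate into a concrete slot budget (a hypothetical helper, not code from this PR):

```java
// Hypothetical helper: convert the kill-task slot ratio into an absolute
// number of task slots, given the cluster's total worker capacity.
static int availableKillTaskSlots(double killTaskSlotRatio, int totalWorkerCapacity)
{
  // e.g. a ratio of 0.10 with 50 total slots allows at most 5 concurrent kill tasks
  return Math.max(1, (int) (killTaskSlotRatio * totalWorkerCapacity));
}
```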
I actually had this exact implementation initially, but based on comments made in this review I changed the approach. Let me know what you think.
@zachjsh I discussed with @jasonk000 offline and we came up with an alternative approach. The new approach should be much simpler, doesn't need a new task type, and has a two-fold benefit. The approach we discussed is to modify the existing KillUnusedSegmentsTask to take in a new argument, `limit`.
The two benefits here are:
Please see (and consider) my comment on the alternative approach.
@maytasm, I had something similar to this up to commit d853a03, but was requested to change to the segmentId-list approach. I too like using the existing task with a limit. Will discuss further, and hopefully we can get on the same page to go back to this approach. Thanks!
In this comment I originally said that I thought it'd be good to have a version of the kill task that takes segments instead of intervals: #14662 (comment). The subsequent discussion illuminated some problems with that, and IMO it is fine to go back to the limit-based approach.
  @JsonCreator
  public RetrieveUnusedSegmentsAction(
      @JsonProperty("dataSource") String dataSource,
-     @JsonProperty("interval") Interval interval
+     @JsonProperty("interval") Interval interval,
+     @JsonProperty("maxSegments") @Nullable Integer maxSegments
Change to `limit`, to make it consistent.
Overall, LGTM; some small logic changes I think are warranted.
A few minor comments. Everything else looks good!
@@ -103,6 +110,19 @@ public KillUnusedSegmentsTask(
    this.markAsUnused = markAsUnused != null && markAsUnused;
    this.batchSize = (batchSize != null) ? batchSize : DEFAULT_SEGMENT_NUKE_BATCH_SIZE;
    Preconditions.checkArgument(this.batchSize > 0, "batchSize should be greater than zero");
    if (null != limit && limit <= 0) {
nit: can use Preconditions.checkArgument
        limit
    );
  }
  if (limit != null && markAsUnused != null && markAsUnused) {
nit: can use Preconditions.checkArgument
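A sketch of what both checks could look like with Preconditions, matching the style already used for the batchSize check above:

```java
// Suggested form of the two validations above (sketch):
Preconditions.checkArgument(
    limit == null || limit > 0,
    "limit [%s] must be a positive integer.",
    limit
);
Preconditions.checkArgument(
    limit == null || markAsUnused == null || !markAsUnused,
    "limit cannot be provided when markAsUnused is enabled."
);
```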
@@ -90,8 +96,9 @@ public KillUnusedSegmentsTask(
      @JsonProperty("dataSource") String dataSource,
      @JsonProperty("interval") Interval interval,
      @JsonProperty("context") Map<String, Object> context,
-     @JsonProperty("markAsUnused") Boolean markAsUnused,
-     @JsonProperty("batchSize") Integer batchSize
+     @JsonProperty("markAsUnused") @Deprecated Boolean markAsUnused,
why is this flag deprecated? Is the reason captured anywhere?
Description
Previously, the `maxSegments` configured for auto kill could be ignored if an interval of data for a given datasource had more than this number of unused segments, causing the kill task spawned to delete the unused segments in that interval to delete more than the configured `maxSegments`. Now each kill task spawned by the auto kill coordinator duty will kill at most `limit` segments. This is done by adding a new config property, `limit`, to the `KillUnusedSegmentsTask`, which allows users to specify this limit.
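For illustration, a kill task spec using the new property might look like the following (field values here are only examples):

```json
{
  "type": "kill",
  "dataSource": "wikipedia",
  "interval": "2023-01-01/2023-02-01",
  "batchSize": 100,
  "limit": 500
}
```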