
Add ability to limit the number of segments killed in kill task #14662

Merged
merged 26 commits into from
Aug 4, 2023

Conversation

zachjsh
Contributor

@zachjsh zachjsh commented Jul 25, 2023

Description

Previously, the maxSegments configured for auto kill could be exceeded if an interval of data for a given datasource had more than this number of unused segments: the kill task spawned to delete the unused segments in that interval would delete more than the configured maxSegments. Now each kill task spawned by the auto kill coordinator duty kills at most limit segments. This is done by adding a new config property to the KillUnusedSegmentsTask which allows users to specify this limit.
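For illustration, a manually submitted kill task spec with the new property might look like the following (a sketch only; the other field names follow the existing kill task API, and exact payload details should be checked against the Druid data-deletion docs):

```json
{
  "type": "kill",
  "dataSource": "wikipedia",
  "interval": "2023-01-01/2023-02-01",
  "batchSize": 100,
  "limit": 500
}
```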

This PR has:

  • been self-reviewed.
  • added documentation for new or modified features or behaviors.
  • a release note entry in the PR description.
  • added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • added or updated version, license, or notice information in licenses.yaml
  • added comments explaining the "why" and the intent of the code wherever would not be obvious for an unfamiliar reader.
  • added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • added integration tests.
  • been tested in a test Druid cluster.

@zachjsh zachjsh requested a review from jon-wei July 26, 2023 20:32
@zachjsh zachjsh marked this pull request as ready for review July 26, 2023 21:03
@zachjsh zachjsh requested a review from kfaraz July 26, 2023 21:03
docs/data-management/delete.md (outdated; resolved)
@@ -128,6 +146,9 @@ public TaskStatus runTask(TaskToolbox toolbox) throws Exception
}

// Kill segments
unusedSegments = maxSegmentsToKill == null
? unusedSegments
Contributor

Instead of pruning out the unused segments up here, have we considered pushing down the limit to the metadata query so we ask only what we want? I think that should considerably reduce the load on the metadata store, reduce the lock duration, etc, for unused segments that aren't killed in a run anyway. What do you think?

Contributor Author

@zachjsh zachjsh Jul 27, 2023

I thought it was best to mark all segments in the specified interval as unused. That seemed less expensive than actually deleting the segments from deep storage and the other work involved in killing them. I did consider this, though. But with the limit also applied to marking unused, a user might need multiple requests to mark all segments in the interval as unused, whereas if the limit is only applied to kill, then all segments in the interval are marked unused, and if we hit the limit on kill, auto kill will clean up eventually, if enabled. Let me know what you think.

Contributor

I think the comment was referring to something different, about adjusting RetrieveUnusedSegmentsAction(getDataSource(), getInterval())) so that it can accept a limit and return a smaller set of unused segments

Contributor Author

yes! sorry, I misunderstood, fixing now

Contributor Author

fixed

Contributor

@gianm gianm Jul 28, 2023

IMO, kill tasks shouldn't mark stuff unused at all. It's best if there is a time delay between marking things unused and permanently deleting them. It's possible for people to make mistakes with what they delete, and we want the mistakes to be undoable for some period of time.

I think the best thing to do in general is actually deprecate the markUnused parameter, and focus on workflows that are like:

  • Users delete stuff by marking it unused
  • Later, the system deletes it permanently by way of autokill (which runs these kill tasks)

Btw, an important part of making this work is including a buffer period between when a segment is marked unused, and when autokill would remove it. This was discussed in #12526 although I am not sure what the current status is.

For this patch, IMO the best approach is to have the Coordinator send a list of segments to the kill task, rather than interval + maxSegmentsToKill. This will make it easier to implement the buffer period we need for autokill later: the Coordinator would be in charge of determining which specific segments are killable, and it would need the kill task to respect that specific list. This also side-steps the issue with markAsUnused, since the maxSegmentsToKill parameter would no longer exist.

Contributor Author

@zachjsh zachjsh Jul 28, 2023

Thanks for your input @gianm. This makes sense to me. I guess kill tasks will have another config property added, like segmentIds, and if that is set, the kill task will ignore the other config properties like interval and datasource?

I guess one downside of this is that it makes the kill task harder to use by end users, as they will now need to know the ids of the segments that they want to kill, if they want to do so manually. But I guess we will have the delete-by-interval fallback that exists now, if the user still wishes to delete segments that way?

Contributor Author

Added a new task type to kill unused segments by id.

@zachjsh
Contributor Author

zachjsh commented Aug 1, 2023

@suneet-s

This part of the patch seems substantial enough to be its own PR, and should definitely be called out in the release notes, but as written, it is very easy to miss. wrt killTaskSlotRatio, have you considered combining this with compactTaskSlotRatio so that an operator has one knob to tune how much capacity in the cluster should be given to "data management" operations? Compaction tasks also offer a maxCompactTaskSlot config; if we do decide to have a different set of knobs for kill tasks, is there a reason not to also introduce a maxKillTaskSlots config?

I thought it was better to keep them separate. I didn't want to allow starvation of either auto kill or auto compaction tasks because of the other.

Since we are now introducing a config to limit the number of concurrent kill tasks - is there any reason the kill duty should not run more frequently, say every minute by default?

I believe that the period must be at least greater than the indexer period, which I believe defaults to 30 minutes. There is a validation check for this at the moment. I believe the coordinator kill duty is spawned from the indexer loop, which runs at the 30-minute period I noted.

Now that I think more about this, I'm not even sure why we have a separate period config for kill. The coordinator is running the KillUnusedSegments duty at the coordinator indexer period, and we are just using the configured kill period to check that it hasn't run more often than it should. We could also use druid.coordinator.dutyGroups to allow the kill duty to run more frequently, if desired.

Are there any guardrails preventing an operator from setting the task slots used for data management operations higher than the available capacity of the cluster?

The ratio config is used to calculate the task slot allotment from the total task slots in the cluster (including autoscaling, if configured). So if the ratio is set to 0.10, at any moment up to 10% of the total task slots can be used for auto kill tasks.
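As a sketch of the ratio arithmetic described above (the method name is hypothetical; the real duty computes total capacity including autoscaling):

```java
public class KillTaskSlotSketch {
    // Hypothetical helper: the share of total task slots auto kill may use.
    static int availableKillTaskSlots(int totalTaskSlots, double killTaskSlotRatio) {
        return (int) (totalTaskSlots * killTaskSlotRatio);
    }

    public static void main(String[] args) {
        // With 50 total task slots and a ratio of 0.10, auto kill may use 5 slots.
        System.out.println(availableKillTaskSlots(50, 0.10)); // prints 5
    }
}
```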

WRT the kill_by_segment_id task - Why does this task need a different name from the kill task? Have you considered making the kill task accept an interval and a limit instead of a list of segment ids? This seems like an easier interface for operators to use and reason about.

I actually had this exact implementation initially, but based on comments made in this review I changed the approach. Let me know what you think.

@maytasm
Contributor

maytasm commented Aug 2, 2023

@zachjsh I discussed with @jasonk000 offline and we came up with an alternative approach. The new approach should be much simpler, doesn't need a new task type, and has a two-fold benefit. The approach we discussed is to modify the existing KillUnusedSegmentsTask to take in a new argument, limit. Auto kill would pass this limit to the KillUnusedSegmentsTask (ClientKillUnusedSegmentsTaskQuery would need to be modified to have this new argument as well). Inside KillUnusedSegmentsTask, we would do the following:

  1. First, we find the min(batchSize, limit) (note that both batchSize and limit are arguments of KillUnusedSegmentsTask); let's call this value real_limit
  2. We then RetrieveUnusedSegments for the datasource, interval, and real_limit (the number of segments retrieved here will be min(batchSize, limit))
  3. Check that the task lock covers the segments retrieved in 2
  4. SegmentNukeAction for the segments retrieved in 2
  5. DataSegmentKiller.kill for the segments retrieved in 2
  6. Reduce the limit variable by the number of segments we just killed
  7. Repeat steps 1 to 6 until limit is 0

The two benefits here are:

  1. KillUnusedSegmentsTask would only delete up to the limit number of segments even if the given interval has more segments. This improves the existing KillUnusedSegmentsTask without introducing a new task type; many users who issue KillUnusedSegmentsTask manually will benefit from this.
  2. (We should do this anyway) We should actually only fetch segments via RetrieveUnusedSegmentsAction in batches. Fetching too many segments in one RetrieveUnusedSegmentsAction can result in the task OOMing.
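The batched loop in steps 1 to 7 can be sketched as follows (a self-contained illustration only; the metadata retrieval, lock check, nuke action, and deep-storage kill are collapsed into plain list operations, and all names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class KillLoopSketch {
    // Kill at most `limit` unused segments, fetching at most
    // min(batchSize, remaining) segment ids per round trip (step 1).
    static List<String> killInBatches(List<String> unusedSegmentIds, int batchSize, int limit) {
        List<String> killed = new ArrayList<>();
        int remaining = limit;
        int cursor = 0;
        while (remaining > 0 && cursor < unusedSegmentIds.size()) {
            int realLimit = Math.min(batchSize, remaining);                  // step 1
            int end = Math.min(cursor + realLimit, unusedSegmentIds.size());
            List<String> batch = unusedSegmentIds.subList(cursor, end);      // step 2: retrieve
            killed.addAll(batch);      // steps 3-5: lock check, nuke action, deep-storage kill
            remaining -= batch.size(); // step 6
            cursor = end;              // step 7: repeat until limit is exhausted
        }
        return killed;
    }

    public static void main(String[] args) {
        List<String> unused = List.of("seg1", "seg2", "seg3", "seg4", "seg5");
        // batchSize=2, limit=3: two round trips, three segments killed in total.
        System.out.println(killInBatches(unused, 2, 3)); // prints [seg1, seg2, seg3]
    }
}
```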

Contributor

@maytasm maytasm left a comment

Please see (and consider) my comment on the alternative approach

@zachjsh
Contributor Author

zachjsh commented Aug 2, 2023

(Quoted @maytasm's alternative-approach comment above in full.)

@maytasm, I had something similar to this up to commit d853a03, but was asked to change to the segmentId-list approach. I too like using the existing task with a limit. Will discuss further, and hopefully we can get on the same page and go back to this approach. Thanks!

@gianm
Contributor

gianm commented Aug 2, 2023

In this comment I originally said that I thought it'd be good to have a version of the kill task that takes segments instead of intervals: #14662 (comment)

The subsequent discussion illuminated some problems with that, and IMO it is fine to go back to intervals, with notes:

  1. We still need to side-step the intervals + markAsUnused + limit case. A good way to do that would be to deprecate markAsUnused, and forbid it from being provided along with limit. This won't break any existing stuff, because limit is new. And it achieves the side-step we want to achieve. We should document that markAsUnused is deprecated, point people to an alternative (it should be the mark-unused APIs on the Coordinator), and remove it in a future version. [Note that when we remove it, we shouldn't completely remove the parameter: it should exist, and be an error if someone sets it to true, in case people forgot to read the release notes.]
  2. Once we implement the buffer period stuff we ultimately want (Enhance the logic used by the Coordinator when determining if an unused segment can be killed (permanently deleted) #12526), likely what'll happen is the Coordinator and the kill task will both check the correct list of segments to kill. This is good because we need the kill task to double-check under lock anyway in case something changed.

@zachjsh zachjsh changed the title Implement new Kill_by_segment_ids task and use in auto kill coordinator duty Add ability to limit the number of segments killed in kill task Aug 2, 2023
@zachjsh zachjsh requested review from jon-wei and maytasm August 2, 2023 23:31
 @JsonCreator
 public RetrieveUnusedSegmentsAction(
     @JsonProperty("dataSource") String dataSource,
-    @JsonProperty("interval") Interval interval
+    @JsonProperty("interval") Interval interval,
+    @JsonProperty("maxSegments") @Nullable Integer maxSegments
Contributor Author

@zachjsh zachjsh Aug 2, 2023

Change to limit, to make it consistent.

Contributor

@jasonk000 jasonk000 left a comment

Overall, LGTM; some small logic changes I think are warranted.

@zachjsh zachjsh requested a review from jasonk000 August 3, 2023 15:48
Contributor

@jasonk000 jasonk000 left a comment

:shipit:

Contributor

@maytasm maytasm left a comment

A few minor comments. Everything else looks good!

docs/data-management/delete.md (outdated; resolved)
 @@ -103,6 +110,19 @@ public KillUnusedSegmentsTask(
     this.markAsUnused = markAsUnused != null && markAsUnused;
     this.batchSize = (batchSize != null) ? batchSize : DEFAULT_SEGMENT_NUKE_BATCH_SIZE;
     Preconditions.checkArgument(this.batchSize > 0, "batchSize should be greater than zero");
+    if (null != limit && limit <= 0) {
Contributor

nit: can use Preconditions.checkArgument

limit
);
}
if (limit != null && markAsUnused != null && markAsUnused) {
Contributor

nit: can use Preconditions.checkArgument
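As the two nits above suggest, both checks could be collapsed into checkArgument-style calls. A self-contained sketch (with a minimal stand-in for Guava's Preconditions, and illustrative message text):

```java
public class KillTaskValidation {
    // Minimal stand-in for Guava's Preconditions.checkArgument.
    static void checkArgument(boolean condition, String message) {
        if (!condition) {
            throw new IllegalArgumentException(message);
        }
    }

    // Collapses the two if-blocks from the constructor into checkArgument calls.
    static void validate(Integer limit, Boolean markAsUnused) {
        checkArgument(limit == null || limit > 0,
            "limit must be a positive integer");
        checkArgument(limit == null || markAsUnused == null || !markAsUnused,
            "limit cannot be provided when markAsUnused is enabled");
    }

    public static void main(String[] args) {
        validate(100, null);  // ok: positive limit, markAsUnused unset
        validate(null, true); // ok: no limit, so markAsUnused may be set
        try {
            validate(10, true); // rejected: limit together with markAsUnused
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```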

@zachjsh zachjsh merged commit ba957a9 into apache:master Aug 4, 2023
74 checks passed
@zachjsh zachjsh deleted the auto-kill-improvements branch August 4, 2023 02:17
 @@ -90,8 +96,9 @@ public KillUnusedSegmentsTask(
     @JsonProperty("dataSource") String dataSource,
     @JsonProperty("interval") Interval interval,
     @JsonProperty("context") Map<String, Object> context,
-    @JsonProperty("markAsUnused") Boolean markAsUnused,
-    @JsonProperty("batchSize") Integer batchSize
+    @JsonProperty("markAsUnused") @Deprecated Boolean markAsUnused,
Contributor

why is this flag deprecated? Is the reason captured anywhere?

@LakshSingla LakshSingla added this to the 28.0 milestone Oct 12, 2023