retry when killing s3 based segments #14776

Merged: 9 commits, Aug 10, 2023
@@ -28,6 +28,7 @@
 import com.google.inject.Inject;
 import org.apache.druid.java.util.common.ISE;
 import org.apache.druid.java.util.common.MapUtils;
+import org.apache.druid.java.util.common.RetryUtils;
 import org.apache.druid.java.util.common.logger.Logger;
 import org.apache.druid.segment.loading.DataSegmentKiller;
 import org.apache.druid.segment.loading.SegmentLoadingException;
@@ -150,7 +151,19 @@ private boolean deleteKeysForBucket(
           s3Bucket,
           keysToDeleteStrings
       );
-      s3Client.deleteObjects(deleteObjectsRequest);
+      RetryUtils.retry(
Contributor: retryS3Operation is cleaner

Contributor Author: thanks, updated
+          () -> {
+            try {
+              s3Client.deleteObjects(deleteObjectsRequest);
+              return null;
+            }
+            catch (Exception e) {
Contributor: I don't understand the point of the try/catch around it; it doesn't seem to do anything.

Contributor Author: Thanks, fixed
+              throw e;
+            }
+          },
+          S3Utils.S3RETRY,
+          3
+      );
     }
     catch (MultiObjectDeleteException e) {
       hadException = true;
@@ -165,7 +178,7 @@ private boolean deleteKeysForBucket(
           key
       ));
     }
-    catch (AmazonServiceException e) {
+    catch (Exception e) {
       hadException = true;
       log.noStackTrace().warn(e,
           "Unable to delete from bucket [%s], the following keys [%s]",
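The diff above wraps the delete call in RetryUtils.retry, passing a retry predicate (S3Utils.S3RETRY) and a cap of 3 tries, and widens the outer catch so retried-and-still-failing exceptions are also logged. For readers unfamiliar with this pattern, here is a minimal self-contained sketch of a predicate-gated retry helper. This is a hypothetical stand-in written for illustration, not Druid's actual RetryUtils implementation:

```java
import java.util.concurrent.Callable;
import java.util.function.Predicate;

public class RetrySketch
{
  // Hypothetical stand-in for Druid's RetryUtils.retry: run the task and,
  // if it throws an exception the predicate deems transient, try again
  // until maxTries attempts have been used, then rethrow the exception.
  public static <T> T retry(Callable<T> task, Predicate<Throwable> shouldRetry, int maxTries) throws Exception
  {
    for (int attempt = 1; ; attempt++) {
      try {
        return task.call();
      }
      catch (Exception e) {
        if (attempt >= maxTries || !shouldRetry.test(e)) {
          throw e;
        }
        // A production helper (like Druid's) would also sleep with backoff here.
      }
    }
  }
}
```

With a helper like this, the call site mirrors the diff: the delete is expressed as a `Callable` that returns null, and the predicate decides which exceptions are worth retrying.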
@@ -405,4 +405,19 @@ public void test_kill_listOfSegments_amazonServiceExceptionExceptionIsThrown()
     );
     Assert.assertEquals("Couldn't delete segments from S3. See the task logs for more details.", thrown.getMessage());
   }

+  @Test
+  public void test_kill_listOfSegments_retryableExceptionThrown() throws SegmentLoadingException
+  {
+    DeleteObjectsRequest deleteObjectsRequest = new DeleteObjectsRequest(TEST_BUCKET);
+    deleteObjectsRequest.withKeys(KEY_1_PATH, KEY_1_PATH);
+    s3Client.deleteObjects(EasyMock.anyObject(DeleteObjectsRequest.class));
+    EasyMock.expectLastCall().andThrow(new SdkClientException("Unable to execute HTTP request")).once();
+    EasyMock.expectLastCall().andVoid().times(2);
+
+    EasyMock.replay(s3Client, segmentPusherConfig, inputDataConfig);
+    segmentKiller = new S3DataSegmentKiller(Suppliers.ofInstance(s3Client), segmentPusherConfig, inputDataConfig);
+    segmentKiller.kill(ImmutableList.of(DATA_SEGMENT_1, DATA_SEGMENT_1));
+  }
 }
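The EasyMock expectations in the new test make the first deleteObjects call throw a transient SdkClientException and let subsequent calls succeed, so the kill path only passes if it actually retries. That throw-once-then-succeed contract can be modeled without a mocking framework by a counting fake; the class below is purely illustrative and is not the real S3 client or part of the Druid test suite:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative fake: throws a stand-in transient error for the first
// `failures` calls and succeeds afterwards, mimicking the EasyMock
// andThrow(...).once() followed by andVoid().times(2) expectation.
public class FlakyDeleteFake
{
  private final AtomicInteger calls = new AtomicInteger();
  private final int failures;

  public FlakyDeleteFake(int failures)
  {
    this.failures = failures;
  }

  public void deleteObjects()
  {
    if (calls.incrementAndGet() <= failures) {
      throw new RuntimeException("Unable to execute HTTP request");
    }
  }

  public int callCount()
  {
    return calls.get();
  }
}
```

Driving such a fake through a bounded retry loop and asserting on the call count verifies the same behavior the mock-based test checks: one failure, then success on the second attempt.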