Enforce higher priority for RepositoriesService ClusterStateApplier #59040
Conversation
Pinging @elastic/es-distributed (:Distributed/Cluster Coordination)
This avoids shard allocation failures when the repository instance comes in the same ClusterState update as the shard allocation. Backport of elastic#58808

Force-pushed from 08c239a to 60ff993
@fcofdez looks like the relevant test that this PR adds failed on this PR. Is this a possible failure mode in …
It seems like the change introduced in 3b71a31 wasn't ported to …
Indeed #39466 was a …
Thanks for the clarification @DaveCTurner. In the test I was forcing the recovery to be held until a third data node joins the cluster; the reason was to avoid race conditions around the registration of the cluster state listener that collects the data for the assertions. So I think the behavior is kind of expected. I'm not sure whether we have a different way to avoid that possible race condition in …
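For reference, a rough sketch of the kind of gating described above, written as a generic snapshot restore inside an ESIntegTestCase-derived test. This is not the PR's actual test code: the allocation-filtering approach, the snapshot name "snap-1", and the exact placement of the listener registration are assumptions made for illustration; only the repository name "fsrepo" comes from the test diff quoted below.

```java
// Rough sketch (not the PR's test code): keep the restored shard unassigned
// until a third data node joins, by excluding every data node that already
// exists, then start the extra node once the cluster state listener is in place.
final String[] existingNodes = internalCluster().getNodeNames();

client().admin().cluster().prepareRestoreSnapshot("fsrepo", "snap-1")
    .setIndexSettings(Settings.builder()
        // every current node is excluded, so the restored shard stays unassigned
        .put("index.routing.allocation.exclude._name", String.join(",", existingNodes))
        .build())
    .setWaitForCompletion(false)
    .get();

// ... register the ClusterStateListener that collects the data for the assertions ...

// The new node is not excluded, so the held recovery is released onto it.
internalCluster().startDataOnlyNode();
```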
I see, sorry for the delay; it's taken me over an hour to shave all the yaks needed to run this darn test (it required an IntelliJ upgrade). I understand now. I think you can avoid these checks with:

```diff
diff --git a/x-pack/plugin/searchable-snapshots/src/test/java/org/elasticsearch/xpack/searchablesnapshots/ClusterStateApplierOrderingTests.java b/x-pack/plugin/searchable-snapshots/src/test/java/org/elasticsearch/xpack/searchablesnapshots/ClusterStateApplierOrderingTests.java
index 8277714a3ec..3a21761fcfb 100644
--- a/x-pack/plugin/searchable-snapshots/src/test/java/org/elasticsearch/xpack/searchablesnapshots/ClusterStateApplierOrderingTests.java
+++ b/x-pack/plugin/searchable-snapshots/src/test/java/org/elasticsearch/xpack/searchablesnapshots/ClusterStateApplierOrderingTests.java
@@ -35,10 +35,13 @@ import static org.hamcrest.Matchers.equalTo;
 import static org.hamcrest.Matchers.greaterThan;
 import static org.hamcrest.Matchers.is;
 
-@ESIntegTestCase.ClusterScope(scope = TEST, numDataNodes = 2)
+@ESIntegTestCase.ClusterScope(scope = TEST, numDataNodes = 0, autoManageMasterNodes = false)
 public class ClusterStateApplierOrderingTests extends BaseSearchableSnapshotsIntegTestCase {
 
     public void testRepositoriesServiceClusterStateApplierIsCalledBeforeIndicesClusterStateService() throws Exception {
+        internalCluster().setBootstrapMasterNodeIndex(0);
+        internalCluster().startNodes(2);
+
         final String fsRepoName = "fsrepo";
         final String indexName = "test-index";
         final String restoredIndexName = "restored-index";
```
retest this please
LGTM
This avoids shard allocation failures when the repository instance
comes in the same ClusterState update as the shard allocation.
Backport of #58808
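For readers unfamiliar with cluster state appliers, here is a minimal sketch of the idea behind this change, assuming registration via `ClusterService#addHighPriorityApplier`. The class and method below are illustrative only; they are not the upstream RepositoriesService code.

```java
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterStateApplier;
import org.elasticsearch.cluster.service.ClusterService;

/**
 * Illustrative sketch (not the upstream implementation): a repository-like
 * component registered as a high-priority cluster state applier, so that it
 * processes each new cluster state before normal-priority appliers do. The
 * repository is then already registered when the same cluster state update
 * also allocates shards that need to read from it.
 */
final class RepositoryRegisteringApplier implements ClusterStateApplier {

    @Override
    public void applyClusterState(ClusterChangedEvent event) {
        // Update the in-memory repository registry from event.state() here.
    }

    static void register(ClusterService clusterService, RepositoryRegisteringApplier applier) {
        // High-priority appliers run before appliers added via addStateApplier
        // or addLowPriorityApplier, which is what gives the ordering guarantee.
        clusterService.addHighPriorityApplier(applier);
    }
}
```

As the title says, the PR itself only raises the priority of the existing RepositoriesService applier; the class above exists purely to make the registration pattern concrete.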