Adding a search node to the cluster breaks the snapshot repository #13024
Comments
Here is the related master node log excerpt:
To collect more information, I've set
UPD: a few minutes later, it failed to verify again.
UPD2: it seems that it verified successfully sometimes, 5 to 10 per cent of the time.
I wrote a simple Python program to check the contents of the snapshot directory every 0.01 seconds. Here is its output when the search node was stopped and the repo verified successfully: Here is its output when the search node was connected and this error was returned:
I.e. the two files were sitting there an extra couple of seconds.
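For reference, here is a minimal sketch of such a polling script. It assumes the repository is mounted at /usr/share/opensearch/searchable and simply prints a timestamped listing whenever the directory contents change; the reporter's actual script is not shown, so treat this as an illustration only.

# poll_repo.py - minimal sketch of a directory-polling monitor (assumed, not the original script)
import os
import time

WATCH_DIR = "/usr/share/opensearch/searchable"  # assumed mount point of the snapshot repo

seen = None
while True:
    try:
        entries = sorted(os.listdir(WATCH_DIR))
    except OSError as exc:
        entries = [f"<error: {exc}>"]
    if entries != seen:
        # Print a timestamped listing every time the directory contents change.
        print(f"{time.time():.2f} {entries}")
        seen = entries
    time.sleep(0.01)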
In case anybody is interested, I managed to work around this issue. Our cluster is running Oracle Linux Server 8.9 with the ol8_UEKR6 kernel (5.4.17). We upgraded all our nodes to 5.15 and backups broke (repository verify failed). Then I restarted all nodes on the 5.4 kernel and found that repo verification worked properly again. Then I downgraded the new node's kernel to 5.4, added it to the cluster, and it works properly now.
Thanks for narrowing it down! Still, what is the root cause of the verification failure?
I was unable to identify the exact cause. |
Hello team, we are working together with @rlevytskyi and have tried to identify the root cause of the failure, unfortunately without any meaningful results. Our lab:
We have identified that the search node can successfully create the test files in the repository. Afterwards, the files should be deleted by the master node; the procedure verifies that the master can delete them. However, with kernel 5.15 it looks like the master node cannot delete the files created by the search node, and verification fails. Rolling back to 5.4 fixes the issue. It is probably some synchronization problem between the two nodes, where the master node tries to delete the files too early or the file is still "locked" by the search node. Yet it is still unclear how this can be related to the kernel version.
We tried to play with SMB options (disabled caching, tried different protocol versions) with the same result. In addition, the files can be successfully created and deleted on the share by any node. The problem seems to be only with the repository verification procedure. We also tried booting all the nodes to kernel 5.15, but that also resulted in the error.
Since we are not Java developers, it is not easy for us to analyze the code and completely understand the repository verification algorithm to find the root cause of the issue. So we would be grateful for any tips about where to dig deeper. Thanks!
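One way to dig deeper without reading the Java code might be to reproduce the create-on-one-node / delete-on-another pattern directly on the SMB mount, outside OpenSearch. The sketch below is a hypothetical helper, not part of the reported setup: the mount path, the tests- directory naming, and the script itself are all assumptions. Run the create step on the search node's host, then the delete step on the master's host, and see whether the rmdir fails with "directory not empty" on the 5.15 kernel.

# smb_crossnode_test.py - hypothetical cross-node create/delete test (paths and naming are assumptions)
import os
import sys

MOUNT = "/usr/share/opensearch/searchable"  # assumed SMB mount point

def create(test_dir: str) -> None:
    """Run on the search node's host: create a test directory and a file in it."""
    os.makedirs(test_dir, exist_ok=True)
    with open(os.path.join(test_dir, "data.bin"), "wb") as f:
        f.write(os.urandom(16))
        f.flush()
        os.fsync(f.fileno())  # push the write out to the share

def delete(test_dir: str) -> None:
    """Run on the master's host: delete the file, then the directory itself."""
    for name in os.listdir(test_dir):
        os.remove(os.path.join(test_dir, name))
    os.rmdir(test_dir)  # raises OSError (ENOTEMPTY) if the share still shows the file

if __name__ == "__main__":
    # usage: python3 smb_crossnode_test.py create run1   (on the search node's host)
    #        python3 smb_crossnode_test.py delete run1   (on the master's host)
    action, run_id = sys.argv[1], sys.argv[2]
    path = os.path.join(MOUNT, f"tests-{run_id}")
    create(path) if action == "create" else delete(path)

If the delete step fails the same way verification does, the problem can be investigated as a pure kernel/SMB client issue (for example with strace on the rmdir call) without involving OpenSearch at all.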
Describe the bug
We have a cluster of ten nodes that has been running for a long time.
We use an SMB share to store our snapshot repository. For many years it has received daily snapshots, and some of them have been restored successfully.
After testing the searchable snapshot feature on a test installation, we decided to roll it out to our production system.
However, it turned out that adding a node with
node.roles: [search]
makes the snapshot repository inoperable. Here is the output:
% curl logs:9200/_snapshot/searchable/_verify -XPOST | jq .
{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[searchable] cannot delete test data at "
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[searchable] cannot delete test data at ",
    "caused_by": {
      "type": "directory_not_empty_exception",
      "reason": "/usr/share/opensearch/searchable/tests-WuWXrwFrTd-BiVX9VfEkTw"
    }
  },
  "status": 500
}
At the same time, on any node, including the new search node:
% ssh <any_node> sudo docker exec <container_name> 'ls -l /usr/share/opensearch/searchable/tests-WuWXrwFrTd-BiVX9VfEkTw'
total 0
Just by switching the new search node off:
% curl logs:9200/_snapshot/searchable/_verify -XPOST | jq .
{ "nodes": { <all 8 data/master nodes here> } }
I.e. it is definitely an empty directory.
Related component
Other
To Reproduce
Just add a node with
node.roles: [ search ]
to existing '[data]', '[master]', and '[ ]' nodes and observe the weird verification error.
Expected behavior
Adding a search node should not affect the existing cluster.
Additional Details
Plugins
Security with Keycloak SAML
Host/Environment (please complete the following information):
Additional context
Master node log in the next message.