Updated RequestHandler to handle read failures #1081
Conversation
Hi @mwlon, thanks for your contribution! In order for us to evaluate and accept your PR, we ask that you sign a contribution license agreement. It's all electronic and will take just minutes. Sincerely,
It looks like the contribution license agreement site is broken. When I visit it, I get a certificate error.
Thanks @mwlon, we'll look into getting the certificate fixed; it looks like it's reporting the wrong common name for some reason.
With regards to how to handle read and write failures, it does appear that we don't allow addressing either of these with the retry policy. I'm not completely sure whether this is intentional; I'll see what others think. At the very least, I think we could surface those errors to the retry policy.
Thanks for taking a look, @tolbertam. Please let me know if I can help in any way; getting this resolved is a high priority for me.
Hi @mwlon. I went ahead and logged JAVA-1944 to track this issue; we are still considering how to resolve it. Also, regarding https://cla.datastax.com not having a valid cert: we just fixed this (it may take a little while for our DNS change to propagate). Thanks for reporting that issue!
@mwlon So I talked to a few people and we decided the right thing to do was:
This will give users the ability to retry these exceptions, but they will not be retried by default. However, this won't completely fix things for you, as the Spark connector's retry policy implementation only retries read timeouts, write timeouts, and unavailables. I see in SPARKC-507 that they do not intend to retry on read failures.
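To illustrate the approach described above (retryable, but not retried by default), here is a minimal self-contained sketch of the two policy behaviors. The class, method, and exception names are simplified stand-ins for illustration, not the actual DataStax driver API:

```java
// Illustrative sketch only: models the decision "read failures reach the
// retry policy, but the default policy still rethrows them", alongside a
// custom policy that opts in to bounded retries.
public class ReadFailureRetryDemo {
    enum Decision { RETRY, RETHROW }

    // Stand-in for the driver's ReadFailureException.
    static class ReadFailureException extends RuntimeException {}

    /** Default behavior after the change: the failure is surfaced, but rethrown. */
    static Decision defaultPolicy(RuntimeException e, int retryCount) {
        return Decision.RETHROW;
    }

    /** A custom policy opting in: retry read failures up to three times. */
    static Decision retryingPolicy(RuntimeException e, int retryCount) {
        if (e instanceof ReadFailureException && retryCount < 3) {
            return Decision.RETRY;
        }
        return Decision.RETHROW;
    }

    public static void main(String[] args) {
        System.out.println(defaultPolicy(new ReadFailureException(), 0));  // RETHROW
        System.out.println(retryingPolicy(new ReadFailureException(), 0)); // RETRY
        System.out.println(retryingPolicy(new ReadFailureException(), 3)); // RETHROW
    }
}
```

The point of the design is backward compatibility: existing applications see no behavior change unless they deliberately install a policy that returns a retry decision for these errors.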
@tolbertam that's reasonable. Do you expect this to make it into an upcoming release?
@mwlon We were planning on targeting this for 3.6.0, which we are wrapping up work on. Since this is a behavior change, we'd like to avoid putting it in a hotfix release. |
Using the Spark Cassandra connector, I kept getting errors like the one I pasted below, despite a generous retry policy. I traced them back to this repo, which immediately throws on any READ_FAILURE response. I believe this change fixes it, but please double-check.
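The behavioral difference this PR is after can be sketched with a small self-contained model of a request loop. This is not the driver's actual RequestHandler; the names and interfaces below are hypothetical stand-ins. Before the change, a read failure is rethrown without ever consulting the retry policy; after, the policy gets a say:

```java
// Illustrative model of the fix: a request loop that either rethrows a read
// failure immediately (old behavior) or consults a retry policy first.
public class RequestLoopDemo {
    // Stand-in for the driver's ReadFailureException.
    static class ReadFailureException extends RuntimeException {}

    interface Attempt { String run(); }
    interface Policy { boolean shouldRetry(RuntimeException e, int retryCount); }

    static String execute(Attempt attempt, Policy policy, boolean consultPolicyOnReadFailure) {
        int retries = 0;
        while (true) {
            try {
                return attempt.run();
            } catch (ReadFailureException e) {
                // Old behavior: rethrow without ever asking the retry policy.
                if (!consultPolicyOnReadFailure) throw e;
                if (!policy.shouldRetry(e, retries)) throw e;
                retries++;
            }
        }
    }

    public static void main(String[] args) {
        // An attempt that fails twice, then succeeds.
        int[] calls = {0};
        Attempt flaky = () -> {
            if (calls[0]++ < 2) throw new ReadFailureException();
            return "rows";
        };
        Policy retryTwice = (e, n) -> n < 2;

        // With the policy consulted, the flaky read eventually succeeds.
        System.out.println(execute(flaky, retryTwice, true)); // prints "rows"
    }
}
```

With `consultPolicyOnReadFailure` set to false, the same flaky attempt propagates its first `ReadFailureException` regardless of the policy, which is the behavior the reporter kept hitting through the Spark connector.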