Getting the following errors in the nginx pod while indexing:
[error] 2979#2979: *2143972 upstream prematurely closed connection while reading response header from upstream, client: 10.132.0.4, server: XXXX, request: "POST /_bulk?timeout=1m HTTP/1.1", upstream: "http://10.40.4.20:9200/_bulk?timeout=1m", host: "XXXXXX"
So it looks like the client nodes are not responding within the defined timeout.
This is set in nginx through the ingress configuration; a sketch of the relevant annotations is below.
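For reference, this is roughly how the proxy timeouts are raised on the nginx ingress controller; the names, host, and values below are placeholders, not our exact manifest (on older controller versions the annotation prefix is `ingress.kubernetes.io/` rather than `nginx.ingress.kubernetes.io/`):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: elasticsearch              # placeholder name
  annotations:
    # Give the ES client nodes more time to answer /_bulk requests
    # before nginx gives up and logs "upstream prematurely closed".
    # Values are seconds, passed as strings.
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
spec:
  rules:
    - host: es.example.com         # placeholder host
      http:
        paths:
          - path: /
            backend:
              serviceName: elasticsearch   # placeholder service
              servicePort: 9200
```

Since the bulk requests themselves carry `timeout=1m`, the proxy read timeout presumably needs to be at least that long for nginx not to close the connection first.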
We are also seeing a second error, intermittently:
[error] 47#47: *1759038 user "XXXX" was not found in "/etc/ingress-controller/auth/default-ingress-with-auth.passwd", client: 10.132.0.5, server: XXXX, request: "POST /_bulk?timeout=1m HTTP/1.1", host: "jXXX:80"
The user is present in the passwd file, and this only fails intermittently.
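For context, the passwd file in that log is the one the controller renders from the basic-auth secret. A minimal sketch of how that is typically wired up (secret, realm, and host names are placeholders, not our exact manifest):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-auth          # the rendered file appears to be named
  namespace: default               # <namespace>-<name>.passwd, matching the log path
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    # The referenced secret must contain a key named "auth" holding
    # htpasswd-format entries.
    nginx.ingress.kubernetes.io/auth-secret: basic-auth    # placeholder secret name
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
    - host: es.example.com         # placeholder host
      http:
        paths:
          - path: /
            backend:
              serviceName: elasticsearch
              servicePort: 9200
```

If the controller rewrites that passwd file during a reload, a request landing mid-rewrite might explain how an existing user fails only intermittently, though we have not confirmed this.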
There are no resource-related issues.
Additionally, there are no issues with k8s cluster operation at the times these failures occur.