Replies: 5 comments 3 replies
-
Have you checked your server logs when this happens? Perhaps it's an OOM (out of memory) error. One way to check could be to daemonize postgrest and inspect its log output.
-
I have checked the PostgREST logs and there are no errors in them. PostgREST is running as a Kubernetes pod. Initially we were getting OOM kills, but we increased the pod's memory (it's 2 GiB now); since then there is no OOM, the pod runs fine without crashing, no errors are printed in the logs, and I am able to get data for other views after this call fails. The strange part is that when I run curl from the machine where the PostgREST server is running, I get the data (this scenario was tested on a Windows machine, not on Kubernetes), while the actual problem happens when PostgREST is running on Kubernetes.
-
Hm, do you really need to download all the data in one request? Using Limits and Pagination should avoid this issue. You can also set a hard limit for all requests: https://postgrest.org/en/stable/configuration.html#db-max-rows
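As a minimal sketch, that hard limit is one line in the PostgREST configuration file (the value 1000 here is just an example):

```
# postgrest.conf: cap every response at 1000 rows, regardless of the request
db-max-rows = 1000
```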
-
Actually it has around 300k rows, and if I go with Limits and Pagination I need to apply a sort before I can apply the limit, and that takes a lot of time (I have tried that as well). Is there any way to increase the PostgREST web server's connection timeout?
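One way to make paginated requests cheap is keyset pagination: instead of `offset` (which forces the server to sort and skip all earlier rows on every page), filter on the ordered column with `gt.<last seen value>` so an index on that column can serve each page directly. A sketch of building such PostgREST URLs, assuming the view has a unique, indexed column called `id` (`view_name` and `id` are placeholders):

```shell
# Build a PostgREST keyset-pagination URL: first call has no "last"
# argument; later calls pass the largest id seen on the previous page.
page_url() {
  local base=$1 last=$2
  if [ -n "$last" ]; then
    echo "$base?order=id.asc&limit=1000&id=gt.$last"
  else
    echo "$base?order=id.asc&limit=1000"
  fi
}

# First page, then feed the last id of each page into the next request:
page_url "http://ip:port/view_name"
page_url "http://ip:port/view_name" 1000
```

Each page can then be fetched with `curl "$(page_url ...)"`; because every request starts at an indexed position rather than skipping rows, page latency stays roughly constant across the whole table.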
-
There is no timeout on how long a response takes on our side.
Proxies (nginx, Cloudflare, etc.) can also apply timeouts; since PostgREST responds correctly locally, this is likely a problem with a proxy.
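For example, if an nginx reverse proxy (or an nginx-based Kubernetes ingress) sits in front of PostgREST, its proxy timeouts default to 60s and will cut off long-running responses. A sketch of raising them, assuming a plain nginx proxy config (the upstream address and timeout values are examples):

```nginx
location / {
    proxy_pass http://postgrest:3000;
    # nginx defaults are 60s; large or slow responses need more
    proxy_read_timeout 600s;
    proxy_send_timeout 600s;
}
```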
-
Environment
Description of issue
When trying to fetch data from a database view which has around 311,000 rows, the PostgREST server closes the connection before sending the whole response.
With Postman: I get "connection aborted".
With curl: I get some data rows, then the error "closed, but still some data to read".
With Apache HttpClient: "org.apache.http.ConnectionClosedException: Premature end of chunk coded message body: closing chunk expected" (after reading some bytes).
I have used the PostgREST default configuration.
The strange part is when I run curl from the machine where the PostgREST server is running, I get the data; but when the machines are different and connected via a network, I get this error.
(Expected behavior vs actual behavior)
Expected: I receive the whole 200 MB of data. Actual: I get a connection closed error after some time (partial data is received).
(Steps to reproduce: Include a minimal SQL definition plus how you make the request to PostgREST and the response body)
curl --location --request GET 'http://ip:port/view_name' \
  --header 'Accept-Profile: schema_name' \
  --header 'Accept: text/csv'