-
Hello, I have been using Apache Spark since 2016 and am now considering adopting Databend. I have deployed a Databend cluster on a managed Kubernetes (k8s) cluster within my team, and its performance has been pretty good so far. However, I am encountering an issue: when I intentionally delete a query pod while a query is running, I receive the following error:
I am wondering if this is expected behavior (i.e., I need to re-run the failed query) or if there is a tunable parameter that could handle this scenario. In Apache Spark, for instance, a SQL query is divided into multiple tasks, and a task failure triggers automatic retries up to a configured maximum. Thank you for your assistance.
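For reference, the Spark behavior described above is controlled by `spark.task.maxFailures` (default 4): a failed task is re-attempted up to that many times before the whole stage, and thus the query, fails. A minimal sketch of setting it:

```
# Spark: raise the per-task retry limit so transient executor/pod
# failures are retried before the query is failed.
spark-submit --conf spark.task.maxFailures=8 my_job.py
```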
-
We can support a `query_max_failures` setting to retry the query when it encounters retriable errors (network issues, etc.).
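If such a setting lands, usage might look like the sketch below. This is an assumption based on the proposal above: `query_max_failures` and the session-setting syntax are hypothetical, not a shipped Databend feature.

```sql
-- Hypothetical: allow the server to retry a query up to 3 times
-- when it hits a retriable error (network failure, killed query pod, ...)
SET query_max_failures = 3;
SELECT count(*) FROM my_table;
```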