Bug Report
Version
> cargo tree | grep tonic
│ │ │ │ │ └── tonic v0.12.3
│ │ │ │ ├── tonic v0.12.3 (*)
│ │ │ └── tonic v0.12.3 (*)
│ │ └── tonic v0.12.3 (*)
│ │ └── tonic v0.12.3 (*)
│ │ └── tonic v0.12.3 (*)
│ │ └── tonic v0.12.3 (*)
│ │ └── tonic v0.12.3 (*)
│ │ └── tonic v0.12.3 (*)
│ │ └── tonic v0.12.3 (*)
│ │ └── tonic v0.12.3 (*)
│ ├── tonic v0.12.3 (*)
│ ├── tonic-health v0.12.3
│ │ └── tonic v0.12.3 (*)
│ └── tonic v0.12.3 (*)
├── tonic v0.12.3 (*)
├── tonic-health v0.12.3 (*)
Platform
Linux #16~22.04.1-Ubuntu SMP Mon Aug 19 19:38:17 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Crates
tonic-health
Description
We're using tonic-health for native Kubernetes gRPC health checks (as described here).
Here is the relevant snippet from the deployment descriptor:

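A minimal sketch of such a native gRPC probe configuration (not the exact descriptor; port 9000 is taken from the server code below, and the rest of the manifest is omitted):

# sketch only: Kubernetes built-in gRPC probes pointed at the tonic server port
readinessProbe:
  grpc:
    port: 9000
livenessProbe:
  grpc:
    port: 9000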
On each invocation of the health endpoint in our application, we see an error being logged at DEBUG level:
source: /github/home/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tonic-0.12.3/src/transport/server/mod.rs
line: 703
message: failed serving connection: connection error
The health check itself is working as expected: we can confirm that the endpoint responds with SERVING / NOT_SERVING. Therefore, I suspect that the problem is related to how the connection is terminated.
I would expect that no errors are logged when the health check is working properly, not even at DEBUG level, especially with our standard Kubernetes setup.
More info
Relevant initialisation code in main.rs:
use tonic::transport::Server;
use tonic_health::ServingStatus;

let (mut health_reporter, health_server) = tonic_health::server::health_reporter();
health_reporter
    .set_service_status("", ServingStatus::NotServing)
    .await;

let addr = "0.0.0.0:9000".parse().unwrap();
// cancellation_token is a tokio_util::sync::CancellationToken created elsewhere
let grpc_server = Server::builder()
    .add_service(health_server)
    .add_service(CustomApiServer::new(business_logic_service))
    // serve_with_shutdown returns a future; it is spawned below and resolves
    // once the cancellation token is cancelled
    .serve_with_shutdown(addr, cancellation_token.clone().cancelled_owned());

let grpc_server_handle = tokio::spawn(grpc_server);
// set serving happens down the line
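Later, once startup has finished, the status is flipped roughly like this (a sketch; where exactly this happens in our startup sequence is omitted):

// mark the overall service ("" = server-wide status) as healthy so the
// kubelet's gRPC probe receives SERVING
health_reporter
    .set_service_status("", ServingStatus::Serving)
    .await;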
Kubernetes server version: v1.30.5
We were wondering if this issue describes our problem, because k8s uses a Go client. But we do specify the host as 0.0.0.0, so it seems that cannot be the cause.