
server: enhanced health endpoint #5548

Merged (2 commits, Feb 18, 2024)
1 change: 1 addition & 0 deletions examples/server/README.md
@@ -135,6 +135,7 @@ node index.js
- `{"status": "loading model"}` if the model is still being loaded.
- `{"status": "error"}` if the model failed to load.
- `{"status": "ok"}` if the model is successfully loaded and the server is ready for further requests mentioned below.
- `{"status": "no slot available", "slots_idle": 0, "slots_processing": 32}` if no slots are currently available.

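The status contract above can be summarized as a small decision function. This is an illustrative sketch only; the names `server_state` and `pick_status` are hypothetical and not part of the server's actual API:

```cpp
#include <cassert>
#include <string>

// Hypothetical mirror of the server states documented above.
enum class server_state { loading_model, error, ready };

// Return the "status" string /health reports for a given state and
// slot occupancy, following the contract described in this README.
std::string pick_status(server_state s, int slots_idle) {
    switch (s) {
        case server_state::loading_model: return "loading model";
        case server_state::error:         return "error";
        case server_state::ready:
            // A ready server with no idle slot is busy, not broken.
            return slots_idle > 0 ? "ok" : "no slot available";
    }
    return "error"; // unreachable, silences compiler warnings
}
```

Note that `"ok"` covers both the all-slots-idle case and the some-slots-idle case; only full occupancy yields `"no slot available"`.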
- **POST** `/completion`: Given a `prompt`, it returns the predicted completion.

31 changes: 29 additions & 2 deletions examples/server/server.cpp
@@ -2561,8 +2561,35 @@ int main(int argc, char **argv)
server_state current_state = state.load();
switch(current_state) {
case SERVER_STATE_READY:
res.set_content(R"({"status": "ok"})", "application/json");
res.status = 200; // HTTP OK
if (llama.all_slots_are_idle) {
res.set_content(R"({"status": "ok"})", "application/json");
res.status = 200; // HTTP OK
} else {
int available_slots = 0;
int processing_slots = 0;
for (llama_client_slot & slot : llama.slots) {
if (slot.available()) {
available_slots++;
} else {
processing_slots++;
}
}
if (available_slots > 0) {
json health = {
{"status", "ok"},
{"slots_idle", available_slots},
{"slots_processing", processing_slots}};
res.set_content(health.dump(), "application/json");
res.status = 200; // HTTP OK
} else {
json health = {
{"status", "no slot available"},
{"slots_idle", available_slots},
{"slots_processing", processing_slots}};
res.set_content(health.dump(), "application/json");
res.status = 503; // HTTP Service Unavailable

@brittlewis12 commented:

@phymbert thanks for introducing this additional metadata to the health check!

One nit: it seems unidiomatic for a health check to return an error status code for an expected, error-free state. In practice, for a local inference server with a single slot (the default behavior), this is particularly unintuitive.

While the server is busy with inference, it can happily process health check requests, so why return an error (5xx) status code rather than a success (the request was understood and processed just fine) along with the actual information desired: the count of available slots (0)?

503 or 409 Conflict makes more sense to me for /completion or chat completion requests, since those genuinely cannot be processed. But a health check returning 5xx codes during normal operation feels wrong to me; the server is not unhealthy by any metric.

This seems to be a common point of bikeshedding, so I will happily work around the behavior if I'm in the minority, but I wanted to share in case there was any agreement to this effect. Happy to put up a patch if so!

@phymbert (Collaborator, Author) replied:

Hi @brittlewis12, thanks for your feedback.

My primary goal is to point a Kubernetes readiness probe at the health endpoint. This way, the server will not receive new incoming requests; they will be routed to another available pod instead. It does not mean the server is down but, as 503 indicates, that it is overloaded. This is the standard for cloud-native applications.

@phymbert (Collaborator, Author):

@brittlewis12 I finally got your point; PR #5594 addresses it. Thanks for pointing this out.
}
}
break;
case SERVER_STATE_LOADING_MODEL:
res.set_content(R"({"status": "loading model"})", "application/json");
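The Kubernetes use case phymbert describes could be wired up roughly as follows. This is a sketch, not part of the PR: the port, timings, and pod layout are illustrative assumptions; only the `/health` path and its 503-when-busy semantics come from this change.

```yaml
# Illustrative readiness probe for a pod running the llama.cpp server.
# With this PR, /health returns 503 when no slot is available, so
# Kubernetes stops routing new traffic to the pod until a slot frees
# up, without restarting it (restarting would be a liveness probe's job).
readinessProbe:
  httpGet:
    path: /health
    port: 8080        # assumed server port
  initialDelaySeconds: 5
  periodSeconds: 2
```

This is why the endpoint reports 503 rather than 200 when saturated: a readiness probe keys off the HTTP status code, not the JSON body.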