
📝 [Proposal]: shutdown procedure #2976

Open · 3 tasks done
the-hotmann opened this issue Apr 17, 2024 · 4 comments

@the-hotmann
Contributor

the-hotmann commented Apr 17, 2024

Feature Proposal Description

As discussed with @ReneWerner87 on Discord, I would like to propose this feature request to improve the shutdown procedure of Fiber.

I noticed that when I stop my container, open connections are cut off. I have also seen quite a few workarounds, but since there is a general need for proper shutdown handling, I guess it should be implemented by default.

There are two general signals which, at a minimum, should be handled:

SIGKILL

  1. Log "Fiber forces shutdown"
  2. Fiber kills itself immediately, after logging the last message

SIGTERM

  1. Log "Shutdown received - shutting gracefully down"
  2. Fiber shall not accept any new connections (probably requires all routes to be removed/disabled). Maybe even returning custom error like "shutting down" when disabled route beeing called in combination with an "503"-Error and the additional "retry after" so the incomming connections are handeled as smooth as possible
  3. Fiber checks for open connections.
    3.1. Fiber waits "9s" (default gracefully shutdown timeout) long for existing connections to finish themself.
    3.1.1. if "9s" pass, it cuts all existing connections and shuts down server.
  4. Check if connections has been cut, or not
    4.1. if no connection was cut, Log "Server shutdown complete (Gracefull)"
    4.2. if one or more connections has been cut, Log "Server shutdown complete (Graceperiod Exeeded)"

The graceful shutdown timeout (default: 9s) shall be configurable via fiber.Config like this:

	fiberConfig := fiber.Config{
		ShutdownGraceperiod: 15 * time.Second,
	}
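
For illustration only, here is a minimal sketch (not part of the proposal itself) of roughly how this flow can be approximated today with a SIGTERM handler and Fiber v2's ShutdownWithTimeout; the ShutdownGraceperiod field above is hypothetical and does not exist yet:

	package main

	import (
		"context"
		"log"
		"os/signal"
		"syscall"
		"time"

		"github.com/gofiber/fiber/v2"
	)

	func main() {
		app := fiber.New()

		app.Get("/", func(c *fiber.Ctx) error {
			return c.SendString("ok")
		})

		// SIGKILL cannot be trapped, so only SIGTERM/SIGINT get the graceful path.
		ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
		defer stop()

		// Serve in the background so the main goroutine can wait for the signal.
		go func() {
			if err := app.Listen(":3000"); err != nil {
				log.Fatal(err)
			}
		}()

		<-ctx.Done()
		log.Println("Shutdown received - shutting down gracefully")

		// Wait up to 9s (the proposed default grace period) for open connections,
		// then cut whatever is still in flight.
		if err := app.ShutdownWithTimeout(9 * time.Second); err != nil {
			log.Printf("Server shutdown complete (grace period exceeded): %v", err)
		} else {
			log.Println("Server shutdown complete (graceful)")
		}
	}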

Alignment with Express API

This does not affect anything related to the Express API

HTTP RFC Standards Compliance

This will be compliant with RFC 7231, specifically the Retry-After part (Section 7.1.3).

API Stability

This would not affect the stability of the API as it solely affects the shutdown.

Feature Examples

I do not have any code snippets etc. for this.

Checklist:

  • I agree to follow Fiber's Code of Conduct.
  • I have searched for existing issues that describe my proposal before opening this one.
  • I understand that a proposal that does not meet these guidelines may be closed without explanation.
@luk3skyw4lker
Contributor

I'm gonna have a try at this if no one else is already working on it.

@luk3skyw4lker
Contributor

@the-hotmann I think that what you want can already be achieved with the GracefulContext configuration. You can configure a timeout for the graceful shutdown, or just pass a plain context.Background() or context.TODO() to have the server wait indefinitely until all active connections are done.
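
A minimal sketch of that approach, assuming the Fiber v3 ListenConfig with the GracefulContext field described above (field and type names are taken from this comment, not verified here):

	package main

	import (
		"context"
		"log"
		"os/signal"
		"syscall"

		"github.com/gofiber/fiber/v3"
	)

	func main() {
		app := fiber.New()

		// Cancelling this context (here: on SIGTERM) is what triggers the
		// graceful shutdown of the listener.
		ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
		defer stop()

		if err := app.Listen(":3000", fiber.ListenConfig{
			GracefulContext: ctx,
		}); err != nil {
			log.Fatal(err)
		}
	}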

SIGKILL cannot be trapped in Go, so it's not possible to log a custom message when SIGKILL is received.

Reference on the SIGKILL: golang/go#9463

Maybe what we could do is improve the docs to include a better explanation of the GracefulContext configuration and add the logs only. What do you think @ReneWerner87?

@newacorn

Fiber is built on top of fasthttp, and fasthttp does not maintain a list of active connections. Therefore, the timeout-based forced closure feature you're asking for cannot be implemented directly. However, you can achieve this by setting read and write timeouts. If many users need fasthttp to maintain a list of active connections, we can consider planning its implementation.
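
As a rough sketch of that workaround, the read/write/idle timeouts can be set on fiber.Config; these are forwarded to the underlying fasthttp server, and the values below are purely illustrative:

	import (
		"time"

		"github.com/gofiber/fiber/v2"
	)

	// newServer bounds how long any single connection may stay busy or idle.
	// These settings map onto fasthttp's server timeouts, so slow or idle
	// connections are eventually closed even though no connection list is kept.
	func newServer() *fiber.App {
		return fiber.New(fiber.Config{
			ReadTimeout:  10 * time.Second,
			WriteTimeout: 10 * time.Second,
			IdleTimeout:  30 * time.Second,
		})
	}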

@the-hotmann
Contributor Author

the-hotmann commented Aug 31, 2024

If many users need fasthttp to maintain a list of active connections, we can consider planning its implementation.

I believe that this would generally benefit everything built on fasthttp. A graceful shutdown is essential for most professional applications, especially in a clustered environment. When you use a clustered setup, it’s typically to achieve a higher SLA. However, if shutting down a container, application, or instance for an upgrade results in dropped connections, the load balancer would need to manage this to prevent data loss, which always makes a setup more complex.

In my opinion, this feature should be implemented and enabled by default (with a default grace period timeout set to 8-9 seconds). But let's wait and see if others share the same perspective.

The grace period should be customizable and should also have the option to be disabled (set to -1, which would mean an infinite wait - until the Docker daemon kills it hard).

-1 => wait forever
0 => don't wait at all (like now)
x (any other number) => wait for x seconds (see the sketch below)
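
A sketch of how those three values could map onto a shutdown context; shutdownContext is a purely hypothetical helper, not an existing Fiber or fasthttp API:

	import (
		"context"
		"time"
	)

	// shutdownContext maps the proposed grace-period setting onto a context
	// that a graceful shutdown could wait on (illustrative only).
	func shutdownContext(grace time.Duration) (context.Context, context.CancelFunc) {
		switch {
		case grace < 0: // -1: wait forever for active connections
			return context.WithCancel(context.Background())
		case grace == 0: // 0: don't wait at all (current behaviour)
			ctx, cancel := context.WithCancel(context.Background())
			cancel()
			return ctx, cancel
		default: // x: wait for up to x
			return context.WithTimeout(context.Background(), grace)
		}
	}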

Keep in mind that when you stop a container using Docker (docker stop container_name), there is typically a 10-second grace period. If fasthttp is set to -1, Docker will forcefully kill it after 10 seconds, which isn’t the intended outcome here. Setting it to 8 seconds would allow fasthttp to shut down 1 second before Docker’s hard kill, at least allowing it to save what it can.

Additionally, Docker’s grace period can be adjusted if needed. However, the grace period for fasthttp should always be set at least 1-2 seconds shorter than Docker's, ensuring the application shuts down properly.

@efectn efectn moved this to In Progress in v3 Dec 31, 2024