Issue description
If `ts-node-dev` is running a process that has graceful shutdown logic, those graceful shutdowns will not be honored in situations such as a Kubernetes rollout restart, where another process above `ts-node-dev` is managing the execution.
Context
Some of the context is in this pull request that attempted to fix the issue: #269
Our current codebase does not wait for the child's graceful shutdown when `ts-node-dev` is itself managed by an external process.
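To illustrate what "waiting" means here: the supervising process has to forward the termination signal to the child and then wait for the child to exit on its own, rather than exiting immediately. A minimal sketch, assuming a child started with `child_process.spawn` (the `child.js` name is a placeholder):

```ts
import { spawn } from 'child_process';

// Minimal supervisor sketch; 'child.js' stands in for the managed process.
const child = spawn('node', ['child.js'], { stdio: 'inherit' });

process.on('SIGTERM', () => {
  // Forward the signal to the child instead of exiting immediately...
  child.kill('SIGTERM');
  // ...and exit only after the child finishes its own shutdown logic.
  child.once('exit', (code) => process.exit(code ?? 0));
});
```

Without that wait on `exit`, the supervisor tears the child down before its cleanup callbacks get a chance to run, which is the behavior shown in the logs below.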
Did you try to run with ts-node?
Yes, this works with `ts-node`. The fix was merged here: TypeStrong/ts-node#419
Example
Here is an example. I created a simple Express server that has the following graceful shutdown logic (imports and placeholder timeout values included here for completeness):
```ts
import { Server } from 'http';
import mongoose from 'mongoose';

// Placeholder values; the real timeouts are configured elsewhere.
const WAIT_EXISTING_REQS_MILLIS = 5000;
const SHUTDOWN_TIMEOUT_MILLIS = 10000;

async function shutdown(signal: string, server: Server) {
  console.log(`${signal} received, stopping incoming requests`);
  server.close(() => {
    console.log('incoming requests stopped, waiting for existing connections to finish');
    setTimeout(() => {
      console.log('closing mongodb connection');
      mongoose.connection.close(() => {
        console.log('mongoDB connection closed');
        console.log('exiting successfully');
        process.exit(0);
      });
    }, WAIT_EXISTING_REQS_MILLIS);
  });
  // Hard deadline: force-exit if the graceful path does not finish in time.
  setTimeout(() => {
    console.log('graceful shutdown timed out, forcefully shutting down');
    process.exit(1);
  }, SHUTDOWN_TIMEOUT_MILLIS);
}
```
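The handler is wired to SIGTERM, the signal both Kubernetes and `ts-node-dev` deliver on termination or restart. Roughly (a sketch; the server setup is abbreviated):

```ts
import express from 'express';

const app = express();
const server = app.listen(3000, () => console.log('Listening on port 3000'));

// Run the graceful shutdown whenever the process is asked to terminate.
process.on('SIGTERM', () => shutdown('SIGTERM', server));
```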
The server is being started via `ts-node-dev` inside a Docker container managed by Kubernetes. When `ts-node-dev` detects a change within the container, it successfully goes through the shutdown logic:
```
[server] connected to mongodb
[server] Listening on port 3000
Syncing 1 files for anthonyalayo/server:1ccaf904e3c28daaf2e77093a18ccea816264ea597f9d64baee8e1d5eb8fd4cb
Watching for changes...
[server] [INFO] 04:09:57 Restarting: /app/server/src/index.ts has been modified
[server] SIGTERM received, stopping incoming requests
[server] incoming requests stopped, waiting for existing connections to finish
[server] closing mongodb connection
[server] mongoDB connection closed
[server] exiting successfully
[server] connected to mongodb
[server] Listening on port 3000
```
You can see that the shutdown logic completes entirely and the server starts back up.
Now, when deleting a pod via `kubectl` to mimic a rollout restart:

```
kubectl delete pod server-depl-d98998677-7s2nt
```
The following is output:
```
[server] SIGTERM received, stopping incoming requests
[server] incoming requests stopped, waiting for existing connections to finish
[server] yarn run v1.22.5
[server] $ ts-node-dev --poll src/index.ts
[server] [INFO] 04:10:27 ts-node-dev ver. 1.1.8 (using ts-node ver. 9.1.1, typescript ver. 4.3.5)
[server] connected to mongodb
[server] Listening on port 3000
```
Notice that this time the graceful shutdown is cut off partway through: the MongoDB connection is never closed before the process is torn down and the container restarts.

While the PR I created fixed the issue when I tried it out in practice, I am not able to get the test suite passing. Attempting to debug it has been unfruitful, as there is very little visibility, via either printing or breakpoints, into these child processes. If anyone can tackle it, that would be appreciated.
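For anyone picking this up, a sketch of the kind of test that would cover this (using a jest-style runner; the fixture path and log strings are hypothetical):

```ts
import { spawn } from 'child_process';

// Hypothetical test: start a script that logs its graceful shutdown,
// send SIGTERM once it is listening, and assert the shutdown completed.
test('waits for graceful shutdown on SIGTERM', (done) => {
  const child = spawn('node', ['fixtures/graceful-server.js']); // hypothetical fixture
  let output = '';

  child.stdout.on('data', (chunk: Buffer) => {
    output += chunk.toString();
    // Once the server reports it is listening, ask it to shut down.
    if (output.includes('Listening')) child.kill('SIGTERM');
  });

  child.on('exit', () => {
    // The full graceful-shutdown path should have run before exit.
    expect(output).toContain('exiting successfully');
    done();
  });
});
```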