
Lsp restart command spawn multiple procces for every call #3968

Closed
axdank opened this issue Sep 25, 2022 · 8 comments · Fixed by #3972
Labels
A-helix-term Area: Helix term improvements A-language-server Area: Language server client C-bug Category: This is a bug O-windows Operating system: Windows

axdank (Contributor) commented Sep 25, 2022

Summary

The new lsp-restart command does not shut down the current server, so the old process is left hanging and a new one is spawned.

Opening the editor for the first time: 3 processes
(screenshot: process list)

Running lsp-restart for the first time: 6 processes
(screenshot: process list)

Running lsp-restart a second time: 9 processes
(screenshot: process list)

Reproduction Steps

  1. Open Helix.
  2. Open a Rust project and run the lsp-restart command once.
  3. Run lsp-restart a second time.

Helix log

not relevant

Platform

Windows

Terminal Emulator

wezterm 20220908-191626-25cd05a8

Helix Version

helix 22.08.1 (e8f0886)

@axdank axdank added the C-bug Category: This is a bug label Sep 25, 2022
@axdank changed the title from "Lsp restart command" to "Lsp restart command spawn multiple procces for every call" Sep 25, 2022
@dead10ck (Member)

Interesting; this must be a Windows-specific bug. It did not happen during testing on Linux.

@dead10ck dead10ck added the O-windows Operating system: Windows label Sep 25, 2022
kirawi (Member) commented Sep 25, 2022

It's not clear to me where the old process is dropped in #3435.

dead10ck (Member) commented Sep 25, 2022

    It's not clear to me where the old process is dropped in #3435.

This was discussed in the PR: #3435 (comment)

It gets killed when the client gets dropped, which should happen here.

@dead10ck (Member)

cc @mangas

@kirawi kirawi added A-language-server Area: Language server client A-helix-term Area: Helix term improvements labels Sep 26, 2022
axdank (Contributor, author) commented Sep 26, 2022

For the moment I take the old client returned by entry.insert and call its shutdown method; that worked for me, since it closes the old process. I suspect that on Windows kill_on_drop(true) fails or has no effect.

    let NewClientResult(client, incoming) = start_client(id, language_config, config)?;
    self.incoming.push(UnboundedReceiverStream::new(incoming));

    let (_, old_client) = entry.insert((id, client.clone()));

    tokio::spawn(async move {
        if let Err(e) = old_client.shutdown_and_exit().await {
            log::error!("failed to shut down language server: {}", e);
        }
    });

    Ok(Some(client))

kirawi (Member) commented Sep 26, 2022

According to the tokio documentation:

On Unix platforms processes must be “reaped” by their parent process after they have exited in order to release all OS resources. A child process which has exited, but has not yet been reaped by its parent is considered a “zombie” process. Such processes continue to count against limits imposed by the system, and having too many zombie processes present can prevent additional processes from being spawned.

Although issuing a kill signal to the child process is a synchronous operation, the resulting zombie process cannot be .awaited inside of the destructor to avoid blocking other tasks. The tokio runtime will, on a best-effort basis, attempt to reap and clean up such processes in the background, but no additional guarantees are made with regard to how quickly or how often this procedure will take place.

If stronger guarantees are required, it is recommended to avoid dropping a Child handle where possible, and instead utilize child.wait().await or child.kill().await where possible.

@dead10ck (Member)

Nice find. We should probably add a kill method to our LSP client, then, and incorporate it into the normal shutdown process. This is relevant for #3482. According to the LSP spec, the normal sequence is to send a shutdown request and then an exit notification, which it looks like we do:

    /// Forcefully shuts down the language server ignoring any errors.
    pub async fn force_shutdown(&self) -> Result<()> {
        if let Err(e) = self.shutdown().await {
            log::warn!("language server failed to terminate gracefully - {}", e);
        }
        self.exit().await
    }

So normally the process should exit on its own after these two messages. However, the spec is conspicuously silent about what should happen if the shutdown request fails. What we can probably do is check the child process after the exit notification and kill it if it's still alive for whatever reason.

axdank (Contributor, author) commented Sep 26, 2022

After digging through a lot of my code in search of the problem, I realized I had overlooked that the branch I build my helix executable from had another branch merged into it, specifically #2653, which I believe keeps the old_client alive. Testing against a clean master branch, I verified that the old process is definitely removed. 😕😄

@axdank axdank closed this as completed Sep 26, 2022