fix: Add option to report load progress messages #11299
Conversation
r? @jonas-schievink do you think this is a good idea? Also, maybe bike-shed the pref name.
Ideally we would debounce these to not send more than, perhaps, one update every 10ms. But if it helps people, adding this option in the meantime seems like an easy win too.
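For reference, a minimal sketch of such a debounce, kept per progress token (the ProgressDebouncer type and its methods are hypothetical illustrations, not rust-analyzer API):

use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Hypothetical helper: drops updates that arrive less than
/// `min_interval` after the previous update for the same token.
struct ProgressDebouncer {
    min_interval: Duration,
    last_sent: HashMap<String, Instant>,
}

impl ProgressDebouncer {
    fn new(min_interval: Duration) -> Self {
        Self { min_interval, last_sent: HashMap::new() }
    }

    /// Returns true if an update for `token` should be forwarded now.
    fn should_send(&mut self, token: &str) -> bool {
        let now = Instant::now();
        match self.last_sent.get(token) {
            Some(last) if now.duration_since(*last) < self.min_interval => false,
            _ => {
                self.last_sent.insert(token.to_string(), now);
                true
            }
        }
    }
}

fn main() {
    let mut debouncer = ProgressDebouncer::new(Duration::from_millis(10));
    for i in 0..5 {
        // Back-to-back updates: only the first one passes the filter.
        if debouncer.should_send("rustAnalyzer/Roots Scanned") {
            println!("forwarding update {i}");
        }
    }
}

Note that a pure rate limiter like this silently drops the trailing update; a full debouncer would also need a timer to flush the last suppressed update so clients are not left with stale state.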
@@ -83,6 +83,8 @@ config_data! {
     cargo_useRustcWrapperForBuildScripts: bool = "true",
     /// Do not activate the `default` feature.
     cargo_noDefaultFeatures: bool = "false",
+    /// Send progress notifications during project load.
+    cargo_reportLoadingProgress: bool = "true",
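Illustrative only: how such a boolean flag might gate reporting at the call site. The function and its report_loading_progress parameter below are hypothetical stand-ins, not the actual rust-analyzer config accessor:

/// Sketch of the opt-out: skip the notification entirely when the
/// user has disabled load-progress reports.
fn maybe_report_progress(report_loading_progress: bool, message: &str) {
    if !report_loading_progress {
        return; // user opted out; send nothing
    }
    println!("$/progress: {message}"); // stand-in for the LSP notification
}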
I don't think this should be in the `cargo` category; these messages are our VFS loading files from disk, not Cargo.
Thanks for working on this.
Yes that would be ideal. Maybe a separate thread that takes in all internal progress information and outputs only a digest at regular intervals. I'll paste it here (suitable for `git am`):

From 1b71b43acf01155310f10ec263f52fedc546b3e8 Mon Sep 17 00:00:00 2001
From: Johannes Altmanninger <aclopte@gmail.com>
Date: Tue, 1 Feb 2022 16:38:23 +0100
Subject: [PATCH] Throttle progress report notifications
As reported in #11015, on startup we send a volley of progress report
notifications, sometimes many within a single millisecond.
Progress notifications are meant to be displayed to the human programmer,
so they can know that some long-running process is happening.
I don't think there is a need for high-frequency updates.
At the same time they can impact performance of clients like vim-lsp
(#7590) and kak-lsp (for Kakoune). The latter spawns a dedicated OS
process for each LSP notification. To work around the high-frequency
updates they have added a [workaround] to discard excessive progress
report notifications.
Here are some timestamped log entries that illustrate today's state.
We get one progress report for each percentage point!
19:54:19.831 {"jsonrpc":"2.0","method":"$/progress","params":{"token":"rustAnalyzer/Roots Scanned","value":{"kind":"report","message":"1/104","percentage":0}}}
19:54:19.831 {"jsonrpc":"2.0","method":"$/progress","params":{"token":"rustAnalyzer/Roots Scanned","value":{"kind":"report","message":"2/104","percentage":1}}}
19:54:19.832 {"jsonrpc":"2.0","method":"$/progress","params":{"token":"rustAnalyzer/Roots Scanned","value":{"kind":"report","message":"3/104","percentage":2}}}
19:54:19.832 {"jsonrpc":"2.0","method":"$/progress","params":{"token":"rustAnalyzer/Roots Scanned","value":{"kind":"report","message":"4/104","percentage":3}}}
19:54:19.833 {"jsonrpc":"2.0","method":"$/progress","params":{"token":"rustAnalyzer/Roots Scanned","value":{"kind":"report","message":"5/104","percentage":4}}}
I think we send them all at once because progress reports are batched
up internally. In crates/rust-analyzer/src/main_loop.rs we have a
few loops that forward a bunch of batched progress reports. They look
like this:
loop {
    match task {
        [...]
        Task::FetchBuildData(progress) => {
            [...]
            self.report_progress("Loading", state, msg, None);
        }
    }
    // Coalesce multiple task events into one loop turn
    task = match self.task_pool.receiver.try_recv() {
        Ok(task) => task,
        Err(_) => break,
    };
}
Observe that we call try_recv() in a loop with no other
"slow" component, which means that we potentially send many
"rustAnalyzer/Loading" progress reports back-to-back.
LSP progress reports are always guarded by begin/end messages, so
the full life cycle of a task might look like this:
{"jsonrpc":"2.0","method":"$/progress","params":{"token":"rustAnalyzer/Loading","value":{"kind":"begin","title":"Loading"}}}
{"jsonrpc":"2.0","method":"$/progress","params":{"token":"rustAnalyzer/Loading","value":{"kind":"report","message":"something", "percentage":33}}}
{"jsonrpc":"2.0","method":"$/progress","params":{"token":"rustAnalyzer/Loading","value":{"kind":"report","message":"something else", "percentage":66}}}
{"jsonrpc":"2.0","method":"$/progress","params":{"token":"rustAnalyzer/Loading","value":{"kind":"end"}}}
This means that progress reports are ephemeral. If we drop the
second progress report (the one with percentage 66), then clients
will still work the same way except they might display stale progress
messages/percentages.
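In other words, a safe throttle must forward begin and end unconditionally and may only drop intermediate reports. A standalone sketch of that invariant (the enum mirrors the Progress type used in the patch below; the function is illustrative, not the patch itself):

use std::time::{Duration, Instant};

enum Progress { Begin, Report, End }

/// Begin/End always pass; Reports pass at most once per `interval`.
fn should_forward(state: &Progress, last_report: &mut Instant, interval: Duration) -> bool {
    match state {
        Progress::Begin | Progress::End => true,
        Progress::Report => {
            let now = Instant::now();
            if now.duration_since(*last_report) < interval {
                false
            } else {
                *last_report = now;
                true
            }
        }
    }
}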
Stop sending progress report notifications if they come within
300 ms of the previously sent one. This might cause us to provide stale
progress information if there is a long pause in internal progress
notifications, but I think this is unlikely, and hardly a big deal.
The value of 300 ms is debatable. I think it's enough because the main
point is to inform about long-running processes, but maybe someone
prefers more frequent updates.
To reproduce the change, try removing the Cargo caches ("rm -rf target")
and loading up a nontrivial project (I used kak-lsp).
Tasks like "rustAnalyzer/cargo check" will run for a minute or two;
getting 3 updates per second feels like enough.
[workaround]: https://github.com/kak-lsp/kak-lsp/pull/545/commits/a83aaf4c3ce06f42c592f27de9302f8273e8028c
Fixes #11015
---
crates/rust-analyzer/src/global_state.rs | 2 ++
crates/rust-analyzer/src/lsp_utils.rs | 16 +++++++++++++++-
2 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/crates/rust-analyzer/src/global_state.rs b/crates/rust-analyzer/src/global_state.rs
index bef943464..395436799 100644
--- a/crates/rust-analyzer/src/global_state.rs
+++ b/crates/rust-analyzer/src/global_state.rs
@@ -60,6 +60,7 @@ pub(crate) struct GlobalState {
pub(crate) shutdown_requested: bool,
pub(crate) proc_macro_changed: bool,
pub(crate) last_reported_status: Option<lsp_ext::ServerStatusParams>,
+ pub(crate) last_reported_progress: FxHashMap<String, Instant>,
pub(crate) source_root_config: SourceRootConfig,
pub(crate) proc_macro_client: Option<ProcMacroServer>,
@@ -150,6 +151,7 @@ impl GlobalState {
shutdown_requested: false,
proc_macro_changed: false,
last_reported_status: None,
+ last_reported_progress: Default::default(),
source_root_config: SourceRootConfig::default(),
proc_macro_client: None,
diff --git a/crates/rust-analyzer/src/lsp_utils.rs b/crates/rust-analyzer/src/lsp_utils.rs
index 194d03032..8b935edf0 100644
--- a/crates/rust-analyzer/src/lsp_utils.rs
+++ b/crates/rust-analyzer/src/lsp_utils.rs
@@ -1,5 +1,10 @@
//! Utilities for LSP-related boilerplate code.
-use std::{error::Error, ops::Range, sync::Arc};
+use std::{
+ error::Error,
+ ops::Range,
+ sync::Arc,
+ time::{Duration, Instant},
+};
use ide_db::base_db::Cancelled;
use lsp_server::Notification;
@@ -81,8 +86,10 @@ impl GlobalState {
(f * 100.0) as u32
});
let token = lsp_types::ProgressToken::String(format!("rustAnalyzer/{}", title));
+ let now = Instant::now();
let work_done_progress = match state {
Progress::Begin => {
+ self.last_reported_progress.insert(title.to_string(), now);
self.send_request::<lsp_types::request::WorkDoneProgressCreate>(
lsp_types::WorkDoneProgressCreateParams { token: token.clone() },
|_, _| (),
@@ -96,6 +103,12 @@ impl GlobalState {
})
}
Progress::Report => {
+ if (now - *self.last_reported_progress.get(title).unwrap())
+ < Duration::from_millis(300)
+ {
+ return;
+ }
+ self.last_reported_progress.insert(title.to_string(), now);
lsp_types::WorkDoneProgress::Report(lsp_types::WorkDoneProgressReport {
cancellable: None,
message,
@@ -103,6 +116,7 @@ impl GlobalState {
})
}
Progress::End => {
+ self.last_reported_progress.remove(title);
lsp_types::WorkDoneProgress::End(lsp_types::WorkDoneProgressEnd { message })
}
};
--
2.35.0.295.gee0e44bcb6
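One caveat about the patch as posted: the Progress::Report arm unwraps the map lookup, which would panic if a report ever arrived for a title that never sent Progress::Begin. A panic-free variant of that check might look like this (sketch only, under the same 300 ms threshold; plain HashMap stands in for the patch's FxHashMap):

use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Returns true if a Report for `title` is due, treating a missing
/// entry (Report without a preceding Begin) as "send immediately".
fn report_is_due(
    last_reported: &mut HashMap<String, Instant>,
    title: &str,
    now: Instant,
) -> bool {
    match last_reported.get(title) {
        Some(last) if now.duration_since(*last) < Duration::from_millis(300) => false,
        _ => {
            last_reported.insert(title.to_string(), now);
            true
        }
    }
}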
@krobelus sorry for the late answer, somehow I managed to miss the notification. I'm personally not a fan of delaying any notification for more than 8-16 ms or so (the equivalent of a screen refresh cycle, if you want), and I think 300 ms is too long a delay.

On the other hand, I think it would be nice to be able to completely disable these reports: some users might be bothered by fast screen updates, so adding the option might still be good. But anyway, we can do both. Please open a PR and see what the others think about it. I'm certainly not going to reject it.
☔ The latest upstream changes (presumably #12118) made this pull request unmergeable. Please resolve the merge conflicts.
I believe clients can already do that by not setting the workDoneProgress capability (which defaults to false; untested). Anyway, my changes are an ugly workaround. Let's first acknowledge the real problem.
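For context, that opt-out lives in the client's initialize request; per the LSP specification, a server should only use server-initiated $/progress when the client advertises window.workDoneProgress:

{
  "capabilities": {
    "window": {
      "workDoneProgress": false
    }
  }
}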
> this means that whenever we find that the "cargo check" process has sent any
I think this is the right direction. I tried to do that originally but it turned out more complicated than just setting a global limit.
Closing, since it's stale and there doesn't seem to be consensus on this.
Closes #11015