Today within support bundles we use the `sled-diagnostics` crate to collect various information from a running sled. @wfchandler noticed when collecting a support bundle that some of the ptool commands, such as `pstack` and `pfiles`, were timing out against the sled-agent API endpoint's 60s limit. In this case `sled-diagnostics` identifies a number of PIDs of interest via contract(5) and forks a `pstack` command for each PID.
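Roughly, the collection path looks like the following (a minimal sketch rather than the actual `sled-diagnostics` code; `pstack_one`, `collect`, and the PID list are hypothetical stand-ins):

```rust
use futures::stream::{FuturesUnordered, StreamExt};
use tokio::process::Command;

// Hypothetical stand-in: run `pstack` against a single PID. The PIDs
// themselves come from walking contract(5) in sled-diagnostics.
async fn pstack_one(pid: u32) -> std::io::Result<std::process::Output> {
    // On Rust 1.86 each invocation fork(2)s the entire sled-agent
    // address space before exec'ing `pstack`.
    Command::new("pstack").arg(pid.to_string()).output().await
}

async fn collect(pids: Vec<u32>) {
    // One forked `pstack` per PID of interest, driven concurrently.
    let mut futs: FuturesUnordered<_> =
        pids.into_iter().map(pstack_one).collect();
    while let Some(res) = futs.next().await {
        if let Ok(output) = res {
            println!("{}", String::from_utf8_lossy(&output.stdout));
        }
    }
}
```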
What's interesting here is that we did not observe this behavior in development with a stand-alone program or on a racklette. On dogfood, however, we noticed that the longer sled-agent had been running, the slower collection became.
Upon investigation it would seem that Rust's use of fork(2) to spawn processes (as of Rust 1.86) was much slower when sled-agent's resident memory was larger. This makes sense: setting up the copy-on-write address space for the child requires making the initial swap reservation for the parent's writable mappings, which scales with the size of the address space. At the time of the investigation sled-agent's memory usage was around 1GB.
Here is a trace from dogfood of how long a fork call was taking from sled-agent (times are in microseconds):
BRM42220057 # dtrace -n 'syscall::*fork*:entry /execname == "sled-agent"/ {self->ts = timestamp} syscall::*fork*:return /self->ts/ {@ = quantize((timestamp - self->ts) / 1000); self->ts = 0}'
dtrace: description 'syscall::*fork*:entry ' matched 4 probes
^C
value ------------- Distribution ------------- count
16384 | 0
32768 |@@@@@@@ 64
65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 310
131072 | 0
If one compares that to a stand-alone program, it's apparent that the latter is capable of forking much faster:
BRM42220057 # dtrace -n 'syscall::*fork*:entry /execname == "pstack-standalone"/ {self->ts = timestamp} syscall::*fork*:return /self->ts/ {@ = quantize((timestamp - self->ts) / 1000); self->ts = 0}'
dtrace: description 'syscall::*fork*:entry ' matched 4 probes
^C
value ------------- Distribution ------------- count
256 | 0
512 |@@@@@@@ 57
1024 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 277
2048 |@ 9
4096 | 0
When sled-agent was around 1GB in resident memory, the total amount of time spent forking roughly 300 `pstack` processes was roughly 19s (≈300 forks at ~65ms apiece).
The forked processes are currently wrapped in Rust's `FuturesUnordered`, which runs things concurrently but not in parallel, so we are likely eating some of this fork time up front before the next process can be run by tokio. @hawkw suggested that we could swap to a `JoinSet` to gain some parallelism.
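A minimal sketch of that swap (reusing the hypothetical `pstack_one` helper from the sketch above):

```rust
use tokio::process::Command;
use tokio::task::JoinSet;

// Same hypothetical helper as in the earlier sketch.
async fn pstack_one(pid: u32) -> std::io::Result<std::process::Output> {
    Command::new("pstack").arg(pid.to_string()).output().await
}

// With a JoinSet each `pstack` invocation becomes its own tokio task, so
// on a multi-threaded runtime the fork cost is paid in parallel across
// worker threads instead of serially on one.
async fn collect_joinset(pids: Vec<u32>) {
    let mut set = JoinSet::new();
    for pid in pids {
        set.spawn(pstack_one(pid));
    }
    while let Some(joined) = set.join_next().await {
        if let Ok(Ok(output)) = joined {
            println!("{}", String::from_utf8_lossy(&output.stdout));
        }
    }
}
```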
More importantly, beta Rust (1.87) swaps to using `posix_spawn`, which sidesteps paying the copy-on-write setup cost for the parent's entire address space on every spawn. No change to our `std::process` usage is required to benefit from this.
To see if there was some improvement here, I built the following program to test with:
```rust
use slog::Drain;
use slog::o;
use std::ptr;
use std::time::Instant;

#[tokio::main(worker_threads = 10)]
async fn main() {
    let decorator = slog_term::TermDecorator::new().build();
    let drain = slog_term::FullFormat::new(decorator).build().fuse();
    let drain = slog_async::Async::new(drain).build().fuse();
    let log = slog::Logger::root(drain, o!("component" => "pstack-example"));

    // Make a 1GB heap, touching one byte per page so the memory is
    // actually resident rather than just reserved.
    let mut data = vec![0u8; 1 << 30];
    for i in (0..data.len()).step_by(4096) {
        unsafe {
            ptr::write_volatile(&mut data[i], 1);
        }
    }

    let start = Instant::now();
    let res = sled_diagnostics::pstack_oxide_processes(&log).await;
    res.iter().for_each(|r| {
        if let Ok(output) = r {
            println!("{} -- {}", output.exit_status, output.command);
        }
    });
    println!("took {:#?}", start.elapsed());
}
```
Here are the various results:
- 1GB RSS and stable Rust: `took 26.568351663s`
- 1GB RSS and beta Rust: `took 4.611094017s`
- 1GB RSS, stable Rust, and `JoinSet`: `took 23.44045257s`
- 1GB RSS, beta Rust, and `JoinSet`: `took 3.674288649s`
It's clear that Rust 1.87 will be a huge win overall for process spawning, and while `JoinSet` does not make a huge difference on its own, it's a nice-to-have.
The takeaways are as follows:

- Upgrade to Rust 1.87 when it releases.
- Switch to using a `JoinSet`.
- Batch the PIDs passed as arguments to tools like `pstack` rather than forking 1:1 (see the sketch below).
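On the last point, `pstack` accepts multiple PIDs in a single invocation, so the batching could look something like this (a sketch; the chunk size of 32 is an arbitrary choice, not a measured one):

```rust
use tokio::process::Command;

// Hypothetical sketch: one `pstack` invocation per chunk of PIDs rather
// than one fork per PID.
async fn pstack_batched(pids: &[u32]) -> Vec<std::io::Result<std::process::Output>> {
    let mut results = Vec::new();
    for chunk in pids.chunks(32) {
        let args: Vec<String> = chunk.iter().map(|p| p.to_string()).collect();
        results.push(Command::new("pstack").args(&args).output().await);
    }
    results
}
```

This would trade roughly 300 forks for around ten, at the cost of coarser per-PID error reporting since each batch shares a single exit status.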