
Found too much procs #146

Open
mcandre opened this issue Nov 1, 2020 · 9 comments

Comments

@mcandre commented Nov 1, 2020

Why is progress limited to 32 processes?

@Xfennec (Owner) commented Nov 3, 2020

With our current memory management, we have to set an upper limit and keep memory usage as low as possible, as every tool should ;)

What would be your ideal limit?

@ianmaddox commented:

Ideally this would be configurable, with 32 as a reasonable default.

@malventano commented:

> With our current memory management, we have to set an upper limit and keep memory usage as low as possible, as every tool should ;)
>
> What would be your ideal limit?

A system with >32 processes to monitor would likely have enough memory to support the additional usage by progress.

@Xfennec (Owner) commented Jun 28, 2021

That's the point: the value is the same for everybody, even for an embedded system with 8 MB of RAM.

@malventano commented Sep 15, 2021

> That's the point: the value is the same for everybody, even for an embedded system with 8 MB of RAM.

Sure, but that also means progress can only be relied on for those lower-end systems. Those with lots of processes running would also like to use this tool, and those with close to 32 running must roll the dice on whether the tool will work or not. This is not a good experience.

As I said before, smaller systems with low memory would naturally have fewer processes running, and larger systems with >32 processes would very likely have enough resources for a simple tool like this to index them. I'd recommend not allocating the memory until you know how many processes there are, so that the tool only uses as much memory as the task needs. This would eliminate the need for arbitrary limits.

@Xfennec (Owner) commented Sep 16, 2021

My original reply started with "with our current memory management […]" because the table is currently a basic allocation on the stack, so a dynamic MAX_PIDS requires a bit of work. PR #163 by @vgmoose is a possible fix, but I find it harsh to bother the user with this; it's an internal matter.

Currently, on x86_64, a pidinfo_t costs roughly 4104 bytes, so ~128 KB with MAX_PIDS = 32. If we bump MAX_PIDS to, say, 128, we'll need ~0.5 MB of virtual memory (pages aren't actually allocated until they are used). While that's a lot more, it seems pretty reasonable even for an embedded system, and 128 "candidate" PIDs seems... comfortable.

This is a quick but effective fix.

Any thoughts?

@malventano commented Sep 16, 2021

Anything higher would be an improvement. I tried to figure out the current process count on a few running systems here, but I'm not sure which query matches up with this limit: `ps -e | wc -l` gives 888 on a system that works and 2758 on a system that errors.

If this is only virtual memory and isn't actually used until it's filled, couldn't you safely go to 256 or even 1024, virtually allocating only a few MB that wouldn't actually be used unless the system had many procs? I believe the answer should be whatever allows progress to run on even the largest possible systems; it would be very silly if, say, htop or iostat failed to run on a system with too many procs. Understood that the allocation is the limiter in the current state.

...On further thought, I believe #163 could be an answer, but for the opposite reason: the default should be a high value (e.g. 1024), and those running into memory issues on embedded controllers could then use `-t 32` as a workaround. The vast majority of Linux users are on a system (even a Pi) that can easily handle a few MB of extra allocation. cp, mv, and many of the other applications that progress monitors will themselves allocate (and actually use) a couple of MB for buffers, so progress just doesn't need to be that lean, does it?

@snarkywolverine commented:

I just ran into this, and agree with @malventano that it's a frustrating experience. I know @Xfennec finds having to specify a thread count "harsh", but the current experience is even harsher (essentially "there are too many processes, so I'm not going to tell you anything"). Is there some way for the user to glean status? Besides #163 (which still isn't merged), the experience could involve listing all "interesting" processes and telling the user to run `progress -p` with a specific PID.

@Xfennec (Owner) commented Oct 13, 2023

PR #163 still needs some work (if it's still relevant), so I've pushed MAX_PIDS / MAX_RESULTS to 128. It seems a very reasonable value, even for low-memory systems. Tag v0.17 has been added.
