Found too much procs #146
With our current memory management, we have to set an upper limit and keep memory usage as low as possible, as every tool should ;) What would be your ideal limit?
Ideally this would be configurable, with 32 set as a reasonable default.
A system with >32 processes to monitor would likely have enough memory to support the additional usage by progress.
That's the point: the value is the same for everybody, even for an embedded system with 8 MB of RAM.
Sure, but that also means progress is effectively sized only for those lower-end systems. Those with lots of processes running would also like to use this tool, and those who have close to 32 running have to roll the dice on whether the tool will work or not. This is not a good experience. As I said before, smaller systems with low memory will naturally have fewer processes running, and larger systems with >32 processes will very likely have enough resources for a simple tool like this to index them. I'd recommend not allocating the memory until you know how many processes there are, so that the tool only uses as much memory as is needed for the task (see the sketch below). This would eliminate the need for arbitrary limits.
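For illustration, here is a minimal sketch in C of the approach suggested above: count the numeric entries in /proc first, then allocate exactly that many records instead of relying on a compile-time limit. This is not the progress implementation; the struct and helper names are hypothetical.

```c
/* Hypothetical sketch, not the progress implementation: count the numeric
 * entries in /proc first, then allocate exactly that many records instead of
 * relying on a compile-time MAX_PIDS limit. */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

struct proc_info { pid_t pid; };   /* placeholder for the real per-process record */

static size_t count_procs(void)
{
    DIR *dir = opendir("/proc");
    struct dirent *ent;
    size_t n = 0;

    if (!dir)
        return 0;
    while ((ent = readdir(dir)) != NULL)
        if (isdigit((unsigned char)ent->d_name[0]))  /* numeric dirs are PIDs */
            n++;
    closedir(dir);
    return n;
}

int main(void)
{
    size_t n = count_procs();
    struct proc_info *procs = calloc(n, sizeof(*procs));

    if (!procs && n > 0) {
        perror("calloc");
        return 1;
    }
    printf("allocated room for %zu processes\n", n);
    free(procs);
    return 0;
}
```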
My original reply started with "with our current memory management […]" because the process table is currently a basic allocation on the stack, so having a dynamic allocation would require some rework. Currently, on x86_64, each entry has a fixed size, so the limit can simply be raised at a modest memory cost. This is a quick but effective fix. Any thoughts?
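To make the trade-off concrete, here is a rough sketch of a fixed-size table; the entry size and the constant are assumptions for illustration, not figures from the progress source. With a fixed table, the full footprint is reserved whether or not the slots are used.

```c
/* Rough illustration only; the entry size and limit are assumptions, not
 * figures from the progress source. With a fixed table the full footprint is
 * reserved whether or not the slots are used. */
#include <stdio.h>

#define MAX_PIDS 32                      /* the limit discussed in this issue */

struct result { char info[512]; };       /* hypothetical per-process record */

int main(void)
{
    struct result results[MAX_PIDS];     /* fixed allocation on the stack */

    results[0].info[0] = '\0';
    printf("table costs %zu KiB whether 1 or %d processes are found\n",
           sizeof(results) / 1024, MAX_PIDS);
    return 0;
}
```

Raising the constant, as was eventually done below, simply accepts a larger fixed footprint in exchange for more headroom.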
Anything higher would be an improvement. I tried figuring out the current process count on a few running systems here, but I'm not sure which query correctly matches up with this limit. If this is only virtual memory and is not actually used until it is filled, could you not just safely go to 256 or even 1024, and virtually allocate only a few MB that would not actually get used unless the system had many procs?

On more thought, I believe #163 could be an answer, but for the opposite reason: the default should be a high value (e.g. 1024), and those running into memory issues on embedded controllers can then use -t 32 as a workaround. The vast majority of Linux users are going to be on a system (even a Pi) that can easily handle a few MB of extra allocation.
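The "virtual memory is not actually used until it is filled" point can be demonstrated with a small standalone example (assumed, not progress code): an anonymous mapping big enough for 1024 entries reserves address space, but physical pages are only committed when entries are actually written.

```c
/* Assumed standalone example (not progress code): reserving a large table with
 * mmap costs virtual address space, but physical pages are only committed when
 * the entries are actually written. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

struct entry { char name[1024]; };       /* hypothetical per-process record */

int main(void)
{
    size_t count = 1024;
    size_t bytes = count * sizeof(struct entry);

    struct entry *table = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (table == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    printf("reserved %zu KiB of virtual memory\n", bytes / 1024);
    memset(table, 0, sizeof(struct entry) * 8);   /* only these pages become resident */

    munmap(table, bytes);
    return 0;
}
```

Note that this lazy-commit behaviour is a property of heap/mmap allocations; a very large array on the stack would instead run into the default stack size limit, which is part of why a fixed on-stack table has to stay small.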
I just ran into this, and agree with @malventano that this is a frustrating experience. I know @Xfennec finds having to specify a thread count 'harsh', but the current experience is even harsher (essentially "there are too many threads, so I'm not going to tell you anything"). Is there some way for the user to glean status? Besides #163 (which still isn't merged), the experience could involve listing all 'interesting processes' and telling the user to run progress -p with the relevant PIDs.
PR #163 still needs some work (if relevant), so I've pushed MAX_PIDS / MAX_RESULTS to 128. It seems like a very reasonable value, even for low-memory systems. Tag v0.17 has been added.
Why is progress limited to 32 processes?