--mem-file-size should possibly apply per-invocation or per-file, not per-process #4907

Open
@solardiz

Description

When running with a huge wordlist and a high --fork count, and with otherwise default settings, startup takes ages and a lot of RAM is consumed. This appears to be because our default for --mem-file-size on 64-bit is 2 GB and we apply this limit per-process. For example, with a 60 GB wordlist and 32 forks, the entire wordlist would currently be read into RAM on startup, right? Worse, on an HDD the 32 processes would compete for disk seeks back and forth, right?
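
For concreteness, a quick back-of-the-envelope check of that scenario (plain illustrative C, not code from the tree; the 2 GB default and the 60 GB / 32-fork figures are just the ones from the paragraph above):

```c
#include <stdio.h>
#include <stdint.h>

/*
 * Illustration of the current per-process accounting: with the 64-bit
 * default of 2 GiB per process and 32 forks, the combined budget is
 * 64 GiB, which exceeds the 60 GB wordlist, so (as suggested above)
 * nothing prevents the whole file from ending up buffered at startup.
 */
int main(void)
{
	const uint64_t mem_file_size = 2ULL << 30;            /* 2 GiB default per process */
	const unsigned int forks = 32;
	const uint64_t wordlist = 60ULL * 1000 * 1000 * 1000; /* 60 GB wordlist */
	const uint64_t aggregate = mem_file_size * forks;

	printf("aggregate budget: %llu bytes\n", (unsigned long long)aggregate);
	printf("whole wordlist fits in the budget: %s\n",
	       aggregate >= wordlist ? "yes" : "no");
	return 0;
}
```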

To remedy this, I think the limit should apply per-invocation or per-file, not per-process. For example, if we split a job across several machines with --node, it could be fine for each machine to have its own full limit on loading its portion of the wordlist into memory. However, we shouldn't want a full limit for each virtual node that --fork creates within a physical machine's range of node numbers. If that's unreasonably complicated to implement and/or document, which I think it may be, then perhaps we should simply apply the limit per-file.
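
If the per-invocation interpretation is chosen, the per-process share could simply be derived from the configured limit and the fork count. A minimal sketch of that idea follows; the names (mem_file_size, fork_count, per_process_mem_limit) are placeholders and don't claim to match the actual options structure or functions in the tree:

```c
#include <stdint.h>

/*
 * Sketch: treat --mem-file-size as a per-invocation budget and derive the
 * per-process share from it, so that N forked children together never
 * buffer more than the single configured amount.  Names are illustrative.
 */
static uint64_t per_process_mem_limit(uint64_t mem_file_size, unsigned int fork_count)
{
	if (fork_count < 2)
		return mem_file_size; /* no forking: the full budget applies */

	/* Split the per-invocation budget evenly across the forked children */
	return mem_file_size / fork_count;
}
```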
