[FEATURE] Adaptive chunk sizes #551

@0xVavaldi

Description

This is a feature suggestion based on a discussion between s3inlc and Thor.

Fact: Hashcat benchmarking is bad

When running large hash lists, different hash types, or when there is an issue on the client side, the benchmark can return very different results. In this test case we had a 250k vBulletin list (mode 2611) from hashes.org that ran at subpar speeds:

9 MH/s on 4x 1080 Ti
12 MH/s on 1x 1080 Ti

The chunk size was about 3k out of the total wordlist attack keyspace of 1,464,244,267, and each chunk completed in roughly 50 seconds to 1 minute 20 seconds. This is far off from the 600-second target.

The goal of this feature is to adjust the chunk size based on how the previously completed chunks actually performed against the target time, resulting in greater speeds and fewer chunks without reduced functionality or performance.

The formula proposed by s3inlc for this is:

<new chunk size> = 600s / <time needed> * <old chunk size>

The goal of the formula is to adjust the chunk size UP while the time needed for the last chunk is below the ideal chunk duration (600 s). This allows for more utilization in case the benchmark turned out too low, and the same formula can be used to reduce utilization / chunk time if the benchmark turned out too high.
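For illustration, here is a minimal sketch of how the formula could be applied, assuming a 600 s target; the function and constant names are hypothetical and not existing identifiers in the project:

```python
# Hypothetical sketch of the proposed adaptive chunk sizing.
# CHUNK_TARGET_SECONDS and next_chunk_size() are illustrative names only.

CHUNK_TARGET_SECONDS = 600  # ideal chunk duration from this issue


def next_chunk_size(old_chunk_size: int, seconds_needed: float) -> int:
    """Scale the chunk size so the next chunk should take ~600 s.

    <new chunk size> = 600s / <time needed> * <old chunk size>
    """
    if seconds_needed <= 0:
        return old_chunk_size  # no usable timing yet; keep the current size
    scale = CHUNK_TARGET_SECONDS / seconds_needed
    return max(1, int(old_chunk_size * scale))


# Example with the numbers from this issue: a ~3k chunk finishing in ~60 s
# would be scaled up roughly tenfold for the next assignment.
print(next_chunk_size(3_000, 60))    # -> 30000
# The same formula also scales down when a chunk overshoots the target:
print(next_chunk_size(30_000, 900))  # -> 20000
```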
