@solardiz in #4871 (comment):

(...) apparently when running with a mask, the bit depths are filled using the mask multiplier. This probably means we have efficiency loss when the mask multiplier isn't a multiple of 32 (which it usually isn't). For example, for the 676 seen in our default benchmark mask, the actual number of hashes computed is probably 704, and if so 28 hash computations or almost 4% of total are wasted. I didn't verify this and don't recall past discussions of it - but it's the only plausible explanation I have of what I saw in @sayan1an's host code. Despite this wastage, it might be the most efficient way to implement mask in there (considering locality of reference).
I think we should try doing this differently, and see where it leads.
Perhaps as a first step though, search through the ML from back in 2015 or so when Sayantan wrote the code - maybe he already established that the current way of doing it is the best way? I find it hard to believe though.
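To illustrate the arithmetic in the quoted comment: if the kernel rounds the candidate count up to the next multiple of 32, the waste for the default benchmark mask works out as below. This is only a sketch of the rounding described above; whether the actual OpenCL mask code pads exactly this way is the open question of this issue.

```python
def padded_count(n, width=32):
    """Round n up to the next multiple of width (ceiling division)."""
    return -(-n // width) * width

n = 676                       # mask multiplier from the default benchmark mask
padded = padded_count(n)      # hashes presumably computed: 704
wasted = padded - n           # 28 wasted hash computations
print(padded, wasted, wasted / padded)  # 704 28 ~0.0398, i.e. almost 4%
```

The waste fraction shrinks as the multiplier grows, and is zero whenever the multiplier is already a multiple of 32 - which, as noted above, it usually isn't.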