Hash up to 8 bytes at once with FxHasher #1

Merged
merged 3 commits on May 28, 2018
Changes from all commits
1 change: 1 addition & 0 deletions .gitignore
@@ -1,2 +1,3 @@
 /target
 **/*.rs.bk
+/Cargo.lock
4 changes: 0 additions & 4 deletions Cargo.lock

This file was deleted.

3 changes: 2 additions & 1 deletion Cargo.toml
@@ -1,6 +1,6 @@
 [package]
 name = "rustc-hash"
-version = "1.0.0"
+version = "1.0.1"
 authors = ["The Rust Project Developers"]
 description = "speed, non-cryptographic hash used in rustc"
 license = "Apache-2.0/MIT"
@@ -9,3 +9,4 @@ keywords = ["hash", "fxhash", "rustc"]
 repository = "https://github.com/rust-lang-nursery/rustc-hash"

 [dependencies]
+byteorder = "1.1"
32 changes: 28 additions & 4 deletions src/lib.rs
@@ -18,10 +18,15 @@
 //! map.insert(22, 44);
 //! ```

+extern crate byteorder;
+
 use std::collections::{HashMap, HashSet};
 use std::default::Default;
 use std::hash::{Hasher, BuildHasherDefault};
 use std::ops::BitXor;
+use std::mem::size_of;
+
+use byteorder::{ByteOrder, NativeEndian};

 /// Type alias for a hashmap using the `fx` hash algorithm.
 pub type FxHashMap<K, V> = HashMap<K, V, BuildHasherDefault<FxHasher>>;
@@ -65,11 +70,30 @@ impl FxHasher {

 impl Hasher for FxHasher {
     #[inline]
-    fn write(&mut self, bytes: &[u8]) {
-        for byte in bytes {
-            let i = *byte;
-            self.add_to_hash(i as usize);
+    fn write(&mut self, mut bytes: &[u8]) {
+        #[cfg(target_pointer_width = "32")]
+        let read_usize = |bytes| NativeEndian::read_u32(bytes);
+        #[cfg(target_pointer_width = "64")]
+        let read_usize = |bytes| NativeEndian::read_u64(bytes);
+
+        let mut hash = FxHasher { hash: self.hash };
+        assert!(size_of::<usize>() <= 8);
+        while bytes.len() >= size_of::<usize>() {
+            hash.add_to_hash(read_usize(bytes) as usize);
+            bytes = &bytes[size_of::<usize>()..];
+        }
+        if (size_of::<usize>() > 4) && (bytes.len() >= 4) {
+            hash.add_to_hash(NativeEndian::read_u32(bytes) as usize);
+            bytes = &bytes[4..];
+        }
+        if (size_of::<usize>() > 2) && bytes.len() >= 2 {
+            hash.add_to_hash(NativeEndian::read_u16(bytes) as usize);
+            bytes = &bytes[2..];
+        }
+        if (size_of::<usize>() > 1) && bytes.len() >= 1 {
+            hash.add_to_hash(bytes[0] as usize);

Inline review thread on this line:

Member commented:
Doesn't all of this mean that splitting a write call changes the hash? IIRC it shouldn't.
Could a union { bytes: [u8; size_of::<usize>()], usize: usize } buffer be used instead?

Contributor Author commented:
That doesn't seem to be either a documented or a useful property.
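
To make the property under discussion concrete, here is a small sketch. It is only an illustration: it uses nothing beyond the public `Hasher` API and `FxHasher::default()`, and the printed values depend on pointer width, so nothing is asserted.

```rust
use std::hash::Hasher;
use rustc_hash::FxHasher;

fn main() {
    // All eight bytes in one `write` call: with this PR, a 64-bit target
    // mixes them in as a single usize-sized block.
    let mut one_call = FxHasher::default();
    one_call.write(&[1, 2, 3, 4, 5, 6, 7, 8]);

    // The same bytes split across two calls: each call mixes a 4-byte block,
    // so the sequence of mixing steps differs from the single-call case.
    let mut two_calls = FxHasher::default();
    two_calls.write(&[1, 2, 3, 4]);
    two_calls.write(&[5, 6, 7, 8]);

    // With the chunked reads above, these two results will generally differ.
    println!("{:x} vs {:x}", one_call.finish(), two_calls.finish());
}
```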

Member commented:
cc @michaelwoerister @gankro, I remember discussions about this property.

Note that buffering the values is potentially useful if, as with e.g. nested enums, you're mostly writing byte-sized values (i.e. discriminants) one at a time.
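
For illustration only (nothing like this is in the PR), one shape such a buffering scheme could take is a small accumulator that collects byte-sized writes and mixes one full 64-bit block at a time. The type below is hypothetical; only the rotate/xor/multiply step mirrors `FxHasher::add_to_hash`.

```rust
/// Hypothetical sketch of a buffering wrapper, assuming a 64-bit block size.
struct BufferedByteHasher {
    hash: u64,
    buf: [u8; 8],
    len: usize,
}

impl BufferedByteHasher {
    // The 64-bit Fx multiplier; the mixing step matches add_to_hash.
    const K: u64 = 0x517c_c1b7_2722_0a95;

    fn new() -> Self {
        BufferedByteHasher { hash: 0, buf: [0; 8], len: 0 }
    }

    fn mix(&mut self, block: u64) {
        self.hash = (self.hash.rotate_left(5) ^ block).wrapping_mul(Self::K);
    }

    // Byte-sized writes only fill the buffer; a multiply happens once per
    // full block instead of once per byte.
    fn write_u8(&mut self, byte: u8) {
        self.buf[self.len] = byte;
        self.len += 1;
        if self.len == 8 {
            self.mix(u64::from_ne_bytes(self.buf));
            self.len = 0;
        }
    }

    fn finish(&mut self) -> u64 {
        if self.len > 0 {
            // Flush the partial block, zero-padded, before reading the hash.
            let mut last = [0u8; 8];
            last[..self.len].copy_from_slice(&self.buf[..self.len]);
            self.mix(u64::from_ne_bytes(last));
            self.len = 0;
        }
        self.hash
    }
}
```

A scheme like this trades an extra branch and copy on every write for fewer multiplications when many one-byte writes arrive back to back; whether that wins in practice is exactly what the benchmarking discussed below would have to show.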

Member commented:
Since the FxHasher is only used with hash tables, I don't think that the hash must be stable. As long as it is deterministic for our use cases, it's fine, I think. It already treats (u8, u8) differently from u16, where a similar argument could be made.

My view is: FxHasher should be the absolute fastest for small keys and it should do whatever it can get away with in practice.
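
As a concrete (and hedged) illustration of that (u8, u8) versus u16 point, using only the public `Hash`/`Hasher` traits; the `fx_hash` helper is made up for the example, not part of the crate:

```rust
use std::hash::{Hash, Hasher};
use rustc_hash::FxHasher;

// Hypothetical helper: hash a single value with a fresh FxHasher.
fn fx_hash<T: Hash>(value: &T) -> u64 {
    let mut hasher = FxHasher::default();
    value.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // A (u8, u8) tuple goes through write_u8 twice, i.e. two mixing steps;
    // a u16 with the same bit pattern goes through write_u16 once.
    let pair: (u8, u8) = (0x12, 0x34);
    let single: u16 = u16::from_ne_bytes([0x12, 0x34]);
    println!("{:x} vs {:x}", fx_hash(&pair), fx_hash(&single));
}
```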

eddyb (Member) commented on May 27, 2018:
I still think we should try and bench this against some buffering scheme, especially if it can all be inlined down to a few applications of the usize "block" function.

EDIT: never mind, all the leaves I was thinking of go through the write_uN methods below, so those would also need to be buffered somehow to observe a benefit.

Member commented:
Yeah, we don't need to do this in this PR. The benchmarks showed that it's an improvement.

As a side note, using perf.rlo is a lot more complicated when testing out-of-tree crates...

         }
+        self.hash = hash.hash;
     }

     #[inline]