Description
The probing algorithms used in this crate for `IndexMap` are pretty rudimentary, and from my pretty unscientific benchmarks (basically just copying the benchmarks from the `hashbrown` crate, comparing `FnvHashMap` with the `fnv`-based hashmap from this crate) it seems like `IndexMap` is around 15x slower than `std::collections::HashMap`:
```
test indexmap::tests::get_remove_insert_heapless ... bench:         665 ns/iter (+/- 263)
test indexmap::tests::get_remove_insert_std      ... bench:          43 ns/iter (+/- 19)
```
Here are the benchmarks used to get the results above. I compiled with LTO and `codegen-units = 1` in order to make sure that `std` wasn't getting benefits from inlining where `heapless` wasn't, most notably around `std::hash` vs `hash32`. Of course, these benchmarks are for large maps, and smaller maps won't show such pronounced differences. Also, the use of `hash32` will probably give a speedup on 32-bit targets that the `std` map doesn't have access to.
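For reference, the profile settings mentioned above would look something like this in `Cargo.toml` (a sketch; the exact profile section depends on how the benchmarks are invoked):

```toml
# Sketch: enable LTO and a single codegen unit so cross-crate
# inlining is equalized between the std and heapless benchmarks.
[profile.bench]
lto = true
codegen-units = 1
```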
```rust
#[bench]
fn get_remove_insert_heapless(b: &mut Bencher) {
    let mut m = crate::FnvIndexMap::<_, _, U1024>::new();
    for i in 1..1001 {
        m.insert(i, i).unwrap();
    }
    let mut k = 1;
    b.iter(|| {
        m.get(&(k + 400));
        m.get(&(k + 2000));
        m.remove(&k);
        m.insert(k + 1000, k + 1000).unwrap();
        k += 1;
    })
}

#[bench]
fn get_remove_insert_std(b: &mut Bencher) {
    let mut m = fnv::FnvHashMap::with_capacity_and_hasher(1024, Default::default());
    for i in 1..1001 {
        m.insert(i, i);
    }
    let mut k = 1;
    b.iter(|| {
        m.get(&(k + 400));
        m.get(&(k + 2000));
        m.remove(&k);
        m.insert(k + 1000, k + 1000);
        k += 1;
    })
}
```
I'm writing an embedded project that needs a hashmap, and although I do have access to an allocator, avoiding it will make my performance more predictable. So I might try to put in a PR improving the performance of `IndexMap` if I get some spare time to do so.
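Purely as a sketch of one direction such a PR could take (this is not the crate's current implementation), a triangular-number probe sequence is cheap to compute and is guaranteed to visit every slot of a power-of-two table exactly once, which avoids the primary clustering of naive linear probing. `probe_sequence` below is a hypothetical helper, not part of the `heapless` API:

```rust
/// Hypothetical sketch: yield the probe positions for `hash` in a
/// power-of-two table of size `capacity`. The stride grows by one on
/// each step (offsets 0, 1, 3, 6, 10, ... — triangular numbers), which
/// provably visits every slot exactly once when `capacity` is a power
/// of two.
fn probe_sequence(hash: usize, capacity: usize) -> impl Iterator<Item = usize> {
    debug_assert!(capacity.is_power_of_two());
    let mask = capacity - 1;
    (0..capacity).scan(hash & mask, move |pos, i| {
        let cur = *pos;
        // Advance by an ever-growing stride, wrapping within the table.
        *pos = (*pos + i + 1) & mask;
        Some(cur)
    })
}

fn main() {
    // Demonstrate full coverage: every slot of an 8-entry table is
    // visited exactly once, regardless of the starting hash.
    let mut slots: Vec<usize> = probe_sequence(7, 8).collect();
    slots.sort_unstable();
    assert_eq!(slots, (0..8).collect::<Vec<_>>());
    println!("ok");
}
```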