Ever since #230 and #293, I have been interested in finding optimizations for the backing HashMap that separates files and directories in the LightweightGSet.

I have always let folks try their own hand at providing a faster implementation via the `nna.inode.collection.impl` configuration property. Well, at long last, I think I finally have one. The early results look promising: https://github.com/boilerbay/airconcurrentmap
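To make the pluggability concrete, here is a minimal sketch of how a collection implementation could be selected by class name from a configuration property. The property name matches the one described above; the loading mechanism, class `InodeCollectionLoader`, and the `java.util.HashMap` default are all illustrative assumptions, not NNA's actual code.

```java
import java.util.Properties;

// Hedged sketch: selects a pluggable map implementation by class name.
// The property name matches the one discussed above; everything else
// here (class name, default value) is an assumption for illustration.
public class InodeCollectionLoader {
  public static Object load(Properties conf) throws Exception {
    String impl = conf.getProperty(
        "nna.inode.collection.impl",
        "java.util.HashMap"); // hypothetical default, not confirmed by the source
    // Reflectively instantiate the configured class via its no-arg constructor.
    return Class.forName(impl).getDeclaredConstructor().newInstance();
  }

  public static void main(String[] args) throws Exception {
    Properties conf = new Properties();
    conf.setProperty("nna.inode.collection.impl",
        "java.util.concurrent.ConcurrentHashMap");
    Object map = load(conf);
    System.out.println(map.getClass().getName());
  }
}
```

Any drop-in replacement would need a no-arg constructor (or whatever contract NNA actually expects) for this style of reflective loading to work.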
From the ValidQueryChecker benchmark and the GSetFilteringBenchmark under `src/test/java`, I have seen an 18 → 13 second reduction for ValidQuery and a 36 → 30 second reduction for GSetFiltering. These speed-ups have been consistent as I tested with 1M, 2M, and 4M files.

I think the results are good enough that I will soon try a cut of this on a much larger system.
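For readers who want to reproduce the measurement shape, here is a minimal timing harness in the spirit of the GSetFiltering benchmark described above. The `Inode` record and `countFiles` filter are invented for illustration; the real benchmarks live under `src/test/java` in the repository.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: times a single file/directory filtering pass over a map,
// roughly what the GSetFiltering benchmark measures. The Inode model here
// is hypothetical, not NNA's actual INode type.
public class FilterTiming {
  record Inode(long id, boolean isFile) {}

  // Count entries that are files, iterating the map's values directly.
  public static long countFiles(Map<Long, Inode> set) {
    long files = 0;
    for (Inode n : set.values()) {
      if (n.isFile()) files++;
    }
    return files;
  }

  public static void main(String[] args) {
    Map<Long, Inode> set = new HashMap<>();
    for (long i = 0; i < 1_000_000L; i++) {
      set.put(i, new Inode(i, i % 2 == 0)); // half files, half directories
    }
    long start = System.nanoTime();
    long files = countFiles(set);
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println(files + " files filtered in " + elapsedMs + " ms");
  }
}
```

Swapping the `HashMap` on line one of `main` for a candidate implementation (e.g. whatever `nna.inode.collection.impl` points at) is the comparison the numbers above are about.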
Unfortunately this did not end up delivering the performance increase I was expecting (20-second SuggestionEngine runs became 2 minutes). I think it may be due to the way Java Streams and AirConcurrentMap interact. Unfortunately I can't make much more progress since Air is closed source, so debugging is a pain. I will leave this alone for now since it does work. If anyone wants to take a crack at it, go ahead.
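To illustrate the suspected interaction in general terms: a Stream pipeline goes through the map's spliterator, while a plain `forEach` does not, and those two paths can have very different costs depending on the map implementation. AirConcurrentMap is closed source, so `ConcurrentSkipListMap` stands in here; the method names and workload are purely illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.LongAdder;

// Hedged sketch: contrasts the two access patterns suspected above.
// ConcurrentSkipListMap is a stand-in for the closed-source map; the
// point is only that stream() and forEach() take different code paths
// (spliterator vs direct iteration) inside the map implementation.
public class StreamVsForEach {
  // Stream pipeline: cost depends on the map's spliterator implementation.
  public static long countEvenStream(Map<Long, Long> map) {
    return map.values().stream().filter(v -> v % 2 == 0).count();
  }

  // Direct iteration: a plain forEach over values, no spliterator involved.
  public static long countEvenForEach(Map<Long, Long> map) {
    LongAdder count = new LongAdder();
    map.values().forEach(v -> { if (v % 2 == 0) count.increment(); });
    return count.sum();
  }

  public static void main(String[] args) {
    Map<Long, Long> map = new ConcurrentSkipListMap<>();
    for (long i = 0; i < 100_000L; i++) map.put(i, i);

    long t0 = System.nanoTime();
    long viaStream = countEvenStream(map);
    long streamNs = System.nanoTime() - t0;

    t0 = System.nanoTime();
    long viaLoop = countEvenForEach(map);
    long loopNs = System.nanoTime() - t0;

    System.out.println("stream=" + viaStream + " (" + streamNs + " ns), "
        + "forEach=" + viaLoop + " (" + loopNs + " ns)");
  }
}
```

If the regression really does come from the stream path, rewriting hot SuggestionEngine loops in the `forEach` style (or profiling both shapes against AirConcurrentMap) would be one way to take a crack at it.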