Description
Via @calclavia from #265 (comment), breaking this out as a separate issue:
I have a related use case that I believe would benefit from introducing a chunk-based decompressed cache. I use zarr for storing data that will be used for training neural networks. For this use case, you often want to sample random (or almost random) rows from the dataset. If the sampling is mostly localized within a chunk, it would be great if the LRU cache could cache an entire decoded chunk so we can take advantage of spatial locality.
For example, I would like to sample data points [1, 5, 8, 3, 2], and because these all reside in the same compressed chunk (cached by LRU), only reading the first sample should be slow; the rest should already be served from memory.
N.B. this differs from the LRUStoreCache class already implemented: that caches the encoded chunk data, whereas the proposal here is to add a layer that caches the decoded chunk data (and therefore avoids having to decode the same chunk multiple times).
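To make the proposal concrete, here is a minimal sketch of what such a layer could look like if built on top of the public `zarr.Array` API. The `DecodedChunkCache` class and its `get_row` helper are hypothetical names for illustration only (not part of zarr), and the sketch assumes sampling along the first axis; a real implementation would presumably hook into the chunk decode path inside `Array` itself rather than wrap it.

```python
import collections
import numpy as np
import zarr


class DecodedChunkCache:
    """Hypothetical sketch: serve row reads from an LRU cache of *decoded* chunks.

    Unlike LRUStoreCache (which caches the compressed bytes and still pays the
    decode cost on every access), this keeps the decoded numpy arrays for the
    most recently used chunks.
    """

    def __init__(self, array: zarr.Array, max_chunks: int = 8):
        self.array = array
        self.max_chunks = max_chunks
        self._cache = collections.OrderedDict()  # chunk index -> decoded ndarray
        self._chunk_rows = array.chunks[0]       # rows per chunk along axis 0

    def _get_chunk(self, chunk_idx: int) -> np.ndarray:
        if chunk_idx in self._cache:
            self._cache.move_to_end(chunk_idx)   # mark as most recently used
            return self._cache[chunk_idx]
        start = chunk_idx * self._chunk_rows
        stop = min(start + self._chunk_rows, self.array.shape[0])
        chunk = self.array[start:stop]           # reads and decodes the chunk once
        self._cache[chunk_idx] = chunk
        if len(self._cache) > self.max_chunks:
            self._cache.popitem(last=False)      # evict least recently used chunk
        return chunk

    def get_row(self, i: int) -> np.ndarray:
        chunk_idx, offset = divmod(i, self._chunk_rows)
        return self._get_chunk(chunk_idx)[offset]


# Usage: sample rows [1, 5, 8, 3, 2]; all fall in chunk 0, so only the first
# read hits the store and decodes, the rest come from the in-memory chunk.
z = zarr.zeros((1000, 128), chunks=(100, 128), dtype='f4')
cached = DecodedChunkCache(z, max_chunks=4)
samples = np.stack([cached.get_row(i) for i in [1, 5, 8, 3, 2]])
```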