The reason I made this package is to handle one particularly challenging use case - the [C]Worthy mCDR Atlas - which I still haven't done. Once it's done I plan to write a blog post talking about it, and maybe add it as a usage example to this repository.
This dataset has some characteristics that make it really challenging to kerchunk/virtualize[^1]:
- It's ~50TB compressed on-disk,
- It has ~500,000 netCDF files(!), each with about 40 variables,
- The largest variables are 3-dimensional, and require concatenation along an additional 3 dimensions, so the resulting variables are 6-dimensional (see the sketch below this list),
- It requires merging in lower-dimensional variables too, not just concatenation,
- It has time encoding on some coordinates.
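To make that concatenation step concrete, here's a minimal sketch of stacking virtual datasets along three new dimensions. It assumes VirtualiZarr's `open_virtual_dataset` plus xarray's `combine_nested`; the 2×2×2 layout, file names, and dimension names are placeholders, not the real Atlas structure.

```python
import xarray as xr
from virtualizarr import open_virtual_dataset  # assumes the current VirtualiZarr API

# Placeholder 2x2x2 grid of files; the real dataset has ~500,000 of them.
paths = [
    [[f"run_{i}_{j}_{k}.nc" for k in range(2)] for j in range(2)]
    for i in range(2)
]

# Open each file as a "virtual" dataset containing chunk references, not data.
virtual = [
    [[open_virtual_dataset(p) for p in row] for row in plane]
    for plane in paths
]

# Concatenate along three *new* dimensions (names made up here), so each
# 3-dimensional variable in the files becomes 6-dimensional in the result.
# coords="minimal" / compat="override" avoid trying to compare chunk references.
combined = xr.combine_nested(
    virtual,
    concat_dim=["dim_a", "dim_b", "dim_c"],
    coords="minimal",
    compat="override",
)
```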
This dataset is therefore comparable to some of the largest datasets already available in Zarr (at least in terms of the number of chunks and variables, if not on-disk size), and is very similar to the pathological case described in #104:
24MB per array means that even a really big store with 100 variables, each with a million chunks, still only takes up 2.4GB in memory - i.e. your xarray "virtual" dataset would be ~2.4GB to represent the entire store.
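For what it's worth, the arithmetic behind that estimate is easy to reproduce; the ~24 bytes per chunk entry is the assumption doing all the work here.

```python
# Back-of-envelope manifest size, assuming roughly 24 bytes of reference
# metadata (path + offset + length) per chunk entry.
bytes_per_chunk = 24
chunks_per_variable = 1_000_000
n_variables = 100

per_variable = bytes_per_chunk * chunks_per_variable   # 24,000,000 bytes ~ 24 MB
total = per_variable * n_variables                     # 2,400,000,000 bytes ~ 2.4 GB
print(f"{per_variable / 1e6:.0f} MB per variable, {total / 1e9:.1f} GB for the whole store")
```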
If we can virtualize this we should be able to virtualize most things 💪
To get this done requires many features to be implemented (a sketch of how they might fit together follows this list):

- Handling the time encoding via `cftime_variables` (#122),
- `combine_by_coords` to handle the 3-dimensional concatenation, which would require inferring concatenation order from coordinate data values (#18),
- Generating the references on HPC, changing the paths to the corresponding S3 URLs by rewriting the paths in the manifest (#130), and moving the altered reference files to the cloud manually.
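A rough sketch of how the HPC-side step might look, assuming kwargs and methods along the lines of those proposed in the linked issues (`cftime_variables` from #122, a path-rewriting helper from #130, and kerchunk output). Every path, bucket name, and argument here is a placeholder rather than the real workflow:

```python
from virtualizarr import open_virtual_dataset

# Generate references on HPC, decoding the time encoding as proposed in #122.
vds = open_virtual_dataset(
    "/scratch/atlas/run_00001.nc",   # placeholder HPC path
    cftime_variables=["time"],       # assumed kwarg name from #122
)

# Rewrite the local paths to the S3 URLs the files will eventually live at (#130);
# rename_paths is assumed to accept a callable mapping old -> new paths.
vds = vds.virtualize.rename_paths(
    lambda old: old.replace("/scratch/atlas/", "s3://example-bucket/atlas/")
)

# Write the altered references out so they can be copied to the cloud manually.
vds.virtualize.to_kerchunk("run_00001.json", format="json")
```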
Additionally, once zarr-python actually understands some kind of chunk manifest, I want to go back and create an actual zarr store for this dataset. That will also require:

- `.virtualize.to_zarr()` (sketched below),
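Once that lands, writing the store itself could plausibly be as simple as something like the following (again only a sketch, with a placeholder store path, since the eventual signature depends on zarr-python's chunk-manifest support):

```python
# Hypothetical: write the combined virtual dataset out as a manifest-backed Zarr store.
combined.virtualize.to_zarr("atlas_virtual.zarr")
```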
[^1]: In fact, pretty much the only ways in which this dataset could be worse would be if it had differences in encoding between netCDF files, variable-length chunks, or netCDF groups, but thankfully it has none of those 😅