
Updating remote access notebook #291

Open

betolink wants to merge 12 commits into main

Conversation

@betolink
Contributor

I'm fixing a few typos in the notebook and adding an explicit way of inspecting the I/O behavior of the different caching strategies implemented in fsspec. I'm also mentioning the impact of chunking on access performance. I think this is now a self-contained notebook; I guess we could cover the internals of Zarr next.

ReviewNB bot

Check out this pull request on ReviewNB to see visual diffs and provide feedback on Jupyter Notebooks.

github-actions bot commented Jul 17, 2024

🎊 PR Preview fe9141c has been successfully built and deployed to https://xarray-contrib-xarray-tutorial-preview-pr-291.surge.sh

🕐 Build time: 0.011s

🤖 By surge-preview

@betolink
Contributor Author

Hi @scottyhq! Do you have suggestions for the failing checks? The link that is reported as broken is not actually broken, and the other is an example, not a real link. For the spellcheck bot, fo is a reference to a file-like object.

@scottyhq
Contributor

Yeah, the link check unfortunately is finicky and I'm not sure how to exclude specific links. For the spellcheck, you can add fo to the ignore list:

ignore_words_list: hist,nd
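
For example (assuming the same codespell config key), the updated list would read:

ignore_words_list: hist,nd,fo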

Thanks for expanding the notebook, happy to do a full review!

@betolink betolink marked this pull request as ready for review July 18, 2024 19:15
@betolink
Contributor Author

Thanks @scottyhq, the PR is ready for review.

@scottyhq left a comment (Contributor)

Thanks @betolink! This is really such a useful resource. I gave it another full read and added comments and suggestions.

@scottyhq (review comment)

Going to leave some review comments on the whole notebook top to bottom, rather than just the new changes, since I didn't go over it carefully the first round!

  1. Consider using Jupyter Book admonitions for your notes. Because jupyterlab-myst is in the environment, these are rendered similarly on both the website and in JupyterLab:

> It is important to note that there are...

```{note}
there are...
```

@scottyhq (review comment)

For this first note you say "use of a file handler and a cache", but I didn't see anything about the cache.

@scottyhq (review comment)

Supported file formats by backend: It's not clear what BufferedIOBase and AbstractDataStore are and where they come from. Consider defining these in a bit more detail, or introduce them later on. What is a "buffer" vs a file?
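
(For context, a minimal sketch of where a buffer shows up in practice, using a placeholder URL: fsspec hands back a buffered file-like object implementing the io.BufferedIOBase interface, which engines such as h5netcdf can read directly.)

```python
import fsspec
import xarray as xr

url = "https://example.com/data/sst.mnmean.nc"  # placeholder URL

# fsspec returns a buffered file-like object (io.BufferedIOBase interface):
# read()/seek() work over the network, no local copy needed
with fsspec.open(url, mode="rb") as f:
    ds = xr.open_dataset(f, engine="h5netcdf")  # an engine that accepts buffers
    print(ds)
```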

@scottyhq (review comment)

"it’s really an anti pattern when we work with scientific data formats. Benchmarks show that any of the caching schemas will perform better than using the default read-ahead." -> "It's not ideal with common multidimensional binary data formats." Can you link to benchmarking results?

@scottyhq (review comment)

file = fsspec.open_local(f"simplecache::{uri}", filecache={'cache_storage': '/tmp/fsspec_cache'})

The keyword argument and uri should match here (filecache::{uri}). If I remember correctly, filecache exposes a bit more control, and the cache persists if you, say, close a notebook and come back to it, so I recommend that! I also like using same_names=True so that if you're working with multiple files you can do other things with them (like opening them in QGIS or other software) if you want.
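
(A sketch of the corrected call with both suggestions applied; uri is assumed to be defined earlier in the notebook:)

```python
import fsspec

# protocol prefix and keyword argument both say "filecache";
# same_names=True keeps the original filenames in the cache directory
file = fsspec.open_local(
    f"filecache::{uri}",
    filecache={"cache_storage": "/tmp/fsspec_cache", "same_names": True},
)
```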

@scottyhq (review comment)

This new ability to keep track of cache activity is super handy! I'm confused, though; I think it's important to note that 'total requested bytes' != 'total transferred bytes'? Also, what causes the cache hits and cache misses?


        <BytesCache:
            block size  :   5242880
            block count :   0
            file size   :   4024972
            cache hits  :   175
            cache misses:   2
            total requested bytes: 4024972>
         
        <BlockCache:
            block size  :   8388608
            block count :   1
            file size   :   4024972
            cache hits  :   45
            cache misses:   1
            total requested bytes: 8388608>
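
(For reference, a sketch of how to surface these statistics, assuming a recent fsspec and a placeholder URL: the counters live on the open file's cache object, whose repr produces the summaries above.)

```python
import fsspec

url = "https://example.com/data/sst.mnmean.nc"  # placeholder URL

with fsspec.open(url, mode="rb", cache_type="blockcache") as f:
    f.read(2 * 2**20)  # reads are served from cache (hits) or fetched (misses)
    print(f.cache)     # repr reports hits, misses, and total requested bytes
```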

@scottyhq (review comment)

"to cloud storage, but using the default caching." --> remove ", but using the default caching"? since you use blockcache in the code below

@scottyhq commented Jul 30, 2024 (Contributor)

Remote data access and chunking: How about adding a tiny bit more here? For example, I think a great starting point for people wanting to work with remote data is to consider: 1. the total file size (ds.nbytes), 2. whether the file is chunked (ds.sst.encoding), and 3. what you want to do with it (in particular, do you need to compute the mean of all pixels, or just read a single pixel?).

Maybe a specific example? If I understand correctly, let's say you want the value of a single pixel from s3://sst.mnmean.nc (ds.isel(lon=0, lat=0, time=0)). Using defaults, Xarray dispatches to h5netcdf/h5py, which tries to read the one 'chunk' (1, 89, 180) containing your pixel of interest. fsspec translates that request into an ~128 kB HTTP range request to S3, but with the default caching an additional 5 MB is read. The entire file in this case is just 4 MB, so for efficiency, rather than fiddling with cache settings, it might be best to just use filecache:: so that all your Xarray computations read from the local file rather than bits and pieces over the network.
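
(A sketch of those three checks, assuming sst.mnmean.nc has been downloaded locally as in the example:)

```python
import xarray as xr

ds = xr.open_dataset("sst.mnmean.nc", engine="h5netcdf")

print(f"total size: {ds.nbytes / 2**20:.1f} MiB")  # 1. total file size
print(ds.sst.encoding.get("chunksizes"))           # 2. on-disk chunking, e.g. (1, 89, 180)
pixel = ds.sst.isel(lon=0, lat=0, time=0).values   # 3. the single-pixel read
```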

@betolink
Contributor Author

betolink commented Aug 1, 2024

Thanks for the thorough review @scottyhq! I'll address the suggestions early next week. Also, feel free to edit directly in the notebook!
