Commit b59dd1e
Merge branch 'master' into apply-to-dataset
2 parents f2d2880 + 6a101a9

50 files changed: +1729 -1112 lines

.github/workflows/ci-additional.yaml

Lines changed: 0 additions & 40 deletions
@@ -146,46 +146,6 @@ jobs:
         run: |
           python -m pytest --doctest-modules xarray --ignore xarray/tests
 
-  typing:
-    name: Type checking (mypy)
-    runs-on: "ubuntu-latest"
-    needs: detect-ci-trigger
-    if: needs.detect-ci-trigger.outputs.triggered == 'false'
-    defaults:
-      run:
-        shell: bash -l {0}
-
-    steps:
-      - uses: actions/checkout@v2
-        with:
-          fetch-depth: 0 # Fetch all history for all branches and tags.
-      - uses: conda-incubator/setup-miniconda@v2
-        with:
-          channels: conda-forge
-          channel-priority: strict
-          mamba-version: "*"
-          activate-environment: xarray-tests
-          auto-update-conda: false
-          python-version: "3.8"
-
-      - name: Install conda dependencies
-        run: |
-          mamba env update -f ci/requirements/environment.yml
-      - name: Install mypy
-        run: |
-          mamba install --file ci/requirements/mypy_only
-      - name: Install xarray
-        run: |
-          python -m pip install --no-deps -e .
-      - name: Version info
-        run: |
-          conda info -a
-          conda list
-          python xarray/util/print_versions.py
-      - name: Run mypy
-        run: |
-          python -m mypy .
-
   min-version-policy:
     name: Minimum Version Policy
     runs-on: "ubuntu-latest"

.github/workflows/ci.yaml

Lines changed: 31 additions & 7 deletions
@@ -37,7 +37,8 @@ jobs:
       fail-fast: false
       matrix:
         os: ["ubuntu-latest", "macos-latest", "windows-latest"]
-        python-version: ["3.7", "3.8", "3.9"]
+        # Bookend python versions
+        python-version: ["3.7", "3.9"]
     steps:
       - uses: actions/checkout@v2
         with:
@@ -57,8 +58,7 @@ jobs:
         uses: actions/cache@v2
         with:
           path: ~/conda_pkgs_dir
-          key:
-            ${{ runner.os }}-conda-py${{ matrix.python-version }}-${{
+          key: ${{ runner.os }}-conda-py${{ matrix.python-version }}-${{
             hashFiles('ci/requirements/**.yml') }}
       - uses: conda-incubator/setup-miniconda@v2
         with:
@@ -87,10 +87,17 @@ jobs:
         run: |
           python -c "import xarray"
       - name: Run tests
-        run: |
-          python -m pytest -n 4 \
-            --cov=xarray \
-            --cov-report=xml
+        run: python -m pytest -n 4
+          --cov=xarray
+          --cov-report=xml
+          --junitxml=test-results/${{ runner.os }}-${{ matrix.python-version }}.xml
+
+      - name: Upload test results
+        if: always()
+        uses: actions/upload-artifact@v2
+        with:
+          name: Test results for ${{ runner.os }}-${{ matrix.python-version }}
+          path: test-results/${{ runner.os }}-${{ matrix.python-version }}.xml
 
       - name: Upload code coverage to Codecov
         uses: codecov/codecov-action@v1
@@ -100,3 +107,20 @@ jobs:
       env_vars: RUNNER_OS,PYTHON_VERSION
       name: codecov-umbrella
       fail_ci_if_error: false
+
+  publish-test-results:
+    needs: test
+    runs-on: ubuntu-latest
+    # the build-and-test job might be skipped, we don't need to run this job then
+    if: success() || failure()
+
+    steps:
+      - name: Download Artifacts
+        uses: actions/download-artifact@v2
+        with:
+          path: test-results
+
+      - name: Publish Unit Test Results
+        uses: EnricoMi/publish-unit-test-result-action@v1
+        with:
+          files: test-results/*.xml
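Both new steps above hinge on JUnit-style XML: `--junitxml` makes pytest write one report per OS/Python combination, and the publish job reads those files back. As a rough, standard-library-only illustration of the report shape involved (the suite and test names below are invented, not xarray's):

```python
# Build a minimal JUnit-style XML report, similar in shape to what
# `pytest --junitxml=...` writes and what the publish step consumes.
# All names here are made up for illustration.
import xml.etree.ElementTree as ET

suite = ET.Element("testsuite", name="pytest", tests="2", failures="1")
ET.SubElement(
    suite, "testcase", classname="xarray.tests.test_dataset", name="test_ok"
)
failing = ET.SubElement(
    suite, "testcase", classname="xarray.tests.test_dataset", name="test_bad"
)
ET.SubElement(failing, "failure", message="assertion failed")

report = ET.tostring(suite, encoding="unicode")
print(report)
```

Naming the file after `runner.os` and `matrix.python-version` keeps the per-job artifacts from overwriting each other when the publish job downloads them all into one directory.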

.pre-commit-config.yaml

Lines changed: 11 additions & 2 deletions
@@ -13,7 +13,7 @@ repos:
       - id: isort
   # https://github.com/python/black#version-control-integration
   - repo: https://github.com/psf/black
-    rev: 21.5b1
+    rev: 21.6b0
     hooks:
       - id: black
   - repo: https://github.com/keewis/blackdoc
@@ -31,11 +31,20 @@ repos:
   #       args: ["--write", "--compact"]
   - repo: https://github.com/pre-commit/mirrors-mypy
     # version must correspond to the one in .github/workflows/ci-additional.yaml
-    rev: v0.812
+    rev: v0.902
     hooks:
       - id: mypy
         # Copied from setup.cfg
         exclude: "properties|asv_bench"
+        additional_dependencies: [
+          # Type stubs
+          types-python-dateutil,
+          types-pkg_resources,
+          types-PyYAML,
+          types-pytz,
+          # Dependencies that are typed
+          numpy,
+        ]
   # run this occasionally, ref discussion https://github.com/pydata/xarray/pull/3194
   # - repo: https://github.com/asottile/pyupgrade
   #   rev: v1.22.1
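The comment in the hunk above says the mirrors-mypy `rev` must stay in sync with the mypy version pinned elsewhere in the repo. A throwaway sketch of how such a consistency check might read the rev (a hypothetical helper, regex-based to stay stdlib-only; a real check would parse the YAML properly):

```python
# Hypothetical consistency check: pull the mirrors-mypy `rev` out of a
# .pre-commit-config.yaml so it can be compared with the mypy version
# pinned elsewhere in the repo.
import re

config = """\
repos:
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v0.902
    hooks:
      - id: mypy
"""

def mypy_rev(text):
    match = re.search(r"mirrors-mypy\s+rev:\s*(\S+)", text)
    return match.group(1) if match else None

print(mypy_rev(config))  # prints v0.902
```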

ci/requirements/mypy_only

Lines changed: 0 additions & 4 deletions
This file was deleted.

doc/api.rst

Lines changed: 2 additions & 0 deletions
@@ -37,6 +37,7 @@ Top-level functions
    map_blocks
    show_versions
    set_options
+   unify_chunks
 
 Dataset
 =======
@@ -900,6 +901,7 @@ Advanced API
    Variable
    IndexVariable
    as_variable
+   Context
    register_dataset_accessor
    register_dataarray_accessor
    Dataset.set_close

doc/getting-started-guide/installing.rst

Lines changed: 1 addition & 0 deletions
@@ -8,6 +8,7 @@ Required dependencies
 
 - Python (3.7 or later)
 - setuptools (40.4 or later)
+- typing-extensions
 - `numpy <http://www.numpy.org/>`__ (1.17 or later)
 - `pandas <http://pandas.pydata.org/>`__ (1.0 or later)

doc/internals/zarr-encoding-spec.rst

Lines changed: 11 additions & 0 deletions
@@ -38,17 +38,28 @@ After decoding the ``_ARRAY_DIMENSIONS`` attribute and assigning the variable
 dimensions, Xarray proceeds to [optionally] decode each variable using its
 standard CF decoding machinery used for NetCDF data (see :py:func:`decode_cf`).
 
+Finally, it's worth noting that Xarray writes (and attempts to read)
+"consolidated metadata" by default (the ``.zmetadata`` file), which is another
+non-standard Zarr extension, albeit one implemented upstream in Zarr-Python.
+You do not need to write consolidated metadata to make Zarr stores readable in
+Xarray, but because Xarray can open these stores much faster, users will see a
+warning about poor performance when reading non-consolidated stores unless they
+explicitly set ``consolidated=False``. See :ref:`io.zarr.consolidated_metadata`
+for more details.
+
 As a concrete example, here we write a tutorial dataset to Zarr and then
 re-open it directly with Zarr:
 
 .. ipython:: python
 
+    import os
     import xarray as xr
     import zarr
 
     ds = xr.tutorial.load_dataset("rasm")
     ds.to_zarr("rasm.zarr", mode="w")
 
     zgroup = zarr.open("rasm.zarr")
+    print(os.listdir("rasm.zarr"))
     print(zgroup.tree())
     dict(zgroup["Tair"].attrs)
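To make the "consolidated metadata" idea in the new prose concrete: per the Zarr v2 convention, `.zmetadata` is a single JSON document that gathers every per-array metadata file so a reader can list the whole store with one request instead of many. A toy, standard-library-only sketch of that shape (the store layout below is hand-written for illustration, not produced by Zarr):

```python
# Sketch of what consolidation does: Zarr normally stores one small JSON
# document per array (.zarray, .zattrs, ...); the .zmetadata file copies
# them all under a single "metadata" key. The toy layout here is invented.
import json

store = {
    ".zgroup": {"zarr_format": 2},
    "Tair/.zarray": {"shape": [36, 205, 275], "dtype": "<f8"},
    "Tair/.zattrs": {"_ARRAY_DIMENSIONS": ["time", "y", "x"]},
}

consolidated = {
    "zarr_consolidated_format": 1,  # real key from the Zarr convention
    "metadata": {key: doc for key, doc in store.items()},
}
zmetadata = json.dumps(consolidated, indent=2)
print(zmetadata)
```

A reader that fetches only this one file learns every array's shape, dtype, and attributes, which is why opening consolidated stores is so much faster over high-latency storage.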

doc/tutorials-and-videos.rst

Lines changed: 6 additions & 1 deletion
@@ -19,7 +19,13 @@ Videos
    :card: text-center
 
    ---
+   Xdev Python Tutorial Seminar Series 2021 seminar introducing Xarray (1 of 2) | Anderson Banihirwe
+   ^^^
+   .. raw:: html
+
+       <iframe width="100%" src="https://www.youtube.com/embed/Ss4ryKukhi4" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
 
+   ---
    Xarray's virtual tutorial | October 2020 | Anderson Banihirwe, Deepak Cherian, and Martin Durant
    ^^^
    .. raw:: html
@@ -49,7 +55,6 @@ Videos
 
        <iframe width="100%" src="https://www.youtube.com/embed/J9ypQOnt5l8" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
 
-
 Books, Chapters and Articles
 -----------------------------

doc/user-guide/indexing.rst

Lines changed: 28 additions & 3 deletions
@@ -234,9 +234,6 @@ arrays). However, you can do normal indexing with dimension names:
    ds[dict(space=[0], time=[0])]
    ds.loc[dict(time="2000-01-01")]
 
-Using indexing to *assign* values to a subset of dataset (e.g.,
-``ds[dict(space=0)] = 1``) is not yet supported.
-
 Dropping labels and dimensions
 ------------------------------
 
@@ -536,6 +533,34 @@ __ https://docs.scipy.org/doc/numpy/user/basics.indexing.html#assigning-values-t
    da.isel(x=[0, 1, 2])[1] = -1
    da
 
+You can also assign values to all variables of a :py:class:`Dataset` at once:
+
+.. ipython:: python
+
+    ds_org = xr.tutorial.open_dataset("eraint_uvz").isel(
+        latitude=slice(56, 59), longitude=slice(255, 258), level=0
+    )
+    # set all values to 0
+    ds = xr.zeros_like(ds_org)
+    ds
+
+    # by integer
+    ds[dict(latitude=2, longitude=2)] = 1
+    ds["u"]
+    ds["v"]
+
+    # by label
+    ds.loc[dict(latitude=47.25, longitude=[11.25, 12])] = 100
+    ds["u"]
+
+    # dataset as new values
+    new_dat = ds_org.loc[dict(latitude=48, longitude=[11.25, 12])]
+    new_dat
+    ds.loc[dict(latitude=47.25, longitude=[11.25, 12])] = new_dat
+    ds["u"]
+
+The dimensions can differ between the variables in the dataset, but all variables need to have at least the dimensions specified in the indexer dictionary.
+The new values must be either a scalar, a :py:class:`DataArray` or a :py:class:`Dataset` itself that contains all variables that also appear in the dataset to be modified.
 
 .. _more_advanced_indexing:
 
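A smaller, self-contained version of the Dataset assignment behaviour documented above, assuming a recent xarray (with this feature merged) and numpy are installed; the variables, shapes, and coordinates are invented rather than taken from the tutorial dataset:

```python
# Assigning to all variables of a Dataset at once, both positionally and
# by label. The Dataset below is a made-up toy, not the eraint_uvz data.
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {
        "u": (("x", "y"), np.zeros((3, 3))),
        "v": (("x",), np.zeros(3)),
    },
    coords={"x": [10.0, 20.0, 30.0], "y": [1, 2, 3]},
)

# positional: every variable with dimension "x" is updated at x index 0
ds[dict(x=0)] = 1

# label-based: same idea via .loc
ds.loc[dict(x=30.0)] = 5

print(ds["u"].values)
print(ds["v"].values)
```

Note that both `u` and `v` are updated by a single assignment, even though they have different dimensionality, because both carry the indexed `x` dimension.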