
Commit 43057ef

Author: dcherian

Merge branch 'master' into refactor-plot-utils

* master:
  stale requires a label (pydata#2701)
  Update indexing.rst (pydata#2700)
  add line break to message posted (pydata#2698)
  Config for closing stale issues (pydata#2684)
  to_dict without data (pydata#2659)
  Update asv.conf.json (pydata#2693)
  try no rasterio in py36 env (pydata#2691)
  Detailed report for testing.assert_equal and testing.assert_identical (pydata#1507)
  Hotfix for pydata#2662 (pydata#2678)
  Update README.rst (pydata#2682)
  Fix test failures with numpy=1.16 (pydata#2675)

2 parents 792291c + 79fa060; commit 43057ef

19 files changed: +430 -53 lines

.github/stale.yml
Lines changed: 58 additions & 0 deletions

@@ -0,0 +1,58 @@
+# Configuration for probot-stale - https://github.com/probot/stale
+
+# Number of days of inactivity before an Issue or Pull Request becomes stale
+daysUntilStale: 700 # start with a large number and reduce shortly
+
+# Number of days of inactivity before an Issue or Pull Request with the stale label is closed.
+# Set to false to disable. If disabled, issues still need to be closed manually, but will remain marked as stale.
+daysUntilClose: 30
+
+# Issues or Pull Requests with these labels will never be considered stale. Set to `[]` to disable
+exemptLabels:
+  - pinned
+  - security
+  - "[Status] Maybe Later"
+
+# Set to true to ignore issues in a project (defaults to false)
+exemptProjects: false
+
+# Set to true to ignore issues in a milestone (defaults to false)
+exemptMilestones: false
+
+# Set to true to ignore issues with an assignee (defaults to false)
+exemptAssignees: true
+
+# Label to use when marking as stale
+staleLabel: stale
+
+# Comment to post when marking as stale. Set to `false` to disable
+markComment: |
+  In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity
+  If this issue remains relevant, please comment here; otherwise it will be marked as closed automatically
+
+# Comment to post when removing the stale label.
+# unmarkComment: >
+#   Your comment here.
+
+# Comment to post when closing a stale Issue or Pull Request.
+# closeComment: >
+#   Your comment here.
+
+# Limit the number of actions per hour, from 1-30. Default is 30
+limitPerRun: 1 # start with a small number
+
+
+# Limit to only `issues` or `pulls`
+# only: issues
+
+# Optionally, specify configuration settings that are specific to just 'issues' or 'pulls':
+# pulls:
+#   daysUntilStale: 30
+#   markComment: >
+#     This pull request has been automatically marked as stale because it has not had
+#     recent activity. It will be closed if no further activity occurs. Thank you
+#     for your contributions.
+
+# issues:
+#   exemptLabels:
+#     - confirmed

.travis.yml
Lines changed: 1 addition & 1 deletion

@@ -60,7 +60,7 @@ script:
   - python --version
   - python -OO -c "import xarray"
   - if [[ "$CONDA_ENV" == "docs" ]]; then
-      conda install -c conda-forge sphinx sphinx_rtd_theme sphinx-gallery numpydoc;
+      conda install -c conda-forge --override-channels sphinx sphinx_rtd_theme sphinx-gallery numpydoc "gdal>2.2.4";
       sphinx-build -n -j auto -b html -d _build/doctrees doc _build/html;
     elif [[ "$CONDA_ENV" == "lint" ]]; then
       pycodestyle xarray ;

README.rst
Lines changed: 1 addition & 1 deletion

@@ -114,7 +114,7 @@ __ http://climate.com/
 License
 -------
 
-Copyright 2014-2018, xarray Developers
+Copyright 2014-2019, xarray Developers
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.

asv_bench/asv.conf.json
Lines changed: 1 addition & 1 deletion

@@ -40,7 +40,7 @@
 
     // The Pythons you'd like to test against. If not provided, defaults
     // to the current version of Python used to run `asv`.
-    "pythons": ["2.7", "3.6"],
+    "pythons": ["3.6"],
 
     // The matrix of dependencies to test. Each key is the name of a
    // package (in PyPI) and the values are version numbers. An empty

ci/requirements-py36.yml
Lines changed: 3 additions & 3 deletions

@@ -20,14 +20,14 @@ dependencies:
   - scipy
   - seaborn
   - toolz
-  - rasterio
+  # - rasterio # xref #2683
   - bottleneck
   - zarr
   - pseudonetcdf>=3.0.1
   - eccodes
   - cdms2
-  - pynio
-  - iris>=1.10
+  # - pynio # xref #2683
+  # - iris>=1.10 # xref #2683
   - pydap
   - lxml
   - pip:

doc/indexing.rst
Lines changed: 1 addition & 1 deletion

@@ -371,7 +371,7 @@ Vectorized indexing also works with ``isel``, ``loc``, and ``sel``:
     ind = xr.DataArray([['a', 'b'], ['b', 'a']], dims=['a', 'b'])
     da.loc[:, ind]  # same as da.sel(y=ind)
 
-These methods may and also be applied to ``Dataset`` objects
+These methods may also be applied to ``Dataset`` objects
 
 .. ipython:: python
doc/io.rst
Lines changed: 11 additions & 1 deletion

@@ -81,6 +81,16 @@ require external libraries and dicts can easily be pickled, or converted to
 json, or geojson. All the values are converted to lists, so dicts might
 be quite large.
 
+To export just the dataset schema, without the data itself, use the
+``data=False`` option:
+
+.. ipython:: python
+
+    ds.to_dict(data=False)
+
+This can be useful for generating indices of dataset contents to expose to
+search indices or other automated data discovery tools.
+
 .. _io.netcdf:
 
 netCDF

@@ -665,7 +675,7 @@ To read a consolidated store, pass the ``consolidated=True`` option to
 :py:func:`~xarray.open_zarr`::
 
     ds = xr.open_zarr('foo.zarr', consolidated=True)
-
+
 Xarray can't perform consolidation on pre-existing zarr datasets. This should
 be done directly from zarr, as described in the
 `zarr docs <https://zarr.readthedocs.io/en/latest/tutorial.html#consolidating-metadata>`_.
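A usage sketch of the ``data=False`` option documented above. The dataset here is made up, and the exact keys of a schema-only entry come from the per-variable helper this changeset adds (not shown in this excerpt):

    import numpy as np
    import xarray as xr

    ds = xr.Dataset({'t': ('x', np.arange(3.0))}, coords={'x': [10, 20, 30]})

    full = ds.to_dict()              # values included as (potentially large) lists
    schema = ds.to_dict(data=False)  # schema only: dims, attrs, dtype/shape

    assert 'data' in full['data_vars']['t']
    assert 'data' not in schema['data_vars']['t']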

doc/whats-new.rst
Lines changed: 7 additions & 0 deletions

@@ -28,6 +28,8 @@ Breaking changes
 Enhancements
 ~~~~~~~~~~~~
 
+- Add ``data=False`` option to ``to_dict()`` methods. (:issue:`2656`)
+  By `Ryan Abernathey <https://github.com/rabernat>`_
 - :py:meth:`~xarray.DataArray.coarsen` and
   :py:meth:`~xarray.Dataset.coarsen` are newly added.
   See :ref:`comput.coarsen` for details.

@@ -36,6 +38,11 @@ Enhancements
 - Upsampling an array via interpolation with resample is now dask-compatible,
   as long as the array is not chunked along the resampling dimension.
   By `Spencer Clark <https://github.com/spencerkclark>`_.
+- :py:func:`xarray.testing.assert_equal` and
+  :py:func:`xarray.testing.assert_identical` now provide a more detailed
+  report showing what exactly differs between the two objects (dimensions /
+  coordinates / variables / attributes) (:issue:`1507`).
+  By `Benoit Bovy <https://github.com/benbovy>`_.
 
 Bug fixes
 ~~~~~~~~~
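A quick illustration of the ``assert_identical`` entry above. The objects are made up, and the exact report text is whatever the new implementation emits:

    import xarray as xr
    from xarray.testing import assert_identical

    a = xr.DataArray([1, 2], dims='x', attrs={'units': 'm'})
    b = xr.DataArray([1, 2], dims='x', attrs={'units': 'km'})

    # previously this raised a bare AssertionError; with this change the
    # failure message reports which attributes differ between the two objects
    assert_identical(a, b)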

xarray/core/combine.py
Lines changed: 7 additions & 2 deletions

@@ -493,16 +493,21 @@ def _auto_combine_all_along_first_dim(combined_ids, dim, data_vars,
     return new_combined_ids
 
 
+def vars_as_keys(ds):
+    return tuple(sorted(ds))
+
+
 def _auto_combine_1d(datasets, concat_dim=_CONCAT_DIM_DEFAULT,
                      compat='no_conflicts',
                      data_vars='all', coords='different'):
     # This is just the old auto_combine function (which only worked along 1D)
     if concat_dim is not None:
         dim = None if concat_dim is _CONCAT_DIM_DEFAULT else concat_dim
-        grouped = itertools.groupby(datasets, key=lambda ds: tuple(sorted(ds)))
+        sorted_datasets = sorted(datasets, key=vars_as_keys)
+        grouped_by_vars = itertools.groupby(sorted_datasets, key=vars_as_keys)
         concatenated = [_auto_concat(list(ds_group), dim=dim,
                                      data_vars=data_vars, coords=coords)
-                        for id, ds_group in grouped]
+                        for id, ds_group in grouped_by_vars]
     else:
         concatenated = datasets
     merged = merge(concatenated, compat=compat)
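The point of this hunk: ``itertools.groupby`` only merges *consecutive* equal keys, so feeding it unsorted datasets silently fragments the groups; sorting by the same key first fixes that. A minimal illustration in plain Python, with stand-in strings instead of Datasets:

    import itertools

    # stand-ins for datasets keyed by their sorted variable names
    datasets = ['ab', 'cd', 'ab']

    # unsorted input fragments into three groups instead of two
    assert [k for k, _ in itertools.groupby(datasets)] == ['ab', 'cd', 'ab']

    # sorting by the same key first, as the fix above does, groups correctly
    assert [k for k, _ in itertools.groupby(sorted(datasets))] == ['ab', 'cd']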

xarray/core/dataarray.py
Lines changed: 10 additions & 12 deletions

@@ -1760,7 +1760,7 @@ def to_netcdf(self, *args, **kwargs):
 
         return dataset.to_netcdf(*args, **kwargs)
 
-    def to_dict(self):
+    def to_dict(self, data=True):
         """
         Convert this xarray.DataArray into a dictionary following xarray
         naming conventions.

@@ -1769,22 +1769,20 @@ def to_dict(self):
         Useful for converting to json. To avoid datetime incompatibility
         use decode_times=False kwarg in xarray.open_dataset.
 
+        Parameters
+        ----------
+        data : bool, optional
+            Whether to include the actual data in the dictionary. When set to
+            False, returns just the schema.
+
         See also
         --------
         DataArray.from_dict
         """
-        d = {'coords': {}, 'attrs': decode_numpy_dict_values(self.attrs),
-             'dims': self.dims}
-
+        d = self.variable.to_dict(data=data)
+        d.update({'coords': {}, 'name': self.name})
         for k in self.coords:
-            data = ensure_us_time_resolution(self[k].values).tolist()
-            d['coords'].update({
-                k: {'data': data,
-                    'dims': self[k].dims,
-                    'attrs': decode_numpy_dict_values(self[k].attrs)}})
-
-        d.update({'data': ensure_us_time_resolution(self.values).tolist(),
-                  'name': self.name})
+            d['coords'][k] = self.coords[k].variable.to_dict(data=data)
         return d
 
     @classmethod
xarray/core/dataset.py
Lines changed: 9 additions & 12 deletions

@@ -3221,7 +3221,7 @@ def to_dask_dataframe(self, dim_order=None, set_index=False):
 
         return df
 
-    def to_dict(self):
+    def to_dict(self, data=True):
         """
         Convert this dataset to a dictionary following xarray naming
         conventions.

@@ -3230,25 +3230,22 @@ def to_dict(self):
         Useful for converting to json. To avoid datetime incompatibility
         use decode_times=False kwarg in xarray.open_dataset.
 
+        Parameters
+        ----------
+        data : bool, optional
+            Whether to include the actual data in the dictionary. When set to
+            False, returns just the schema.
+
         See also
         --------
         Dataset.from_dict
         """
         d = {'coords': {}, 'attrs': decode_numpy_dict_values(self.attrs),
              'dims': dict(self.dims), 'data_vars': {}}
-
         for k in self.coords:
-            data = ensure_us_time_resolution(self[k].values).tolist()
-            d['coords'].update({
-                k: {'data': data,
-                    'dims': self[k].dims,
-                    'attrs': decode_numpy_dict_values(self[k].attrs)}})
+            d['coords'].update({k: self[k].variable.to_dict(data=data)})
         for k in self.data_vars:
-            data = ensure_us_time_resolution(self[k].values).tolist()
-            d['data_vars'].update({
-                k: {'data': data,
-                    'dims': self[k].dims,
-                    'attrs': decode_numpy_dict_values(self[k].attrs)}})
+            d['data_vars'].update({k: self[k].variable.to_dict(data=data)})
         return d
 
     @classmethod
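A short round-trip check of the refactored method; the dataset is made up, and ``Dataset.from_dict`` predates this change:

    import xarray as xr

    ds = xr.Dataset({'v': ('x', [1, 2, 3])})

    # a full dict round-trips through the pre-existing from_dict constructor
    assert xr.Dataset.from_dict(ds.to_dict()).identical(ds)

    # schema-only dicts drop the values, so they cannot round-trip the data
    assert 'data' not in ds.to_dict(data=False)['data_vars']['v']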
