xarray, chunking and rolling operation adds chunking along new dimension (previously worked) #3277
Comments
Yikes. Thanks @p-d-moore. It looks like the rolling mean changes the chunks.
A workaround is to do:
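One plausible form of that workaround, sketched here under the assumption that the fix is to rechunk the rolling dimension back into a single chunk (the array construction, dimension names, and sizes are illustrative, not the author's exact snippet):

```python
import numpy as np
import xarray as xr

# Illustrative setup (assumed names/sizes): chunk along "lines", roll along "sample".
data = xr.DataArray(
    np.random.rand(100, 7653), dims=("lines", "sample")
).chunk({"lines": 20})

# Rechunk the result so the rolling dimension ends up in a single chunk again.
rolled = data.rolling(sample=10).mean().chunk({"sample": -1})
print(rolled.chunks)
```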
Some additional notes: the bug also appears in xarray=0.12.2 (so I presume it was introduced between 0.12.1 and 0.12.2). Other rolling operations are similarly affected: replacing .mean() in the sample code with .count(), .sum(), .std(), .max(), etc. results in the same erroneous chunking behaviour. Another workaround is to downgrade xarray to 0.12.1.
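A quick way to check each reduction in turn is sketched below; the array construction is an assumption rather than the original test code:

```python
import numpy as np
import xarray as xr

data = xr.DataArray(
    np.random.rand(100, 7653), dims=("lines", "sample")
).chunk({"lines": 20})

# Compare the chunk layout produced by each rolling reduction.
for name in ("mean", "count", "sum", "std", "max"):
    result = getattr(data.rolling(sample=10), name)()
    print(name, result.chunks)
```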
Confirmed the bug is still present in 0.13.0.
Using xarray=0.13.0, the chunking behaviour has changed again (but still incorrect):
Note the chunksize is now (20, 7648) instead of (20, 7653). I believe it might be related to #2942: I think disabling bottleneck for dask arrays in the rolling operation caused the bug above to appear (so the bug may have been there for a while but did not surface because bottleneck was being used). Doing a quick trace in rolling.py, I think it is the line windows = self.construct(rolling_dim) in the reduce function that creates windows with incorrect chunking, possibly as a consequence of some transpose operations and a dimension mix-up. It seems strange that other applications aren't having problems with this, unless I am doing something different in my code? Note that I am very specifically chunking along a different dimension to the one being rolled over.
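The windowed intermediate can be inspected directly with the public construct API, which follows the same path as the reduce call described above. A sketch (the array setup and window name are assumptions):

```python
import numpy as np
import xarray as xr

data = xr.DataArray(
    np.random.rand(100, 7653), dims=("lines", "sample")
).chunk({"lines": 20})

# The reduce path first builds a windowed view of the array; inspecting that
# view shows where any unexpected chunking along the rolling dimension appears.
windows = data.rolling(sample=10).construct("window")
print(windows.chunks)
```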
Error still present in 0.14.0. I believe the bug occurs in dask_array_ops.py: rolling_window. My best guess at understanding the code: there is an attempt to "pad" rolling windows to ensure a rolling window doesn't miss data across chunk boundaries. I think the "padding" is supposed to be truncated later, but something is miscalculated and the final array ends up with the wrong chunking. In the case I presented, the chunking happens along a different dimension to the rolling, so the padding is not necessary. Perhaps something goes haywire because the code was written to guard against rolling along a chunked dimension (and missing data across chunk boundaries)? Additionally, as the padding is not necessary in this case, there is a performance penalty that could be avoided. A simple fix for my case is to skip the "padding" whenever the chunk size along the rolling dimension is equal to the array size along that dimension, e.g. in the function dask_array_ops.py: rolling_window
becomes
This fixes the code for my use case; perhaps someone could advise whether I have understood the issue correctly?
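As a rough sketch of the suggested guard condition only (not the actual xarray/dask source; the helper name and array layout are assumptions), the check could look something like this:

```python
import numpy as np
import dask.array as da

def rolling_axis_needs_padding(arr, axis):
    # Sketch of the suggested condition: if the whole rolling axis fits in a
    # single chunk, a window can never straddle a chunk boundary, so the
    # boundary padding (and its cost) could be skipped entirely.
    return len(arr.chunks[axis]) > 1

x = da.from_array(np.random.rand(100, 7653), chunks=(20, 7653))
print(rolling_axis_needs_padding(x, axis=0))  # True: axis 0 is split into chunks
print(rolling_axis_needs_padding(x, axis=1))  # False: axis 1 is a single chunk
```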
I was testing the latest version of xarray (0.12.3) from the conda-forge channel and it broke some code I had. Under the defaults installation without conda-forge (xarray=0.12.1), the following code works correctly and produces the desired output:
Test code
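A minimal sketch of the described setup (chunking along one dimension, rolling along a different one, then queuing a second rolling operation); all names and sizes are assumptions rather than the reporter's exact code:

```python
import numpy as np
import xarray as xr

# Chunk along "lines" only; the rolling operations act along "sample".
data = xr.DataArray(
    np.random.rand(100, 7653), dims=("lines", "sample")
).chunk({"lines": 20})

first = data.rolling(sample=10).mean()
print(first.chunks)  # desired: chunks only along "lines", none added along "sample"

# Queue a second rolling operation on the result; with the affected versions
# the extra chunking introduced by the first call reportedly breaks this step.
second = first.rolling(sample=10).mean()
print(second.chunks)
```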
Output (correct) with xarray=0.12.1 - note the chunksizes
Output (now failing) with xarray=0.12.3 (note the chunksizes)
Problem Description
Using dask + rolling + xarray=0.12.3 appears to add undesirable chunking along a new dimension, which was not the case previously with xarray=0.12.1. This additional chunking made the queuing of a further rolling operation fail with a ValueError. This (at the very least) makes queuing dask-based delayed operations difficult when multiple rolling operations are used.
Output of xr.show_versions() for the not-working version:
xarray: 0.12.3
pandas: 0.25.1
numpy: 1.16.4
scipy: 1.3.1
netCDF4: 1.4.2
pydap: None
h5netcdf: None
h5py: 2.9.0
Nio: None
zarr: None
cftime: 1.0.3.4
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.2.1
dask: 2.3.0
distributed: 2.3.2
matplotlib: 3.1.1
cartopy: None
seaborn: 0.9.0
numbagg: None
setuptools: 41.0.1
pip: 19.2.2
conda: 4.7.11
pytest: None
IPython: 7.8.0
sphinx: None
Apologies if this issue has already been reported; I was unable to find an equivalent case.