Possible race condition when appending to an existing zarr #8876
Comments
Thanks for opening your first issue here at xarray! Be sure to follow the issue template!
OK, so if I understand correctly, with the merged pull request the code snippet will raise an error instead of silently writing NaNs, is that right @dcherian?
Yes, it should raise an error. Can you verify please?
The latest version of xarray does indeed raise an error now in our internal data pipelines, unless we add `safe_chunks=False`. We will internally try to find a "clean" implementation so we can still append to our existing zarr dataset without running into this.
What happened?
When appending to an existing zarr along a dimension (`to_zarr(..., mode='a', append_dim="x", ...)`), if the dask chunking of the dataset to append does not align with the chunking of the existing zarr, the resulting consolidated zarr store may have NaNs instead of the actual values it is supposed to have.

What did you expect to happen?
We would expect the zarr append to behave the same as if we had concatenated the datasets in memory (using `concat`) and written the whole result to a new zarr store in one go.

Minimal Complete Verifiable Example
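The original snippet is not included in this extract. The following is a minimal sketch reconstructed from the description below; the store path example.zarr and the variable name var are purely illustrative. On the xarray version reported here the append may silently write NaNs; on newer versions it should raise an error unless `safe_chunks=False` is passed, as discussed in the comments above.

```python
import numpy as np
import xarray as xr

store = "example.zarr"  # illustrative path

# First write: 2 values along x. Despite .chunk({"x": 3}), the resulting
# zarr store is written with chunks of size 2 (see the note below).
ds1 = xr.Dataset(
    {"var": ("x", np.array([1.0, 2.0]))},
    coords={"x": [1, 2]},
).chunk({"x": 3})
ds1.to_zarr(store, mode="w")

# Append: 4 values along x. .chunk({"x": 3}) yields dask chunks
# [3, 4, 5] and [6], which do not line up with the zarr chunks of size 2.
ds2 = xr.Dataset(
    {"var": ("x", np.array([3.0, 4.0, 5.0, 6.0]))},
    coords={"x": [3, 4, 5, 6]},
).chunk({"x": 3})
ds2.to_zarr(store, mode="a", append_dim="x")

# Compare with an in-memory concat; NaNs in `result` reveal the race.
# The report loops over the whole procedure, since the corruption only
# shows up intermittently when the two dask chunks are written in parallel.
result = xr.open_zarr(store).compute()
expected = xr.concat([ds1, ds2], dim="x").compute()
print(result["var"].values)
print(expected["var"].values)
```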
MVCE confirmation
Relevant log output
Anything else we need to know?
The example code snippet provided here reproduces the issue. Since the issue occurs randomly, the example loops a few times and stops as soon as the issue occurs.
In the example, when `ds1` is first written, since it only contains 2 values along the `x` dimension, the resulting .zarr store has the chunking `{'x': 2}`, even though we called `.chunk({"x": 3})`.

Side note: this behaviour in itself is not problematic in this case, but the fact that the chunking is silently changed made this issue harder to spot.
However, when we try to append the second dataset `ds2`, which contains 4 values, the `.chunk({"x": 3})` at the beginning splits the dask array into 2 dask chunks, but in a way that does not align with the zarr chunks.

Zarr chunks:

x: [1; 2]
x: [3; 4]
x: [5; 6]

Dask chunks for `ds2`:

x: [3; 4; 5] (chunk A)
x: [6] (chunk B)
Both dask chunks A and B are supposed to write to zarr chunk 3, and depending on which one writes first, we can end up with NaN at x = 5 or x = 6 instead of the actual values.

The issue obviously happens only when the dask tasks run in parallel.
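For illustration, using the names from the hypothetical sketch above, the overlap can be made visible by comparing the on-disk chunking of the store (before the append) with the dask chunking of the dataset being appended:

```python
import zarr

# On-disk chunking of the existing store before the append
# (illustrative path/variable names from the sketch above).
existing = zarr.open("example.zarr")["var"]
print(existing.shape, existing.chunks)  # (2,) (2,)

# Dask chunking of the dataset about to be appended.
print(ds2["var"].chunks)  # ((3, 1),)

# The append starts at index 2, so the two dask chunks cover index
# ranges [2:5] and [5:6]; both intersect the zarr chunk spanning [4:6],
# which is the collision described above.
```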
Using `safe_chunks=True` when calling `to_zarr` does not seem to help.

We couldn't figure out from the documentation how to detect this kind of issue, or how to prevent it from happening (maybe using a synchronizer?).
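One possible mitigation, sketched here under the assumption that the existing store's chunking can simply be read back (names are the illustrative ones from above; this is not an officially documented recipe), is to rechunk the dataset being appended so that its dask chunk boundaries fall on zarr chunk boundaries, taking the append offset into account. As for the synchronizer idea: `to_zarr` does accept a `synchronizer` argument (e.g. `zarr.ThreadSynchronizer`), but a synchronizer only serializes writers that actually share it, so aligning the chunks is likely the more robust route.

```python
import zarr

store = "example.zarr"  # illustrative path from the sketch above

# Read back the on-disk chunking and the current length of the store.
existing = zarr.open(store)["var"]
target = existing.chunks[0]   # zarr chunk size along x, e.g. 2
offset = existing.shape[0]    # index at which the append starts

# Build explicit dask chunk sizes for ds2 so that every chunk boundary
# of the appended data lands on a zarr chunk boundary: a (possibly
# partial) first chunk up to the next boundary, then full-size chunks.
n = ds2.sizes["x"]
first = min(n, (-offset) % target or target)
sizes = [first]
remaining = n - first
while remaining > 0:
    step = min(target, remaining)
    sizes.append(step)
    remaining -= step

# Rechunk with dask-style explicit chunk sizes and append; now at most
# one dask chunk writes into any given zarr chunk.
ds2_aligned = ds2.chunk({"x": tuple(sizes)})
ds2_aligned.to_zarr(store, mode="a", append_dim="x")
```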
Environment
INSTALLED VERSIONS
commit: None
python: 3.11.0rc1 (main, Aug 12 2022, 10:02:14) [GCC 11.2.0]
python-bits: 64
OS: Linux
OS-release: 5.15.133.1-microsoft-standard-WSL2
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: C.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: 4.9.3-development
xarray: 2024.2.0
pandas: 2.2.1
numpy: 1.26.4
scipy: 1.12.0
netCDF4: 1.6.5
pydap: None
h5netcdf: 1.3.0
h5py: 3.10.0
Nio: None
zarr: 2.17.1
cftime: 1.6.3
nc_time_axis: 1.4.1
iris: None
bottleneck: 1.3.8
dask: 2024.3.1
distributed: 2024.3.1
matplotlib: 3.8.3
cartopy: None
seaborn: 0.13.2
numbagg: 0.8.1
fsspec: 2024.3.1
cupy: None
pint: None
sparse: None
flox: 0.9.5
numpy_groupies: 0.10.2
setuptools: 69.2.0
pip: 24.0
conda: None
pytest: 8.1.1
mypy: None
IPython: 8.22.2
sphinx: None