Tried it. It doesn't seem to speed things up much (if at all), but I'll leave it in the code just in case it helps in some settings!
Using consolidated metadata does seem to significantly increase (roughly double?) the read speed, though.
Code:
```python
# Pretty sure this sets partial_decompress to True:
import zarr

array = zarr.open_array(
    FILENAME,
    path='stacked_eumetsat_data',
    partial_decompress=False,
    mode='r',
    storage_options=dict(consolidated=True))
```

```python
%%time
data = array[:12, :128, :128, 0]  # takes about 35 ms with or without partial_decompress.
```
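For reference, here's a minimal sketch of producing and consuming consolidated metadata with zarr's convenience functions. `FILENAME` and the `stacked_eumetsat_data` path come from the snippet above; the rest is illustrative:

```python
import zarr

# Writing side: merge all of the store's .zarray/.zattrs files into a
# single .zmetadata key, so a reader needs only one metadata request.
zarr.consolidate_metadata(FILENAME)

# Reading side: open the store via the consolidated metadata.
root = zarr.open_consolidated(FILENAME, mode='r')
array = root['stacked_eumetsat_data']
```

This avoids one round-trip per array when reading metadata, which should matter most on high-latency stores like cloud object storage.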
This code might set partial_decompress=True, but I'm not certain!
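One way to sanity-check whether the flag actually reached the array object (this pokes at a private zarr v2 attribute, so it's an assumption on my part and may change between versions):

```python
# _partial_decompress is private to zarr.core.Array (assumption: zarr v2).
print(array._partial_decompress)
```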
Partial decompression could be especially useful for NWPs, where we often only want a single value per chunk. Also try it in combination with uncompressed chunks (see the sketch below).
See zarr-developers/zarr-python#667
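Here's a minimal sketch of that uncompressed-chunks experiment, assuming we can rewrite the data. The shape, chunking, and `uncompressed.zarr` path are illustrative, not taken from the real dataset:

```python
import numpy as np
import zarr

# Stand-in for the real data (illustrative shape and chunking).
data = np.random.rand(24, 256, 256, 12).astype(np.float32)

# compressor=None stores raw chunk bytes, so reads skip decompression
# entirely: fetching one value only costs the I/O for its chunk.
uncompressed = zarr.open_array(
    'uncompressed.zarr', mode='w',
    shape=data.shape, chunks=(12, 128, 128, 12),
    dtype=data.dtype, compressor=None)
uncompressed[:] = data

# The NWP-style access pattern: a single value per chunk.
value = uncompressed[0, 0, 0, 0]
```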