doc/ecosystem.rst (1 addition, 1 deletion)
@@ -16,7 +16,7 @@ Geosciences
 - `climpred <https://climpred.readthedocs.io>`_: Analysis of ensemble forecast models for climate prediction.
 - `geocube <https://corteva.github.io/geocube>`_: Tool to convert geopandas vector data into rasterized xarray data.
 - `GeoWombat <https://github.com/jgrss/geowombat>`_: Utilities for analysis of remotely sensed and gridded raster data at scale (easily tame Landsat, Sentinel, Quickbird, and PlanetScope).
-- `infinite-diff <https://github.com/spencerahill/infinite-diff>`_: xarray-based finite-differencing, focused on gridded climate/meterology data
+- `infinite-diff <https://github.com/spencerahill/infinite-diff>`_: xarray-based finite-differencing, focused on gridded climate/meteorology data
 - `marc_analysis <https://github.com/darothen/marc_analysis>`_: Analysis package for CESM/MARC experiments and output.
 - `MetPy <https://unidata.github.io/MetPy/dev/index.html>`_: A collection of tools in Python for reading, visualizing, and performing calculations with weather data.
 - `MPAS-Analysis <http://mpas-analysis.readthedocs.io>`_: Analysis for simulations produced with Model for Prediction Across Scales (MPAS) components and the Accelerated Climate Model for Energy (ACME).
doc/user-guide/io.rst (3 additions, 3 deletions)
@@ -360,7 +360,7 @@ Scaling and type conversions
 These encoding options work on any version of the netCDF file format:

-- ``dtype``: Any valid NumPy dtype or string convertable to a dtype, e.g., ``'int16'``
+- ``dtype``: Any valid NumPy dtype or string convertible to a dtype, e.g., ``'int16'``
   or ``'float32'``. This controls the type of the data written on disk.
 - ``_FillValue``: Values of ``NaN`` in xarray variables are remapped to this value when
   saved on disk. This is important when converting floating point with missing values
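The packing that these encoding fields describe can be sketched in plain NumPy. This is a hypothetical illustration of the netCDF-style round trip, not xarray's actual implementation; the array values and the ``scale_factor``/``add_offset`` fields (standard CF encoding names, discussed elsewhere in this section of the docs) are assumptions for the example:

```python
import numpy as np

# Floats with a missing value, to be written to disk as int16.
data = np.array([20.5, 21.0, np.nan, 22.5])

# Hypothetical encoding fields (CF-convention names).
scale_factor = 0.5
add_offset = 20.0
fill_value = np.int16(-32768)  # the ``_FillValue``

# Pack: remap NaN to _FillValue, then round into the on-disk ``dtype``.
packed = np.where(
    np.isnan(data),
    fill_value,
    np.round((data - add_offset) / scale_factor),
).astype("int16")

# Unpack: the inverse transform, restoring NaN for fill values.
unpacked = np.where(
    packed == fill_value,
    np.nan,
    packed * scale_factor + add_offset,
)
```

The point of ``_FillValue`` is visible in the round trip: the integer file format has no ``NaN``, so a sentinel integer stands in for missing data and is mapped back to ``NaN`` on read.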
@@ -405,7 +405,7 @@ If character arrays are used:
   `any string encoding recognized by Python <https://docs.python.org/3/library/codecs.html#standard-encodings>`_ if you feel the need to deviate from UTF-8,
   by setting the ``_Encoding`` field in ``encoding``. But
   `we don't recommend it <http://utf8everywhere.org/>`_.
-- The character dimension name can be specifed by the ``char_dim_name`` field of a variable's
+- The character dimension name can be specified by the ``char_dim_name`` field of a variable's
   ``encoding``. If the name of the character dimension is not specified, the default is
   ``f'string{data.shape[-1]}'``. When decoding character arrays from existing files, the
   ``char_dim_name`` is added to the variables ``encoding`` to preserve if encoding happens, but
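The extra character dimension that ``char_dim_name`` names can be sketched with NumPy alone. This is a conceptual sketch of how a fixed-width byte-string variable maps onto a character array, assuming made-up data; it mirrors the default name rule ``f'string{data.shape[-1]}'`` quoted above rather than calling any xarray code:

```python
import numpy as np

# A fixed-width byte-string variable (3 bytes per element).
strings = np.array([b"foo", b"bar"], dtype="S3")

# View each 3-byte string as a row of single characters: (2,) -> (2, 3).
chars = strings.view("S1").reshape(strings.shape + (strings.dtype.itemsize,))

# The appended character dimension gets its default name from its length,
# matching the f'string{data.shape[-1]}' rule described in the text.
char_dim_name = f"string{chars.shape[-1]}"
```

Writing such a variable to a character-array format adds this trailing dimension to the file; preserving its name in ``encoding`` lets a decode/encode round trip reproduce the original dimension name.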
@@ -472,7 +472,7 @@ Invalid netCDF files
 The library ``h5netcdf`` allows writing some dtypes (booleans, complex, ...) that aren't