Merged
2 changes: 1 addition & 1 deletion doc/ecosystem.rst
@@ -16,7 +16,7 @@ Geosciences
- `climpred <https://climpred.readthedocs.io>`_: Analysis of ensemble forecast models for climate prediction.
- `geocube <https://corteva.github.io/geocube>`_: Tool to convert geopandas vector data into rasterized xarray data.
- `GeoWombat <https://github.com/jgrss/geowombat>`_: Utilities for analysis of remotely sensed and gridded raster data at scale (easily tame Landsat, Sentinel, Quickbird, and PlanetScope).
- `infinite-diff <https://github.com/spencerahill/infinite-diff>`_: xarray-based finite-differencing, focused on gridded climate/meterology data
- `infinite-diff <https://github.com/spencerahill/infinite-diff>`_: xarray-based finite-differencing, focused on gridded climate/meteorology data
- `marc_analysis <https://github.com/darothen/marc_analysis>`_: Analysis package for CESM/MARC experiments and output.
- `MetPy <https://unidata.github.io/MetPy/dev/index.html>`_: A collection of tools in Python for reading, visualizing, and performing calculations with weather data.
- `MPAS-Analysis <http://mpas-analysis.readthedocs.io>`_: Analysis for simulations produced with Model for Prediction Across Scales (MPAS) components and the Accelerated Climate Model for Energy (ACME).
4 changes: 2 additions & 2 deletions doc/roadmap.rst
@@ -37,7 +37,7 @@ Why has xarray been successful? In our opinion:
labeled multidimensional arrays, rather than solving particular
problems.
- This facilitates collaboration between users with different needs,
and helps us attract a broad community of contributers.
and helps us attract a broad community of contributors.
- Importantly, this retains flexibility, for use cases that don't
fit particularly well into existing frameworks.

@@ -82,7 +82,7 @@ We think the right approach to extending xarray's user community and the
usefulness of the project is to focus on improving key interfaces that
can be used externally to meet domain-specific needs.

We can generalize the community's needs into three main catagories:
We can generalize the community's needs into three main caeagories:
Collaborator (@keewis) commented:
shouldn't this be

Suggested change
We can generalize the community's needs into three main caeagories:
We can generalize the community's needs into three main categories:

?

Since this PR has already been merged we would need a new PR. @slowy07, would you be up for that?

Contributor Author (@slowy07) replied:
okay sir @keewis, I'll do it, thanks for the review


- More flexible grids/indexing.
- More flexible arrays/computing.
2 changes: 1 addition & 1 deletion doc/user-guide/computation.rst
@@ -214,7 +214,7 @@ We can also manually iterate through ``Rolling`` objects:

While ``rolling`` provides a simple moving average, ``DataArray`` also supports
an exponential moving average with :py:meth:`~xarray.DataArray.rolling_exp`.
This is similiar to pandas' ``ewm`` method. numbagg_ is required.
This is similar to pandas' ``ewm`` method. numbagg_ is required.

.. _numbagg: https://github.com/shoyer/numbagg
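The exponentially weighted average that ``rolling_exp`` and pandas' ``ewm`` compute can be sketched in plain Python. This is an illustrative helper under the weighting scheme pandas uses with ``adjust=True``, not xarray's or numbagg's actual implementation:

```python
def ewm_mean(values, alpha):
    """Exponentially weighted mean of a sequence.

    Output t is a weighted average of all inputs up to t, where the
    value i steps in the past gets weight (1 - alpha) ** i.
    """
    out = []
    num = 0.0  # running weighted sum of values
    den = 0.0  # running sum of weights
    for x in values:
        num = x + (1 - alpha) * num
        den = 1 + (1 - alpha) * den
        out.append(num / den)
    return out


# Recent points dominate: the last value pulls the mean toward 3.
smoothed = ewm_mean([1.0, 2.0, 3.0], alpha=0.5)
```

A larger ``alpha`` discounts old values faster, so the result tracks recent data more closely.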

6 changes: 3 additions & 3 deletions doc/user-guide/io.rst
@@ -360,7 +360,7 @@ Scaling and type conversions

These encoding options work on any version of the netCDF file format:

- ``dtype``: Any valid NumPy dtype or string convertable to a dtype, e.g., ``'int16'``
- ``dtype``: Any valid NumPy dtype or string convertible to a dtype, e.g., ``'int16'``
or ``'float32'``. This controls the type of the data written on disk.
- ``_FillValue``: Values of ``NaN`` in xarray variables are remapped to this value when
saved on disk. This is important when converting floating point with missing values
@@ -405,7 +405,7 @@ If character arrays are used:
`any string encoding recognized by Python <https://docs.python.org/3/library/codecs.html#standard-encodings>`_ if you feel the need to deviate from UTF-8,
by setting the ``_Encoding`` field in ``encoding``. But
`we don't recommend it <http://utf8everywhere.org/>`_.
- The character dimension name can be specifed by the ``char_dim_name`` field of a variable's
- The character dimension name can be specified by the ``char_dim_name`` field of a variable's
``encoding``. If the name of the character dimension is not specified, the default is
``f'string{data.shape[-1]}'``. When decoding character arrays from existing files, the
``char_dim_name`` is added to the variables ``encoding`` to preserve if encoding happens, but
@@ -472,7 +472,7 @@ Invalid netCDF files
The library ``h5netcdf`` allows writing some dtypes (booleans, complex, ...) that aren't
allowed in netCDF4 (see
`h5netcdf documentation <https://github.com/shoyer/h5netcdf#invalid-netcdf-files>`_).
This feature is availabe through :py:meth:`DataArray.to_netcdf` and
This feature is available through :py:meth:`DataArray.to_netcdf` and
:py:meth:`Dataset.to_netcdf` when used with ``engine="h5netcdf"``
and currently raises a warning unless ``invalid_netcdf=True`` is set:

10 changes: 5 additions & 5 deletions doc/whats-new.rst
@@ -369,7 +369,7 @@ New Features
- Added :py:meth:`DataArray.curvefit` and :py:meth:`Dataset.curvefit` for general curve fitting applications. (:issue:`4300`, :pull:`4849`)
By `Sam Levang <https://github.com/slevang>`_.
- Add options to control expand/collapse of sections in display of Dataset and
DataArray. The function :py:func:`set_options` now takes keyword aguments
DataArray. The function :py:func:`set_options` now takes keyword arguments
``display_expand_attrs``, ``display_expand_coords``, ``display_expand_data``,
``display_expand_data_vars``, all of which can be one of ``True`` to always
expand, ``False`` to always collapse, or ``default`` to expand unless over a
@@ -2628,7 +2628,7 @@ This minor release contains a number of backwards compatible enhancements.
Announcements of note:

- Xarray is now a NumFOCUS fiscally sponsored project! Read
`the anouncement <https://numfocus.org/blog/xarray-joins-numfocus-sponsored-projects>`_
`the announcement <https://numfocus.org/blog/xarray-joins-numfocus-sponsored-projects>`_
for more details.
- We have a new :doc:`roadmap` that outlines our future development plans.

@@ -3481,7 +3481,7 @@ Enhancements
By `Willi Rath <https://github.com/willirath>`_.

- You can now explicitly disable any default ``_FillValue`` (``NaN`` for
floating point values) by passing the enconding ``{'_FillValue': None}``
floating point values) by passing the encoding ``{'_FillValue': None}``
(:issue:`1598`).
By `Stephan Hoyer <https://github.com/shoyer>`_.
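The round trip a ``_FillValue`` performs can be sketched with NumPy. This is a simplified illustration of the mechanics, not the netCDF library's actual code, and ``-9999`` is a hypothetical fill value chosen for the example:

```python
import numpy as np

FILL = -9999  # hypothetical fill value for illustration


def encode(data, fill=FILL):
    # On write: remap NaN to the fill value so the array can be
    # stored in an integer dtype, which has no representation for NaN.
    return np.where(np.isnan(data), fill, data).astype("int16")


def decode(data, fill=FILL):
    # On read: cast back to float and mask the fill value to NaN.
    out = data.astype("float32")
    return np.where(out == fill, np.nan, out)


encoded = encode(np.array([1.0, np.nan, 3.0]))
decoded = decode(encoded)
```

Disabling the fill value (``{'_FillValue': None}``) skips the remapping step entirely, which only makes sense for dtypes that can represent missing data directly.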

@@ -5056,7 +5056,7 @@ Enhancements
These methods return a new Dataset (or DataArray) with updated data or
coordinate variables.
- ``xray.Dataset.sel`` now supports the ``method`` parameter, which works
like the paramter of the same name on ``xray.Dataset.reindex``. It
like the parameter of the same name on ``xray.Dataset.reindex``. It
provides a simple interface for doing nearest-neighbor interpolation:

.. use verbatim because I can't seem to install pandas 0.16.1 on RTD :(
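The nearest-neighbor lookup that ``method='nearest'`` performs can be sketched in plain Python (a hypothetical helper for illustration, not xray's implementation):

```python
def nearest_index(coords, label):
    """Return the position in ``coords`` whose value is closest to ``label``."""
    return min(range(len(coords)), key=lambda i: abs(coords[i] - label))


# Selecting label 23 against coordinates [10, 20, 30] picks position 1,
# because 20 is the closest coordinate value.
pos = nearest_index([10, 20, 30], 23)
```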
@@ -5253,7 +5253,7 @@ Breaking changes

xray.DataArray([1, 2, np.nan, 3]).mean()

You can turn this behavior off by supplying the keyword arugment
You can turn this behavior off by supplying the keyword argument
``skipna=False``.
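The two behaviours can be reproduced with NumPy directly (a sketch of the semantics, not xray's implementation): ``nanmean`` ignores NaN like the default ``skipna=True``, while plain ``mean`` propagates it like ``skipna=False``.

```python
import numpy as np

data = np.array([1.0, 2.0, np.nan, 3.0])

skipping = np.nanmean(data)   # NaN ignored: mean of [1, 2, 3]
propagating = np.mean(data)   # NaN propagates through the reduction
```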

These operations are lightning fast thanks to integration with bottleneck_,