
Commit 434333f

Armavica authored and michaelosthege committed
Correct orthographic typos
1 parent: 669e4f5 · commit: 434333f


49 files changed: +91 -91 lines changed

GOVERNANCE.md

Lines changed: 1 addition & 1 deletion
@@ -622,7 +622,7 @@ For current members of the documentation team, refer to the recurrent and
 core contributor membership sections.

 ### Community team
-The focus of the Community team is activities intended to nurture, energize, and grow the community of PyMC users and contributors. These activities include moderation of and participation in the discussions on the PyMC Discourse, planning and organization of events such as PyMCon and sprints, and coordination of presence on various social networks. These activites are not intended to be the sole responsibility of the Community team. Instead, the Community team provides leadership in these efforts, but recruits other contributors and community members as needed, thus encourging participation and fostering a healthy, self-sustaining community.
+The focus of the Community team is activities intended to nurture, energize, and grow the community of PyMC users and contributors. These activities include moderation of and participation in the discussions on the PyMC Discourse, planning and organization of events such as PyMCon and sprints, and coordination of presence on various social networks. These activities are not intended to be the sole responsibility of the Community team. Instead, the Community team provides leadership in these efforts, but recruits other contributors and community members as needed, thus encourging participation and fostering a healthy, self-sustaining community.

 For current members of the community team, refer to the recurrent and
 core contributor membership sections.

RELEASE-NOTES.md

Lines changed: 5 additions & 5 deletions
@@ -37,7 +37,7 @@ Feel free to read it, print it out, and give it to people on the street -- becau
 - Added a `logcdf` implementation for the Kumaraswamy distribution (see [#4706](https://github.com/pymc-devs/pymc/pull/4706)).
 - The `OrderedMultinomial` distribution has been added for use on ordinal data which are _aggregated_ by trial, like multinomial observations, whereas `OrderedLogistic` only accepts ordinal data in a _disaggregated_ format, like categorical observations (see [#4773](https://github.com/pymc-devs/pymc/pull/4773)).
 - The `Polya-Gamma` distribution has been added (see [#4531](https://github.com/pymc-devs/pymc/pull/4531)). To make use of this distribution, the [`polyagamma>=1.3.1`](https://pypi.org/project/polyagamma/) library must be installed and available in the user's environment.
-- `pm.DensityDist` can now accept an optional `logcdf` keyword argument to pass in a function to compute the cummulative density function of the distribution (see [5026](https://github.com/pymc-devs/pymc/pull/5026)).
+- `pm.DensityDist` can now accept an optional `logcdf` keyword argument to pass in a function to compute the cumulative density function of the distribution (see [5026](https://github.com/pymc-devs/pymc/pull/5026)).
 - `pm.DensityDist` can now accept an optional `moment` keyword argument to pass in a function to compute the moment of the distribution (see [5026](https://github.com/pymc-devs/pymc/pull/5026)).

 - Added an alternative parametrization, `logit_p` to `pm.Binomial` and `pm.Categorical` distributions (see [5637](https://github.com/pymc-devs/pymc/pull/5637)).
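For context on the `logcdf` keyword touched in this hunk, here is a minimal sketch of how such a custom log-CDF might be supplied; the exponential-style `logp`/`logcdf` functions and the `lam` parameter are illustrative assumptions, only the `logcdf=` keyword itself comes from the release note:

```python
import pymc as pm
import pytensor.tensor as pt

with pm.Model():
    # Hypothetical exponential-like distribution built with DensityDist.
    # The positional 1.0 is the rate parameter forwarded to logp/logcdf.
    x = pm.DensityDist(
        "x",
        1.0,
        logp=lambda value, lam: pt.log(lam) - lam * value,
        logcdf=lambda value, lam: pt.log1p(-pt.exp(-lam * value)),
    )
```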
@@ -136,7 +136,7 @@ _Read on if you're a developer. Or curious. Or both._
 - The `gp.prior(..., shape=...)` kwarg was renamed to `size`.
 - Multiple methods including `gp.prior` now require explicit kwargs.
 - For all implementations, `gp.Latent`, `gp.Marginal` etc., `cov_func` and `mean_func` are required kwargs.
-- In Windows test conda environment the `mkl` version is fixed to verison 2020.4, and `mkl-service` is fixed to `2.3.0`. This was required for `gp.MarginalKron` to function properly.
+- In Windows test conda environment the `mkl` version is fixed to version 2020.4, and `mkl-service` is fixed to `2.3.0`. This was required for `gp.MarginalKron` to function properly.
 - `gp.MvStudentT` uses rotated samples from `StudentT` directly now, instead of sampling from `pm.Chi2` and then from `pm.Normal`.
 - The "jitter" parameter, or the diagonal noise term added to Gram matrices such that the Cholesky is numerically stable, is now exposed to the user instead of hard-coded. See the function `gp.util.stabilize`.
 - The `is_observed` arguement for `gp.Marginal*` implementations has been deprecated.
@@ -223,7 +223,7 @@ This release breaks some APIs w.r.t. `3.10.0`. It also brings some dreadfully aw


 ### New Features
-- Option to set `check_bounds=False` when instantiating `pymc.Model()`. This turns off bounds checks that ensure that input parameters of distributions are valid. For correctly specified models, this is unneccessary as all parameters get automatically transformed so that all values are valid. Turning this off should lead to faster sampling (see [#4377](https://github.com/pymc-devs/pymc/pull/4377)).
+- Option to set `check_bounds=False` when instantiating `pymc.Model()`. This turns off bounds checks that ensure that input parameters of distributions are valid. For correctly specified models, this is unnecessary as all parameters get automatically transformed so that all values are valid. Turning this off should lead to faster sampling (see [#4377](https://github.com/pymc-devs/pymc/pull/4377)).
 - `OrderedProbit` distribution added (see [#4232](https://github.com/pymc-devs/pymc/pull/4232)).
 - `plot_posterior_predictive_glm` now works with `arviz.InferenceData` as well (see [#4234](https://github.com/pymc-devs/pymc/pull/4234))
 - Add `logcdf` method to all univariate discrete distributions (see [#4387](https://github.com/pymc-devs/pymc/pull/4387)).
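A minimal sketch of the `check_bounds` option described in the first bullet of this hunk; the toy model is an assumption for illustration, only the `check_bounds=False` keyword is documented above:

```python
import pymc as pm

# Bounds checks are skipped, which is safe only for correctly
# specified models, in exchange for faster sampling.
with pm.Model(check_bounds=False) as model:
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    y = pm.Normal("y", mu=0.0, sigma=sigma, observed=[0.1, -0.3, 0.5])
    idata = pm.sample()
```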
@@ -439,7 +439,7 @@ Though we had to temporarily remove the `docs/*` folder from the tarball due to

 ### Maintenance

-- All occurances of `sd` as a parameter name have been renamed to `sigma`. `sd` will continue to function for backwards compatibility.
+- All occurrences of `sd` as a parameter name have been renamed to `sigma`. `sd` will continue to function for backwards compatibility.
 - `HamiltonianMC` was ignoring certain arguments like `target_accept`, and not using the custom step size jitter function with expectation 1.
 - Made `BrokenPipeError` for parallel sampling more verbose on Windows.
 - Added the `broadcast_distribution_samples` function that helps broadcasting arrays of drawn samples, taking into account the requested `size` and the inferred distribution shape. This sometimes is needed by distributions that call several `rvs` separately within their `random` method, such as the `ZeroInflatedPoisson` (fixes issue [#3310](https://github.com/pymc-devs/pymc/issues/3310)).
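For the `sd` → `sigma` rename in the first bullet, a one-line sketch of the current spelling; the concrete distribution and values are illustrative:

```python
import pymc as pm

with pm.Model():
    # `sigma` is the renamed parameter; per the note above, the old
    # `sd` spelling keeps working for backwards compatibility.
    x = pm.Normal("x", mu=0.0, sigma=1.0)
```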
@@ -676,7 +676,7 @@ This will be the last release to support Python 2.
 This version includes two major contributions from our Google Summer of Code 2017 students:

 * Maxim Kochurov extended and refactored the variational inference module. This primarily adds two important classes, representing operator variational inference (`OPVI`) objects and `Approximation` objects. These make it easier to extend existing `variational` classes, and to derive inference from `variational` optimizations, respectively. The `variational` module now also includes normalizing flows (`NFVI`).
-* Bill Engels added an extensive new Gaussian processes (`gp`) module. Standard GPs can be specified using either `Latent` or `Marginal` classes, depending on the nature of the underlying function. A Student-T process `TP` has been added. In order to accomodate larger datasets, approximate marginal Gaussian processes (`MarginalSparse`) have been added.
+* Bill Engels added an extensive new Gaussian processes (`gp`) module. Standard GPs can be specified using either `Latent` or `Marginal` classes, depending on the nature of the underlying function. A Student-T process `TP` has been added. In order to accommodate larger datasets, approximate marginal Gaussian processes (`MarginalSparse`) have been added.

 Documentation has been improved as the result of the project's monthly "docathons".
docs/source/api.rst

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ API
 ------------------
 Dimensionality
 ------------------
-PyMC provides numerous methods, and syntatic sugar, to easily specify the dimensionality of
+PyMC provides numerous methods, and syntactic sugar, to easily specify the dimensionality of
 Random Variables in modeling. Refer to :ref:`dimensionality` notebook to see examples
 demonstrating the functionality.
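A short sketch of the dimensionality sugar this passage refers to; the coordinate names and values are invented for illustration:

```python
import pymc as pm

coords = {"city": ["Berlin", "Paris", "Rome"]}  # illustrative coordinates
with pm.Model(coords=coords) as model:
    # Two ways to give a variable a length-3 batch dimension
    x = pm.Normal("x", mu=0.0, sigma=1.0, shape=3)      # explicit shape
    y = pm.Normal("y", mu=0.0, sigma=1.0, dims="city")  # named dims
```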

docs/source/api/shape_utils.rst

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ shape_utils

 This submodule contains various functions that apply numpy's broadcasting rules to shape tuples, and also to samples drawn from probability distributions.

-The main challenge when broadcasting samples drawn from a generative model, is that each random variate has a core shape. When we draw many i.i.d samples from a given RV, for example if we ask for `size_tuple` i.i.d draws, the result usually is a `size_tuple + RV_core_shape`. In the generative model's hierarchy, the downstream RVs that are conditionally dependent on our above sampled values, will get an array with a shape that is incosistent with the core shape they expect to see for their parameters. This is a problem sometimes because it prevents regular broadcasting in complex hierachical models, and thus make prior and posterior predictive sampling difficult.
+The main challenge when broadcasting samples drawn from a generative model, is that each random variate has a core shape. When we draw many i.i.d samples from a given RV, for example if we ask for `size_tuple` i.i.d draws, the result usually is a `size_tuple + RV_core_shape`. In the generative model's hierarchy, the downstream RVs that are conditionally dependent on our above sampled values, will get an array with a shape that is inconsistent with the core shape they expect to see for their parameters. This is a problem sometimes because it prevents regular broadcasting in complex hierarchical models, and thus make prior and posterior predictive sampling difficult.

 This module introduces functions that are made aware of the requested `size_tuple` of i.i.d samples, and does the broadcasting on the core shapes, transparently ignoring or moving the i.i.d `size_tuple` prepended axes around.
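The `size_tuple + RV_core_shape` behavior described in this hunk can be seen with plain NumPy; the Dirichlet is just an illustrative RV with core shape `(3,)`:

```python
import numpy as np

rng = np.random.default_rng(42)
# One Dirichlet draw has core shape (3,); asking for size_tuple = (2, 5)
# i.i.d. draws yields size_tuple + RV_core_shape == (2, 5, 3).
samples = rng.dirichlet(alpha=np.ones(3), size=(2, 5))
print(samples.shape)  # (2, 5, 3)
```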

docs/source/contributing/developer_guide.md

Lines changed: 7 additions & 7 deletions
@@ -277,7 +277,7 @@ as for ``FreeRV`` and ``ObservedRV``, they are ``TensorVariable``\s with

 ``Factor`` basically `enable and assign the
 logp <https://github.com/pymc-devs/pymc/blob/6d07591962a6c135640a3c31903eba66b34e71d8/pymc/model.py#L195-L276>`__
-(representated as a tensor also) property to an PyTensor tensor (thus
+(represented as a tensor also) property to an PyTensor tensor (thus
 making it a random variable). For a ``TransformedRV``, it transforms the
 distribution into a ``TransformedDistribution``, and then ``model.Var`` is
 called again to added the RV associated with the
@@ -290,7 +290,7 @@ called again to added the RV associated with the
     transformed_name, transform.apply(distribution), total_size=total_size)

 note: after ``transform.apply(distribution)`` its ``.transform``
-porperty is set to ``None``, thus making sure that the above call will
+property is set to ``None``, thus making sure that the above call will
 only add one ``FreeRV``. In another word, you *cannot* do chain
 transformation by nested applying multiple transforms to a Distribution
 (however, you can use ``Chain`` transformation.
@@ -404,7 +404,7 @@ def logp_dlogp_function(self, grad_vars=None, **kwargs):
         grad_vars = list(typefilter(self.free_RVs, continuous_types))
     else:
         ...
-    varnames = [var.name for var in grad_vars]  # In a simple case with only continous RVs,
+    varnames = [var.name for var in grad_vars]  # In a simple case with only continuous RVs,
                                                 # this is all the free_RVs
     extra_vars = [var for var in self.free_RVs if var.name not in varnames]
     return ValueGradFunction(self.logpt, grad_vars, extra_vars, **kwargs)
@@ -522,7 +522,7 @@ That is the reason we often see no advantage in using GPU, because the data is c
 Also, ``pytensor.clone_replace`` is too convenient (PyMC internal joke is that it is like a drug - very addictive).
 If all the operation happens in the graph (including the conditioning and setting value), I see no need to isolate part of the graph (via graph copying or graph rewriting) for building model and running inference.

-Moreover, if we are limiting to the problem that we can solved most confidently - model with all continous unknown parameters that could be sampled with dynamic HMC, there is even less need to think about graph cloning/rewriting.
+Moreover, if we are limiting to the problem that we can solved most confidently - model with all continuous unknown parameters that could be sampled with dynamic HMC, there is even less need to think about graph cloning/rewriting.

 ## Inference
@@ -531,7 +531,7 @@ The ability for model instance to generate conditional logp and dlogp function e
 On a conceptual level it is a Metropolis-within-Gibbs sampler.
 Users can specify different sampler for different RVs.
 Alternatively, it is implemented as yet another interceptor:
-The ``pm.sample(...)`` call will try to [assign the best step methods to different free\_RVs](https://github.com/pymc-devs/pymc/blob/6d07591962a6c135640a3c31903eba66b34e71d8/pymc/sampling.py#L86-L152) (e.g., NUTS if all free\_RVs are continous).
+The ``pm.sample(...)`` call will try to [assign the best step methods to different free\_RVs](https://github.com/pymc-devs/pymc/blob/6d07591962a6c135640a3c31903eba66b34e71d8/pymc/sampling.py#L86-L152) (e.g., NUTS if all free\_RVs are continuous).
 Then, (conditional) logp function(s) are compiled, and the sampler called each sampler within the list of CompoundStep in a for-loop for one sample circle.

 For each sampler, it implements a ``step.step`` method to perform MH updates.
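A small sketch of the automatic step assignment this hunk describes; the toy model is an assumption, only the assignment behavior itself is documented above:

```python
import pymc as pm

with pm.Model():
    # Mixed continuous/discrete model: pm.sample assigns NUTS to the
    # continuous `p` and a Metropolis-family step to the discrete `k`,
    # combining them in a CompoundStep (Metropolis-within-Gibbs).
    p = pm.Beta("p", alpha=1.0, beta=1.0)
    k = pm.Bernoulli("k", p=p)
    idata = pm.sample(draws=200, tune=200, chains=1)
```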
@@ -560,7 +560,7 @@ Moreover, transition kernels in TFP do not flatten the tensors, see eg docstring
 #### Dynamic HMC
 We love NUTS, or to be more precise Dynamic HMC with complex stopping rules.
 This part is actually all done outside of PyTensor, for NUTS, it includes:
-The leapfrog, dual averaging, tunning of mass matrix and step size, the tree building, sampler related statistics like divergence and energy checking.
+The leapfrog, dual averaging, tuning of mass matrix and step size, the tree building, sampler related statistics like divergence and energy checking.
 We actually have an PyTensor version of HMC, but it has never been used, and has been removed from the main repository.
 It can still be found in the [git history](https://github.com/pymc-devs/pymc/pull/3734/commits/0fdae8207fd14f66635f3673ef267b2b8817aa68), though.
@@ -627,7 +627,7 @@ As for the [`logq`` since it is a Gaussian `it is pretty straightforward to eval
 TensorFlow has graph utils for that that could potentially help in doing this.
 On the other hand graph management in Tensorflow seemed to more tricky than expected.
 The high level reason is that graph is an add only container.
-- There were few fixed bugs not obvoius in the first place.
+- There were few fixed bugs not obvious in the first place.
 PyTensor has a tool to manipulate the graph (``pytensor.clone_replace``) and this tool requires extremely careful treatment when doing a lot of graph replacements at different level.
 - We coined a term ``pytensor.clone_replace`` curse.
 We got extremely dependent on this feature.

docs/source/contributing/implementing_distribution.md

Lines changed: 1 addition & 1 deletion
@@ -88,7 +88,7 @@ blah = BlahRV()
 Some important things to keep in mind:

 1. Everything inside the `rng_fn` method is pure Python code (as are the inputs) and should not make use of other `PyTensor` symbolic ops. The random method should make use of the `rng` which is a NumPy {class}`~numpy.random.RandomState`, so that samples are reproducible.
-1. Non-default `RandomVariable` dimensions will end up in the `rng_fn` via the `size` kwarg. The `rng_fn` will have to take this into consideration for correct output. `size` is the specification used by NumPy and SciPy and works like PyMC `shape` for univariate distributions, but is different for multivariate distributions. For multivariate distributions the __`size` excludes the `ndim_supp` support dimensions__, whereas the __`shape` of the resulting `TensorVariabe` or `ndarray` includes the support dimensions__. For more context check {ref}`The dimensionality notebook <dimensionality>`.
+1. Non-default `RandomVariable` dimensions will end up in the `rng_fn` via the `size` kwarg. The `rng_fn` will have to take this into consideration for correct output. `size` is the specification used by NumPy and SciPy and works like PyMC `shape` for univariate distributions, but is different for multivariate distributions. For multivariate distributions the __`size` excludes the `ndim_supp` support dimensions__, whereas the __`shape` of the resulting `TensorVariable` or `ndarray` includes the support dimensions__. For more context check {ref}`The dimensionality notebook <dimensionality>`.
 1. `PyTensor` tries to infer the output shape of the `RandomVariable` (given a user-specified size) by introspection of the `ndim_supp` and `ndim_params` attributes. However, the default method may not work for more complex distributions. In that case, custom `_supp_shape_from_params` (and less probably, `_infer_shape`) should also be implemented in the new `RandomVariable` class. One simple example is seen in the {class}`~pymc.DirichletMultinomialRV` where it was necessary to specify the `rep_param_idx` so that the `default_supp_shape_from_params` helper method can do its job. In more complex cases, it may not suffice to use this default helper. This could happen for instance if the argument values determined the support shape of the distribution, as happens in the `~pymc.distributions.multivarite._LKJCholeskyCovRV`.
 1. It's okay to use the `rng_fn` `classmethods` of other PyTensor and PyMC `RandomVariables` inside the new `rng_fn`. For example if you are implementing a negative HalfNormal `RandomVariable`, your `rng_fn` can simply return `- halfnormal.rng_fn(rng, scale, size)`.
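The univariate-vs-multivariate `size` behavior in the second point of this hunk can be illustrated with plain NumPy; the particular distributions are chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Univariate: `size` behaves like PyMC `shape`.
print(rng.normal(size=(2, 3)).shape)  # (2, 3)

# Multivariate: `size` excludes the support dimension; the support
# shape (3,) is appended, so the resulting shape includes it.
draws = rng.multivariate_normal(np.zeros(3), np.eye(3), size=(2, 4))
print(draws.shape)  # (2, 4, 3)
```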

docs/source/contributing/jupyter_style.md

Lines changed: 2 additions & 2 deletions
@@ -139,7 +139,7 @@ In case it helps, the dropdown below has some suggestions so you can focus on wr
 is needed instead of one change per matplotlib function call.

 * It is often useful to make a numpy linspace into an {class}`~xarray.DataArray`
-  for xarray to handle aligning and broadcasing automatically and ease computation.
+  for xarray to handle aligning and broadcasting automatically and ease computation.
   * If a dimension name is needed, use `x_plot`
   * If a variable name is needed for the original array and DataArray to coexist, add `_da` suffix
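A small sketch of the suggestion in this hunk, using the `x_plot` dimension name and `_da` suffix it recommends; the linspace values are illustrative:

```python
import numpy as np
import xarray as xr

x = np.linspace(0, 1, 50)
# Wrap the linspace so xarray handles alignment and broadcasting
x_da = xr.DataArray(x, dims=["x_plot"])
```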

@@ -320,7 +320,7 @@ Thus, notebooks with extra dependencies should:
 }
 ```

-The pip and conda spcific keys overwrite the `extra_installs` one, so it doesn't make
+The pip and conda specific keys overwrite the `extra_installs` one, so it doesn't make
 sense to use `extra_installs` if using them. Either both pip and conda substitutions
 are defined or none of them is.
 :::

docs/source/contributing/versioning_schemes_explanation.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ PyMC library. These exist in addition to more specific websites like PyMCon, spr
 This guide explains their relation and what type of content should go on each of the websites.

 :::{attention}
-All 3 websites share the same nabvar to give the appeareance of
+All 3 websites share the same nabvar to give the appearance of
 a single website to users, but their generation process is completely independent from one another.
 :::

docs/source/guides/Gaussian_Processes.rst

Lines changed: 1 addition & 1 deletion
@@ -114,7 +114,7 @@ which allows users to combine covariance functions into new ones, for example:
 After the covariance function is defined, it is now a function that is
 evaluated by calling :code:`cov_func(x, x)` (or :code:`mean_func(x)`). Since
 PyMC is built on top of PyTensor, it is relatively easy to define and experiment
-with non-standard covariance and mean functons. For more information check out
+with non-standard covariance and mean functions. For more information check out
 the tutorial on covariance functions.
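A minimal sketch of the evaluation pattern this hunk describes; the particular kernels, lengthscale, and grid are illustrative assumptions:

```python
import numpy as np
import pymc as pm

# Combine two covariance functions into a new one, then evaluate it
cov_func = pm.gp.cov.ExpQuad(1, ls=0.5) + pm.gp.cov.WhiteNoise(1e-6)
X = np.linspace(0, 1, 5)[:, None]
K = cov_func(X, X).eval()  # 5x5 Gram matrix via PyTensor evaluation
```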

0 commit comments
