Update to v0.39 #612

Open · wants to merge 7 commits into main

303 changes: 149 additions & 154 deletions Manifest.toml

Large diffs are not rendered by default.

3 changes: 2 additions & 1 deletion Project.toml
@@ -13,6 +13,7 @@ DataStructures = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
Distributed = "8ba89e20-285c-5b6f-9357-94700520ee1b"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
DistributionsAD = "ced4e74d-a319-5a8a-b0ac-84af2272839c"
DynamicHMC = "bbc10e6e-7c05-544b-b16e-64fede858acb"
DynamicPPL = "366bfd00-2699-11ea-058f-f148b4cae6d8"
FillArrays = "1a297f60-69ca-5386-bcde-b61e274b549b"
@@ -54,4 +55,4 @@ UnPack = "3a884ed6-31ef-47d7-9d2a-63182c4928ed"
Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"

[compat]
Turing = "0.38"
Turing = "0.39"
2 changes: 1 addition & 1 deletion _quarto.yml
@@ -32,7 +32,7 @@ website:
text: Team
right:
# Current version
- text: "v0.38"
- text: "v0.39"
menu:
- text: Changelog
href: https://turinglang.org/docs/changelog.html
2 changes: 1 addition & 1 deletion developers/compiler/minituring-contexts/index.qmd
@@ -294,7 +294,7 @@ Of course, using an MCMC algorithm to sample from the prior is unnecessary and s
The use of contexts also goes far beyond just evaluating log probabilities and sampling. Some examples from Turing are

* `FixedContext`, which fixes some variables to given values and removes them completely from the evaluation of any log probabilities. They power the `Turing.fix` and `Turing.unfix` functions.
* `ConditionContext` conditions the model on fixed values for some parameters. They are used by `Turing.condition` and `Turing.uncondition`, i.e. the `model | (parameter=value,)` syntax. The difference between `fix` and `condition` is whether the log probability for the corresponding variable is included in the overall log density.
* `ConditionContext` conditions the model on fixed values for some parameters. They are used by `Turing.condition` and `Turing.decondition`, i.e. the `model | (parameter=value,)` syntax. The difference between `fix` and `condition` is whether the log probability for the corresponding variable is included in the overall log density.

* `PriorExtractorContext` collects information about what the prior distribution of each variable is.
* `PrefixContext` adds prefixes to variable names, allowing models to be used within other models without variable name collisions.
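
As a rough sketch of how the `condition`/`fix` distinction above plays out in practice, consider the following hypothetical two-variable model (not part of this changeset); the exact keyword/named-tuple forms shown are one of several accepted call styles:

```{julia}
#| eval: false
using Turing

@model function demo()
    s ~ InverseGamma(2, 3)
    x ~ Normal(0, sqrt(s))
end

# Conditioning: x becomes an observation, and its log probability is
# included in the overall log density.
conditioned_model = demo() | (x = 1.5,)
original_model = decondition(conditioned_model)

# Fixing: s is pinned to a value and removed from the log density entirely.
fixed_model = fix(demo(), (; s = 1.0))
unfixed_model = unfix(fixed_model)
```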
8 changes: 4 additions & 4 deletions developers/compiler/model-manual/index.qmd
@@ -36,26 +36,26 @@ using DynamicPPL
function gdemo2(model, varinfo, context, x)
# Assume s² has an InverseGamma distribution.
s², varinfo = DynamicPPL.tilde_assume!!(
context, InverseGamma(2, 3), Turing.@varname(s²), varinfo
context, InverseGamma(2, 3), @varname(s²), varinfo
)

# Assume m has a Normal distribution.
m, varinfo = DynamicPPL.tilde_assume!!(
context, Normal(0, sqrt(s²)), Turing.@varname(m), varinfo
context, Normal(0, sqrt(s²)), @varname(m), varinfo
)

# Observe each value of x[i] according to a Normal distribution.
for i in eachindex(x)
_retval, varinfo = DynamicPPL.tilde_observe!!(
context, Normal(m, sqrt(s²)), x[i], Turing.@varname(x[i]), varinfo
context, Normal(m, sqrt(s²)), x[i], @varname(x[i]), varinfo
)
end

# The final return statement should comprise both the original return
# value and the updated varinfo.
return nothing, varinfo
end
gdemo2(x) = Turing.Model(gdemo2, (; x))
gdemo2(x) = DynamicPPL.Model(gdemo2, (; x))

# Instantiate a Model object with our data variables.
model2 = gdemo2([1.5, 2.0])
4 changes: 2 additions & 2 deletions developers/inference/implementing-samplers/index.qmd
@@ -403,11 +403,11 @@ As we promised, all of this hassle of implementing our `MALA` sampler in a way t
It also enables use with Turing.jl through the `externalsampler`, but we need to do one final thing first: we need to tell Turing.jl how to extract a vector of parameters from the "sample" returned in our implementation of `AbstractMCMC.step`. In our case, the "sample" is a `MALASample`, so we just need the following line:

```{julia}
# Load Turing.jl.
using Turing
using DynamicPPL

# Overload the `getparams` method for our "sample" type, which is just a vector.
Turing.Inference.getparams(::Turing.Model, sample::MALASample) = sample.x
Turing.Inference.getparams(::DynamicPPL.Model, sample::MALASample) = sample.x
```

And with that, we're good to go!
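
For instance, a hypothetical end-to-end invocation might look like the sketch below. The model and the `MALA(0.01)` constructor argument are placeholders (the actual constructor depends on the `MALA` type defined earlier in this tutorial); `externalsampler` is Turing.jl's wrapper for `AbstractMCMC`-compatible samplers:

```{julia}
#| eval: false
using Turing

@model function simple_gaussian(y)
    μ ~ Normal(0, 1)
    y ~ Normal(μ, 1)
end

# Wrap the hand-written sampler so Turing.jl can drive it; draw 1_000 samples.
chain = sample(simple_gaussian(1.0), externalsampler(MALA(0.01)), 1_000)
```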
4 changes: 3 additions & 1 deletion tutorials/bayesian-time-series-analysis/index.qmd
@@ -165,6 +165,8 @@ scatter!(t, yf; color=2, label="Data")
With the model specified and with a reasonable prior we can now let Turing decompose the time series for us!

```{julia}
using MCMCChains: get_sections

function mean_ribbon(samples)
qs = quantile(samples)
low = qs[:, Symbol("2.5%")]
@@ -174,7 +176,7 @@ function mean_ribbon(samples)
end

function get_decomposition(model, x, cyclic_features, chain, op)
chain_params = Turing.MCMCChains.get_sections(chain, :parameters)
chain_params = get_sections(chain, :parameters)
return returned(model(x, cyclic_features, op), chain_params)
end

8 changes: 4 additions & 4 deletions tutorials/gaussian-mixture-models/index.qmd
@@ -278,7 +278,7 @@ $$
$$

Where we sum the components with `logsumexp` from the [`LogExpFunctions.jl` package](https://juliastats.org/LogExpFunctions.jl/stable/).
The manually incremented likelihood can be added to the log-probability with `Turing.@addlogprob!`, giving us the following model:
The manually incremented likelihood can be added to the log-probability with `@addlogprob!`, giving us the following model:

```{julia}
#| output: false
@@ -295,18 +295,18 @@ using LogExpFunctions
for k in 1:K
lvec[k] = (w[k] + logpdf(dists[k], x[:, i]))
end
Turing.@addlogprob! logsumexp(lvec)
@addlogprob! logsumexp(lvec)
end
end
```

::: {.callout-warning collapse="false"}
## Manually Incrementing Probability

When possible, use of `Turing.@addlogprob!` should be avoided, as it exists outside the
When possible, use of `@addlogprob!` should be avoided, as it exists outside the
usual structure of a Turing model. In most cases, a custom distribution should be used instead.

Here, the next section demonstrates the perfered method --- using the `MixtureModel` distribution we have seen already to
Here, the next section demonstrates the preferred method --- using the `MixtureModel` distribution we have seen already to
perform the marginalization automatically.
:::
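
For reference, a minimal sketch of that preferred approach might look like the following. This is a hypothetical model with unit component variances, not the tutorial's exact formulation:

```{julia}
#| eval: false
using Turing
using LinearAlgebra

@model function gmm_marginalized(x, K)
    μ ~ MvNormal(zeros(K), I)
    w ~ Dirichlet(K, 1.0)
    # MixtureModel marginalizes over the component assignments internally,
    # so no manual logsumexp or @addlogprob! is required.
    for i in eachindex(x)
        x[i] ~ MixtureModel([Normal(μ[k], 1.0) for k in 1:K], w)
    end
end
```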

4 changes: 2 additions & 2 deletions tutorials/hidden-markov-models/index.qmd
@@ -191,7 +191,7 @@ using LogExpFunctions
T ~ filldist(Dirichlet(fill(1/K, K)), K)

hmm = HMM(softmax(ones(K)), copy(T'), [Normal(m[i], 0.1) for i in 1:K])
Turing.@addlogprob! logdensityof(hmm, y)
@addlogprob! logdensityof(hmm, y)
end

chn2 = sample(BayesHmm2(y, 3), NUTS(), 1000)
@@ -221,7 +221,7 @@ We can use the `viterbi()` algorithm, also from the `HiddenMarkovModels` package
T ~ filldist(Dirichlet(fill(1/K, K)), K)

hmm = HMM(softmax(ones(K)), copy(T'), [Normal(m[i], 0.1) for i in 1:K])
Turing.@addlogprob! logdensityof(hmm, y)
@addlogprob! logdensityof(hmm, y)

# Conditional generation of the hidden states.
if IncludeGenerated
10 changes: 6 additions & 4 deletions tutorials/variational-inference/index.qmd
@@ -18,7 +18,7 @@ Here we will focus on how to use VI in Turing and not much on the theory underly
If you are interested in understanding the mathematics, you can check out [our write-up]({{<meta using-turing-variational-inference>}}) or any other resource online (there are a lot of great ones).

Using VI in Turing.jl is very straightforward.
If `model` denotes a definition of a `Turing.Model`, performing VI is as simple as
If `model` denotes a definition of a `DynamicPPL.Model`, performing VI is as simple as

```{julia}
#| eval: false
@@ -54,7 +54,7 @@ x_i &\overset{\text{i.i.d.}}{\sim} \mathcal{N}(m, s), \quad i = 1, \dots, n

Recall that *conjugate* refers to the fact that we can obtain a closed-form expression for the posterior. Of course one wouldn't use something like variational inference for a conjugate model, but it's useful as a simple demonstration as we can compare the result to the true posterior.

First we generate some synthetic data, define the `Turing.Model` and instantiate the model on the data:
First we generate some synthetic data, define the `DynamicPPL.Model` and instantiate the model on the data:

```{julia}
# generate data
@@ -666,11 +666,13 @@ using Bijectors: Scale, Shift
```

```{julia}
using DistributionsAD

d = length(q)
base_dist = Turing.DistributionsAD.TuringDiagMvNormal(zeros(d), ones(d))
base_dist = DistributionsAD.TuringDiagMvNormal(zeros(d), ones(d))
```

`bijector(model::Turing.Model)` is defined by Turing, and will return a `bijector` which takes you from the space of the latent variables to the real space. In this particular case, this is a mapping `((0, ∞) × ℝ × ℝ¹⁰) → ℝ¹²`. We're interested in using a normal distribution as a base-distribution and transform samples to the latent space, thus we need the inverse mapping from the reals to the latent space:
`bijector(model::DynamicPPL.Model)` is defined in DynamicPPL, and will return a `bijector` which takes you from the space of the latent variables to the real space. In this particular case, this is a mapping `((0, ∞) × ℝ × ℝ¹⁰) → ℝ¹²`. We're interested in using a normal distribution as a base-distribution and transform samples to the latent space, thus we need the inverse mapping from the reals to the latent space:

```{julia}
to_constrained = inverse(bijector(m));
10 changes: 5 additions & 5 deletions usage/modifying-logprob/index.qmd
@@ -13,7 +13,7 @@ Pkg.instantiate();
```

Turing accumulates log probabilities in an internal data structure that is accessible through the variable `__varinfo__` inside the model definition.
To avoid users having to deal with internal data structures, Turing provides the `Turing.@addlogprob!` macro which increases the accumulated log probability.
To avoid users having to deal with internal data structures, Turing provides the `@addlogprob!` macro which increases the accumulated log probability.
For instance, this allows you to
[include arbitrary terms in the likelihood](https://github.com/TuringLang/Turing.jl/issues/1332)

Expand All @@ -24,7 +24,7 @@ myloglikelihood(x, μ) = loglikelihood(Normal(μ, 1), x)

@model function demo(x)
μ ~ Normal()
Turing.@addlogprob! myloglikelihood(x, μ)
@addlogprob! myloglikelihood(x, μ)
end
```

@@ -37,7 +37,7 @@ using LinearAlgebra
@model function demo(x)
m ~ MvNormal(zero(x), I)
if dot(m, x) < 0
Turing.@addlogprob! -Inf
@addlogprob! -Inf
# Exit the model evaluation early
return nothing
end
@@ -49,11 +49,11 @@ end

Note that `@addlogprob!` always increases the accumulated log probability, regardless of the provided
sampling context.
For instance, if you do not want to apply `Turing.@addlogprob!` when evaluating the prior of your model but only when computing the log likelihood and the log joint probability, then you should [check the type of the internal variable `__context__`](https://github.com/TuringLang/DynamicPPL.jl/issues/154), as in the following example:
For instance, if you do not want to apply `@addlogprob!` when evaluating the prior of your model but only when computing the log likelihood and the log joint probability, then you should [check the type of the internal variable `__context__`](https://github.com/TuringLang/DynamicPPL.jl/issues/154), as in the following example:

```{julia}
#| eval: false
if DynamicPPL.leafcontext(__context__) !== Turing.PriorContext()
Turing.@addlogprob! myloglikelihood(x, μ)
@addlogprob! myloglikelihood(x, μ)
end
```