
Commit 0eb8576

penelopeysm, mhauru, and sunxd3 authored

[breaking] v0.41 (#2667)
* Bump minor version
* Do not take an initial step before starting the chain in HMC (#2674)
* Do not take an initial step before starting the chain in HMC
* Fix some tests
* update changelog
* Compatibility with DynamicPPL 0.38 + InitContext (#2676)
* Import `varname_leaves` etc from AbstractPPL instead
* initial updates for InitContext
* More fixes
* Fix pMCMC
* Fix Gibbs
* More fixes, reexport InitFrom
* Fix a bunch of tests; I'll let CI tell me what's still broken...
* Remove comment
* Fix more tests
* More test fixes
* Fix more tests
* fix GeneralizedExtremeValue numerical test
* fix sample method
* fix ESS reproducibility
* Fix externalsampler test correctly
* Fix everything (I _think_)
* Add changelog
* Fix remaining tests (for real this time)
* Specify default chain type in Turing
* fix DPPL revision
* Fix changelog to mention unwrapped NT / Dict for initial_params
* Remove references to islinked, set_flag, unset_flag
* use `setleafcontext(::Model, ::AbstractContext)`
* Fix for upstream removal of default_chain_type
* Add clarifying comment for IS test
* Revert ESS test (and add some numerical accuracy checks)
* istrans -> is_transformed
* Remove `loadstate` and `resume_from`
* Remove a Sampler test
* Paper over one crack
* fix `resume_from`
* remove a `Sampler` test
* Update HISTORY.md

  Co-authored-by: Markus Hauru <[email protected]>

* Remove `Sampler`, remove `InferenceAlgorithm`, transfer `initialstep`, `init_strategy`, and other functions from DynamicPPL to Turing (#2689)
* Remove `Sampler` and move its interface to Turing
* Test fixes (this is admittedly quite tiring)
* Fix a couple of Gibbs tests (no doubt there are more)
* actually fix the Gibbs ones
* actually fix it this time
* fix typo
* point to breaking
* Improve loadstate implementation
* Re-add tests that were removed from DynamicPPL
* Fix qualifier in src/mcmc/external_sampler.jl

  Co-authored-by: Xianda Sun <[email protected]>

* Remove the default argument for initial_params
* Remove DynamicPPL sources

---------

Co-authored-by: Xianda Sun <[email protected]>

* Fix a word in changelog
* Improve changelog
* Add PNTDist to changelog

---------

Co-authored-by: Markus Hauru <[email protected]>
Co-authored-by: Xianda Sun <[email protected]>

* Fix all docs warnings

---------

Co-authored-by: Markus Hauru <[email protected]>
Co-authored-by: Markus Hauru <[email protected]>
Co-authored-by: Xianda Sun <[email protected]>
1 parent cabe73f commit 0eb8576


44 files changed (+1250 −919 lines)

HISTORY.md

Lines changed: 62 additions & 0 deletions
# 0.41.0

## DynamicPPL 0.38

Turing.jl v0.41 brings with it all the underlying changes in DynamicPPL 0.38.
Please see [the DynamicPPL changelog](https://github.com/TuringLang/DynamicPPL.jl/blob/main/HISTORY.md) for full details: in this section we only describe the changes that will directly affect end-users of Turing.jl.

### Performance

A number of functions such as `returned` and `predict` will have substantially better performance in this release.

### `ProductNamedTupleDistribution`

`Distributions.ProductNamedTupleDistribution` can now be used on the right-hand side of `~` in Turing models.
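As a hedged sketch (the model and variable names below are invented for illustration), a NamedTuple-valued random variable can now appear on the right-hand side of `~`. This assumes the `product_distribution` method for NamedTuples from Distributions.jl, which constructs a `ProductNamedTupleDistribution`:

```julia
using Turing
using Distributions

# Hypothetical model: `params` is a single NamedTuple-valued random variable.
@model function demo()
    # `product_distribution` on a NamedTuple builds a ProductNamedTupleDistribution.
    params ~ product_distribution((mu=Normal(0, 1), sigma=Exponential(1)))
    return params
end

# Sampling from the prior works as for any other model.
chain = sample(demo(), Prior(), 10; progress=false)
```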

### Initial parameters

**Initial parameters for MCMC sampling must now be specified in a different form.**
You still need to use the `initial_params` keyword argument to `sample`, but the allowed values are different.
For almost all samplers in Turing.jl (except `Emcee`) this should now be a `DynamicPPL.AbstractInitStrategy`.

There are three kinds of initialisation strategies provided out of the box with Turing.jl (they are exported, so you can use them directly with `using Turing`):

- `InitFromPrior()`: Sample from the prior distribution. This is the default for most samplers in Turing.jl (if you don't specify `initial_params`).
- `InitFromUniform(a, b)`: Sample uniformly from `[a, b]` in linked space. This is the default for Hamiltonian samplers. If `a` and `b` are not specified, it defaults to `[-2, 2]`, which preserves the behaviour of previous versions (and mimics that of Stan).
- `InitFromParams(p)`: Explicitly provide a set of initial parameters. **Note: `p` must be either a `NamedTuple` or an `AbstractDict{<:VarName}`; it can no longer be a `Vector`.** Parameters must be provided in unlinked space, even if the sampler later performs linking.

  For this release of Turing.jl, you can also provide a `NamedTuple` or `AbstractDict{<:VarName}` directly, and it will be automatically wrapped in `InitFromParams` for you. This is an intermediate measure for backwards compatibility, and will eventually be removed.

This change was made because `Vector`s are semantically ambiguous.
It is not clear which element of the vector corresponds to which variable in the model, nor is it clear whether the parameters are in linked or unlinked space.
Previously, both of these depended on the internal structure of the VarInfo, which is an implementation detail.
In contrast, the behaviour of `AbstractDict`s and `NamedTuple`s is invariant to the ordering of variables, and it is also easier for readers to understand which variable is being set to which value.

If you were previously using `varinfo[:]` to extract a vector of initial parameters, you can now use `Dict(k => varinfo[k] for k in keys(varinfo))` to extract a `Dict` of initial parameters.

For more details about initialisation you can also refer to [the main TuringLang docs](https://turinglang.org/docs/usage/sampling-options/#specifying-initial-parameters), and/or the [DynamicPPL API docs](https://turinglang.org/DynamicPPL.jl/stable/api/#DynamicPPL.InitFromPrior).
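A minimal sketch of the three strategies (the model below is invented for illustration; the values passed to `InitFromParams` are given in unlinked space):

```julia
using Turing

@model function gdemo()
    m ~ Normal(0, 1)
    s ~ Exponential(1)
end

model = gdemo()

# Default for most samplers: draw initial parameters from the prior.
chn1 = sample(model, NUTS(), 10; initial_params=InitFromPrior(), progress=false)

# Default for Hamiltonian samplers: uniform in [-2, 2] in linked space.
chn2 = sample(model, NUTS(), 10; initial_params=InitFromUniform(-2, 2), progress=false)

# Explicit initial parameters as a NamedTuple (no longer a Vector).
chn3 = sample(model, NUTS(), 10; initial_params=InitFromParams((m=0.0, s=1.0)), progress=false)
```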
### `resume_from` and `loadstate`

The `resume_from` keyword argument to `sample` has been removed.
Instead of `sample(...; resume_from=chain)`, you can use `sample(...; initial_state=loadstate(chain))`, which is entirely equivalent.
`loadstate` is now exported from Turing instead of DynamicPPL.

Note that `loadstate` only works for `MCMCChains.Chains`.
FlexiChains users should consult the FlexiChains docs directly, where this functionality is described in detail.
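A hedged sketch of resuming a chain under the new API (the model is invented; `save_state=true` stores the sampler state in the returned chain so it can be resumed):

```julia
using Turing

@model function coinflip(y)
    p ~ Beta(1, 1)
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)
    end
end

model = coinflip([1, 0, 1, 1])
chain1 = sample(model, NUTS(), 100; save_state=true, progress=false)

# Previously: sample(model, NUTS(), 100; resume_from=chain1)
chain2 = sample(model, NUTS(), 100; initial_state=loadstate(chain1), progress=false)
```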
### `pointwise_logdensities`

`pointwise_logdensities(model, chn)`, `pointwise_loglikelihoods(...)`, and `pointwise_prior_logdensities(...)` now return an `MCMCChains.Chains` object if `chn` is itself an `MCMCChains.Chains` object.
The old behaviour of returning an `OrderedDict` is still available: you just need to pass `OrderedDict` as the third argument, i.e., `pointwise_logdensities(model, chn, OrderedDict)`.
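A sketch of choosing between the two return types (the model is invented for illustration):

```julia
using Turing
using OrderedCollections: OrderedDict

@model function coinflip(y)
    p ~ Beta(1, 1)
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)
    end
end

model = coinflip([1, 0, 1])
chn = sample(model, NUTS(), 50; progress=false)

lds_chn = pointwise_logdensities(model, chn)               # MCMCChains.Chains (new default)
lds_dict = pointwise_logdensities(model, chn, OrderedDict) # OrderedDict (old behaviour)
```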
## Initial step in MCMC sampling

HMC and NUTS samplers no longer take an extra single step before starting the chain.
This means that if you do not discard any samples at the start, the first sample will be the initial parameters (which may be user-provided).

Note that if the initial sample is included, the corresponding sampler statistics will be `missing`.
Due to a technical limitation of MCMCChains.jl, this causes all indexing into MCMCChains to return `Union{Float64, Missing}` or similar.
If you want the old behaviour, you can discard the first sample (e.g. using `discard_initial=1`).
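For instance (invented model), discarding the first sample restores the old behaviour and avoids `missing` sampler statistics in the chain:

```julia
using Turing

@model function demo()
    x ~ Normal()
end

# `discard_initial=1` drops the initial sample; 100 samples are still kept.
chn = sample(demo(), NUTS(), 100; discard_initial=1, progress=false)
```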
# 0.40.5

Bump Optimization.jl compatibility to include v5.

Project.toml

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
 name = "Turing"
 uuid = "fce5fe82-541a-59a6-adf8-730c64b5f9a0"
-version = "0.40.5"
+version = "0.41.0"
 
 [deps]
 ADTypes = "47edcb42-4c32-4615-8424-f2b9edc5f35b"
@@ -45,7 +45,7 @@ Optim = "429524aa-4258-5aef-a3af-852621145aeb"
 
 [extensions]
 TuringDynamicHMCExt = "DynamicHMC"
-TuringOptimExt = "Optim"
+TuringOptimExt = ["Optim", "AbstractPPL"]
 
 [compat]
 ADTypes = "1.9"
@@ -64,7 +64,7 @@ Distributions = "0.25.77"
 DistributionsAD = "0.6"
 DocStringExtensions = "0.8, 0.9"
 DynamicHMC = "3.4"
-DynamicPPL = "0.37.2"
+DynamicPPL = "0.38"
 EllipticalSliceSampling = "0.5, 1, 2"
 ForwardDiff = "0.10.3, 1"
 Libtask = "0.9.3"

docs/make.jl

Lines changed: 10 additions & 8 deletions
@@ -4,14 +4,15 @@ using Turing
 using DocumenterInterLinks
 
 links = InterLinks(
-    "DynamicPPL" => "https://turinglang.org/DynamicPPL.jl/stable/objects.inv",
-    "AbstractPPL" => "https://turinglang.org/AbstractPPL.jl/stable/objects.inv",
-    "LinearAlgebra" => "https://docs.julialang.org/en/v1/objects.inv",
-    "AbstractMCMC" => "https://turinglang.org/AbstractMCMC.jl/stable/objects.inv",
-    "ADTypes" => "https://sciml.github.io/ADTypes.jl/stable/objects.inv",
-    "AdvancedVI" => "https://turinglang.org/AdvancedVI.jl/stable/objects.inv",
-    "DistributionsAD" => "https://turinglang.org/DistributionsAD.jl/stable/objects.inv",
-    "OrderedCollections" => "https://juliacollections.github.io/OrderedCollections.jl/stable/objects.inv",
+    "DynamicPPL" => "https://turinglang.org/DynamicPPL.jl/stable/",
+    "AbstractPPL" => "https://turinglang.org/AbstractPPL.jl/stable/",
+    "LinearAlgebra" => "https://docs.julialang.org/en/v1/",
+    "AbstractMCMC" => "https://turinglang.org/AbstractMCMC.jl/stable/",
+    "ADTypes" => "https://sciml.github.io/ADTypes.jl/stable/",
+    "AdvancedVI" => "https://turinglang.org/AdvancedVI.jl/stable/",
+    "DistributionsAD" => "https://turinglang.org/DistributionsAD.jl/stable/",
+    "OrderedCollections" => "https://juliacollections.github.io/OrderedCollections.jl/stable/",
+    "Distributions" => "https://juliastats.org/Distributions.jl/stable/",
 )
 
 # Doctest setup
@@ -27,6 +28,7 @@ makedocs(;
         "Inference" => "api/Inference.md",
         "Optimisation" => "api/Optimisation.md",
         "Variational " => "api/Variational.md",
+        "RandomMeasures " => "api/RandomMeasures.md",
     ],
 ],
 checkdocs=:exports,

docs/src/api.md

Lines changed: 36 additions & 30 deletions
@@ -31,7 +31,7 @@ DynamicPPL.@model function my_model() end
 sample(my_model(), Turing.Inference.Prior(), 100)
 ```
 
-even though [`Prior()`](@ref) is actually defined in the `Turing.Inference` module and [`@model`](@ref) in the `DynamicPPL` package.
+even though [`Prior()`](@ref) is actually defined in the `Turing.Inference` module and [`@model`](@extref `DynamicPPL.@model`) in the `DynamicPPL` package.
 
 ### Modelling
 
@@ -46,12 +46,13 @@ even though [`Prior()`](@ref) is actually defined in the `Turing.Inference` modu
 
 ### Inference
 
-| Exported symbol | Documentation | Description |
-|:--- |:--- |:--- |
-| `sample` | [`StatsBase.sample`](https://turinglang.org/AbstractMCMC.jl/stable/api/#Sampling-a-single-chain) | Sample from a model |
-| `MCMCThreads` | [`AbstractMCMC.MCMCThreads`](@extref) | Run MCMC using multiple threads |
-| `MCMCDistributed` | [`AbstractMCMC.MCMCDistributed`](@extref) | Run MCMC using multiple processes |
-| `MCMCSerial` | [`AbstractMCMC.MCMCSerial`](@extref) | Run MCMC using without parallelism |
+| Exported symbol | Documentation | Description |
+|:--- |:--- |:--- |
+| `sample` | [`StatsBase.sample`](https://turinglang.org/docs/usage/sampling-options/) | Sample from a model |
+| `MCMCThreads` | [`AbstractMCMC.MCMCThreads`](@extref) | Run MCMC using multiple threads |
+| `MCMCDistributed` | [`AbstractMCMC.MCMCDistributed`](@extref) | Run MCMC using multiple processes |
+| `MCMCSerial` | [`AbstractMCMC.MCMCSerial`](@extref) | Run MCMC without parallelism |
+| `loadstate` | [`Turing.Inference.loadstate`](@ref) | Load saved state from `MCMCChains.Chains` |
 
 ### Samplers
 
@@ -75,6 +76,34 @@ even though [`Prior()`](@ref) is actually defined in the `Turing.Inference` modu
 | `RepeatSampler` | [`Turing.Inference.RepeatSampler`](@ref) | A sampler that runs multiple times on the same variable |
 | `externalsampler` | [`Turing.Inference.externalsampler`](@ref) | Wrap an external sampler for use in Turing |
 
+### DynamicPPL utilities
+
+Please see the [generated quantities](https://turinglang.org/docs/tutorials/usage-generated-quantities/) and [probability interface](https://turinglang.org/docs/tutorials/usage-probability-interface/) guides for more information.
+
+| Exported symbol | Documentation | Description |
+|:--- |:--- |:--- |
+| `returned` | [`DynamicPPL.returned`](https://turinglang.org/DynamicPPL.jl/stable/api/#DynamicPPL.returned-Tuple%7BModel,%20NamedTuple%7D) | Calculate additional quantities defined in a model |
+| `predict` | [`StatsAPI.predict`](https://turinglang.org/DynamicPPL.jl/stable/api/#Predicting) | Generate samples from posterior predictive distribution |
+| `pointwise_loglikelihoods` | [`DynamicPPL.pointwise_loglikelihoods`](@extref) | Compute log likelihoods for each sample in a chain |
+| `logprior` | [`DynamicPPL.logprior`](@extref) | Compute log prior probability |
+| `logjoint` | [`DynamicPPL.logjoint`](@extref) | Compute log joint probability |
+| `condition` | [`AbstractPPL.condition`](@extref) | Condition a model on data |
+| `decondition` | [`AbstractPPL.decondition`](@extref) | Remove conditioning on data |
+| `conditioned` | [`DynamicPPL.conditioned`](@extref) | Return the conditioned values of a model |
+| `fix` | [`DynamicPPL.fix`](@extref) | Fix the value of a variable |
+| `unfix` | [`DynamicPPL.unfix`](@extref) | Unfix the value of a variable |
+| `OrderedDict` | [`OrderedCollections.OrderedDict`](@extref) | An ordered dictionary |
+
+### Initialisation strategies
+
+Turing.jl provides several strategies to initialise parameters for models.
+
+| Exported symbol | Documentation | Description |
+|:--- |:--- |:--- |
+| `InitFromPrior` | [`DynamicPPL.InitFromPrior`](@extref) | Obtain initial parameters from the prior distribution |
+| `InitFromUniform` | [`DynamicPPL.InitFromUniform`](@extref) | Obtain initial parameters by sampling uniformly in linked space |
+| `InitFromParams` | [`DynamicPPL.InitFromParams`](@extref) | Manually specify (possibly a subset of) initial parameters |
+
 ### Variational inference
 
 See the [docs of AdvancedVI.jl](https://turinglang.org/AdvancedVI.jl/stable/) for detailed usage and the [variational inference tutorial](https://turinglang.org/docs/tutorials/09-variational-inference/) for a basic walkthrough.
@@ -124,29 +153,6 @@ LogPoisson
 | `arraydist` | [`DistributionsAD.arraydist`](@extref) | Create a product distribution from an array of distributions |
 | `NamedDist` | [`DynamicPPL.NamedDist`](@extref) | A distribution that carries the name of the variable |
 
-### Predictions
-
-| Exported symbol | Documentation | Description |
-|:--- |:--- |:--- |
-| `predict` | [`StatsAPI.predict`](https://turinglang.org/DynamicPPL.jl/stable/api/#Predicting) | Generate samples from posterior predictive distribution |
-
-### Querying model probabilities and quantities
-
-Please see the [generated quantities](https://turinglang.org/docs/tutorials/usage-generated-quantities/) and [probability interface](https://turinglang.org/docs/tutorials/usage-probability-interface/) guides for more information.
-
-| Exported symbol | Documentation | Description |
-|:--- |:--- |:--- |
-| `returned` | [`DynamicPPL.returned`](https://turinglang.org/DynamicPPL.jl/stable/api/#DynamicPPL.returned-Tuple%7BModel,%20NamedTuple%7D) | Calculate additional quantities defined in a model |
-| `pointwise_loglikelihoods` | [`DynamicPPL.pointwise_loglikelihoods`](@extref) | Compute log likelihoods for each sample in a chain |
-| `logprior` | [`DynamicPPL.logprior`](@extref) | Compute log prior probability |
-| `logjoint` | [`DynamicPPL.logjoint`](@extref) | Compute log joint probability |
-| `condition` | [`AbstractPPL.condition`](@extref) | Condition a model on data |
-| `decondition` | [`AbstractPPL.decondition`](@extref) | Remove conditioning on data |
-| `conditioned` | [`DynamicPPL.conditioned`](@extref) | Return the conditioned values of a model |
-| `fix` | [`DynamicPPL.fix`](@extref) | Fix the value of a variable |
-| `unfix` | [`DynamicPPL.unfix`](@extref) | Unfix the value of a variable |
-| `OrderedDict` | [`OrderedCollections.OrderedDict`](@extref) | An ordered dictionary |
-
 ### Point estimates
 
 See the [mode estimation tutorial](https://turinglang.org/docs/tutorials/docs-17-mode-estimation/) for more information.

docs/src/api/RandomMeasures.md

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
+# API: `Turing.RandomMeasures`
+
+```@autodocs
+Modules = [Turing.RandomMeasures]
+Order = [:type, :function]
+```

ext/TuringDynamicHMCExt.jl

Lines changed: 6 additions & 10 deletions
@@ -44,26 +44,22 @@ struct DynamicNUTSState{L,V<:DynamicPPL.AbstractVarInfo,C,M,S}
     stepsize::S
 end
 
-function DynamicPPL.initialsampler(::DynamicPPL.Sampler{<:DynamicNUTS})
-    return DynamicPPL.SampleFromUniform()
-end
-
-function DynamicPPL.initialstep(
+function Turing.Inference.initialstep(
     rng::Random.AbstractRNG,
     model::DynamicPPL.Model,
-    spl::DynamicPPL.Sampler{<:DynamicNUTS},
+    spl::DynamicNUTS,
     vi::DynamicPPL.AbstractVarInfo;
     kwargs...,
 )
     # Ensure that initial sample is in unconstrained space.
-    if !DynamicPPL.islinked(vi)
+    if !DynamicPPL.is_transformed(vi)
        vi = DynamicPPL.link!!(vi, model)
        vi = last(DynamicPPL.evaluate!!(model, vi))
     end
 
     # Define log-density function.
     ℓ = DynamicPPL.LogDensityFunction(
-        model, DynamicPPL.getlogjoint_internal, vi; adtype=spl.alg.adtype
+        model, DynamicPPL.getlogjoint_internal, vi; adtype=spl.adtype
     )
 
     # Perform initial step.
@@ -84,14 +80,14 @@ end
 function AbstractMCMC.step(
     rng::Random.AbstractRNG,
     model::DynamicPPL.Model,
-    spl::DynamicPPL.Sampler{<:DynamicNUTS},
+    spl::DynamicNUTS,
     state::DynamicNUTSState;
     kwargs...,
 )
     # Compute next sample.
     vi = state.vi
     ℓ = state.logdensity
-    steps = DynamicHMC.mcmc_steps(rng, spl.alg.sampler, state.metric, ℓ, state.stepsize)
+    steps = DynamicHMC.mcmc_steps(rng, spl.sampler, state.metric, ℓ, state.stepsize)
     Q, _ = DynamicHMC.mcmc_next_step(steps, state.cache)
 
     # Create next sample and state.

ext/TuringOptimExt.jl

Lines changed: 2 additions & 1 deletion
@@ -1,6 +1,7 @@
 module TuringOptimExt
 
 using Turing: Turing
+using AbstractPPL: AbstractPPL
 import Turing: DynamicPPL, NamedArrays, Accessors, Optimisation
 using Optim: Optim
 
@@ -186,7 +187,7 @@ function _optimize(
         f.ldf.model, f.ldf.getlogdensity, vi_optimum; adtype=f.ldf.adtype
     )
     vals_dict = Turing.Inference.getparams(f.ldf.model, vi_optimum)
-    iters = map(DynamicPPL.varname_and_value_leaves, keys(vals_dict), values(vals_dict))
+    iters = map(AbstractPPL.varname_and_value_leaves, keys(vals_dict), values(vals_dict))
     vns_vals_iter = mapreduce(collect, vcat, iters)
     varnames = map(Symbol ∘ first, vns_vals_iter)
     vals = map(last, vns_vals_iter)
