* Bump minor version
* Do not take an initial step before starting the chain in HMC (#2674)
* Do not take an initial step before starting the chain in HMC
* Fix some tests
* update changelog
* Compatibility with DynamicPPL 0.38 + InitContext (#2676)
* Import `varname_leaves` etc from AbstractPPL instead
* initial updates for InitContext
* More fixes
* Fix pMCMC
* Fix Gibbs
* More fixes, reexport InitFrom
* Fix a bunch of tests; I'll let CI tell me what's still broken...
* Remove comment
* Fix more tests
* More test fixes
* Fix more tests
* fix GeneralizedExtremeValue numerical test
* fix sample method
* fix ESS reproducibility
* Fix externalsampler test correctly
* Fix everything (I _think_)
* Add changelog
* Fix remaining tests (for real this time)
* Specify default chain type in Turing
* fix DPPL revision
* Fix changelog to mention unwrapped NT / Dict for initial_params
* Remove references to islinked, set_flag, unset_flag
* use `setleafcontext(::Model, ::AbstractContext)`
* Fix for upstream removal of default_chain_type
* Add clarifying comment for IS test
* Revert ESS test (and add some numerical accuracy checks)
* istrans -> is_transformed
* Remove `loadstate` and `resume_from`
* Remove a Sampler test
* Paper over one crack
* fix `resume_from`
* remove a `Sampler` test
* Update HISTORY.md
Co-authored-by: Markus Hauru <[email protected]>
* Remove `Sampler`, remove `InferenceAlgorithm`, transfer `initialstep`, `init_strategy`, and other functions from DynamicPPL to Turing (#2689)
* Remove `Sampler` and move its interface to Turing
* Test fixes (this is admittedly quite tiring)
* Fix a couple of Gibbs tests (no doubt there are more)
* actually fix the Gibbs ones
* actually fix it this time
* fix typo
* point to breaking
* Improve loadstate implementation
* Re-add tests that were removed from DynamicPPL
* Fix qualifier in src/mcmc/external_sampler.jl
Co-authored-by: Xianda Sun <[email protected]>
* Remove the default argument for initial_params
* Remove DynamicPPL sources
---------
Co-authored-by: Xianda Sun <[email protected]>
* Fix a word in changelog
* Improve changelog
* Add PNTDist to changelog
---------
Co-authored-by: Markus Hauru <[email protected]>
Co-authored-by: Xianda Sun <[email protected]>
* Fix all docs warnings
---------
Co-authored-by: Markus Hauru <[email protected]>
Co-authored-by: Xianda Sun <[email protected]>
HISTORY.md (+62 lines)

# 0.41.0

## DynamicPPL 0.38

Turing.jl v0.41 brings with it all the underlying changes in DynamicPPL 0.38.
Please see [the DynamicPPL changelog](https://github.com/TuringLang/DynamicPPL.jl/blob/main/HISTORY.md) for full details: in this section we only describe the changes that will directly affect end-users of Turing.jl.

### Performance

A number of functions such as `returned` and `predict` will have substantially better performance in this release.

### `ProductNamedTupleDistribution`

`Distributions.ProductNamedTupleDistribution` can now be used on the right-hand side of `~` in Turing models.
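
As a rough sketch (the model and field names below are illustrative, not taken from the changelog), calling `Distributions.product_distribution` on a `NamedTuple` of distributions yields a `ProductNamedTupleDistribution` that can appear on the right-hand side of `~`:

```julia
using Turing, Distributions

@model function demo_pntd()
    # x is a NamedTuple-valued random variable with fields x.mu and x.sigma
    x ~ product_distribution((mu=Normal(0, 1), sigma=Exponential(1)))
    y ~ Normal(x.mu, x.sigma)
end

sample(demo_pntd(), NUTS(), 100)
```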

### Initial parameters

**Initial parameters for MCMC sampling must now be specified in a different form.**
You still need to use the `initial_params` keyword argument to `sample`, but the allowed values are different.
For almost all samplers in Turing.jl (except `Emcee`) this should now be a `DynamicPPL.AbstractInitStrategy`.

There are three kinds of initialisation strategies provided out of the box with Turing.jl (they are exported, so you can use them directly with `using Turing`); a short sketch of their usage follows this list:

  - `InitFromPrior()`: Sample from the prior distribution. This is the default for most samplers in Turing.jl (if you don't specify `initial_params`).
  - `InitFromUniform(a, b)`: Sample uniformly from `[a, b]` in linked space. This is the default for Hamiltonian samplers. If `a` and `b` are not specified, it defaults to `[-2, 2]`, which preserves the behaviour of previous versions (and mimics that of Stan).
  - `InitFromParams(p)`: Explicitly provide a set of initial parameters. **Note: `p` must be either a `NamedTuple` or an `AbstractDict{<:VarName}`; it can no longer be a `Vector`.** Parameters must be provided in unlinked space, even if the sampler later performs linking.
    For this release of Turing.jl, you can also provide a bare `NamedTuple` or `AbstractDict{<:VarName}`, which will automatically be wrapped in `InitFromParams` for you. This is an intermediate measure for backwards compatibility and will eventually be removed.
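
A minimal sketch of passing each strategy to `sample` (the model itself is illustrative):

```julia
using Turing

@model function demo()
    x ~ Normal()
    y ~ Normal(x, 1)
end

# Default for most samplers: draw initial parameters from the prior.
sample(demo(), NUTS(), 100; initial_params=InitFromPrior())

# Default for Hamiltonian samplers: uniform draws in linked space.
sample(demo(), NUTS(), 100; initial_params=InitFromUniform(-2, 2))

# Explicitly specified initial values, given in unlinked space.
sample(demo(), NUTS(), 100; initial_params=InitFromParams((x=0.0, y=1.0)))
```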

This change was made because `Vector`s are semantically ambiguous.
It is not clear which element of the vector corresponds to which variable in the model, nor is it clear whether the parameters are in linked or unlinked space.
Previously, both of these depended on the internal structure of the `VarInfo`, which is an implementation detail.
In contrast, the behaviour of `AbstractDict`s and `NamedTuple`s is invariant to the ordering of variables, and it is also easier for readers to understand which variable is being set to which value.

If you were previously using `varinfo[:]` to extract a vector of initial parameters, you can now use `Dict(k => varinfo[k] for k in keys(varinfo))` to extract a `Dict` of initial parameters.
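
For example, a sketch of the migration (assuming `model` is your model, with the initial parameters previously taken from its `VarInfo`):

```julia
using Turing
using DynamicPPL: VarInfo

vi = VarInfo(model)
# Previously: initial_params = vi[:]
init = Dict(k => vi[k] for k in keys(vi))
sample(model, NUTS(), 100; initial_params=InitFromParams(init))
```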

For more details about initialisation, you can also refer to [the main TuringLang docs](https://turinglang.org/docs/usage/sampling-options/#specifying-initial-parameters) and/or the [DynamicPPL API docs](https://turinglang.org/DynamicPPL.jl/stable/api/#DynamicPPL.InitFromPrior).

### `resume_from` and `loadstate`

The `resume_from` keyword argument to `sample` has been removed.
Instead of `sample(...; resume_from=chain)`, you can use `sample(...; initial_state=loadstate(chain))`, which is entirely equivalent.
`loadstate` is now exported from Turing instead of DynamicPPL.

Note that `loadstate` only works for `MCMCChains.Chains`.
FlexiChains users should consult the FlexiChains docs, where this functionality is described in detail.
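
A sketch of the migration, assuming `model` is defined (`save_state=true` asks `sample` to store the final sampler state in the chain so that `loadstate` can retrieve it):

```julia
chain = sample(model, NUTS(), 1000; save_state=true)

# Previously: sample(model, NUTS(), 1000; resume_from=chain)
chain2 = sample(model, NUTS(), 1000; initial_state=loadstate(chain))
```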
48
+
49
+
### `pointwise_logdensities`
50
+
51
+
`pointwise_logdensities(model, chn)`, `pointwise_loglikelihoods(...)`, and `pointwise_prior_logdensities(...)` now return an `MCMCChains.Chains` object if `chn` is itself an `MCMCChains.Chains` object.
52
+
The old behaviour of returning an `OrderedDict` is still available: you just need to pass `OrderedDict` as the third argument, i.e., `pointwise_logdensities(model, chn, OrderedDict)`.
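
For instance (assuming `model` and an existing `MCMCChains.Chains` object `chn`; `OrderedDict` is re-exported by Turing):

```julia
pld_chain = pointwise_logdensities(model, chn)               # MCMCChains.Chains (new default)
pld_dict = pointwise_logdensities(model, chn, OrderedDict)   # old OrderedDict behaviour
```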
53
+
54
+
## Initial step in MCMC sampling
55
+
56
+
HMC and NUTS samplers no longer take an extra single step before starting the chain.
57
+
This means that if you do not discard any samples at the start, the first sample will be the initial parameters (which may be user-provided).

Note that if the initial sample is included, the corresponding sampler statistics will be `missing`.
Due to a technical limitation of MCMCChains.jl, this causes all indexing into the chain to return `Union{Float64, Missing}` or similar.
If you want the old behaviour, you can discard the first sample (e.g. using `discard_initial=1`).
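
A sketch, assuming `model` is defined:

```julia
# The first stored sample is now the initial point, and its sampler
# statistics (step size, tree depth, etc.) are recorded as `missing`.
chn = sample(model, NUTS(), 1000)

# To recover the previous behaviour, drop the first sample:
chn = sample(model, NUTS(), 1000; discard_initial=1)
```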

docs/src/api.md (+36, -30 lines)

```julia
DynamicPPL.@model function my_model() end
sample(my_model(), Turing.Inference.Prior(), 100)
```

even though [`Prior()`](@ref) is actually defined in the `Turing.Inference` module and [`@model`](@extref`DynamicPPL.@model`) in the `DynamicPPL` package.

### Modelling

|`sample`|[`StatsBase.sample`](https://turinglang.org/docs/usage/sampling-options/)| Sample from a model |
|`MCMCThreads`|[`AbstractMCMC.MCMCThreads`](@extref)| Run MCMC using multiple threads |
|`MCMCDistributed`|[`AbstractMCMC.MCMCDistributed`](@extref)| Run MCMC using multiple processes |
|`MCMCSerial`|[`AbstractMCMC.MCMCSerial`](@extref)| Run MCMC without parallelism |
|`loadstate`|[`Turing.Inference.loadstate`](@ref)| Load saved state from `MCMCChains.Chains` |

### Samplers
57
58
@@ -75,6 +76,34 @@ even though [`Prior()`](@ref) is actually defined in the `Turing.Inference` modu
75
76
|`RepeatSampler`|[`Turing.Inference.RepeatSampler`](@ref)| A sampler that runs multiple times on the same variable |
76
77
|`externalsampler`|[`Turing.Inference.externalsampler`](@ref)| Wrap an external sampler for use in Turing |
77
78
79
+
### DynamicPPL utilities
80
+
81
+
Please see the [generated quantities](https://turinglang.org/docs/tutorials/usage-generated-quantities/) and [probability interface](https://turinglang.org/docs/tutorials/usage-probability-interface/) guides for more information.
|`returned`|[`DynamicPPL.returned`](https://turinglang.org/DynamicPPL.jl/stable/api/#DynamicPPL.returned-Tuple%7BModel,%20NamedTuple%7D)| Calculate additional quantities defined in a model |
86
+
|`predict`|[`StatsAPI.predict`](https://turinglang.org/DynamicPPL.jl/stable/api/#Predicting)| Generate samples from posterior predictive distribution |
87
+
|`pointwise_loglikelihoods`|[`DynamicPPL.pointwise_loglikelihoods`](@extref)| Compute log likelihoods for each sample in a chain |
88
+
|`logprior`|[`DynamicPPL.logprior`](@extref)| Compute log prior probability |
89
+
|`logjoint`|[`DynamicPPL.logjoint`](@extref)| Compute log joint probability |
90
+
|`condition`|[`AbstractPPL.condition`](@extref)| Condition a model on data |
91
+
|`decondition`|[`AbstractPPL.decondition`](@extref)| Remove conditioning on data |
92
+
|`conditioned`|[`DynamicPPL.conditioned`](@extref)| Return the conditioned values of a model |
93
+
|`fix`|[`DynamicPPL.fix`](@extref)| Fix the value of a variable |
94
+
|`unfix`|[`DynamicPPL.unfix`](@extref)| Unfix the value of a variable |
95
+
|`OrderedDict`|[`OrderedCollections.OrderedDict`](@extref)| An ordered dictionary |
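
A brief sketch of a few of these utilities in use (the model is illustrative):

```julia
using Turing

@model function m()
    x ~ Normal()
    y ~ Normal(x, 1)
end

model = m()
cond_model = condition(model, (; y=1.5))  # condition on observed data
fixed_model = fix(model, (; x=0.0))       # fix a variable to a constant value
lp = logjoint(model, (x=0.0, y=1.0))      # log joint density at the given values
```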

### Initialisation strategies

Turing.jl provides several strategies to initialise parameters for models.

|`InitFromPrior`|[`DynamicPPL.InitFromPrior`](@extref)| Obtain initial parameters from the prior distribution |
|`InitFromUniform`|[`DynamicPPL.InitFromUniform`](@extref)| Obtain initial parameters by sampling uniformly in linked space |
|`InitFromParams`|[`DynamicPPL.InitFromParams`](@extref)| Manually specify (possibly a subset of) initial parameters |

### Variational inference

See the [docs of AdvancedVI.jl](https://turinglang.org/AdvancedVI.jl/stable/) for detailed usage and the [variational inference tutorial](https://turinglang.org/docs/tutorials/09-variational-inference/) for a basic walkthrough.

|`arraydist`|[`DistributionsAD.arraydist`](@extref)| Create a product distribution from an array of distributions |
|`NamedDist`|[`DynamicPPL.NamedDist`](@extref)| A distribution that carries the name of the variable |
### Point estimates
See the [mode estimation tutorial](https://turinglang.org/docs/tutorials/docs-17-mode-estimation/) for more information.