# 0.40.0

## Breaking changes

**DynamicPPL 0.37**

Turing.jl v0.40 updates DynamicPPL compatibility to 0.37.
The summary provided here is intended for end-users of Turing.
If you are a package developer, or would otherwise like to understand these changes in depth, please see [the DynamicPPL changelog](https://github.com/TuringLang/DynamicPPL.jl/blob/main/HISTORY.md#0370).

  - **`@submodel`** has been completely removed; please use `to_submodel` instead.

  - **Prior and likelihood calculations** are now completely separated in Turing. Previously, the log-density was accumulated in a single field, so there was no clear way to separate the prior and likelihood components.

      + **`@addlogprob! f`**, where `f` is a float, now adds to the likelihood by default.
      + You can instead use **`@addlogprob! (; logprior=x, loglikelihood=y)`** to control which log-density component to add to.
      + This means that `PriorContext` and `LikelihoodContext` are no longer needed, and both have been removed.
  - The special **`__context__`** variable has been removed. If you still need to access the evaluation context, it is now available as `__model__.context`.
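As an illustration, the new `@addlogprob!` behaviour can be sketched as follows. This is a hypothetical model; the name and the numeric values are illustrative only, not part of any API.

```julia
using Turing

# Hypothetical model illustrating the new `@addlogprob!` semantics.
@model function demo()
    x ~ Normal()

    # A bare float is now added to the *likelihood* component:
    @addlogprob! -0.5

    # A NamedTuple controls which component each term is added to:
    @addlogprob! (; logprior = -1.0, loglikelihood = -2.0)

    # `__context__` no longer exists; if you need the evaluation
    # context, it is available through the model itself:
    # ctx = __model__.context
end
```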
**Log-density in chains**

When sampling from a Turing model, the resulting `MCMCChains.Chains` object now contains not only the log-joint (accessible via `chain[:lp]`) but also the log-prior and the log-likelihood (`chain[:logprior]` and `chain[:loglikelihood]` respectively).

These values correspond to the log-density of the sampled variables exactly as written in the model definition (i.e. the user's parameterisation), and thus ignore any linking (transformation to unconstrained space).
For example, if the model is `@model f() = x ~ LogNormal()`, `chain[:lp]` will always contain the value of `logpdf(LogNormal(), x)` for each sampled value of `x`.
Previously these values could be incorrect if linking had occurred: some samplers would return `logpdf(Normal(), log(x))`, i.e. the log-density with respect to the transformed distribution.
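Concretely, a sketch of what this looks like (the sampler choice and draw count here are arbitrary):

```julia
using Turing

@model f() = x ~ LogNormal()
chain = sample(f(), NUTS(), 100)

# All three now refer to the model as written, regardless of any
# internal linking performed by the sampler:
chain[:lp]             # log-joint: here logpdf(LogNormal(), x)
chain[:logprior]       # log-prior (equal to :lp in this model)
chain[:loglikelihood]  # log-likelihood (zero: the model has no observations)
```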
**Gibbs sampler**

When using Turing's Gibbs sampler, e.g. `Gibbs(:x => MH(), :y => HMC(0.1, 20))`, the conditioned variables (for example `y` during the MH step, or `x` during the HMC step) are treated as true observations.
Thus the log-density associated with them is added to the likelihood.
Previously it would effectively be added to the prior (in the sense that if `LikelihoodContext` was used, it would be ignored).
This is unlikely to affect most users, but we mention it here to be explicit.
This change only affects the log probabilities as the Gibbs component samplers see them; the resulting chain will still include the usual log-prior, log-likelihood, and log-joint, as described above.
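For example, in the following sketch (a hypothetical model; the names are illustrative):

```julia
using Turing

# Hypothetical two-variable model with one observed argument.
@model function gdemo(z)
    x ~ Normal()
    y ~ Normal(x)
    z ~ Normal(y)
end

# During the MH step for :x, the current value of :y is treated as an
# observation, so its log-density counts towards the likelihood seen by
# that component sampler. The returned chain still reports :logprior,
# :loglikelihood, and :lp with respect to the model as written.
chain = sample(gdemo(1.0), Gibbs(:x => MH(), :y => HMC(0.1, 20)), 500)
```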
**Particle Gibbs**

Previously, only 'true' observations (i.e., `x ~ dist` where `x` is a model argument or is conditioned upon) would trigger resampling of particles.
Specifically, there were two cases where resampling would not be triggered:

  - Calls to `@addlogprob!`
  - Gibbs-conditioned variables, e.g. `y` in `Gibbs(:x => PG(20), :y => MH())`

Turing 0.40 changes this such that both of the above now trigger resampling.
(The second case follows from the changes to the Gibbs sampler described above.)

This release also fixes a bug where, if the model ended with one of these statements, its contribution to the particle weight would be ignored, leading to incorrect results.
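A sketch of a model affected by both points (the model name and values are hypothetical):

```julia
using Turing

# Under 0.40, the @addlogprob! call below triggers particle resampling,
# and (being the final statement in the model) its contribution to the
# particle weight is no longer dropped.
@model function pgdemo(y)
    x ~ Normal()
    y ~ Normal(x)
    @addlogprob! logpdf(Normal(x), 2.0)
end

chain = sample(pgdemo(1.0), PG(20), 100)
```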
|  | 48 | + | 
## Other changes

  - Sampling with `Prior()` should now be about twice as fast, because we avoid evaluating the model twice on every iteration.
  - `Turing.Inference.Transition` now has different fields.
    If `t isa Turing.Inference.Transition`, then `t.stat` is always a NamedTuple, never `nothing` (if it genuinely has no information, it is an empty NamedTuple).
    Furthermore, `t.lp` has been split into `t.logprior` and `t.loglikelihood` (see also the 'Log-density in chains' section above).
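If you work with raw transitions rather than a `Chains` object, the change looks roughly as follows. This assumes transitions are obtained via AbstractMCMC's `chain_type = Any` keyword (which returns them unconverted); adapt to however you currently obtain `Transition` values.

```julia
using Turing

@model onevar() = x ~ Normal()

# chain_type = Any skips conversion to MCMCChains.Chains, leaving the
# raw Turing.Inference.Transition values.
transitions = sample(onevar(), MH(), 10; chain_type = Any)
t = first(transitions)

t.stat                         # always a NamedTuple, possibly empty
t.logprior + t.loglikelihood   # replaces the old t.lp (the log-joint)
```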

# 0.39.9
