Merge pull request #77 from TuringLang/ml/docs
mileslucas authored Jan 4, 2022
2 parents 8386dc9 + c4d37f1 commit ad2cce4
Showing 4 changed files with 10 additions and 3 deletions.
3 changes: 2 additions & 1 deletion docs/src/examples/correlated.md
@@ -34,7 +34,8 @@ using StatsPlots
 θ1 = range(-1, 1, length=1000)
 θ2 = range(-1, 1, length=1000)
-logf = [model.loglike([t1, t2, 0, 0]) for t2 in θ2, t1 in θ1]
+loglike = model.prior_transform_and_loglikelihood.loglikelihood
+logf = [loglike([t1, t2, 0, 0]) for t2 in θ2, t1 in θ1]
 heatmap(
     θ1, θ2, exp.(logf),
     aspect_ratio=1,
3 changes: 2 additions & 1 deletion docs/src/examples/eggbox.md
@@ -32,7 +32,8 @@ using StatsPlots
 x = range(0, 1, length=1000)
 y = range(0, 1, length=1000)
-logf = [model.loglike([xi, yi]) for yi in y, xi in x]
+loglike = model.prior_transform_and_loglikelihood.loglikelihood
+logf = [loglike([xi, yi]) for yi in y, xi in x]
 heatmap(
     x, y, logf,
     xlims=extrema(x),
3 changes: 2 additions & 1 deletion docs/src/examples/shells.md
@@ -32,7 +32,8 @@ using StatsPlots
 x = range(-6, 6, length=1000)
 y = range(-2.5, 2.5, length=1000)
-logf = [model.loglike([xi, yi]) for yi in y, xi in x]
+loglike = model.prior_transform_and_loglikelihood.loglikelihood
+logf = [loglike([xi, yi]) for yi in y, xi in x]
 heatmap(
     x, y, exp.(logf),
     xlims=extrema(x),
4 changes: 4 additions & 0 deletions docs/src/index.md
@@ -24,6 +24,10 @@ To use the nested samplers first install this library
 julia> ]add NestedSamplers
 ```
 
+## Background
+
+For statistical background and a more in-depth introduction to nested sampling, I recommend the [dynesty documentation](https://dynesty.readthedocs.io/en/latest/overview.html). In short, nested sampling is a technique for simultaneously estimating the Bayesian evidence and the posterior distribution (according to [Bayes' theorem](https://en.wikipedia.org/wiki/Bayes%27_theorem)) from nested iso-likelihood shells. These shells allow a quadrature estimate of the integral for the Bayesian evidence, which we can use for model selection, as well as the statistical weights for the underlying "live" points, which is where we get our posterior samples from!
+
 ## Usage
 
 The samplers are built using the [AbstractMCMC](https://github.com/turinglang/abstractmcmc.jl) interface. To use it, we need to create a [`NestedModel`](@ref).
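As a side note on the Background section added above: the quadrature it mentions approximates the evidence Z = ∫ L(θ) π(θ) dθ by summing the likelihoods of discarded live points against the shrinking prior volume. A rough, illustrative sketch of that idea (not part of this PR, not numerically robust, and independent of the NestedSamplers internals) might look like:

```julia
# Illustrative only: a naive version of the nested-sampling evidence quadrature.
# `logL` is assumed to hold the log-likelihoods of points discarded in order of
# increasing likelihood, and `nlive` is the number of live points.
function evidence_estimate(logL::AbstractVector, nlive::Integer)
    X = exp.(-(1:length(logL)) ./ nlive)       # expected prior-volume shrinkage
    ΔX = [1.0 - X[1]; X[1:end-1] .- X[2:end]]  # widths of the iso-likelihood shells
    Z = sum(exp.(logL) .* ΔX)                  # quadrature estimate of the evidence
    weights = exp.(logL) .* ΔX ./ Z            # posterior weights for the samples
    return Z, weights
end
```

In practice the accumulation is done in log space (e.g. with a logsumexp) to avoid overflow; the sketch above only conveys the shell-by-shell structure of the estimate.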

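To make the Usage section referenced above concrete, here is a minimal, hypothetical end-to-end sketch. It assumes the `NestedModel(loglikelihood, prior_transform)` constructor, the `Nested(ndims, nactive)` sampler, and the `sample` entry point from the StatsBase/AbstractMCMC interface; keyword arguments such as `dlogz` may differ between versions, so check the package documentation for exact signatures.

```julia
using NestedSamplers
using StatsBase: sample   # `sample` is extended through the AbstractMCMC interface

# log-likelihood of an isotropic 2D Gaussian (example problem, not from this PR)
loglike(x) = -sum(abs2, x) / 2

# map the unit hypercube to the prior, here Uniform(-5, 5) in each dimension
prior_transform(u) = 10 .* u .- 5

model = NestedModel(loglike, prior_transform)
sampler = Nested(2, 500)   # 2 dimensions, 500 live points

# run until the estimated remaining evidence contribution is small
chain, state = sample(model, sampler; dlogz=0.2)
```

In recent versions the returned chain also carries the nested-sampling weights, so equally weighted posterior draws can be obtained by resampling with those weights.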