
Commit fb4ddb0

fix web links and add relevant text
1 parent a137f5d commit fb4ddb0

File tree

1 file changed: +5 -9 lines changed


lectures/mix_model.md

Lines changed: 5 additions & 9 deletions
@@ -4,7 +4,7 @@ jupytext:
     extension: .md
     format_name: myst
     format_version: 0.13
-    jupytext_version: 1.17.2
+    jupytext_version: 1.17.3
 kernelspec:
   display_name: Python 3 (ipykernel)
   language: python
@@ -114,7 +114,7 @@ In this lecture, we'll learn about
 
 * how nature can *mix* between two distributions $f$ and $g$ to create a new distribution $h$.
 
-* The Kullback-Leibler statistical divergence <https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence> that governs statistical learning under an incorrect statistical model
+* The [Kullback-Leibler statistical divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) that governs statistical learning under an incorrect statistical model
 
 * A useful Python function `numpy.searchsorted` that, in conjunction with a uniform random number generator, can be used to sample from an arbitrary distribution
 
@@ -229,7 +229,7 @@ Here is pseudo code for a direct "method 1" for drawing from our compound lottery
 * put the first two steps in a big loop and do them for each realization of $w$
 
 
-Our second method uses a uniform distribution and the following fact that we also described and used in the quantecon lecture <https://python.quantecon.org/prob_matrix.html>:
+Our second method uses a uniform distribution and the following fact that we also described and used in the [quantecon lecture on elementary probability with matrices](https://python.quantecon.org/prob_matrix.html):
 
 * If a random variable $X$ has c.d.f. $F$, then a random variable $F^{-1}(U)$ also has c.d.f. $F$, where $U$ is a uniform random variable on $[0,1]$.
 
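
A note for readers of this hunk: the "method 1" recipe above is easy to make concrete. Below is a minimal sketch, assuming $f$ and $g$ are Beta densities and $p = 0.5$; these parameters are illustrative assumptions, not code from this commit or the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Beta parameters for f and g; the lecture's actual choices may differ.
F_a, F_b = 1.0, 1.0     # f ~ Beta(1, 1)
G_a, G_b = 3.0, 1.2     # g ~ Beta(3, 1.2)

def draw_lottery_direct(p, N):
    """Method 1: flip w ~ Bernoulli(p) for each draw, then sample from f or g."""
    w = rng.uniform(size=N) < p                    # Bernoulli(p) coin flips
    # Draw from both densities and keep the one that w selects.
    return np.where(w,
                    rng.beta(F_a, F_b, size=N),    # draws from f
                    rng.beta(G_a, G_b, size=N))    # draws from g

sample = draw_lottery_direct(0.5, 10_000)
```
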
@@ -240,15 +240,13 @@ a uniform distribution on $[0,1]$ and computing $F^{-1}(U)$.
 We'll use this fact
 in conjunction with the `numpy.searchsorted` command to sample from $H$ directly.
 
-See <https://numpy.org/doc/stable/reference/generated/numpy.searchsorted.html> for the
-`searchsorted` function.
+See the [numpy.searchsorted documentation](https://numpy.org/doc/stable/reference/generated/numpy.searchsorted.html) for details on the `searchsorted` function.
 
 See the [Mr. P Solver video on Monte Carlo simulation](https://www.google.com/search?q=Mr.+P+Solver+video+on+Monte+Carlo+simulation&oq=Mr.+P+Solver+video+on+Monte+Carlo+simulation) to see other applications of this powerful trick.
 
 In the Python code below, we'll use both of our methods and confirm that each of them does a good job of sampling
 from our target mixture distribution.
 
-
 ```{code-cell} ipython3
 @jit
 def draw_lottery(p, N):
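
The `numpy.searchsorted` trick referenced in this hunk deserves a concrete illustration. Here is a minimal sketch of "method 2", using the same illustrative Beta mixture as above; the grid size and the use of `scipy.stats.beta` are assumptions of this sketch, not the lecture's code:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)

# Tabulate the mixture c.d.f. H = p*F + (1-p)*G on a grid.
p = 0.5
x_grid = np.linspace(0, 1, 1_000)
H = p * beta.cdf(x_grid, 1, 1) + (1 - p) * beta.cdf(x_grid, 3, 1.2)

# If U ~ Uniform[0,1], then H^{-1}(U) has c.d.f. H.  searchsorted returns the
# first grid index where H >= U, so x_grid[...] approximates H^{-1}(U).
U = rng.uniform(size=10_000)
draws = x_grid[np.searchsorted(H, U)]
```
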
@@ -582,7 +580,6 @@ recorded on the $x$ axis.
 
 Thus, the graph below confirms how a minimum KL divergence governs what our type 1 agent eventually learns.
 
-
 ```{code-cell} ipython3
 α_arr_x = α_arr[(α_arr<discretion)|(α_arr>discretion)]
 π_lim_arr = π_lim_v(α_arr_x)
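
The "minimum KL divergence" claim in this hunk can be checked numerically. Below is a sketch under the same illustrative Beta parameters used above; the lecture's own KL computation is not shown in this diff and may differ:

```python
import numpy as np
from scipy.stats import beta

# Interior grid: Beta densities can blow up at the endpoints.
x = np.linspace(1e-6, 1 - 1e-6, 100_000)
dx = x[1] - x[0]

def KL(p_vals, q_vals):
    """Riemann-sum approximation of the integral of p * log(p/q)."""
    return np.sum(p_vals * np.log(p_vals / q_vals)) * dx

h = 0.5 * beta.pdf(x, 1, 1) + 0.5 * beta.pdf(x, 3, 1.2)   # nature's mixture
f = beta.pdf(x, 1, 1)
g = beta.pdf(x, 3, 1.2)

# The type 1 agent's beliefs settle on whichever of f, g is KL-closer to h.
print(KL(h, f), KL(h, g))
```
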
@@ -648,7 +645,7 @@ $$
 We'll use numpyro to approximate this equation.
 
 We'll create graphs of the posterior $\pi_t(\alpha)$ as
-$t \rightarrow +\infty$ corresponding to ones presented in the quantecon lecture <https://python.quantecon.org/bayes_nonconj.html>.
+$t \rightarrow +\infty$ corresponding to ones presented in the [quantecon lecture on Bayesian nonconjugate priors](https://python.quantecon.org/bayes_nonconj.html).
 
 We anticipate that a posterior distribution will collapse around the true $\alpha$ as
 $t \rightarrow + \infty$.
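
The next hunk's context line shows that the lecture defines an `MCMC_run(ws)` helper whose body lies outside this diff. For orientation only, here is one way such a numpyro posterior approximation for $\alpha$ could be sketched; the model, densities, and sampler settings are assumptions, not the lecture's implementation:

```python
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def model(w):
    # Uniform prior over the mixing parameter α.
    α = numpyro.sample('α', dist.Uniform(0., 1.))
    # Mixture likelihood α f(w) + (1 - α) g(w), with illustrative Beta densities.
    lik = α * jnp.exp(dist.Beta(1., 1.).log_prob(w)) \
        + (1 - α) * jnp.exp(dist.Beta(3., 1.2).log_prob(w))
    numpyro.factor('loglik', jnp.log(lik).sum())

def run_posterior(ws, seed=0):
    """Draw posterior samples of α given an observed history ws."""
    mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=2_000, progress_bar=False)
    mcmc.run(random.PRNGKey(seed), w=jnp.asarray(ws))
    return mcmc.get_samples()['α']
```

As the observed history grows, samples returned by such a routine should concentrate near the true $\alpha$, consistent with the anticipation stated in the hunk above.
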
@@ -684,7 +681,6 @@ def MCMC_run(ws):
 The following code generates the graph below that displays Bayesian posteriors for $\alpha$ at various history lengths.
 
 ```{code-cell} ipython3
-
 fig, ax = plt.subplots(figsize=(10, 6))
 
 for i in range(len(sizes)):
