lectures/mix_model.md
5 additions & 9 deletions
@@ -4,7 +4,7 @@ jupytext:
     extension: .md
     format_name: myst
     format_version: 0.13
-    jupytext_version: 1.17.2
+    jupytext_version: 1.17.3
 kernelspec:
   display_name: Python 3 (ipykernel)
   language: python
@@ -114,7 +114,7 @@ In this lecture, we'll learn about

 * how nature can *mix* between two distributions $f$ and $g$ to create a new distribution $h$.

-* The Kullback-Leibler statistical divergence<https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence> that governs statistical learning under an incorrect statistical model
+* The [Kullback-Leibler statistical divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) that governs statistical learning under an incorrect statistical model

 * A useful Python function `numpy.searchsorted` that, in conjunction with a uniform random number generator, can be used to sample from an arbitrary distribution

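Since the bullet above leans on the Kullback-Leibler divergence, here is a minimal numerical sketch of how $KL(f\,\|\,g)$ can be approximated on a grid. The Beta parameters and the helper name `kl_divergence` are illustrative assumptions, not taken from the lecture's code.

```python
import numpy as np
from scipy.stats import beta

# A minimal sketch (illustrative parameters, not the lecture's):
# approximate KL(f || g) for two Beta densities by a Riemann sum on a grid.
grid = np.linspace(1e-6, 1 - 1e-6, 100_000)
dx = grid[1] - grid[0]
f_vals = beta(1, 1).pdf(grid)     # hypothetical density f
g_vals = beta(3, 1.2).pdf(grid)   # hypothetical density g

def kl_divergence(p_vals, q_vals, dx):
    """Riemann-sum approximation of KL(p || q) on an evenly spaced grid."""
    return np.sum(p_vals * np.log(p_vals / q_vals)) * dx

print(kl_divergence(f_vals, g_vals, dx))  # strictly positive whenever f != g
```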
@@ -229,7 +229,7 @@ Here is pseudo code for a direct "method 1" for drawing from our compound lottery
 * put the first two steps in a big loop and do them for each realization of $w$


-Our second method uses a uniform distribution and the following fact that we also described and used in the quantecon lecture <https://python.quantecon.org/prob_matrix.html>:
+Our second method uses a uniform distribution and the following fact that we also described and used in the [quantecon lecture on elementary probability with matrices](https://python.quantecon.org/prob_matrix.html):

 * If a random variable $X$ has c.d.f. $F$, then a random variable $F^{-1}(U)$ also has c.d.f. $F$, where $U$ is a uniform random variable on $[0,1]$.

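A minimal sketch of the direct "method 1" sampler described in the hunk above, assuming (for illustration only) that $f$ and $g$ are Beta densities and that the mixing weight is $0.3$; the function name `draw_mixture_direct` and all parameter values are assumptions, not the lecture's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_mixture_direct(alpha, n, rng):
    """Method 1: for each draw, flip an alpha-coin to choose f or g,
    then sample from the chosen component (Beta components assumed)."""
    w = rng.uniform(size=n) < alpha          # True -> draw from f
    draws_f = rng.beta(1.0, 1.0, size=n)     # hypothetical f
    draws_g = rng.beta(3.0, 1.2, size=n)     # hypothetical g
    return np.where(w, draws_f, draws_g)

sample = draw_mixture_direct(alpha=0.3, n=100_000, rng=rng)
```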
@@ -240,15 +240,13 @@ a uniform distribution on $[0,1]$ and computing $F^{-1}(U)$.
 We'll use this fact
 in conjunction with the `numpy.searchsorted` command to sample from $H$ directly.

-See <https://numpy.org/doc/stable/reference/generated/numpy.searchsorted.html> for the
-`searchsorted` function.
+See the [numpy.searchsorted documentation](https://numpy.org/doc/stable/reference/generated/numpy.searchsorted.html) for details on the `searchsorted` function.

 See the [Mr. P Solver video on Monte Carlo simulation](https://www.google.com/search?q=Mr.+P+Solver+video+on+Monte+Carlo+simulation&oq=Mr.+P+Solver+video+on+Monte+Carlo+simulation) to see other applications of this powerful trick.

 In the Python code below, we'll use both of our methods and confirm that each of them does a good job of sampling
 from our target mixture distribution.

-
 ```{code-cell} ipython3
 @jit
 def draw_lottery(p, N):
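# A hedged sketch of "method 2" (the inverse-c.d.f. trick via numpy.searchsorted)
# referenced in the hunk above, which cuts off inside the lecture's `draw_lottery`
# code cell.  This is NOT the lecture's implementation: the grid, the Beta
# components, and the mixing weight 0.3 are illustrative assumptions.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)

# Build the c.d.f. of the mixture h = 0.3 * f + 0.7 * g on a grid.
grid = np.linspace(0, 1, 10_001)
h_pdf = 0.3 * beta(1, 1).pdf(grid) + 0.7 * beta(3, 1.2).pdf(grid)
h_cdf = np.cumsum(h_pdf)
h_cdf /= h_cdf[-1]                  # normalize so the c.d.f. ends at 1

# Map uniform draws through the generalized inverse of the c.d.f.:
# searchsorted returns the first grid index whose c.d.f. value is >= u.
u = rng.uniform(size=100_000)
sample = grid[np.searchsorted(h_cdf, u)]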
@@ -582,7 +580,6 @@ recorded on the $x$ axis.

 Thus, the graph below confirms how a minimum KL divergence governs what our type 1 agent eventually learns.

 We'll create graphs of the posterior $\pi_t(\alpha)$ as
-$t \rightarrow +\infty$ corresponding to ones presented in the quantecon lecture <https://python.quantecon.org/bayes_nonconj.html>.
+$t \rightarrow +\infty$ corresponding to ones presented in the [quantecon lecture on Bayesian nonconjugate priors](https://python.quantecon.org/bayes_nonconj.html).

 We anticipate that a posterior distribution will collapse around the true $\alpha$ as
 $t \rightarrow + \infty$.
@@ -684,7 +681,6 @@ def MCMC_run(ws):
 The following code generates the graph below that displays Bayesian posteriors for $\alpha$ at various history lengths.
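The hunk above refers to the lecture's `MCMC_run` function, which is not shown in full here. As a hedged alternative sketch of the same idea, the code below computes a Bayesian posterior over the mixing parameter $\alpha$ by brute-force grid evaluation under a flat prior; the simulated data, Beta components, and grid are all assumptions for illustration, not the lecture's MCMC setup.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)

# Hypothetical data: draws from h = alpha_true * f + (1 - alpha_true) * g,
# with Beta components assumed for illustration only.
alpha_true = 0.3
n_obs = 500
coin = rng.uniform(size=n_obs) < alpha_true
ws = np.where(coin, rng.beta(1, 1, size=n_obs), rng.beta(3, 1.2, size=n_obs))

# Grid-based posterior over alpha under a flat prior (a sketch, not MCMC_run).
alpha_grid = np.linspace(0.001, 0.999, 999)
f_vals = beta(1, 1).pdf(ws)          # f(w_t) for each observation
g_vals = beta(3, 1.2).pdf(ws)        # g(w_t) for each observation

# Log-likelihood of each grid value: sum_t log(alpha * f(w_t) + (1 - alpha) * g(w_t))
log_like = np.log(alpha_grid[:, None] * f_vals
                  + (1 - alpha_grid[:, None]) * g_vals).sum(axis=1)
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()         # normalized posterior on the grid

print(alpha_grid[posterior.argmax()])  # should sit near alpha_true for large n_obs
```

Rerunning this sketch with progressively larger `n_obs` mimics the "various history lengths" in the figure the hunk describes: the posterior mass concentrates around the true mixing parameter as the sample grows.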