Commit 66a1218

cosmetic changes

CamDavidsonPilon committed Feb 26, 2013
1 parent a4031a4 commit 66a1218
Showing 6 changed files with 222 additions and 90 deletions.
87 changes: 56 additions & 31 deletions Chapter1_Introduction/Chapter1_Introduction.ipynb

Large diffs are not rendered by default.

94 changes: 72 additions & 22 deletions Chapter2_MorePyMC/MorePyMC.ipynb

Large diffs are not rendered by default.

32 changes: 23 additions & 9 deletions Chapter3_MCMC/IntroMCMC.ipynb
@@ -241,7 +241,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example: Unsupervised Clustering using Mixture Model\n",
"##### Example: Unsupervised Clustering using Mixture Model\n",
"\n",
"------------\n",
"\n",
@@ -642,8 +642,10 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"_____\n",
"### Example: Poisson Regression [Needs work]\n",
"\n",
"##### Example: Poisson Regression [Needs work]\n",
"\n",
"---\n",
"\n",
"Perhaps the most important result from medical research was the *now obvious* link between *smoking and cancer*. We'll try to establish a link using Bayesian methods. We have a decision here: should we include a prior that biases us towards there existing a significant link between smoking and cancer? I think we should act like scientists at the turn of the century, and assume there's is no *a priori* reason to assume a link. \n",
"\n",
@@ -1035,8 +1037,7 @@
" margin-right:auto;\n",
" }\n",
" h1 {\n",
" text-align:center;\n",
" font-family:\"Charis SIL\", serif;\n",
" font-family: \"Charis SIL\", Palatino, serif;\n",
" }\n",
" div.text_cell_render{\n",
" font-family: Computer Modern, \"Helvetica Neue\", Arial, Helvetica, Geneva, sans-serif;\n",
@@ -1047,21 +1048,34 @@
" margin-right:auto;\n",
" }\n",
" .CodeMirror{\n",
" font-family: Consolas, monospace;\n",
" font-family: \"Source Code Pro\", source-code-pro,Consolas, monospace;\n",
" }\n",
" .prompt{\n",
" display: None;\n",
" }\n",
" .text_cell_render h5 {\n",
" font-weight: 300;\n",
" font-size: 16pt;\n",
" color: #4057A1;\n",
" font-style: italic;\n",
" margin-bottom: .5em;\n",
" margin-top: 0.5em;\n",
" display: block;\n",
" }\n",
" \n",
" .warning{\n",
" color: rgb( 240, 20, 20 )\n",
" }\n",
"</style>"
],
"output_type": "pyout",
"prompt_number": 15,
"prompt_number": 1,
"text": [
"<IPython.core.display.HTML at 0x5e01370>"
"<IPython.core.display.HTML at 0x82d5dd8>"
]
}
],
"prompt_number": 15
"prompt_number": 1
},
{
"cell_type": "code",
41 changes: 28 additions & 13 deletions Chapter4_TheGreatestTheoremNeverTold/LawOfLargeNumbers.ipynb
@@ -24,21 +24,20 @@
"metadata": {},
"source": [
"#Chapter 4\n",
"______\n",
"\n",
"##The greatest theorem never told\n",
"\n",
"\n",
"\n",
"> This relatively short chapter focuses on an idea that is always bouncing around our heads, but is rarely made explicit outside books devoted to statistics or Monte Carlo. In fact, we've been used this idea in every example so far. \n",
"\n",
"______"
"> This relatively short chapter focuses on an idea that is always bouncing around our heads, but is rarely made explicit outside books devoted to statistics or Monte Carlo. In fact, we've been used this idea in every example so far. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##The Law of Large Numbers\n",
"###The Law of Large Numbers\n",
"\n",
"Let $Z_i$ be samples from some probability distribution. According to *the Law of Large numbers*, so long as $E[Z]$ is finite, the following holds,\n",
"\n",
@@ -55,7 +54,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Intution \n",
"### Intuition \n",
"\n",
"If the above Law is somewhat surprising, it can be made more clear be examining a simple example. \n",
"\n",
@@ -80,8 +79,9 @@
"\n",
"Equality holds in the limit, but we can get closer and closer by using more and more samples in the average. This Law holds for *any distribution*, minus some pathological examples that only mathematicians have fun with. \n",
"\n",
"##### Example\n",
"____\n",
"### Example\n",
"\n",
"\n",
"Below is a diagram of the Law of Large numbers in action for three different sequences of Poisson random variables. \n",
"\n",
@@ -258,8 +258,11 @@
"\n",
"The Law of Large Numbers is only valid as $N$ gets *infinitely* large: the law is treasure at the end of an infinite rainbow. While the law is a powerful tool, it is foolhardy to apply it liberally. Our next example illustrates this.\n",
"\n",
"\n",
"\n",
"##### Example: Aggregated geographic data\n",
"\n",
"--------\n",
"### Example\n",
"\n",
"Often data comes in aggregated form. For instance, data may be grouped by state, county, or city level. Of course, the population numbers vary per geographic area. If included in the data is an average of some characteristic of each the geographic area, we must be concious of the Law of Large Numbers and how it can *fail* for areas with small populations.\n",
"\n",
@@ -483,7 +486,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercises\n",
"##### Exercises\n",
"\n",
"1\\. How would you estimate the quantity $E\\left[ \\cos{X} \\right]$, where $X \\sim \\text{Exp}(4)$? What about $E\\left[ \\cos{X} | X \\lt 1\\right]$, i.e. the expected value *given* we know $X$ is less than 1? Would you need more samples than the original samples size to be equally as accurate?"
]
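One way to attack the exercise by simulation (a sketch, not the book's solution; `Exp(4)` is read as rate 4, i.e. `scale=1/4` in NumPy):

```python
import numpy as np

np.random.seed(0)
N = 100000
X = np.random.exponential(scale=1.0 / 4, size=N)   # X ~ Exp(rate = 4)

print(np.cos(X).mean())           # estimate of E[cos X]

# Conditional version: only samples with X < 1 contribute, so the
# effective sample size shrinks. Here the loss is mild, since
# P(X < 1) = 1 - exp(-4), about 0.98.
mask = X < 1
print(np.cos(X[mask]).mean())     # estimate of E[cos X | X < 1]
print(mask.mean())                # fraction of samples actually used
```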
@@ -507,7 +510,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"2. The following table was located in the paper \"Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression\" [2]. What mistake have the researchers made?\n",
"2. The following table was located in the paper \"Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression\" [2]. The table ranks football field-goal kickers by there percent of non-misses. What mistake have the researchers made?\n",
"\n",
"-----\n",
"\n",
@@ -551,8 +554,7 @@
" margin-right:auto;\n",
" }\n",
" h1 {\n",
" text-align:center;\n",
" font-family:\"Charis SIL\", serif;\n",
" font-family: \"Charis SIL\", Palatino, serif;\n",
" }\n",
" div.text_cell_render{\n",
" font-family: Computer Modern, \"Helvetica Neue\", Arial, Helvetica, Geneva, sans-serif;\n",
@@ -563,17 +565,30 @@
" margin-right:auto;\n",
" }\n",
" .CodeMirror{\n",
" font-family: Consolas, monospace;\n",
" font-family: \"Source Code Pro\", source-code-pro,Consolas, monospace;\n",
" }\n",
" .prompt{\n",
" display: None;\n",
" }\n",
" .text_cell_render h5 {\n",
" font-weight: 300;\n",
" font-size: 16pt;\n",
" color: #4057A1;\n",
" font-style: italic;\n",
" margin-bottom: .5em;\n",
" margin-top: 0.5em;\n",
" display: block;\n",
" }\n",
" \n",
" .warning{\n",
" color: rgb( 240, 20, 20 )\n",
" }\n",
"</style>"
],
"output_type": "pyout",
"prompt_number": 1,
"text": [
"<IPython.core.display.HTML at 0x581b050>"
"<IPython.core.display.HTML at 0x82addd8>"
]
}
],
40 changes: 28 additions & 12 deletions Chapter5_LossFunctions/LossFunctions.ipynb
@@ -131,8 +131,10 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"##### Example: Optimizing for the *Showcase* on *The Price is Right*\n",
"\n",
"______________________________________\n",
"### Example: Optimizing for the *Showcase* on *The Price is Right*\n",
"\n",
"Bless you if you are ever choosen as a contestant on the Price is Right, for here we will show you how to optimize your final price on the *Showcase*. For those who forget the rules:\n",
"\n",
@@ -234,7 +236,8 @@
"input": [
"_hist = plt.hist( price_trace, bins = 50, normed= True, histtype= \"stepfilled\")\n",
"plt.title( \"Posterior of the true price estimate\" )\n",
"plt.vlines( mu_prior, 0, 1.1*np.max(_hist[0] ), label = \"prior's mean\", linestyles=\"--\" )\n",
"plt.vlines( mu_prior, 0, 1.1*np.max(_hist[0] ), label = \"prior's mean\",\n",
" linestyles=\"--\" )\n",
"plt.vlines( price_trace.mean(), 0, 1.1*np.max(_hist[0] ), \\\n",
" label = \"posterior's mean\")\n",
"plt.legend()"
@@ -518,8 +521,10 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"##### Example: Financial prediction\n",
"\n",
"____\n",
"### Example: Financial prediction\n",
"\n",
"Suppose the future return of a stock price is very small, say 0.01 (or 1%). We have a model that predicts the stock's future price, and our profit and loss is directly tied to us acting on the prediction. How should be measure the loss associated with the model's predictions, and subsequent future predictions? A squared-error loss is agnogstic to the signage and would penalize a prediction of -0.01 equally as bad a prediction of 0.03:\n",
"\n",
@@ -805,17 +810,16 @@
"\n",
"A good sanity check that our model is still reasonable: as the signal becomes more and more extreme, and we feel more and more confident about the positive/negativeness of returns, our position converges with that of the least-squares line. \n",
"\n",
"The sparse-prediction model is not trying to *fit* the data the best (according to a *squared-error loss* definition of *fit*). That honour would go to the least-squares model. The sparse-prediction model is trying to find the best prediction *with respect to our `stock_loss`-defined loss*. We can turn this reasoning around: the least-squares model is not try to *predict* the best (according to a *`stock-loss`* definition of *predict*). That honour would go the *sparse prediction* model. The least-squares model is trying to find the best fit of the data *with respect to the squared-error loss*.\n",
"\n",
"\n",
"-------\n"
"The sparse-prediction model is not trying to *fit* the data the best (according to a *squared-error loss* definition of *fit*). That honour would go to the least-squares model. The sparse-prediction model is trying to find the best prediction *with respect to our `stock_loss`-defined loss*. We can turn this reasoning around: the least-squares model is not try to *predict* the best (according to a *`stock-loss`* definition of *predict*). That honour would go the *sparse prediction* model. The least-squares model is trying to find the best fit of the data *with respect to the squared-error loss*.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example: Kaggle contest on *Observing Dark World*\n",
"##### Example: Kaggle contest on *Observing Dark World*\n",
"\n",
"----\n",
"\n",
"A personal motivation for learning Bayesian methods was trying to piece together the winning solution to Kaggle's [*Observing Dark Worlds*](http://www.kaggle.com/c/DarkWorlds) contest. From the contest's website:\n",
"\n",
@@ -1539,8 +1543,7 @@
" margin-right:auto;\n",
" }\n",
" h1 {\n",
" text-align:center;\n",
" font-family:\"Charis SIL\", serif;\n",
" font-family: \"Charis SIL\", Palatino, serif;\n",
" }\n",
" div.text_cell_render{\n",
" font-family: Computer Modern, \"Helvetica Neue\", Arial, Helvetica, Geneva, sans-serif;\n",
@@ -1551,17 +1554,30 @@
" margin-right:auto;\n",
" }\n",
" .CodeMirror{\n",
" font-family: Consolas, monospace;\n",
" font-family: \"Source Code Pro\", source-code-pro,Consolas, monospace;\n",
" }\n",
" .prompt{\n",
" display: None;\n",
" }\n",
" .text_cell_render h5 {\n",
" font-weight: 300;\n",
" font-size: 16pt;\n",
" color: #4057A1;\n",
" font-style: italic;\n",
" margin-bottom: .5em;\n",
" margin-top: 0.5em;\n",
" display: block;\n",
" }\n",
" \n",
" .warning{\n",
" color: rgb( 240, 20, 20 )\n",
" }\n",
"</style>"
],
"output_type": "pyout",
"prompt_number": 1,
"text": [
"<IPython.core.display.HTML at 0x586b050>"
"<IPython.core.display.HTML at 0x825be80>"
]
}
],
18 changes: 15 additions & 3 deletions styles/custom.css
@@ -9,8 +9,7 @@
margin-right:auto;
}
h1 {
-text-align:center;
-font-family:"Charis SIL", serif;
+font-family: "Charis SIL", Palatino, serif;
}
div.text_cell_render{
font-family: Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif;
@@ -21,9 +20,22 @@
margin-right:auto;
}
.CodeMirror{
-font-family: Consolas, monospace;
+font-family: "Source Code Pro", source-code-pro,Consolas, monospace;
}
.prompt{
display: None;
}
+.text_cell_render h5 {
+font-weight: 300;
+font-size: 16pt;
+color: #4057A1;
+font-style: italic;
+margin-bottom: .5em;
+margin-top: 0.5em;
+display: block;
+}
+
+.warning{
+color: rgb( 240, 20, 20 )
+}
</style>
