
Commit 895caad

Merge pull request CamDavidsonPilon#364 from itselijahtai/some-fixes
Spelling/grammar fixes for ch 2, 4, 5, 6
2 parents e7ea717 + 0f38d9f commit 895caad

File tree: 6 files changed (+8 −8 lines)


Chapter2_MorePyMC/Ch2_MorePyMC_PyMC3.ipynb

Lines changed: 1 addition & 1 deletion

@@ -739,7 +739,7 @@
 "\n",
 "With respect to our A/B example, we are interested in using what we know, $N$ (the total trials administered) and $n$ (the number of conversions), to estimate what $p_A$, the true frequency of buyers, might be. \n",
 "\n",
-"To setup a Bayesian model, we need to assign prior distrbutions to our unknown quantities. *A priori*, what do we think $p_A$ might be? For this example, we have no strong conviction about $p_A$, so for now, let's assume $p_A$ is uniform over [0,1]:"
+"To setup a Bayesian model, we need to assign prior distributions to our unknown quantities. *A priori*, what do we think $p_A$ might be? For this example, we have no strong conviction about $p_A$, so for now, let's assume $p_A$ is uniform over [0,1]:"
 ]
 },
 {
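For context, the cell being edited goes on to place a Uniform(0, 1) prior on $p_A$. A minimal sketch of the same setup outside PyMC, using the Beta-Binomial conjugacy that a uniform prior implies (the values of `N` and `n` below are placeholders, not taken from the notebook):

import scipy.stats as stats

N, n = 1500, 10  # placeholder counts: trials administered and conversions observed
# A Uniform(0, 1) prior on p_A is Beta(1, 1), so a Binomial likelihood
# yields a closed-form Beta posterior:
posterior = stats.beta(1 + n, 1 + N - n)
print("posterior mean of p_A:", posterior.mean())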

Chapter4_TheGreatestTheoremNeverTold/Ch4_LawOfLargeNumbers_PyMC2.ipynb

Lines changed: 1 addition & 1 deletion

@@ -507,7 +507,7 @@
 }
 ],
 "source": [
-"# adding a number to the end of the %run call with get the ith top photo.\n",
+"# adding a number to the end of the %run call will get the ith top post.\n",
 "%run top_showerthoughts_submissions.py 2\n",
 "\n",
 "print(\"Post contents: \\n\")\n",

Chapter4_TheGreatestTheoremNeverTold/Ch4_LawOfLargeNumbers_PyMC3.ipynb

Lines changed: 3 additions & 3 deletions

@@ -69,7 +69,7 @@
 "\n",
 "Below is a diagram of the Law of Large numbers in action for three different sequences of Poisson random variables. \n",
 "\n",
-" We sample `sample_size = 100000` Poisson random variables with parameter $\\lambda = 4.5$. (Recall the expected value of a Poisson random variable is equal to it's parameter.) We calculate the average for the first $n$ samples, for $n=1$ to `sample_size`. "
+" We sample `sample_size = 100000` Poisson random variables with parameter $\\lambda = 4.5$. (Recall the expected value of a Poisson random variable is equal to its parameter.) We calculate the average for the first $n$ samples, for $n=1$ to `sample_size`. "
 ]
 },
 {
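The experiment described in this cell is easy to reproduce with NumPy alone; a quick sketch using the same `sample_size` and $\lambda$ as the text:

import numpy as np

sample_size = 100000
lambda_ = 4.5
samples = np.random.poisson(lambda_, sample_size)
# running average of the first n samples, for n = 1 to sample_size
partial_averages = np.cumsum(samples) / np.arange(1, sample_size + 1)
print(partial_averages[-1])  # approaches the expected value, 4.5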
@@ -354,7 +354,7 @@
 "source": [
 "What do we observe? *Without accounting for population sizes* we run the risk of making an enormous inference error: if we ignored population size, we would say that the county with the shortest and tallest individuals have been correctly circled. But this inference is wrong for the following reason. These two counties do *not* necessarily have the most extreme heights. The error results from the calculated average of smaller populations not being a good reflection of the true expected value of the population (which in truth should be $\\mu =150$). The sample size/population size/$N$, whatever you wish to call it, is simply too small to invoke the Law of Large Numbers effectively. \n",
 "\n",
-"We provide more damning evidence against this inference. Recall the population numbers were uniformly distributed over 100 to 1500. Our intuition should tell us that the counties with the most extreme population heights should also be uniformly spread over 100 to 4000, and certainly independent of the county's population. Not so. Below are the population sizes of the counties with the most extreme heights."
+"We provide more damning evidence against this inference. Recall the population numbers were uniformly distributed over 100 to 1500. Our intuition should tell us that the counties with the most extreme population heights should also be uniformly spread over 100 to 1500, and certainly independent of the county's population. Not so. Below are the population sizes of the counties with the most extreme heights."
 ]
 },
 {
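The corrected claim (population sizes are uniform over 100 to 1500) can be checked with a short simulation; a sketch, assuming heights are normal with mean $\mu = 150$ and a standard deviation of 15 chosen here for illustration (the notebook's exact spread is not visible in this diff):

import numpy as np

rng = np.random.default_rng(0)
populations = rng.integers(100, 1501, size=500)  # county sizes, uniform over 100..1500
averages = np.array([rng.normal(150, 15, size=p).mean() for p in populations])
order = np.argsort(averages)
# the extreme average heights come from the smallest counties, not uniformly-sized ones
print("populations of the 5 shortest-average counties:", populations[order[:5]])
print("populations of the 5 tallest-average counties:", populations[order[-5:]])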
@@ -508,7 +508,7 @@
 }
 ],
 "source": [
-"#adding a number to the end of the %run call with get the ith top post.\n",
+"#adding a number to the end of the %run call will get the ith top post.\n",
 "%run top_showerthoughts_submissions.py 2\n",
 "\n",
 "print(\"Post contents: \\n\")\n",

Chapter5_LossFunctions/Ch5_LossFunctions_PyMC2.ipynb

Lines changed: 1 addition & 1 deletion

@@ -64,7 +64,7 @@
 "\n",
 "\n",
 "- $L( \\theta, \\hat{\\theta} ) = \\frac{ | \\theta - \\hat{\\theta} | }{ \\theta(1-\\theta) }, \\; \\; \\hat{\\theta}, \\theta \\in [0,1]$ emphasizes an estimate closer to 0 or 1 since if the true value $\\theta$ is near 0 or 1, the loss will be *very* large unless $\\hat{\\theta}$ is similarly close to 0 or 1. \n",
-"This loss function might be used by a political pundit who's job requires him or her to give confident \"Yes/No\" answers. This loss reflects that if the true parameter is close to 1 (for example, if a political outcome is very likely to occur), he or she would want to strongly agree as to not look like a skeptic. \n",
+"This loss function might be used by a political pundit whose job requires him or her to give confident \"Yes/No\" answers. This loss reflects that if the true parameter is close to 1 (for example, if a political outcome is very likely to occur), he or she would want to strongly agree as to not look like a skeptic. \n",
 "\n",
 "- $L( \\theta, \\hat{\\theta} ) = 1 - \\exp \\left( -(\\theta - \\hat{\\theta} )^2 \\right)$ is bounded between 0 and 1 and reflects that the user is indifferent to sufficiently-far-away estimates. It is similar to the zero-one loss above, but not quite as penalizing to estimates that are close to the true parameter. \n",
 "- Complicated non-linear loss functions can programmed: \n",

Chapter5_LossFunctions/Ch5_LossFunctions_PyMC3.ipynb

Lines changed: 1 addition & 1 deletion

@@ -70,7 +70,7 @@
 "\n",
 "\n",
 "- $L( \\theta, \\hat{\\theta} ) = \\frac{ | \\theta - \\hat{\\theta} | }{ \\theta(1-\\theta) }, \\; \\; \\hat{\\theta}, \\theta \\in [0,1]$ emphasizes an estimate closer to 0 or 1 since if the true value $\\theta$ is near 0 or 1, the loss will be *very* large unless $\\hat{\\theta}$ is similarly close to 0 or 1. \n",
-"This loss function might be used by a political pundit who's job requires him or her to give confident \"Yes/No\" answers. This loss reflects that if the true parameter is close to 1 (for example, if a political outcome is very likely to occur), he or she would want to strongly agree as to not look like a skeptic. \n",
+"This loss function might be used by a political pundit whose job requires him or her to give confident \"Yes/No\" answers. This loss reflects that if the true parameter is close to 1 (for example, if a political outcome is very likely to occur), he or she would want to strongly agree as to not look like a skeptic. \n",
 "\n",
 "- $L( \\theta, \\hat{\\theta} ) = 1 - \\exp \\left( -(\\theta - \\hat{\\theta} )^2 \\right)$ is bounded between 0 and 1 and reflects that the user is indifferent to sufficiently-far-away estimates. It is similar to the zero-one loss above, but not quite as penalizing to estimates that are close to the true parameter. \n",
 "- Complicated non-linear loss functions can programmed: \n",

Chapter6_Priorities/Ch6_Priors_PyMC3.ipynb

Lines changed: 1 addition & 1 deletion

@@ -1273,7 +1273,7 @@
 "source": [
 "(Plots like these are what inspired the book's cover.)\n",
 "\n",
-"What can we say about the results above? Clearly TSLA has been a strong performer, and our analysis suggests that it has an almost 1% daily return! Similarly, most of the distribution of AAPL is negative, suggesting that it's *true daily return* is negative.\n",
+"What can we say about the results above? Clearly TSLA has been a strong performer, and our analysis suggests that it has an almost 1% daily return! Similarly, most of the distribution of AAPL is negative, suggesting that its *true daily return* is negative.\n",
 "\n",
 "\n",
 "You may not have immediately noticed, but these variables are a whole order of magnitude *less* than our priors on them. For example, to put these one the same scale as the above prior distributions:"
