
Commit 44467e0

Merge pull request CamDavidsonPilon#258 from qhfgva/master
Fixed some spelling errors
2 parents f4d6c6d + f2049ad commit 44467e0

6 files changed, +10 -10 lines changed


Chapter2_MorePyMC/Chapter2.ipynb

Lines changed: 2 additions & 2 deletions
@@ -851,7 +851,7 @@
 "\n",
 "### *A* and *B* Together\n",
 "\n",
-"A similar anaylsis can be done for site B's response data to determine the analogous $p_B$. But what we are really interested in is the *difference* between $p_A$ and $p_B$. Let's infer $p_A$, $p_B$, *and* $\\text{delta} = p_A - p_B$, all at once. We can do this using PyMC's deterministic variables. (We'll assume for this exercise that $p_B = 0.04$, so $\\text{delta} = 0.01$, $N_B = 750$ (signifcantly less than $N_A$) and we will simulate site B's data like we did for site A's data )"
+"A similar analysis can be done for site B's response data to determine the analogous $p_B$. But what we are really interested in is the *difference* between $p_A$ and $p_B$. Let's infer $p_A$, $p_B$, *and* $\\text{delta} = p_A - p_B$, all at once. We can do this using PyMC's deterministic variables. (We'll assume for this exercise that $p_B = 0.04$, so $\\text{delta} = 0.01$, $N_B = 750$ (significantly less than $N_A$) and we will simulate site B's data like we did for site A's data )"
 ]
 },
 {
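For context, the corrected cell describes inferring $p_A$, $p_B$ and $\text{delta} = p_A - p_B$ at once via a PyMC deterministic variable. A minimal sketch of that pattern in PyMC2 with simulated data (only $p_B = 0.04$, $\text{delta} = 0.01$ and $N_B = 750$ come from the text; $N_A = 1500$ and the variable names here are illustrative):

```python
import numpy as np
import pymc as pm

# Simulated click data: true p_A = 0.05, p_B = 0.04 (from the text);
# N_A = 1500 is an assumption made only for this sketch.
np.random.seed(0)
observations_A = np.random.binomial(1, 0.05, size=1500)
observations_B = np.random.binomial(1, 0.04, size=750)

# Uniform priors over the unknown conversion rates.
p_A = pm.Uniform("p_A", 0, 1)
p_B = pm.Uniform("p_B", 0, 1)

# Deterministic variable: delta is a pure function of p_A and p_B,
# so it is never sampled directly, only computed from the parents.
@pm.deterministic
def delta(p_A=p_A, p_B=p_B):
    return p_A - p_B

# Bernoulli likelihoods tied to the observed data for each site.
obs_A = pm.Bernoulli("obs_A", p_A, value=observations_A, observed=True)
obs_B = pm.Bernoulli("obs_B", p_B, value=observations_B, observed=True)

mcmc = pm.MCMC([p_A, p_B, delta, obs_A, obs_B])
mcmc.sample(20000, 1000)

delta_samples = mcmc.trace("delta")[:]
print("P(p_A > p_B) = %.3f" % (delta_samples > 0).mean())
```

Because delta is deterministic in p_A and p_B, its posterior samples come for free from the traces of the two rates.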
@@ -1580,7 +1580,7 @@
 "# drop the NA values\n",
 "challenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]\n",
 "\n",
-"# plot it, as a function of tempature (the first column)\n",
+"# plot it, as a function of temperature (the first column)\n",
 "print \"Temp (F), O-Ring failure?\"\n",
 "print challenger_data\n",
 "\n",

Chapter3_MCMC/Chapter3.ipynb

Lines changed: 1 addition & 1 deletion
@@ -1282,7 +1282,7 @@
 "\n",
 "If the priors are poorly chosen, the MCMC algorithm may not converge, or at least have difficulty converging. Consider what may happen if the prior chosen does not even contain the true parameter: the prior assigns 0 probability to the unknown, hence the posterior will assign 0 probability as well. This can cause pathological results.\n",
 "\n",
-"For this reason, it is best to carefully choose the priors. Often, lack of covergence or evidence of samples crowding to boundaries implies something is wrong with the chosen priors (see *Folk Theorem of Statistical Computing* below). \n",
+"For this reason, it is best to carefully choose the priors. Often, lack of convergence or evidence of samples crowding to boundaries implies something is wrong with the chosen priors (see *Folk Theorem of Statistical Computing* below). \n",
 "\n",
 "#### Covariance matrices and eliminating parameters\n",
 "\n",

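To make the corrected paragraph's warning concrete, here is a toy grid-posterior example (invented data, a Uniform(0, 0.5) prior) where the prior excludes the true parameter:

```python
import numpy as np
from scipy import stats

# Toy example of a prior that excludes the true parameter: true p = 0.7,
# but the prior only puts mass on [0, 0.5]. All numbers are invented.
np.random.seed(1)
data = np.random.binomial(1, 0.7, size=200)

p_grid = np.linspace(0.001, 0.999, 999)
prior = np.where(p_grid <= 0.5, 1.0, 0.0)                 # zero prior mass above 0.5
likelihood = stats.binom.pmf(data.sum(), len(data), p_grid)

posterior = prior * likelihood
posterior /= posterior.sum()

print("posterior mass above 0.5: %.3f" % posterior[p_grid > 0.5].sum())  # exactly 0
print("posterior mode: %.3f" % p_grid[np.argmax(posterior)])             # sits at 0.5
```

All posterior mass above 0.5 is exactly zero, and what remains piles up against the 0.5 boundary, which is the "crowding to boundaries" symptom the text mentions.
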
Chapter4_TheGreatestTheoremNeverTold/Chapter4.ipynb

Lines changed: 1 addition & 1 deletion
@@ -952,7 +952,7 @@
 "source": [
 "##### Example: Counting Github stars\n",
 "\n",
-"What is the average number of stars a Github repository has? How would you calculate this? There are over 6 million respositories, so there is more than enough data to invoke the Law of Large numbers. Let's start pulling some data. TODO"
+"What is the average number of stars a Github repository has? How would you calculate this? There are over 6 million repositories, so there is more than enough data to invoke the Law of Large numbers. Let's start pulling some data. TODO"
 ]
 },
 {
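The corrected sentence leans on the Law of Large Numbers: with enough repositories, the running sample mean of star counts settles near the population mean. A quick sketch with synthetic counts (the geometric shape is only a stand-in; the real counts would come from the GitHub API):

```python
import numpy as np

# Synthetic stand-in for per-repository star counts; the geometric
# distribution is an assumption used only to show the running sample
# mean settling down as N grows. It is not real GitHub data.
np.random.seed(2)
stars = np.random.geometric(p=0.05, size=200000) - 1   # mostly small counts

running_mean = np.cumsum(stars, dtype=float) / np.arange(1, len(stars) + 1)
for n in (100, 1000, 10000, 200000):
    print("N = %6d   sample mean = %.2f" % (n, running_mean[n - 1]))

print("true mean of this synthetic distribution = %.2f" % ((1 - 0.05) / 0.05))
```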

Chapter6_Priorities/Chapter6.ipynb

Lines changed: 3 additions & 3 deletions
@@ -860,7 +860,7 @@
 "\n",
 "- Hierarchical algorithms: We can setup a Bayesian Bandit algorithm on top of smaller bandit algorithms. Suppose we have $N$ Bayesian Bandit models, each varying in some behavior (for example different `rate` parameters, representing varying sensitivity to changing environments). On top of these $N$ models is another Bayesian Bandit learner that will select a sub-Bayesian Bandit. This chosen Bayesian Bandit will then make an internal choice as to which machine to pull. The super-Bayesian Bandit updates itself depending on whether the sub-Bayesian Bandit was correct or not. \n",
 "\n",
-"- Extending the rewards, denoted $y_a$ for bandit $a$, to random variables from a distribution $f_{y_a}(y)$ is straightforward. More generally, this problem can be rephrased as \"Find the bandit with the largest expected value\", as playing the bandit with the largest expected value is optimal. In the case above, $f_{y_a}$ was Bernoulli with probability $p_a$, hence the expected value for a bandit is equal to $p_a$, which is why it looks like we are aiming to maximize the probability of winning. If $f$ is not Bernoulli, and it is non-negative, which can be accomplished apriori by shifting the distribution (we assume we know $f$), then the algorithm behaves as before:\n",
+"- Extending the rewards, denoted $y_a$ for bandit $a$, to random variables from a distribution $f_{y_a}(y)$ is straightforward. More generally, this problem can be rephrased as \"Find the bandit with the largest expected value\", as playing the bandit with the largest expected value is optimal. In the case above, $f_{y_a}$ was Bernoulli with probability $p_a$, hence the expected value for a bandit is equal to $p_a$, which is why it looks like we are aiming to maximize the probability of winning. If $f$ is not Bernoulli, and it is non-negative, which can be accomplished a priori by shifting the distribution (we assume we know $f$), then the algorithm behaves as before:\n",
 "\n",
 " For each round, \n",
 " \n",
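For reference, the Bernoulli case this bullet builds on can be run with a few lines of Thompson sampling. A sketch with invented win rates (this is not the chapter's own bandit implementation):

```python
import numpy as np

# Minimal Thompson-sampling loop for the Bernoulli bandit case.
# Beta(1 + wins, 1 + losses) is each bandit's posterior; true_p is invented.
np.random.seed(3)
true_p = np.array([0.15, 0.20, 0.30])
wins = np.zeros(3)
trials = np.zeros(3)

for _ in range(10000):
    # For each round: draw one sample from every bandit's posterior, ...
    samples = np.random.beta(1 + wins, 1 + trials - wins)
    # ... pull the bandit whose sampled rate is largest, ...
    a = np.argmax(samples)
    # ... then observe the reward and update that bandit's posterior.
    reward = np.random.rand() < true_p[a]
    wins[a] += reward
    trials[a] += 1

print("pulls per bandit: %s" % trials)   # the best bandit dominates
```

Extending to general non-negative rewards, as the bullet describes, amounts to sampling each bandit's posterior expected reward instead of a Beta draw.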
@@ -4924,7 +4924,7 @@
 "Peadar is known as @springcoil on Twitter and is an Irish Data Scientist with a Mathematical focus, he is currently based in Luxembourg. \n",
 "I came across the following blog post on http://danielweitzenfeld.github.io/passtheroc/blog/2014/10/28/bayes-premier-league/ \n",
 "I quote from him, about his realization about Premier League Football -\n",
-"_It occurred to me that this problem is perfect for a Bayesian model. We want to infer the latent paremeters (every team's strength) that are generating the data we observe (the scorelines). Moreover, we know that the scorelines are a noisy measurement of team strength, so ideally, we want a model that makes it easy to quantify our uncertainty about the underlying strengths.\n",
+"_It occurred to me that this problem is perfect for a Bayesian model. We want to infer the latent parameters (every team's strength) that are generating the data we observe (the scorelines). Moreover, we know that the scorelines are a noisy measurement of team strength, so ideally, we want a model that makes it easy to quantify our uncertainty about the underlying strengths.\n",
 "\n",
 "_So I googled 'Bayesian football' and found this paper, called 'Bayesian hierarchical model for the prediction of football results.' The authors (Gianluca Baio and Marta A. Blangiardo) being Italian, though, the 'football' here is soccer._\n",
 "\n",
@@ -5238,7 +5238,7 @@
 " tau=tau_def, \n",
 " size=num_teams, \n",
 " value=def_starting_points.values) \n",
-"# trick to code the sum to zero contraint\n",
+"# trick to code the sum to zero constraint\n",
 "@pymc.deterministic\n",
 "def atts(atts_star=atts_star):\n",
 " atts = atts_star.copy()\n",

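The last hunk above stops just before the line that actually applies the constraint. A sketch of how the sum-to-zero trick is usually finished, with the re-centring step; the setup and body here are a reconstruction for illustration, not the notebook's exact code:

```python
import numpy as np
import pymc

# Sketch of the sum-to-zero trick: re-centre the unconstrained team effects
# so they sum to zero, which keeps the attack/defence parameters identifiable.
# num_teams and the prior below are placeholders, not the notebook's values.
num_teams = 20
atts_star = pymc.Normal("atts_star", mu=0, tau=1, size=num_teams)

@pymc.deterministic
def atts(atts_star=atts_star):
    atts = atts_star.copy()
    atts = atts - np.mean(atts_star)   # subtract the mean so sum(atts) == 0
    return atts
```

Centring the unconstrained `atts_star` values removes the extra degree of freedom, so each team's attack strength is identified relative to the league average.
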
Chapter7_BayesianMachineLearning/DontOverfit.ipynb

Lines changed: 1 addition & 1 deletion
@@ -271,7 +271,7 @@
 "source": [
 "## Develop Tim's model\n",
 "\n",
-"He mentions that the X variables are from a Unifrom distribution. Let's investigate this:"
+"He mentions that the X variables are from a Uniform distribution. Let's investigate this:"
 ]
 },
 {
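One generic way to "investigate this" is to compare the empirical distribution of X with Uniform(0, 1). The array below is a random placeholder, since the competition data is not part of this diff:

```python
import numpy as np
from scipy import stats

# Placeholder for the feature matrix; the real X would be loaded from the
# competition data, which this diff does not include.
np.random.seed(4)
X = np.random.uniform(0.0, 1.0, size=(250, 200))

values = X.ravel()
print("min = %.4f, max = %.4f" % (values.min(), values.max()))
print("decile counts: %s" % np.histogram(values, bins=10, range=(0, 1))[0])

# Kolmogorov-Smirnov test against Uniform(0, 1); a large p-value is
# consistent with (though it does not prove) uniformity.
print("KS test: %s" % (stats.kstest(values, "uniform"),))
```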

Prologue/Prologue.ipynb

Lines changed: 2 additions & 2 deletions
@@ -90,7 +90,7 @@
 "* **Chapter X1: Bayesian Markov Models**\n",
 " \n",
 "* **Chapter X2: Bayesian methods in Machine Learning** \n",
-" We explore how to resolve the overfitting problem plus popular ML methods. Also included are probablistic explainations of Ridge Regression and LASSO Regression.\n",
+" We explore how to resolve the overfitting problem plus popular ML methods. Also included are probablistic explanations of Ridge Regression and LASSO Regression.\n",
 " - Bayesian spam filtering plus *how to defeat Bayesian spam filtering*\n",
 " - Tim Saliman's winning solution to Kaggle's *Don't Overfit* problem \n",
 " \n",
@@ -120,7 +120,7 @@
 "2. The second, preferred, option is to use the nbviewer.ipython.org site, which display IPython notebooks in the browser ([example](http://nbviewer.ipython.org/urls/raw.github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/master/Chapter1_Introduction/Chapter1_Introduction.ipynb)).\n",
 "The contents are updated synchronously as commits are made to the book. You can use the Contents section above to link to the chapters.\n",
 " \n",
-"3. **PDF versions are coming.** PDFs are the least-prefered method to read the book, as pdf's are static and non-interactive. If PDFs are desired, they can be created dynamically using Chrome's builtin print-to-pdf feature.\n",
+"3. **PDF versions are coming.** PDFs are the least-preferred method to read the book, as pdf's are static and non-interactive. If PDFs are desired, they can be created dynamically using Chrome's builtin print-to-pdf feature.\n",
 " \n",
 "\n",
 "Installation and configuration\n",
