
Commit 7a7d079

typo's, edit for style and a python bug
1 parent e4526ae

File tree

1 file changed (+7, -7 lines changed)


Chapter6_Priorities/Priors.ipynb

@@ -1,6 +1,6 @@
 {
  "metadata": {
-  "name": "Priors"
+  "name": ""
  },
  "nbformat": 3,
  "nbformat_minor": 0,
@@ -32,13 +32,13 @@
 "## Getting our priorities straight\n",
 "\n",
 "\n",
-"Up until now, we have mostly ignored our choice of priors. This is unfortunate as we can be very expressive with our priors, but we also must be careful about choosing them. This is especially true if we want to be objective, that is, not express any personal beliefs in the priors. \n",
+"Up until now, we have mostly ignored our choice of priors. This is unfortunate as we can be very expressive with our priors, but we also must be careful about choosing them. This is especially true if we want to be objective, that is, not to express any personal beliefs in the priors. \n",
 "\n",
 "###Subjective vs Objective priors\n",
 "\n",
 "Bayesian priors can be classified into two classes: *objective* priors, which aim to allow the data to influence the posterior the most, and *subjective* priors, which allow the practitioner to express his or her views into the prior. \n",
 "\n",
-"What is an example of an objective prior? We have seen some already, including the *flat* prior (which is a uniform distribution over the entire possible range of the unknown). Using a flat prior implies we give each possible value an equal weighting. Choosing this type of prior is invoking what is called \"The Principle of Indifference\", literally we have no prior reason to favor one value over another. Calling a flat prior over a restricted space an objective prior is not correct, though it seems similar. If we know $p$ in a Binomial model is greater than 0.5, then $\\text{Uniform}(0.5,1)$ is not an objective prior (since we have used prior knowledge) even though it is \"flat\" over [0.5, 1]. The flat prior must be flat along the *entire* range of possibilities. \n",
+"What is an example of an objective prior? We have seen some already, including the *flat* prior, which is a uniform distribution over the entire possible range of the unknown. Using a flat prior implies that we give each possible value an equal weighting. Choosing this type of prior is invoking what is called \"The Principle of Indifference\", literally we have no prior reason to favor one value over another. Calling a flat prior over a restricted space an objective prior is not correct, though it seems similar. If we know $p$ in a Binomial model is greater than 0.5, then $\\text{Uniform}(0.5,1)$ is not an objective prior (since we have used prior knowledge) even though it is \"flat\" over [0.5, 1]. The flat prior must be flat along the *entire* range of possibilities. \n",
 "\n",
 "Aside from the flat prior, other examples of objective priors are less obvious, but they contain important characteristics that reflect objectivity. For now, it should be said that *rarely* is a objective prior *truly* objective. We will see this later. \n",
 "\n",
@@ -132,7 +132,7 @@
 "\n",
 "If the posterior does not make sense, then clearly one had an idea what the posterior *should* look like (not what one *hopes* it looks like), implying that the current prior does not contain all the prior information and should be updated. At this point, we can discard the current prior and choose a more reflective one.\n",
 "\n",
-"Gelman [4] suggests that using a uniform distribution with large bounds is often a good choice for objective priors. Although, one should be wary about using Uniform objective priors with large bounds, as they can assign too large of a prior probability to non-intuitive points. Ask: do you really think the unknown could be incredibly large? Often quantities are naturally biased towards 0. A Normal random variable with large variance (small precision) might be a better choice, or an Exponential with a fat tail in the strictly positive (or negative) case. \n",
+"Gelman [4] suggests that using a uniform distribution with large bounds is often a good choice for objective priors. Although, one should be wary about using Uniform objective priors with large bounds, as they can assign too large of a prior probability to non-intuitive points. Ask yourself: do you really think the unknown could be incredibly large? Often quantities are naturally biased towards 0. A Normal random variable with large variance (small precision) might be a better choice, or an Exponential with a fat tail in the strictly positive (or negative) case. \n",
 "\n",
 "If using a particularly subjective prior, it is your responsibility to be able to explain the choice of that prior, else you are no better than the tobacco company's guilty parties. "
 ]
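The warning in this hunk about wide uniforms can be made concrete (hypothetical numbers, not part of the commit): a very wide Uniform prior bets heavily on huge values, while a Normal with large variance still concentrates near 0.

```python
import numpy as np
from scipy import stats

# Hypothetical comparison: prior probability that |unknown| exceeds 5000.
uniform_prior = stats.uniform(loc=-10000, scale=20000)  # Uniform(-10000, 10000)
normal_prior = stats.norm(loc=0, scale=1000)            # Normal, sd 1000 (small precision)

p_uniform = 1 - (uniform_prior.cdf(5000) - uniform_prior.cdf(-5000))
p_normal = 2 * normal_prior.sf(5000)
print(p_uniform)   # 0.5: the wide flat prior thinks huge values are likely
print(p_normal)    # ~5.7e-07: the wide Normal still favors values near 0
```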
@@ -180,7 +180,7 @@
 "\n",
 "$$ \\text{Exp}(\\beta) \\sim \\text{Gamma}(1, \\beta) $$\n",
 "\n",
-"This additional parameter allows the probability density function to have more flexibility, hence allows the practitioner to express his or her subjective priors more accurately. The density function for a $\\text{Gamma}(\\alpha, \\beta)$ random variable is:\n",
+"This additional parameter allows the probability density function to have more flexibility, hence allowing the practitioner to express his or her subjective priors more accurately. The density function for a $\\text{Gamma}(\\alpha, \\beta)$ random variable is:\n",
 "\n",
 "$$ f(x \\mid \\alpha, \\beta) = \\frac{\\beta^{\\alpha}x^{\\alpha-1}e^{-\\beta x}}{\\Gamma(\\alpha)} $$\n",
 "\n",
@@ -358,7 +358,7 @@
 "- Psychology: how does punishment and reward affect our behaviour? How do humans learn?\n",
 "\n",
 "\n",
-"The Bayesian solution begins by assuming priors on the probability of winning for each bandit. In our vignette we assumed complete ignorance of the these probabilities. So a very natural prior is the flat prior over 0 to 1. The algorithm proceeds as follows:\n",
+"The Bayesian solution begins by assuming priors on the probability of winning for each bandit. In our vignette we assumed complete ignorance of these probabilities. So a very natural prior is the flat prior over 0 to 1. The algorithm proceeds as follows:\n",
 "\n",
 "For each round:\n",
 "\n",
@@ -399,7 +399,7 @@
 "        \n",
 "    def pull( self, i ):\n",
 "        #i is which arm to pull\n",
-"        return rand() < self.p[i]\n",
+"        return np.random.rand() < self.p[i]\n",
 "        \n",
 "    def __len__(self):\n",
 "        return len(self.p)\n",
