Added initial text on Jeffreys priors #163

Merged: 8 commits, Jan 25, 2014
14 changes: 12 additions & 2 deletions Chapter6_Priorities/Priors.ipynb
@@ -1,6 +1,6 @@
{
"metadata": {
"name": "Priors"
"name": ""
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -1393,7 +1393,17 @@
"\n",
"## Jefferys Priors\n",
"\n",
"TODO."
"Earlier, we talked about objective priors rarely are *objective*. Partly what we mean by this is that we want a prior that doesn't bias our posterior estimates. The flat prior seems like a reasonable choice as it assigns equal probability to all values. \n",
"we talked about objective priors rarely are objective" I think there is a word missing here.

"\n",
"But the flat prior is not transformation invariant. What does this mean? Suppose we have a random variable $ \\bf X $ from Bernoulli($\\theta$). We define the prior on $p(\\theta) = 1$. \n",
"\n",
"PUT PLOT OF THETA HERE\n",
"\n",
"Now, let's transform $\\theta$ with the function $\\psi = log \\frac{\\theta}{1-\\theta}$. This is just a function to stretch $\\theta$ across the real line. Now how likely are different values of $\\psi$ under our transformation.\n",
"\n",
"PUT PLOT OF PSI HERE\n",
"\n",
"Oh no! Our function is no longer flat. It turns out flat priors do carry information in them after all. The point of Jeffreys Priors is to create priors that don't accidentally become informative when you transform the variables you placed them originally on."
]
},
{
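
The two PUT PLOT placeholders in the added cell are left unfilled in this diff. As a rough sketch of what they would show, the snippet below (hypothetical, not part of the PR; it assumes numpy and matplotlib) plots the flat prior on theta next to the implied density of psi:

```python
import numpy as np
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Left panel: the flat prior p(theta) = 1 on (0, 1).
theta = np.linspace(0.01, 0.99, 200)
ax1.plot(theta, np.ones_like(theta))
ax1.set_ylim(0, 2)
ax1.set_xlabel(r"$\theta$")
ax1.set_ylabel(r"$p(\theta)$")
ax1.set_title(r"Flat prior on $\theta$")

# Right panel: the density of psi = log(theta / (1 - theta)) when
# theta is uniform. By the change-of-variables formula this is
# p(psi) = exp(psi) / (1 + exp(psi))**2, the logistic density.
psi = np.linspace(-6, 6, 200)
ax2.plot(psi, np.exp(psi) / (1 + np.exp(psi)) ** 2)
ax2.set_xlabel(r"$\psi$")
ax2.set_ylabel(r"$p(\psi)$")
ax2.set_title(r"Implied density of $\psi$")

plt.tight_layout()
plt.show()
```

The right panel is visibly non-flat, which is the point the cell makes. For reference, the Jeffreys prior for this Bernoulli model works out to Beta(1/2, 1/2).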