Commit a3659bd

Merge pull request #547 from jessica-writes-code/jmoore/gws-paper
Update link to Greenhill, Ward, Sacks paper
2 parents efc9398 + b1f4e6b commit a3659bd

4 files changed: +6 -6 lines changed


Diff for: Chapter2_MorePyMC/Ch2_MorePyMC_PyMC2.ipynb

+1 -1
@@ -2172,7 +2172,7 @@
     "\n",
     "We will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use *Bayesian p-values*. These are still subjective, as the proper cutoff between good and bad is arbitrary. Gelman emphasises that the graphical tests are more illuminating [7] than p-value tests. We agree.\n",
     "\n",
-    "The following graphical test is a novel data-viz approach to logistic regression. The plots are called *separation plots*[8]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible [original paper](http://mdwardlab.com/sites/default/files/GreenhillWardSacks.pdf), but I'll summarize their use here.\n",
+    "The following graphical test is a novel data-viz approach to logistic regression. The plots are called *separation plots*[8]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible [original paper](https://onlinelibrary.wiley.com/doi/10.1111/j.1540-5907.2011.00525.x), but I'll summarize their use here.\n",
     "\n",
     "For each model, we calculate the proportion of times the posterior simulation proposed a value of 1 for a particular temperature, i.e. compute $P( \\;\\text{Defect} = 1 | t, \\alpha, \\beta )$ by averaging. This gives us the posterior probability of a defect at each data point in our dataset. For example, for the model we used above:"
 ]
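
For context on the cell quoted above: it describes averaging the posterior samples of the logistic model to obtain P(Defect = 1 | t, alpha, beta) at each observed temperature. A minimal sketch of that averaging step follows; the names logistic, alpha_samples, beta_samples, and temperature are hypothetical stand-ins for the chapter's posterior traces and data, and the draws below are illustrative placeholders, not the fitted posterior.

import numpy as np

def logistic(x, beta, alpha=0.0):
    # Logistic function of temperature, in the form used for the defect model.
    return 1.0 / (1.0 + np.exp(beta * x + alpha))

# Placeholder posterior draws and data; in the notebooks these come from the
# fitted trace and the Challenger dataset.
alpha_samples = np.random.normal(-17.0, 2.0, size=5000)
beta_samples = np.random.normal(0.25, 0.03, size=5000)
temperature = np.array([66.0, 70.0, 69.0, 68.0, 67.0, 72.0, 73.0, 70.0])

# P(Defect = 1 | t, alpha, beta), averaged over the posterior draws,
# evaluated at every observed temperature.
prob_per_draw = logistic(temperature[:, None], beta_samples, alpha_samples)
posterior_prob_defect = prob_per_draw.mean(axis=1)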

Diff for: Chapter2_MorePyMC/Ch2_MorePyMC_PyMC3.ipynb

+1 -1
@@ -2236,7 +2236,7 @@
     "\n",
     "We will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use *Bayesian p-values*. These are still subjective, as the proper cutoff between good and bad is arbitrary. Gelman emphasises that the graphical tests are more illuminating [7] than p-value tests. We agree.\n",
     "\n",
-    "The following graphical test is a novel data-viz approach to logistic regression. The plots are called *separation plots*[8]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible [original paper](http://mdwardlab.com/sites/default/files/GreenhillWardSacks.pdf), but I'll summarize their use here.\n",
+    "The following graphical test is a novel data-viz approach to logistic regression. The plots are called *separation plots*[8]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible [original paper](https://onlinelibrary.wiley.com/doi/10.1111/j.1540-5907.2011.00525.x), but I'll summarize their use here.\n",
     "\n",
     "For each model, we calculate the proportion of times the posterior simulation proposed a value of 1 for a particular temperature, i.e. compute $P( \\;\\text{Defect} = 1 | t, \\alpha, \\beta )$ by averaging. This gives us the posterior probability of a defect at each data point in our dataset. For example, for the model we used above:"
 ]

Diff for: Chapter2_MorePyMC/Ch2_MorePyMC_TFP.ipynb

+2 -2
@@ -3884,7 +3884,7 @@
     "\n",
     "We will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use *Bayesian p-values*. These are still subjective, as the proper cutoff between good and bad is arbitrary. Gelman emphasises that the graphical tests are more illuminating [3] than p-value tests. We agree.\n",
     "\n",
-    "The following graphical test is a novel data-viz approach to logistic regression. The plots are called *separation plots*[4]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible [original paper](http://mdwardlab.com/sites/default/files/GreenhillWardSacks.pdf), but I'll summarize their use here.\n",
+    "The following graphical test is a novel data-viz approach to logistic regression. The plots are called *separation plots*[4]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible [original paper](https://onlinelibrary.wiley.com/doi/10.1111/j.1540-5907.2011.00525.x), but I'll summarize their use here.\n",
     "\n",
     "For each model, we calculate the proportion of times the posterior simulation proposed a value of 1 for a particular temperature, i.e. compute $P( \\;\\text{Defect} = 1 | t, \\alpha, \\beta )$ by averaging. This gives us the posterior probability of a defect at each data point in our dataset. For example, for the model we used above:"
 ]
@@ -4028,7 +4028,7 @@
     "def separation_plot( p, y, **kwargs ):\n",
     "    \"\"\"\n",
     "    This function creates a separation plot for logistic and probit classification. \n",
-    "    See http://mdwardlab.com/sites/default/files/GreenhillWardSacks.pdf\n",
+    "    See https://onlinelibrary.wiley.com/doi/10.1111/j.1540-5907.2011.00525.x\n",
     "    \n",
     "    p: The proportions/probabilities, can be a nxM matrix which represents M models.\n",
     "    y: the 0-1 response variables.\n",

Diff for: Chapter2_MorePyMC/separation_plot.py

+2 -2
@@ -1,6 +1,6 @@
 # separation plot
 # Author: Cameron Davidson-Pilon,2013
-# see http://mdwardlab.com/sites/default/files/GreenhillWardSacks.pdf
+# see https://onlinelibrary.wiley.com/doi/10.1111/j.1540-5907.2011.00525.x
 
 
 import matplotlib.pyplot as plt
@@ -11,7 +11,7 @@
 def separation_plot( p, y, **kwargs ):
     """
     This function creates a separation plot for logistic and probit classification.
-    See http://mdwardlab.com/sites/default/files/GreenhillWardSacks.pdf
+    See https://onlinelibrary.wiley.com/doi/10.1111/j.1540-5907.2011.00525.x
 
     p: The proportions/probabilities, can be a nxM matrix which represents M models.
     y: the 0-1 response variables.
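
As a usage note on the function whose docstring is updated above: a short, hedged sketch of calling separation_plot with per-point posterior probabilities and the observed 0/1 outcomes. The input arrays below are hypothetical; in the notebooks, p is the averaged posterior defect probability and y is the defect indicator D.

import numpy as np
from separation_plot import separation_plot  # the script touched in this commit

# Hypothetical inputs: one posterior probability per data point, plus the
# observed 0/1 outcomes they are compared against.
posterior_prob_defect = np.array([0.9, 0.2, 0.75, 0.1, 0.6])
D = np.array([1, 0, 1, 0, 1])

# Per the docstring, p may also be an n x M matrix to compare M models side by side.
separation_plot(posterior_prob_defect, D)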
