
Commit 6991fed

merging after that brutal dropbox mess
2 parents: 1c57865 + ccbda98

File tree: 6 files changed (+413, -232 lines)


Chapter2_MorePyMC/MorePyMC.ipynb

Lines changed: 68 additions & 64 deletions
Large diffs are not rendered by default.

Chapter3_MCMC/IntroMCMC.ipynb

Lines changed: 126 additions & 95 deletions
Large diffs are not rendered by default.

Chapter6_Priorities/Priors.ipynb

Lines changed: 102 additions & 70 deletions
Large diffs are not rendered by default.

Chapter6_Priorities/other_strats.py

Lines changed: 113 additions & 0 deletions
@@ -0,0 +1,113 @@
# Other bandit strategies.
# TODO: UCB strat, epsilon-greedy

import numpy as np
from pymc import rbeta

rand = np.random.rand


class GeneralBanditStrat(object):
    """
    Implements an online learning strategy to solve
    the multi-armed bandit problem.

    parameters:
        bandits: a Bandits instance with a .pull method
        choice_function: accepts the strategy instance (which gives access
            to all of its variables) and returns an int between 0 and n-1

    methods:
        sample_bandits(n): sample and train on n pulls.

    attributes:
        N: the cumulative number of samples
        choices: the historical choices, as an (N,) array
        score: the historical score, as an (N,) array
    """

    def __init__(self, bandits, choice_function):
        self.bandits = bandits
        n_bandits = len(self.bandits)
        self.wins = np.zeros(n_bandits)
        self.trials = np.zeros(n_bandits)
        self.N = 0
        self.choices = []
        self.score = []
        self.choice_function = choice_function

    def sample_bandits(self, n=1):
        score = np.zeros(n)
        choices = np.zeros(n)

        for k in range(n):
            # let the strategy pick an arm, e.g. by sampling from the
            # bandits' posteriors and selecting the largest sample
            choice = self.choice_function(self)

            # pull the chosen bandit
            result = self.bandits.pull(choice)

            # update posteriors and score
            self.wins[choice] += result
            self.trials[choice] += 1
            score[k] = result
            self.N += 1
            choices[k] = choice

        self.score = np.r_[self.score, score]
        self.choices = np.r_[self.choices, choices]
        return


def bayesian_bandit_choice(self):
    """Sample each arm's Beta posterior and pick the largest draw."""
    return np.argmax(rbeta(1 + self.wins, 1 + self.trials - self.wins))


def max_mean(self):
    """Pick the bandit with the current best observed proportion of winning.
    The +1 in the denominator avoids division by zero on unplayed arms."""
    return np.argmax(self.wins / (self.trials + 1))


def lower_credible_choice(self):
    """Pick the bandit with the best LOWER bound (posterior mean minus
    1.65 posterior standard deviations). See chapter 5."""
    def lb(a, b):
        return a / (a + b) - 1.65 * np.sqrt((a * b) / ((a + b) ** 2 * (a + b + 1)))
    a = self.wins + 1
    b = self.trials - self.wins + 1
    return np.argmax(lb(a, b))


def upper_credible_choice(self):
    """Pick the bandit with the best UPPER bound (posterior mean plus
    1.65 posterior standard deviations). See chapter 5."""
    def ub(a, b):
        return a / (a + b) + 1.65 * np.sqrt((a * b) / ((a + b) ** 2 * (a + b + 1)))
    a = self.wins + 1
    b = self.trials - self.wins + 1
    return np.argmax(ub(a, b))


def random_choice(self):
    return np.random.randint(0, len(self.wins))


class Bandits(object):
    """
    This class represents N bandit machines.

    parameters:
        p_array: an (n,) numpy array of probabilities in (0, 1).

    methods:
        pull(i): return the result, 0 or 1, of pulling the ith bandit.
    """
    def __init__(self, p_array):
        self.p = p_array
        self.optimal = np.argmax(p_array)

    def pull(self, i):
        # i is which arm to pull
        return rand() < self.p[i]

    def __len__(self):
        return len(self.p)
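
The TODO at the top of the file names UCB and epsilon-greedy strategies as future work. As a minimal sketch of how the choice_function interface would accommodate them (not code from this commit), here is a hypothetical epsilon-greedy rule plus a short driver; the epsilon default and the win probabilities are illustrative assumptions.

# Hypothetical epsilon-greedy rule sketching the TODO above; it follows
# the same choice_function interface but is not part of this commit.
def epsilon_greedy_choice(self, epsilon=0.10):
    if rand() < epsilon:
        # explore: pull a uniformly random arm
        return np.random.randint(0, len(self.wins))
    # exploit: pull the arm with the best observed win rate
    return np.argmax(self.wins / (self.trials + 1))

# Example run (win probabilities made up for illustration):
bandits = Bandits(np.array([0.15, 0.20, 0.30]))
strat = GeneralBanditStrat(bandits, bayesian_bandit_choice)
strat.sample_bandits(1000)
print(strat.score.sum())  # total wins over 1000 pulls
print(strat.trials)       # pulls per arm; most should go to the 0.30 arm

Because a strategy is just a function of the strategy instance, swapping rules is a one-argument change to the GeneralBanditStrat constructor.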

Prologue/Prologue.ipynb

Lines changed: 3 additions & 2 deletions
@@ -25,7 +25,8 @@
 "The Bayesian method is the natural approach to inference, yet it is hidden from readers behind chapters of slow, mathematical analysis. The typical text on Bayesian inference involves two to three chapters on probability theory, then enters what Bayesian inference is. Unfortunately, due to mathematical intractability of most Bayesian models, the reader is only shown simple, artificial examples. This can leave the user with a *so-what* feeling about Bayesian inference. In fact, this was the author's own prior opinion.\n",
 "\n",
 "\n",
-"<div style=\"float: right;\"><img title=\"created by Stef Gibson at StefGibson.com\"style=\"float: right;\" src=\"http://i.imgur.com/6DKYbPb.png?1\" align=right height = 390 /></div>\n",
+"<div style=\"float: right; margin-left:30px\"><img title=\"created by Stef Gibson at StefGibson.com\"style=\"float: right;\" src=\"http://i.imgur.com/6DKYbPb.png?1\" align=right height = 390 /></div>\n",
+"\n",
 "\n",
 "After some recent success of Bayesian methods in machine-learning competitions, I decided to investigate the subject again. Even with my mathematical background, it took me three straight-days of reading examples and trying to put the pieces together to understand the methods. There was simplely not enough literature bridging theory to practice. The problem with my misunderstanding was the disconnect between Bayesian mathematics and probabilistic programming. That being said, I suffered then so the reader would not have to now. This book attempts to bridge the gap.\n",
 "\n",
@@ -292,4 +293,4 @@
 "metadata": {}
 }
]
-}
+}

README.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ Probabilistic Programming and Bayesian Methods for Hackers
 The Bayesian method is the natural approach to inference, yet it is hidden from readers behind chapters of slow, mathematical analysis. The typical text on Bayesian inference involves two to three chapters on probability theory, then enters what Bayesian inference is. Unfortunately, due to mathematical intractability of most Bayesian models, the reader is only shown simple, artificial examples. This can leave the user with a *so-what* feeling about Bayesian inference. In fact, this was the author's own prior opinion.
 
 
-<div style="float: right;"><img title="created by Stef Gibson at StefGibson.com"style="float: right;" src="http://i.imgur.com/6DKYbPb.png?1" align=right height = 390 /></div>
+<div style="float: right; margin-left: 30px;"><img title="created by Stef Gibson at StefGibson.com"style="float: right;margin-left: 30px;" src="http://i.imgur.com/6DKYbPb.png?1" align=right height = 390 /></div>
 
 After some recent success of Bayesian methods in machine-learning competitions, I decided to investigate the subject again. Even with my mathematical background, it took me three straight-days of reading examples and trying to put the pieces together to understand the methods. There was simplely not enough literature bridging theory to practice. The problem with my misunderstanding was the disconnect between Bayesian mathematics and probabilistic programming. That being said, I suffered then so the reader would not have to now. This book attempts to bridge the gap.
 
0 commit comments