We thus conclude that the likelihood ratio process is a key ingredient of the formula for
a Bayesian's posterior probability that nature has drawn history $w^t$ as repeated draws from density
$g$.
### Behavior of posterior probabilities $\{\pi_t\}$ under the subjective probability distribution
#### A perspective on Bayes's law as a theory of learning
We'll continue with our setting in which a McCall worker knows that successive
draws of his wage are drawn from either $F$ or $G$, but does not know which of these two distributions
nature has drawn once-and-for-all before time $0$.
We'll review, reiterate, and rearrange some formulas that we have encountered above and in associated lectures.
The worker's initial beliefs induce a joint probability distribution
over a potentially infinite sequence of draws $w_0, w_1, \ldots $.
Bayes' law is simply an application of laws of
probability to compute the conditional distribution of the $t$th draw $w_t$ conditional on $[w_0, \ldots, w_{t-1}]$.
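Concretely (anticipating the mechanical details reviewed below), if $\pi_{t-1}$ denotes the probability that the worker attaches to distribution $F$ after observing $w_0, \ldots, w_{t-1}$, that conditional distribution and the associated posterior update take the form

$$
p(w_t \mid w_0, \ldots, w_{t-1}) = \pi_{t-1} f(w_t) + (1-\pi_{t-1}) g(w_t),
\qquad
\pi_t = \frac{\pi_{t-1} f(w_t)}{\pi_{t-1} f(w_t) + (1-\pi_{t-1}) g(w_t)} .
$$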
By having our worker put a subjective probability $\pi_{-1}$ on nature having selected distribution $F$, we have in effect assumed from the start that the decision maker **knows** the joint distribution for the process $\{w_t\}_{t=0}^\infty$.
We assume that the worker also knows the laws of probability theory.
A respectable view is that Bayes' law is less a theory of learning than a statement about the consequences of information inflows for a decision maker who thinks he knows the truth (i.e., a joint probability distribution) from the beginning.
#### Mechanical details again
At time $0$ **before** drawing a wage offer, the worker attaches probability $\pi_{-1} \in (0,1)$ to the distribution being $F$.
Before drawing a wage at time $0$, the worker thus believes that the density of $w_0$
is $\pi_{-1} f(w_0) + (1-\pi_{-1}) g(w_0)$.

More generally, after observing $w_0, \ldots, w_{t-1}$, Bayes' law assigns posterior probability $\pi_{t-1}$ to distribution $F$,
so that the worker's predictive density for the next draw $w_t$ is $\pi_{t-1} f(w_t) + (1-\pi_{t-1}) g(w_t)$.

Computing the conditional expectation of $\pi_t$ under this subjective predictive density gives

$$
\begin{aligned}
E(\pi_t | \pi_{t-1}) & = \int \Bigl[ \frac{\pi_{t-1} f(w)}{\pi_{t-1} f(w) + (1-\pi_{t-1})g(w)} \Bigr]
\Bigl[ \pi_{t-1} f(w) + (1-\pi_{t-1})g(w) \Bigr] d w \\
  & = \pi_{t-1} \int f(w) dw \\
  & = \pi_{t-1},
\end{aligned}
$$
so that the process $\pi_t$ is a **martingale**.
Indeed, it is a **bounded martingale** because each $\pi_t$, being a probability,
is between $0$ and $1$.
In the first line in the above string of equalities, the term in the first set of brackets
is just $\pi_t$ as a function of $w_{t}$, while the term in the second set of brackets is the density of $w_{t}$ conditional
on $w_{t-1}, \ldots , w_0$ or equivalently conditional on the *sufficient statistic* $\pi_{t-1}$ for $w_{t-1}, \ldots , w_0$.
Notice that here we are computing $E(\pi_t | \pi_{t-1})$ under the **subjective** density described in the second
term in brackets.
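As a quick numerical sanity check of this martingale property, here is a minimal sketch in Python; the Beta densities standing in for $f$ and $g$ below are illustrative placeholders and need not match the parameterization used elsewhere in this lecture.

```python
import numpy as np
from scipy.stats import beta

# Illustrative placeholder densities for f and g
f = lambda w: beta.pdf(w, 1, 1)
g = lambda w: beta.pdf(w, 3, 1.2)

rng = np.random.default_rng(0)
π_prev = 0.3            # an arbitrary value of π_{t-1}
n = 1_000_000

# Draw w from the subjective predictive density π_prev * f + (1 - π_prev) * g
from_f = rng.uniform(size=n) < π_prev
w = np.where(from_f, rng.beta(1, 1, size=n), rng.beta(3, 1.2, size=n))

# Bayes' law update: π_t as a function of w and π_{t-1}
π_t = π_prev * f(w) / (π_prev * f(w) + (1 - π_prev) * g(w))

print(π_t.mean())       # ≈ π_prev, illustrating E(π_t | π_{t-1}) = π_{t-1}
```

The sample average of the updated posteriors is close to $\pi_{t-1}$ no matter which value of $\pi_{t-1}$ we start from, which is exactly the martingale property derived above.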
Because $\{\pi_t\}$ is a bounded martingale sequence, it follows from the **martingale convergence theorem** that $\pi_t$ converges almost surely to a random variable in $[0,1]$.
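For reference, the form of the theorem we are invoking says that if $\{X_t\}$ is a martingale with $\sup_t E |X_t| < \infty$, then

$$
X_t \rightarrow X_\infty \quad \text{almost surely}
$$

for some random variable $X_\infty$; because each $\pi_t$ lies in $[0,1]$, the convergence also holds in mean, so that $E \pi_\infty = \pi_{-1}$.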
Practically, this means that probability one is attached to sample paths
$\{\pi_t\}_{t=0}^\infty$ that converge.
According to the theorem, different sample paths can converge to different limiting values.
Thus, let $\{\pi_t(\omega)\}_{t=0}^\infty$ denote a particular sample path indexed by a particular $\omega \in \Omega$.
We can think of nature as drawing an $\omega \in \Omega$ from a probability distribution
${\textrm{Prob}}(\Omega)$ and then generating a single realization (or _simulation_) $\{\pi_t(\omega)\}_{t=0}^\infty$ of the process.
The limit points of $\{\pi_t(\omega)\}_{t=0}^\infty$ as $t \rightarrow +\infty$ are realizations of a random variable that is swept out as we sample $\omega$ from $\Omega$ and construct repeated draws of $\{\pi_t(\omega)\}_{t=0}^\infty$.
By staring at law of motion (44) or (56), we can figure out some things about the probability distribution of the limit points

$$
\pi_\infty(\omega) = \lim_{t \rightarrow + \infty} \pi_t(\omega) .
$$

In particular, the only possible limit points are $0$ and $1$.

Combining this with equation (20), we deduce that
the probability that ${\textrm{Prob}}(\Omega)$ attaches to
$\pi_\infty(\omega)$ being $1$ must be $\pi_{-1}$.
Thus, under the worker's subjective distribution, a fraction $\pi_{-1}$ of the sample paths
of $\{\pi_t\}$ will converge pointwise to $1$ and a fraction $1 - \pi_{-1}$ of the sample paths will
converge pointwise to $0$.
#### Some simulations
Let's watch the martingale convergence theorem at work in some simulations of our learning model under the worker's subjective distribution.
Let us simulate $\left\{ \pi_{t}\right\}_{t=0}^{T}$, $\left\{ w_{t}\right\}_{t=0}^{T}$ paths where for each $t\geq0$, $w_t$ is drawn from the subjective predictive density $\pi_{t-1} f(w_t) + (1-\pi_{t-1}) g(w_t)$.
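Below is a minimal sketch of such a simulation; the Beta densities standing in for $f$ and $g$, the prior $\pi_{-1} = 0.5$, and the horizon are illustrative placeholders and need not match the settings used elsewhere in this lecture.

```python
import numpy as np
from scipy.stats import beta

# Illustrative placeholder densities for f and g
f = lambda w: beta.pdf(w, 1, 1)
g = lambda w: beta.pdf(w, 3, 1.2)

def simulate_path(π_init, T, rng):
    "Simulate a {π_t} path under the worker's subjective distribution."
    π_path = np.empty(T + 1)
    π = π_init
    for t in range(T + 1):
        # draw w_t from the predictive density π f + (1 - π) g
        w = rng.beta(1, 1) if rng.uniform() < π else rng.beta(3, 1.2)
        # update the posterior by Bayes' law
        π = π * f(w) / (π * f(w) + (1 - π) * g(w))
        π_path[t] = π
    return π_path

rng = np.random.default_rng(1234)
π_minus_1, T, N = 0.5, 200, 1_000
paths = np.array([simulate_path(π_minus_1, T, rng) for _ in range(N)])

# Most paths settle near 0 or 1, and roughly a fraction π_{-1} of them end near 1
print("fraction of paths ending above 0.5:", (paths[:, -1] > 0.5).mean())
```

With these placeholder settings, most simulated paths are close to either $0$ or $1$ by the end of the horizon, and the fraction ending near $1$ is close to $\pi_{-1}$, just as the martingale argument above predicts.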