We begin by reviewing the setting in {doc}`this lecture <likelihood_ratio_process>`, which we adopt here too.
In that setting, Bayes' law determines a Bayesian's posterior probability that nature has drawn history $w^t$ as repeated draws from $g$.
## Behavior of posterior probability $\{\pi_t\}$ under the subjective probability distribution
We'll end this lecture by briefly studying what our Bayesian learner expects to learn under the
subjective beliefs $\pi_t$ cranked out by Bayes' law.
This will provide us with some perspective on our application of Bayes' law as a theory of learning.
As we shall see, at each time $t$, the Bayesian learner knows that he will be surprised.
But he expects that new information will not lead him to change his beliefs.
And it won't on average under his subjective beliefs.
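To make this concrete, here is a minimal simulation sketch of that claim (our own illustration, with Beta densities standing in for $f$ and $g$ purely for the example): if tomorrow's draw $w$ comes from the worker's subjective predictive mixture $\pi f + (1-\pi) g$, then the mean of the updated belief equals today's belief $\pi$, i.e., $\{\pi_t\}$ is a martingale under his subjective distribution.

```python
import numpy as np
from scipy.stats import beta

# Illustrative stand-ins for the two densities (an assumption of this sketch)
f = beta(1, 1).pdf        # uniform on [0, 1]
g = beta(3, 1.2).pdf

def update(pi, w):
    """One application of Bayes' law: today's belief pi and draw w -> new belief."""
    return pi * f(w) / (pi * f(w) + (1 - pi) * g(w))

rng = np.random.default_rng(0)
pi, N = 0.5, 1_000_000    # today's belief and number of Monte Carlo draws

# Sample w from the subjective predictive mixture pi*f + (1 - pi)*g
from_f = rng.random(N) < pi
w = np.where(from_f, rng.beta(1, 1, N), rng.beta(3, 1.2, N))

# The average updated belief should be (approximately) today's belief pi
print(update(pi, w).mean())   # ~ 0.5: no expected change in beliefs
```

The cancellation driving this is worth noting: averaging $\frac{\pi f(w)}{\pi f(w) + (1-\pi) g(w)}$ against the mixture density $\pi f(w) + (1-\pi) g(w)$ leaves $\pi \int f(w)\, dw = \pi$.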
We'll continue with our setting in which a McCall worker knows that successive
draws of his wage are drawn from either $F$ or $G$, but does not know which of these two distributions nature has drawn once and for all before time $0$.
We assume that the worker also knows the laws of probability theory.
A respectable view is that Bayes' law is less a theory of learning than a statement about the consequences of information inflows for a decision maker who thinks he knows the truth (i.e., a joint probability distribution) from the beginning.
### Mechanical details again
At time $0$ **before** drawing a wage offer, the worker attaches probability $\pi_{-1} \in (0,1)$ to the distribution being $F$.
More generally, after making the $t$th draw and having observed $w_t, w_{t-1}, \ldots, w_0$, the probability that $w_{t+1}$ is being drawn from distribution $F$ is the posterior probability $\pi_t$, which obeys the law of motion {eq}`eq_recur1`.
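To illustrate these mechanical details, here is a short sketch (again with illustrative Beta stand-ins for the densities of $F$ and $G$, and a made-up prior) that applies the law of motion draw by draw to a simulated wage history:

```python
import numpy as np
from scipy.stats import beta

f = beta(1, 1).pdf        # illustrative stand-in for F's density
g = beta(3, 1.2).pdf      # illustrative stand-in for G's density

def posterior_path(w_seq, pi_prior):
    """Apply Bayes' law draw by draw; return pi_0, pi_1, ..., given pi_{-1}."""
    pi_seq = np.empty(len(w_seq))
    pi = pi_prior
    for t, w in enumerate(w_seq):
        l = f(w) / g(w)                   # likelihood ratio of the new draw
        pi = pi * l / (pi * l + 1 - pi)   # the law of motion, one step
        pi_seq[t] = pi
    return pi_seq

rng = np.random.default_rng(1234)
w_seq = rng.beta(3, 1.2, 50)              # suppose nature actually drew G
print(posterior_path(w_seq, pi_prior=0.5)[-1])   # belief in F heads toward 0
```

Under $G$, the log likelihood ratio $\log l(w_t)$ has negative mean, so the posterior probability attached to $F$ tends toward zero along such a history.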
Think of sampling $\omega$ from $\Omega$ according to ${\textrm{Prob}} \Omega$ and then generating a single realization (or _simulation_) $\{\pi_t(\omega)\}_{t=0}^\infty$ of the process.
The limit points of $\{\pi_t(\omega)\}_{t=0}^\infty$ as $t \rightarrow +\infty$ are realizations of a random variable that is swept out as we sample $\omega$ from $\Omega$ and construct repeated draws of $\{\pi_t(\omega)\}_{t=0}^\infty$.
By staring at law of motion {eq}`eq_recur1` or {eq}`eq:like44`, we can figure out some things about the probability distribution of the limit points $\pi_{\infty}(\omega)$.
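To spell out the reasoning, write $l(w) = f(w)/g(w)$ for the likelihood ratio of a single draw. A limit point $\pi_{\infty}$ must be a fixed point of the one-step update for the values of $l(w)$ that keep occurring, and

$$
\pi_{\infty} = \frac{\pi_{\infty} \, l(w)}{\pi_{\infty} \, l(w) + 1 - \pi_{\infty}}
\quad \Longleftrightarrow \quad
\pi_{\infty}\,(1 - \pi_{\infty})\,\bigl(l(w) - 1\bigr) = 0 .
$$

So whenever $l(w) \neq 1$ with positive probability, the only candidate limit points are $\pi_{\infty} = 0$ and $\pi_{\infty} = 1$.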
The above graphs display how the distribution of $\pi_t$ across realizations is moving toward
limit points that we described above and that put all probability either on $0$ or on $1$.
Now let's use our Python code to generate a table that checks out our earlier claims about the
probability distribution of the pointwise limits $\pi_{\infty}(\omega)$.
We'll use our simulations to generate a histogram of this distribution.
In the following table, the left column in bold face reports an assumed value of $\pi_{-1}$.
The second column reports the fraction of $N = 10000$ simulations for which $\pi_{t}$ had converged to $0$ at the terminal date $T=500$ for each simulation.
The third column reports the fraction of $N = 10000$ simulations for which $\pi_{t}$ had converged to $1$ at the terminal date $T=500$ for each simulation.
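For reference, here is a self-contained sketch of the computation behind such a table (not the lecture's own code: the densities, convergence thresholds, and prior values below are illustrative choices):

```python
import numpy as np
from scipy.stats import beta

f_pdf = beta(1, 1).pdf      # illustrative stand-in for F's density
g_pdf = beta(3, 1.2).pdf    # illustrative stand-in for G's density

rng = np.random.default_rng(42)
N, T = 10_000, 500          # number of simulations and terminal date

def limit_fractions(pi_minus1):
    """Fraction of N subjective-distribution paths ending near 0 and near 1."""
    # On each path, nature picks F once and for all with probability pi_{-1}
    nature_is_f = rng.random(N) < pi_minus1
    w = np.where(nature_is_f[:, None],
                 rng.beta(1, 1, (N, T)),
                 rng.beta(3, 1.2, (N, T)))
    pi = np.full(N, pi_minus1)
    for t in range(T):                       # Bayes' law, period by period
        l = f_pdf(w[:, t]) / g_pdf(w[:, t])
        pi = pi * l / (pi * l + 1 - pi)
    return (pi < 0.01).mean(), (pi > 0.99).mean()

print(f"{'pi_-1':>6} {'to 0':>8} {'to 1':>8}")
for pi_minus1 in (0.25, 0.5, 0.75):          # illustrative prior values
    to0, to1 = limit_fractions(pi_minus1)
    print(f"{pi_minus1:6.2f} {to0:8.3f} {to1:8.3f}")
```

Because $\{\pi_t\}$ is a martingale under the subjective distribution and its limit points sit on $\{0, 1\}$, the fraction of paths converging to $1$ should be close to $\pi_{-1}$ itself, with the complementary fraction converging to $0$.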