which becomes expected utility $\mu_u$ when $\theta^{-1} = 0$.

The right side of equation {eq}`tom200` is a special case of **stochastic differential utility** preferences in which consumption plans are ranked not just by their expected utilities $\mu_u$ but also by the variances $\sigma_u^2$ of their utilities.
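Here is a small numerical check of that claim (not part of the lecture's code; the two-state probabilities and utilities are made up for illustration). It compares the risk-sensitive value $-\theta \log E\left[\exp(-u/\theta)\right]$ with the mean-variance expression $\mu_u - \frac{1}{2\theta}\sigma_u^2$ and shows both approaching $\mu_u$ as $\theta^{-1} \to 0$.

```python
import numpy as np

# Illustrative two-state example (made-up numbers): baseline probabilities
# and the utilities u(c_i) attached to a consumption plan.
pi = np.array([0.5, 0.5])
u = np.array([1.0, 3.0])

mu_u = pi @ u                       # expected utility
sigma_u2 = pi @ (u - mu_u) ** 2     # variance of utility

for theta in [1, 5, 25, 125]:
    risk_sensitive = -theta * np.log(pi @ np.exp(-u / theta))
    mean_variance = mu_u - sigma_u2 / (2 * theta)
    print(f"theta = {theta:>3}: risk-sensitive = {risk_sensitive:.4f}, "
          f"mean-variance = {mean_variance:.4f}")
```

As $\theta$ grows, both numbers converge to $\mu_u = 2$ in this example.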
## Ex post Bayesian preferences
**Kink at 45 degree line**

Notice the kink in the indifference curve for constraint preferences at the 45 degree line.

To understand the source of the kink, consider how the Lagrange multiplier and worst-case probabilities vary with the consumption plan under constraint preferences.

For fixed $\eta$, a given plan $c$, and a utility function increasing in $c$, the worst-case probabilities are **fixed numbers**: $\hat \pi_1 < .5$ when $c_1 > c_2$ and $\hat \pi_1 > .5$ when $c_2 > c_1$.

This pattern makes the Lagrange multiplier $\tilde \theta$ vary discontinuously at $\hat \pi_1 = .5$.

The discontinuity in the worst case $\hat \pi_1$ at the 45 degree line accounts for the kink at the 45 degree line in an indifference curve for constraint preferences associated with a given positive entropy constraint $\eta$.
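Here is a small numerical sketch of that discontinuity (not the lecture's code). It assumes a two-state plan with baseline probabilities $(.5, .5)$ and an entropy budget $\eta$: because the minimizing distribution pushes probability toward the lower-consumption state until relative entropy hits $\eta$, the worst-case $\hat \pi_1$ jumps as $(c_1, c_2)$ crosses the 45 degree line.

```python
import numpy as np
from scipy.optimize import brentq

eta = 0.05   # entropy budget (illustrative value)

def relative_entropy(p1):
    """Relative entropy of (p1, 1 - p1) with respect to the baseline (.5, .5)."""
    p = np.array([p1, 1 - p1])
    return np.sum(p * np.log(p / 0.5))

# The worst case shifts probability toward the low-consumption state until the
# entropy constraint binds: solve relative_entropy(p1) = eta for the root below .5.
p_low = brentq(lambda p1: relative_entropy(p1) - eta, 1e-6, 0.5 - 1e-6)

def worst_case_pi1(c1, c2):
    """Worst-case probability of state 1 under the entropy constraint eta."""
    if c1 > c2:
        return p_low          # state 1 is the better state, so it is downweighted
    elif c2 > c1:
        return 1 - p_low      # state 2 is the better state, so it is downweighted
    else:
        return 0.5            # on the 45 degree line the worst case is not unique

for plan in [(3.0, 1.0), (2.0, 1.9), (1.9, 2.0), (1.0, 3.0)]:
    print(plan, round(worst_case_pi1(*plan), 4))
```

Note that $\hat \pi_1$ depends only on which state has the lower consumption, not on how far the plan is from the 45 degree line, which is what produces the jump.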
The code for generating the preceding figure is somewhat intricate because we formulate a root-finding problem for computing indifference curves.

Here is a brief literary description of the method we use.

**Parameters**

**Remark:** It is tricky to get the algorithm to work properly for all values of $c_{1}$. In particular, parameters were chosen with [graduate student descent](https://sciencedryad.wordpress.com/2014/01/25/grad-student-descent/).
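To illustrate the root-finding idea in isolation (a sketch under assumed functional forms, not the lecture's exact algorithm), the snippet below traces an indifference curve for multiplier preferences with log utility; the baseline probabilities, penalty $\theta$, and reference plan are made up. For each $c_1$ on a grid it solves for the $c_2$ that keeps $-\theta \log \sum_i \pi_i \exp(-u(c_i)/\theta)$ equal to its value at the reference plan.

```python
import numpy as np
from scipy.optimize import brentq

pi = np.array([0.5, 0.5])   # baseline probabilities (assumed)
theta = 2.0                 # multiplier penalty parameter (assumed)
u = np.log                  # log utility, an assumption for this sketch

def multiplier_value(c1, c2):
    """Multiplier criterion -theta * log(sum_i pi_i exp(-u(c_i)/theta))."""
    util = np.array([u(c1), u(c2)])
    return -theta * np.log(pi @ np.exp(-util / theta))

# Points on an indifference curve share the value attained at a reference plan.
v_bar = multiplier_value(3.0, 1.0)

c1_grid = np.linspace(1.0, 5.0, 9)
c2_on_curve = [brentq(lambda c2: multiplier_value(c1, c2) - v_bar, 1e-3, 50.0)
               for c1 in c1_grid]
print(list(zip(c1_grid.round(2), np.round(c2_on_curve, 3))))
```

With these symmetric baseline probabilities the traced curve passes through both $(3, 1)$ and $(1, 3)$.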
**Tangent indifference curves off 45 degree line**

For a given $\eta$ and a given allocation $(c_1, c_2)$ off the 45 degree line, by solving equations {eq}`tom7` and {eq}`tom20` we can find $\tilde \theta (\eta, c)$ and $\tilde \eta(\theta,c)$ that make indifference curves for multiplier and constraint preferences tangent to one another.
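The next sketch shows one way to carry out such a computation numerically; it assumes log utility and that the worst-case distortion takes the exponential twisting form of {eq}`tom6`, and it is not the lecture's code. For a plan $c$ off the 45 degree line it computes the entropy $\tilde \eta(\theta, c)$ implied by a penalty $\theta$ and inverts that mapping by root finding to obtain $\tilde \theta(\eta, c)$.

```python
import numpy as np
from scipy.optimize import brentq

pi = np.array([0.5, 0.5])     # baseline probabilities (assumed)
c = np.array([3.0, 1.0])      # a plan off the 45 degree line
u = np.log(c)                 # log utility, an assumption for this sketch

def worst_case_m(theta):
    """Exponential-twisting distortion m_i proportional to exp(-u(c_i)/theta)."""
    m = np.exp(-u / theta)
    return m / (pi @ m)       # normalized so that sum_i pi_i m_i = 1

def eta_tilde(theta):
    """Entropy of the worst-case distortion implied by the penalty theta."""
    m = worst_case_m(theta)
    return pi @ (m * np.log(m))

def theta_tilde(eta):
    """Invert theta -> eta_tilde(theta, c), which is decreasing, by root finding."""
    return brentq(lambda th: eta_tilde(th) - eta, 1e-2, 1e3)

theta = 2.0
eta = eta_tilde(theta)
print(f"eta_tilde(theta=2, c) = {eta:.4f}")
print(f"theta_tilde(eta, c)   = {theta_tilde(eta):.4f}")   # recovers theta = 2
```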
The following figure shows indifference curves for
multiplier and constraint preferences through a point off the 45 degree line, namely, $(c(1),c(2)) = (3,1)$, at which $\eta$ and $\theta$ are adjusted to render the indifference curves for constraint and multiplier preferences tangent.

Note that all three lines of the left graph intersect at (1, 3). While the intersection at (3, 1) is hard-coded, the intersection at (1, 3) arises from the computation, which confirms that the code seems to be working properly.

As we move along the (kinked) indifference curve for the constraint
## Iso-utility and iso-entropy curves and expansion paths
The following figures show iso-entropy and iso-utility lines for the special case in which $I = 3$, $\pi_1 = .3, \pi_2 = .4$, and the utility function is $u(c)= \frac{c^{1-\alpha}}{1-\alpha}$ with $\alpha = 0$ and $\alpha = 3$, respectively, for the fixed plan $c(1) = 1, c(2) = 2, c(3) = 3$.
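The next snippet indicates how such level curves can be computed (it is not the lecture's plotting code; the grid range is arbitrary and only the $\alpha = 3$ case is shown). On a grid of distortions $(m_1, m_2)$, with $m_3$ implied by $\sum_i \pi_i m_i = 1$, it evaluates distorted expected utility and entropy and draws their contours.

```python
import numpy as np
import matplotlib.pyplot as plt

pi = np.array([0.3, 0.4, 0.3])
c = np.array([1.0, 2.0, 3.0])
alpha = 3.0
u = c ** (1 - alpha) / (1 - alpha)   # CRRA utility evaluated at the fixed plan

m1, m2 = np.meshgrid(np.linspace(0.01, 2.5, 300), np.linspace(0.01, 2.5, 300))
m3 = (1 - pi[0] * m1 - pi[1] * m2) / pi[2]   # implied by sum_i pi_i m_i = 1
m3 = np.where(m3 > 0, m3, np.nan)            # keep only legitimate distortions

utility = pi[0] * m1 * u[0] + pi[1] * m2 * u[1] + pi[2] * m3 * u[2]
entropy = (pi[0] * m1 * np.log(m1) + pi[1] * m2 * np.log(m2)
           + pi[2] * m3 * np.log(m3))

fig, ax = plt.subplots()
ax.contour(m1, m2, utility, levels=10, colors='C0')   # iso-utility lines
ax.contour(m1, m2, entropy, levels=10, colors='C1')   # iso-entropy lines
ax.set_xlabel(r'$m_1$')
ax.set_ylabel(r'$m_2$')
plt.show()
```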
The iso-utility lines are the level curves of $\sum_i \pi_i m_i u(c_i)$ and the iso-entropy lines are the level curves of $\sum_i \pi_i m_i \log m_i$.

Beyond the helpful mathematical fact that it leads directly to convenient exponential twisting formulas {eq}`tom6` and {eq}`tom12` for worst-case probability distortions, there are two related justifications for using entropy to measure discrepancies between probability distributions.

One arises from the role of entropy in statistical tests for discriminating between models.

The other comes from axioms.
### Entropy and statistical detection
Robust control theory starts with a decision maker who has constructed a good baseline approximating model whose free parameters he has estimated to fit historical data well.

The decision maker recognizes that actual outcomes might be generated by one of a vast number of other models that fit the historical data nearly as well as his.

Therefore, he wants to evaluate outcomes under a set of alternative models that are plausible in the sense of being statistically close to his model.

He uses relative entropy to quantify what close means.

{cite}`AHS_2003` and {cite}`BHS_2009` describe links between entropy and large deviations bounds on test statistics for discriminating between models, in particular, statistics that describe the probability of making an error in applying a likelihood ratio test to decide whether model A or model B generated a data record of length $T$.
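To make the detection error probability concrete, here is a small Monte Carlo sketch with two made-up models (it is not the lecture's calibration). Model A and model B are i.i.d. normal densities that differ only in their means, and the detection error probability averages the two ways a likelihood ratio test applied to a sample of length $T$ can select the wrong model.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100                              # sample length (illustrative)
mu_A, mu_B, sigma = 0.0, 0.2, 1.0    # made-up models A and B

def log_likelihood_ratio(x):
    """log L_A(x) - log L_B(x) for i.i.d. N(mu, sigma^2) observations."""
    return np.sum(((x - mu_B) ** 2 - (x - mu_A) ** 2) / (2 * sigma ** 2))

n_sim = 5000
# Data generated by A but the likelihood ratio favors B.
errors_A = np.mean([log_likelihood_ratio(rng.normal(mu_A, sigma, T)) < 0
                    for _ in range(n_sim)])
# Data generated by B but the likelihood ratio favors A.
errors_B = np.mean([log_likelihood_ratio(rng.normal(mu_B, sigma, T)) > 0
                    for _ in range(n_sim)])

detection_error_prob = 0.5 * (errors_A + errors_B)
print(f"detection error probability: {detection_error_prob:.3f}")
```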
For a given sample size, an entropy $\eta$ is associated with a detection error probability, and detection error probabilities can be used to calibrate reasonable values of $\eta$.

{cite}`AHS_2003` and {cite}`HansenSargent2008` also
use detection error probabilities to calibrate reasonable values of the penalty parameter $\theta$ in multiplier preferences.

For a fixed sample size and a fixed $\theta$, they would calculate the worst-case $\hat m_i(\theta)$, an associated entropy $\eta(\theta)$, and an associated detection error probability. In this way they build up a detection error probability as a function of $\theta$.

They then invert this function to calibrate $\theta$ to deliver a reasonable detection error probability.
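The sketch below mimics that calibration loop in a deliberately simplified two-state i.i.d. setting (all parameter values, the log utility, and the Monte Carlo design are assumptions, not the lecture's). For each $\theta$ it forms worst-case probabilities by exponential twisting, simulates the detection error probability between the baseline and worst-case models for a sample of length $T$, and then interpolates to find the $\theta$ that delivers a target detection error probability.

```python
import numpy as np

rng = np.random.default_rng(1)
pi = np.array([0.5, 0.5])          # baseline two-state model (assumed)
u = np.log(np.array([3.0, 1.0]))   # utilities of an assumed consumption plan
T, n_sim = 50, 2000

def worst_case(theta):
    """Worst-case probabilities from exponentially twisting the baseline."""
    m = np.exp(-u / theta)
    return pi * m / (pi @ m)

def detection_error_prob(p_a, p_b):
    """Monte Carlo detection error probability for two i.i.d. two-state models."""
    def error_rate(p_true, p_other):
        x = rng.binomial(1, p_true[1], size=(n_sim, T))   # indicators of state 2
        llr = (x * np.log(p_true[1] / p_other[1])
               + (1 - x) * np.log(p_true[0] / p_other[0])).sum(axis=1)
        return np.mean(llr < 0)   # data look more likely under the other model
    return 0.5 * (error_rate(p_a, p_b) + error_rate(p_b, p_a))

thetas = np.linspace(0.3, 5.0, 15)
probs = np.array([detection_error_prob(pi, worst_case(th)) for th in thetas])

# A larger theta means a smaller worst-case distortion, which is harder to
# detect, so the detection error probability rises (up to simulation noise).
target = 0.2
theta_star = np.interp(target, probs, thetas)
print(f"theta delivering a detection error probability of about {target}: {theta_star:.2f}")
```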
To indicate outcomes from this approach, the following figure