Commit 411cd08

Merge pull request #256 from Smit-create/i-176
Few bug fixes on convergence conditions
2 parents: 841f62e + 6f29c33

11 files changed: +50 -62 lines changed
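
The same fix is applied in each of the solver loops below: the post-loop convergence test `if i == max_iter:` is replaced by `if error > tol:`, and the separate `if verbose and i < max_iter:` branch becomes an `elif verbose:`. Testing the iteration counter conflates "hit the iteration cap" with "failed to converge": a run that reaches the tolerance on exactly the last allowed iteration is reported as a failure and never gets a success message, while checking the achieved error reports exactly what happened. A minimal sketch of the corrected driver loop, written against a generic update map `T` with placeholder defaults rather than any particular lecture's model:

```python
import numpy as np

def solve_fixed_point(T, v_init, tol=1e-4, max_iter=1000, verbose=True, print_skip=25):
    """Iterate v <- T(v) until successive iterates differ by less than `tol`.

    Illustrative sketch only: `T` can be any array-to-array update map.
    """
    v = v_init
    i = 0
    error = tol + 1

    while i < max_iter and error > tol:
        v_new = T(v)
        error = np.max(np.abs(v_new - v))
        i += 1
        if verbose and i % print_skip == 0:
            print(f"Error at iteration {i} is {error}.")
        v = v_new

    # The commit's fix: report convergence based on the achieved error,
    # not on whether the iteration counter happens to equal max_iter.
    if error > tol:
        print("Failed to converge!")
    elif verbose:
        print(f"\nConverged in {i} iterations.")

    return v
```

For example, `solve_fixed_point(lambda v: 0.5 * v + 1.0, np.zeros(3))` converges to the fixed point `2.0` and reports success; under the old counter-based check, the same loop run with `max_iter` set to exactly the number of iterations it needs would have printed "Failed to converge!" despite meeting the tolerance.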

lectures/_static/lecture_specific/coleman_policy_iter/solve_time_iter.py

Lines changed: 2 additions & 3 deletions

@@ -17,10 +17,9 @@ def solve_model_time_iter(model, # Class with model information
             print(f"Error at iteration {i} is {error}.")
         σ = σ_new

-    if i == max_iter:
+    if error > tol:
         print("Failed to converge!")
-
-    if verbose and i < max_iter:
+    elif verbose:
         print(f"\nConverged in {i} iterations.")

     return σ_new

lectures/_static/lecture_specific/optgrowth/solve_model.py

Lines changed: 2 additions & 3 deletions

@@ -21,10 +21,9 @@ def solve_model(og,
             print(f"Error at iteration {i} is {error}.")
         v = v_new

-    if i == max_iter:
+    if error > tol:
         print("Failed to converge!")
-
-    if verbose and i < max_iter:
+    elif verbose:
         print(f"\nConverged in {i} iterations.")

     return v_greedy, v_new

lectures/cake_eating_numerical.md

Lines changed: 7 additions & 9 deletions

@@ -88,11 +88,11 @@ The basic idea is:

 1. Take an arbitary intial guess of $v$.
 1. Obtain an update $w$ defined by
-
+
 $$
 w(x) = \max_{0\leq c \leq x} \{u(c) + \beta v(x-c)\}
 $$
-
+
 1. Stop if $w$ is approximately equal to $v$, otherwise set
    $v=w$ and go back to step 2.

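The hunk above is whitespace-only, but the three numbered steps it quotes are the value function iteration whose stopping rule the later hunks in this file correct. As a standalone illustration of the update in step 2, here is a minimal grid-based Bellman update; the CRRA utility, grid, and parameter values are hypothetical stand-ins, not the lecture's `CakeEating` class:

```python
import numpy as np
from scipy.optimize import minimize_scalar

β, γ = 0.96, 1.5                      # assumed discount factor and CRRA coefficient
x_grid = np.linspace(1e-3, 2.5, 120)  # hypothetical grid of cake sizes

def u(c):
    return c**(1 - γ) / (1 - γ)       # CRRA utility (illustrative choice)

def bellman_update(v):
    "One application of w(x) = max_{0 <= c <= x} {u(c) + β v(x - c)} on the grid."
    w = np.empty_like(v)
    for i, x in enumerate(x_grid):
        # maximize over consumption c, interpolating v off the grid
        objective = lambda c: -(u(c) + β * np.interp(x - c, x_grid, v))
        res = minimize_scalar(objective, bounds=(1e-10, x), method='bounded')
        w[i] = -res.fun
    return w
```

Repeatedly applying `bellman_update` from an arbitrary initial `v`, and stopping once `np.max(np.abs(w - v))` falls below a tolerance, is exactly the loop whose termination check this commit repairs.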

@@ -299,10 +299,9 @@ def compute_value_function(ce,

         v = v_new

-    if i == max_iter:
+    if error > tol:
         print("Failed to converge!")
-
-    if verbose and i < max_iter:
+    elif verbose:
         print(f"\nConverged in {i} iterations.")

     return v_new

@@ -657,10 +656,9 @@ def iterate_euler_equation(ce,

         σ = σ_new

-    if i == max_iter:
+    if error > tol:
         print("Failed to converge!")
-
-    if verbose and i < max_iter:
+    elif verbose:
         print(f"\nConverged in {i} iterations.")

     return σ

@@ -685,4 +683,4 @@ plt.show()
 ```

 ```{solution-end}
-```
+```

lectures/career.md

Lines changed: 4 additions & 5 deletions

@@ -300,12 +300,11 @@ def solve_model(cw,
             print(f"Error at iteration {i} is {error}.")
         v = v_new

-    if i == max_iter and error > tol:
+    if error > tol:
         print("Failed to converge!")

-    else:
-        if verbose:
-            print(f"\nConverged in {i} iterations.")
+    elif verbose:
+        print(f"\nConverged in {i} iterations.")

     return v_new
 ```

@@ -545,4 +544,4 @@ has become more concentrated around the mean, making high-paying jobs
 less realistic.

 ```{solution-end}
-```
+```

lectures/ifp_advanced.md

Lines changed: 2 additions & 3 deletions

@@ -494,10 +494,9 @@ def solve_model_time_iter(model, # Class with model information
             print(f"Error at iteration {i} is {error}.")
         a_vec, σ_vec = np.copy(a_new), np.copy(σ_new)

-    if i == max_iter:
+    if error > tol:
         print("Failed to converge!")
-
-    if verbose and i < max_iter:
+    elif verbose:
         print(f"\nConverged in {i} iterations.")

     return a_new, σ_new

lectures/jv.md

Lines changed: 3 additions & 4 deletions

@@ -362,10 +362,9 @@ def solve_model(jv,
             print(f"Error at iteration {i} is {error}.")
         v = v_new

-    if i == max_iter:
+    if error > tol:
         print("Failed to converge!")
-
-    if verbose and i < max_iter:
+    elif verbose:
         print(f"\nConverged in {i} iterations.")

     return v_new

@@ -569,4 +568,4 @@ This seems reasonable and helps us confirm that our dynamic programming
 solutions are probably correct.

 ```{solution-end}
-```
+```

lectures/mccall_correlated.md

Lines changed: 3 additions & 4 deletions

@@ -281,10 +281,9 @@ def compute_fixed_point(js,
             print(f"Error at iteration {i} is {error}.")
         f_in[:] = f_out

-    if i == max_iter:
+    if error > tol:
         print("Failed to converge!")
-
-    if verbose and i < max_iter:
+    elif verbose:
         print(f"\nConverged in {i} iterations.")

     return f_out

@@ -453,4 +452,4 @@ plt.show()
 The figure shows that more patient individuals tend to wait longer before accepting an offer.

 ```{solution-end}
-```
+```

lectures/mccall_model.md

Lines changed: 13 additions & 13 deletions

@@ -91,7 +91,7 @@ economists to inject randomness into their models.)

 In this lecture, we adopt the following simple environment:

-* $\{s_t\}$ is IID, with $q(s)$ being the probability of observing state $s$ in $\mathbb{S}$ at each point in time,
+* $\{s_t\}$ is IID, with $q(s)$ being the probability of observing state $s$ in $\mathbb{S}$ at each point in time,
 * the agent observes $s_t$ at the start of $t$ and hence knows
 $w_t = w(s_t)$,
 * the set $\mathbb S$ is finite.

@@ -120,7 +120,7 @@ The variable $y_t$ is income, equal to
 * unemployment compensation $c$ when unemployed

 The worker knows that $\{s_t\}$ is IID with common
-distribution $q$ and uses knowledge when he or she computes mathematical expectations of various random variables that are functions of
+distribution $q$ and uses knowledge when he or she computes mathematical expectations of various random variables that are functions of
 $s_t$.

 ### A Trade-Off

@@ -134,7 +134,7 @@ To decide optimally in the face of this trade-off, we use dynamic programming.

 Dynamic programming can be thought of as a two-step procedure that

-1. first assigns values to "states"
+1. first assigns values to "states"
 1. then deduces optimal actions given those values

 We'll go through these steps in turn.

@@ -160,16 +160,16 @@ Let $v^*(s)$ be the optimal value of the problem when $s \in \mathbb{S}$ for a



-Thus, the function $v^*(s)$ is the maximum value of objective
+Thus, the function $v^*(s)$ is the maximum value of objective
 {eq}`objective` for a previously unemployed worker who has offer $w(s)$ in hand and has yet to choose whether to accept it.

 Notice that $v^*(s)$ is part of the **solution** of the problem, so it isn't obvious that it is a good idea to start working on the problem by focusing on $v^*(s)$.

 There is a chicken and egg problem: we don't know how to compute $v^*(s)$ because we don't yet know
 what decisions are optimal and what aren't!

-But it turns out to be a really good idea by asking what properties the optimal value function $v^*(s)$ must have in order it
-to qualify as an optimal value function.
+But it turns out to be a really good idea by asking what properties the optimal value function $v^*(s)$ must have in order it
+to qualify as an optimal value function.

 Think of $v^*$ as a function that assigns to each possible state
 $s$ the maximal expected discounted income stream that can be obtained with that offer in

@@ -192,7 +192,7 @@ for every possible $s$ in $\mathbb S$.
 Notice how the function $v^*(s)$ appears on both the right and left sides of equation {eq}`odu_pv` -- that is why it is called
 a **functional equation**, i.e., an equation that restricts a **function**.

-This important equation is a version of a **Bellman equation**, an equation that is
+This important equation is a version of a **Bellman equation**, an equation that is
 ubiquitous in economic dynamics and other fields involving planning over time.

 The intuition behind it is as follows:

@@ -218,7 +218,7 @@ Once we have this function in hand we can figure out how behave optimally (i.e.

 All we have to do is select the maximal choice on the r.h.s. of {eq}`odu_pv`.

-The optimal action in state $s$ can be thought of as a part of a **policy** that maps a
+The optimal action in state $s$ can be thought of as a part of a **policy** that maps a
 state into an action.

 Given *any* $s$, we can read off the corresponding best choice (accept or

@@ -351,7 +351,7 @@ Moreover, it's immediate from the definition of $T$ that this fixed
 point is $v^*$.

 A second implication of the Banach contraction mapping theorem is that
-$\{ T^k v \}$ converges to the fixed point $v^*$ regardless of the initial
+$\{ T^k v \}$ converges to the fixed point $v^*$ regardless of the initial
 $v \in \mathbb R^n$.

 ### Implementation

@@ -386,7 +386,7 @@ We are going to use Numba to accelerate our code.

 * See, in particular, the discussion of `@jitclass` in [our lecture on Numba](https://python-programming.quantecon.org/numba.html).

-The following helps Numba by providing some information about types
+The following helps Numba by providing some information about types

 ```{code-cell} python3
 mccall_data = [

@@ -490,15 +490,15 @@ def compute_reservation_wage(mcm,
     n = len(w)
     v = w / (1 - β)  # initial guess
     v_next = np.empty_like(v)
-    i = 0
+    j = 0
     error = tol + 1
-    while i < max_iter and error > tol:
+    while j < max_iter and error > tol:

         for i in range(n):
             v_next[i] = np.max(mcm.state_action_values(i, v))

         error = np.max(np.abs(v_next - v))
-        i += 1
+        j += 1

         v[:] = v_next  # copy contents into v

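The `compute_reservation_wage` hunk just above addresses a different bug: the outer iteration counter `i` was also used as the index of the inner `for i in range(n):` loop, so each sweep reset the counter and the `while i < max_iter` safeguard could not track the number of sweeps actually performed. Renaming the outer counter to `j` keeps the two indices independent. A stripped-down sketch of the same pitfall, using hypothetical function names rather than the lecture's code:

```python
def count_sweeps_buggy(n_inner=5, max_sweeps=3):
    """Intended to perform `max_sweeps` outer sweeps, but the inner loop
    reuses `i`, so the outer counter is clobbered on every sweep."""
    i = 0
    while i < max_sweeps:
        for i in range(n_inner):   # shadows the outer counter
            pass                   # ... inner work would go here ...
        i += 1                     # now equals n_inner, not the number of sweeps done
    return i                       # exits after a single sweep when n_inner >= max_sweeps,
                                   # and never terminates when n_inner < max_sweeps

def count_sweeps_fixed(n_inner=5, max_sweeps=3):
    """Same loop with separate names: `j` counts sweeps, `i` indexes the grid."""
    j = 0
    while j < max_sweeps:
        for i in range(n_inner):
            pass
        j += 1
    return j                       # always max_sweeps
```

In the lecture's loop the error-based stopping condition still did the real work, but with the shadowed counter the `max_iter` cap either binds after the first sweep or never binds at all, depending on how the grid size compares with `max_iter`; the rename makes the cap behave as intended.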

lectures/navy_captain.md

Lines changed: 5 additions & 6 deletions

@@ -538,7 +538,7 @@ def solve_model(wf, tol=1e-4, max_iter=1000):
         i += 1
         h = h_new

-    if i == max_iter:
+    if error > tol:
         print("Failed to converge!")

     return h_new

@@ -621,25 +621,25 @@ conditioning on knowing for sure that nature has selected $f_{0}$,
 in the first case, or $f_{1}$, in the second case.

 1. under $f_{0}$,
-
+
 $$
 V^{0}\left(\pi\right)=\begin{cases}
 0 & \text{if }\alpha\leq\pi,\\
 c+EV^{0}\left(\pi^{\prime}\right) & \text{if }\beta\leq\pi<\alpha,\\
 \bar L_{1} & \text{if }\pi<\beta.
 \end{cases}
 $$
-
+
 1. under $f_{1}$
-
+
 $$
 V^{1}\left(\pi\right)=\begin{cases}
 \bar L_{0} & \text{if }\alpha\leq\pi,\\
 c+EV^{1}\left(\pi^{\prime}\right) & \text{if }\beta\leq\pi<\alpha,\\
 0 & \text{if }\pi<\beta.
 \end{cases}
 $$
-
+

 where
 $\pi^{\prime}=\frac{\pi f_{0}\left(z^{\prime}\right)}{\pi f_{0}\left(z^{\prime}\right)+\left(1-\pi\right)f_{1}\left(z^{\prime}\right)}$.

@@ -1118,4 +1118,3 @@ plt.title('Uncond. distribution of log likelihood ratio at frequentist t')

 plt.show()
 ```
-

lectures/odu.md

Lines changed: 6 additions & 9 deletions

@@ -149,10 +149,10 @@



-The worker's time $t$ subjective belief about the the distribution of $W_t$ is
+The worker's time $t$ subjective belief about the the distribution of $W_t$ is

 $$
-\pi_t f + (1 - \pi_t) g,
+\pi_t f + (1 - \pi_t) g,
 $$

 where $\pi_t$ updates via

@@ -427,10 +427,9 @@ def solve_model(sp,
             print(f"Error at iteration {i} is {error}.")
         v = v_new

-    if i == max_iter:
+    if error > tol:
         print("Failed to converge!")
-
-    if verbose and i < max_iter:
+    elif verbose:
         print(f"\nConverged in {i} iterations.")


@@ -731,10 +730,9 @@ def solve_wbar(sp,
             print(f"Error at iteration {i} is {error}.")
         w = w_new

-    if i == max_iter:
+    if error > tol:
         print("Failed to converge!")
-
-    if verbose and i < max_iter:
+    elif verbose:
         print(f"\nConverged in {i} iterations.")

     return w_new

@@ -1178,4 +1176,3 @@ after having acquired less information about the wage distribution.
 ```{code-cell} python3
 job_search_example(1, 1, 3, 1.2, c=0.1)
 ```
-

lectures/wald_friedman.md

Lines changed: 3 additions & 3 deletions

@@ -526,7 +526,7 @@ def solve_model(wf, tol=1e-4, max_iter=1000):
         i += 1
         h = h_new

-    if i == max_iter:
+    if error > tol:
         print("Failed to converge!")

     return h_new

@@ -902,11 +902,11 @@ Wald summarizes Neyman and Pearson's setup as follows:

 > Neyman and Pearson show that a region consisting of all samples
 > $(z_1, z_2, \ldots, z_n)$ which satisfy the inequality
->
+>
 > $$
 \frac{ f_1(z_1) \cdots f_1(z_n)}{f_0(z_1) \cdots f_0(z_n)} \geq k
 $$
->
+>
 > is a most powerful critical region for testing the hypothesis
 > $H_0$ against the alternative hypothesis $H_1$. The term
 > $k$ on the right side is a constant chosen so that the region
