Fix quantecon namespace updates #138

Merged 1 commit on Jun 8, 2023
22 changes: 11 additions & 11 deletions in lectures/robustness.md
@@ -3,8 +3,10 @@ jupytext:
   text_representation:
     extension: .md
     format_name: myst
+    format_version: 0.13
+    jupytext_version: 1.14.5
 kernelspec:
-  display_name: Python 3
+  display_name: Python 3 (ipykernel)
   language: python
   name: python3
 ---
@@ -29,10 +31,9 @@ kernelspec:
 
 In addition to what's in Anaconda, this lecture will need the following libraries:
 
-```{code-cell} ipython
----
-tags: [hide-output]
----
+```{code-cell} ipython3
+:tags: [hide-output]
+
 !pip install --upgrade quantecon
 ```
@@ -79,7 +80,7 @@ In reading this lecture, please don't think that our decision-maker is paranoid
 
 Let's start with some imports:
 
-```{code-cell} ipython
+```{code-cell} ipython3
 import pandas as pd
 import numpy as np
 from scipy.linalg import eig
@@ -941,7 +942,7 @@ We compute value-entropy correspondences for two policies
 
 The code for producing the graph shown above, with blue being for the robust policy, is as follows
 
-```{code-cell} python3
+```{code-cell} ipython3
 # Model parameters
 
 a_0 = 100
@@ -987,7 +988,7 @@ def evaluate_policy(θ, F):
     as well as the entropy level.
     """
 
-    rlq = qe.robustlq.RBLQ(Q, R, A, B, C, β, θ)
+    rlq = qe.RBLQ(Q, R, A, B, C, β, θ)
     K_F, P_F, d_F, O_F, o_F = rlq.evaluate_F(F)
     x0 = np.array([[1.], [0.], [0.]])
     value = - x0.T @ P_F @ x0 - d_F
@@ -1044,11 +1045,11 @@ def value_and_entropy(emax, F, bw, grid_size=1000):
 
 
 # Compute the optimal rule
-optimal_lq = qe.lqcontrol.LQ(Q, R, A, B, C, beta=β)
+optimal_lq = qe.LQ(Q, R, A, B, C, beta=β)
 Po, Fo, do = optimal_lq.stationary_values()
 
 # Compute a robust rule given θ
-baseline_robust = qe.robustlq.RBLQ(Q, R, A, B, C, β, θ)
+baseline_robust = qe.RBLQ(Q, R, A, B, C, β, θ)
 Fb, Kb, Pb = baseline_robust.robust_rule()
 
 # Check the positive definiteness of worst-case covariance matrix to
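As an aside on the namespace change in the hunks above: writing `qe.RBLQ` instead of `qe.robustlq.RBLQ` works because a package's top-level `__init__` can re-export classes from its submodules. Here is a minimal, stdlib-only sketch of that pattern, using an in-memory stand-in package (the names `qe_demo` and the placeholder `RBLQ` are hypothetical, not the real quantecon source):

```python
import sys
import types

# Build a tiny stand-in package in memory. This mimics the re-export
# pattern, not the actual quantecon implementation.
robustlq = types.ModuleType("qe_demo.robustlq")

class RBLQ:
    """Placeholder for a robust LQ control class."""

robustlq.RBLQ = RBLQ

pkg = types.ModuleType("qe_demo")
pkg.robustlq = robustlq
pkg.RBLQ = robustlq.RBLQ  # the top-level re-export: pkg.RBLQ now works

sys.modules["qe_demo"] = pkg
sys.modules["qe_demo.robustlq"] = robustlq

import qe_demo

# Both the old fully qualified spelling and the new flat spelling
# resolve to the same class object, so call sites can be shortened.
assert qe_demo.RBLQ is qe_demo.robustlq.RBLQ
```

Because both attributes point at one class object, updating call sites from the submodule path to the flat path is purely cosmetic, which is why this PR only touches the spelling of the constructor calls.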
@@ -1189,4 +1190,3 @@ latter is just $\hat P$.
 ```{hint}
 Use the fact that $\hat P = \mathcal B( \mathcal D( \hat P))$
 ```
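For readers attempting the exercise that hint belongs to, one standard way to exploit a fixed-point identity of this kind is successive approximation on the composed map. This is only a sketch under the assumption that $\mathcal B$ and $\mathcal D$ are the operators defined earlier in the lecture and that the iteration converges:

```{math}
P_{k+1} = \mathcal B\bigl(\mathcal D(P_k)\bigr),
\qquad k = 0, 1, 2, \ldots
```

If the iterates converge, the limit is a fixed point of $\mathcal B \circ \mathcal D$, which the hint identifies with $\hat P$.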