[lake_model] Comments #169

Open

Description

@oyamad
  • This part https://github.com/QuantEcon/lecture-python.myst/blame/main/lectures/lake_model.md#L260-L274 is not very wise: it computes the stationary distribution of a 2-state (column-)stochastic matrix by iteration. As shown in the Finite Markov Chains chapter, and as will be discussed in the current chapter, it can simply be computed exactly (up to floating-point error) by

    def rate_steady_state(self):
        # Stationary distribution of the 2x2 column-stochastic matrix A_hat:
        # proportional to the off-diagonal entries
        x = np.array([self.A_hat[0, 1], self.A_hat[1, 0]])
        return x / x.sum()
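    To illustrate the point, here is a standalone sketch (the matrix values are illustrative, not taken from the lecture) showing that the exact formula agrees with the iterative approach being criticized:

        import numpy as np

        # An illustrative 2-state column-stochastic matrix (columns sum to one)
        A_hat = np.array([[0.9, 0.3],
                          [0.1, 0.7]])

        # Exact stationary distribution: for a 2x2 column-stochastic matrix,
        # it is proportional to (A_hat[0, 1], A_hat[1, 0])
        x = np.array([A_hat[0, 1], A_hat[1, 0]])
        exact = x / x.sum()

        # The iterative approach used in the lecture's current code
        approx = np.array([0.5, 0.5])
        for _ in range(1000):
            approx = A_hat @ approx

        print(exact)                        # [0.75 0.25]
        print(np.allclose(exact, approx))   # True: both methods agree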
  • The discussion in "Aggregate Dynamics":
    This part is hard to read: it repeats the discussion from "Finite Markov Chains" in a different language without any indication. From the discussion in "Finite Markov Chains", we know that

    • the (column-)stochastic matrix A_hat has a stationary distribution (or equivalently, it has a nonnegative eigenvector with eigenvalue one); and
    • since A_hat is (irreducible and) aperiodic (or equivalently, its other eigenvalues are less than one in magnitude), we have convergence from any initial distribution to the (unique) stationary distribution.

    In my view, this new language (with eigenvalues) is not necessary, and it would be enough to refer to the previous discussion in "Finite Markov Chains" (as suggested below).
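    The two formulations above can be checked numerically; this sketch (again with illustrative matrix values) verifies both the eigenvalue statement and convergence from an arbitrary initial distribution:

        import numpy as np

        # Illustrative 2-state column-stochastic matrix (not from the lecture)
        A_hat = np.array([[0.9, 0.3],
                          [0.1, 0.7]])

        # One eigenvalue equals one; since the chain is irreducible and
        # aperiodic, the other is less than one in magnitude
        eigvals, eigvecs = np.linalg.eig(A_hat)
        assert np.isclose(eigvals.max(), 1.0)
        assert np.abs(np.sort(eigvals)[0]) < 1.0

        # The eigenvector for eigenvalue one, normalized to sum to one,
        # is the stationary distribution
        stationary = eigvecs[:, np.argmax(eigvals)]
        stationary = stationary / stationary.sum()

        # Convergence from an arbitrary initial distribution, as discussed
        # in the Finite Markov Chains chapter
        psi = np.array([0.2, 0.8])
        for _ in range(200):
            psi = A_hat @ psi
        print(np.allclose(psi, stationary))  # True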

  • There are a few places where the inner product of two vectors (1d ndarrays) a and b is computed by

    np.sum(a * b)

    instead of

    a @ b

    Is there any purpose for this?
