Random walk, random method #3682

Merged: 10 commits, Nov 18, 2019
Changes from 8 commits
2 changes: 2 additions & 0 deletions RELEASE-NOTES.md
@@ -12,6 +12,8 @@
- Sampling from variational approximation now allows for alternative trace backends [#3550].
- Infix `@` operator now works with random variables and deterministics [#3619](https://github.com/pymc-devs/pymc3/pull/3619).
- [ArviZ](https://arviz-devs.github.io/arviz/) is now a requirement, and handles plotting, diagnostics, and statistical checks.
- Can use `GaussianRandomWalk` in `sample_prior_predictive` [#3682](https://github.com/pymc-devs/pymc3/pull/3682)
- Now 11 years of S&P returns in data set [#3682](https://github.com/pymc-devs/pymc3/pull/3682)

### Maintenance
- Moved math operations out of `Rice`, `TruncatedNormal`, `Triangular` and `ZeroInflatedNegativeBinomial` `random` methods. Math operations on values returned by `draw_values` might not broadcast well, and all the `size` aware broadcasting is left to `generate_samples`. Fixes [#3481](https://github.com/pymc-devs/pymc3/issues/3481) and [#3508](https://github.com/pymc-devs/pymc3/issues/3508)
394 changes: 315 additions & 79 deletions docs/source/notebooks/stochastic_volatility.ipynb

Large diffs are not rendered by default.

62 changes: 50 additions & 12 deletions pymc3/distributions/timeseries.py
@@ -1,3 +1,4 @@
+from scipy import stats
 import theano.tensor as tt
 from theano import scan

@@ -166,17 +167,22 @@ def logp(self, value):


 class GaussianRandomWalk(distribution.Continuous):
-    R"""
-    Random Walk with Normal innovations
+    R"""Random Walk with Normal innovations
 
     Parameters
     ----------
     mu: tensor
         innovation drift, defaults to 0.0
+        For vector valued mu, first dimension must match shape of the random walk, and
+        the first element will be discarded (since there is no innovation in the first timestep)
     sigma : tensor
         sigma > 0, innovation standard deviation (only required if tau is not specified)
+        For vector valued sigma, first dimension must match shape of the random walk, and
+        the first element will be discarded (since there is no innovation in the first timestep)
     tau : tensor
         tau > 0, innovation precision (only required if sigma is not specified)
+        For vector valued tau, first dimension must match shape of the random walk, and
+        the first element will be discarded (since there is no innovation in the first timestep)
     init : distribution
         distribution for initial value (Defaults to Flat())
     """
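The random-walk log-density the class computes can be sketched in plain NumPy. This is an illustrative stand-in, not the PyMC3 implementation: it mirrors the `[:-1]` slicing used in `logp` (the step from `x[i]` to `x[i+1]` is `Normal(x[i] + mu[i], sigma[i])`), and the hypothetical `init_logp` argument stands in for `init.logp(x[0])`:

```python
import numpy as np
from scipy import stats

def grw_logp(x, mu, sigma, init_logp=0.0):
    """Illustrative NumPy version of the random-walk log-density.
    mu/sigma are broadcast to x's shape and their trailing entry is
    dropped, since the last timestep emits no further innovation."""
    mu = np.broadcast_to(mu, x.shape)[:-1]
    sigma = np.broadcast_to(sigma, x.shape)[:-1]
    # each step x[i+1] is Normal(x[i] + mu[i], sigma[i])
    innov_like = stats.norm(x[:-1] + mu, sigma).logpdf(x[1:])
    return init_logp + innov_like.sum()

x = np.array([0.0, 0.5, 1.5])
print(grw_logp(x, mu=0.0, sigma=1.0))
```

With scalar parameters this reduces to summing standard-normal log-densities of the increments `np.diff(x)`.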
@@ -187,9 +193,10 @@ def __init__(self, tau=None, init=Flat.dist(), sigma=None, mu=0.,
         if sd is not None:
             sigma = sd
         tau, sigma = get_tau_sigma(tau=tau, sigma=sigma)
-        self.tau = tau = tt.as_tensor_variable(tau)
-        self.sigma = self.sd = sigma = tt.as_tensor_variable(sigma)
-        self.mu = mu = tt.as_tensor_variable(mu)
+        self.tau = tt.as_tensor_variable(tau)
+        sigma = tt.as_tensor_variable(sigma)
+        self.sigma = self.sd = sigma
+        self.mu = tt.as_tensor_variable(mu)
         self.init = init
         self.mean = tt.as_tensor_variable(0.)
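The `get_tau_sigma` call resolves the usual precision/standard-deviation duality, `tau = 1 / sigma**2`. A hypothetical plain-Python stand-in for that helper (not PyMC3's actual code, which works on tensors):

```python
import numpy as np

def get_tau_sigma(tau=None, sigma=None):
    """Illustrative stand-in: given at most one of tau (precision) or
    sigma (standard deviation), derive the other via tau = 1/sigma**2;
    default to sigma = 1 when neither is given."""
    if tau is None and sigma is None:
        sigma = 1.0
    if sigma is not None:
        tau = 1.0 / sigma**2
    else:
        sigma = 1.0 / np.sqrt(tau)
    return tau, sigma

print(get_tau_sigma(sigma=2.0))  # (0.25, 2.0)
```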

@@ -206,15 +213,46 @@ def logp(self, x):
         -------
         TensorVariable
         """
-        sigma = self.sigma
-        mu = self.mu
-        init = self.init
-        x_im1 = x[:-1]
-        x_i = x[1:]
-        innov_like = Normal.dist(mu=x_im1 + mu, sigma=sigma).logp(x_i)
-        return init.logp(x[0]) + tt.sum(innov_like)
+        if x.ndim > 0:
+            x_im1 = x[:-1]
+            x_i = x[1:]
+            if self.sigma.ndim > 0:
+                sigma = self.sigma[:-1]
+            else:
+                sigma = self.sigma
+            if self.mu.ndim > 0:
+                mu = self.mu[:-1]
+            else:
+                mu = self.mu
+
+            innov_like = Normal.dist(mu=x_im1 + mu, sigma=sigma).logp(x_i)
+            return self.init.logp(x[0]) + tt.sum(innov_like)
+        return self.init.logp(x)
+
+    def random(self, point=None, size=None):
+        """Draw random values from GaussianRandomWalk.
+
+        Parameters
+        ----------
+        point : dict, optional
+            Dict of variable values on which random values are to be
+            conditioned (uses default point if not specified).
+        size : int, optional
+            Desired size of random sample (returns one sample if not
+            specified).
+
+        Returns
+        -------
+        array
+        """
+        sigma, mu = distribution.draw_values([self.sigma, self.mu], point=point, size=size)
+        return distribution.generate_samples(self._random, sigma=sigma, mu=mu, size=size,
+                                             dist_shape=self.shape)
+
+    def _random(self, sigma, mu, size):
+        """Implement a Gaussian random walk as a cumulative sum of normals."""
+        rv = stats.norm(mu, sigma)
+        return rv.rvs(size).cumsum(axis=0)
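The technique `_random` relies on, drawing a Gaussian random walk as the cumulative sum of Normal innovations, can be sketched standalone with NumPy. This is an illustration outside PyMC3 (the function name and the fixed starting point of 0, standing in for the `init` distribution, are assumptions), and it accumulates over the last axis since time runs last here:

```python
import numpy as np

def grw_prior_draws(mu, sigma, steps, size, seed=None):
    """Draw `size` Gaussian random walks of length `steps` by cumulatively
    summing Normal(mu, sigma) innovations, starting from 0 (a fixed
    stand-in for the `init` distribution)."""
    rng = np.random.default_rng(seed)
    innovations = rng.normal(loc=mu, scale=sigma, size=(size, steps))
    # time runs along the last axis here, so accumulate over axis=-1
    return innovations.cumsum(axis=-1)

draws = grw_prior_draws(mu=0.0, sigma=1.0, steps=100, size=5, seed=42)
print(draws.shape)  # (5, 100)
```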
Review comment (Member):
Careful with axis=0. If mu is an RV, its drawn value will have the size prepend, and that will shift the time series axis further to the right.
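The caveat above can be demonstrated with plain NumPy (an illustrative sketch, not PyMC3's internals): once a `size` dimension is prepended to the drawn innovations, `cumsum(axis=0)` sums across draws rather than along time.

```python
import numpy as np

rng = np.random.default_rng(0)
steps, size = 4, 3

# Scalar parameters: innovations have shape (steps,), time is axis 0,
# so cumsum(axis=0) accumulates over time as intended.
innov = rng.normal(0.0, 1.0, size=steps)
walk = innov.cumsum(axis=0)

# With a prepended `size` dimension the shape is (size, steps) and time
# moves to the last axis: cumsum(axis=0) now sums across draws.
innov_batch = rng.normal(0.0, 1.0, size=(size, steps))
across_draws = innov_batch.cumsum(axis=0)   # not a random walk per draw
per_draw = innov_batch.cumsum(axis=-1)      # one walk per draw

print(walk.shape, across_draws.shape, per_draw.shape)  # (4,) (3, 4) (3, 4)
```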


     def _repr_latex_(self, name=None, dist=None):
         if dist is None: