fix autoreparam because dims are no longer static #363
Changes from all commits: c87db47, 2d80fdf, 9ec41cd, 949d2e9, a3907f0, d2b3584, c2cdbdc, 75c98ec, a9f582c, 82a1a24, d5355a5, bcdea46
@@ -1,3 +1,4 @@
+import logging
 from dataclasses import dataclass
 from functools import singledispatch
 from typing import Dict, List, Optional, Sequence, Tuple, Union

@@ -8,7 +9,6 @@
 import pytensor.tensor as pt
 import scipy.special
 from pymc.distributions import SymbolicRandomVariable
-from pymc.exceptions import NotConstantValueError
 from pymc.logprob.transforms import Transform
 from pymc.model.fgraph import (
     ModelDeterministic,

@@ -19,10 +19,12 @@
     model_from_fgraph,
     model_named,
 )
-from pymc.pytensorf import constant_fold, toposort_replace
+from pymc.pytensorf import toposort_replace
 from pytensor.graph.basic import Apply, Variable
 from pytensor.tensor.random.op import RandomVariable

+_log = logging.getLogger("pmx")
+

 @dataclass
 class VIP:

@@ -174,15 +176,19 @@ def vip_reparam_node(
 ) -> Tuple[ModelDeterministic, ModelNamed]:
     if not isinstance(node.op, RandomVariable | SymbolicRandomVariable):
         raise TypeError("Op should be RandomVariable type")
-    rv = node.default_output()
-    try:
-        [rv_shape] = constant_fold([rv.shape])
-    except NotConstantValueError:
-        raise ValueError("Size should be static for autoreparametrization.")
+    # FIXME: This is wrong when size is None
+    _, size, *_ = node.inputs
+    eval_size = size.eval(mode="FAST_COMPILE")
+    if eval_size is not None:

Review thread on the size.eval call:
- eval_size will never be None.
- Wait, why are you evaling size instead of rv.shape? That's the one you should use.
- I was afraid I would get a dependence on the original RV; ah, I do not seem to.
- If you use eval you can only get a numerical output.
- Do you need the shape of the transformed or the original variable? If it is the transformed, you could request model.initial_point before starting the dispatch process. Alternatively, we should be able to use shape inference, but may need to implement the
- Can you point me to which test failed when you tried
- This looks like an interesting solution.
- Seems to give a point in the sampling space; I need the untransformed one.

+        rv_shape = tuple(eval_size)
+    else:
+        rv_shape = ()
+    lam_name = f"{name}::lam_logit__"
+    _log.debug(f"Creating {lam_name} with shape of {rv_shape}")
     logit_lam_ = pytensor.shared(
         np.zeros(rv_shape),
         shape=rv_shape,
-        name=f"{name}::lam_logit__",
+        name=lam_name,
     )
     logit_lam = model_named(logit_lam_, *dims)
     lam = pt.sigmoid(logit_lam)

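The FIXME in the diff and the review thread above both concern where the reparameterization shape should come from: the RV node's size input or its symbolic shape. A minimal sketch of the difference in plain pytensor (illustrative names, not the PR's code); when no explicit size is given, the draw's shape comes entirely from parameter broadcasting, which the symbolic shape captures but the size input does not:

    import pytensor.tensor.random as ptr

    # No explicit size: the draw's shape comes from broadcasting the parameters.
    x = ptr.normal([0.0, 1.0, 2.0], 1.0)

    # The RandomVariable node's size input carries no shape information here,
    # while evaluating the symbolic shape recovers the full broadcast shape.
    _, size_input, *_ = x.owner.inputs
    print(size_input)                       # the symbolic size input (no user-provided size)
    print(tuple(map(int, x.shape.eval())))  # (3,)

On the model.initial_point suggestion in the same thread: PyMC's Model.initial_point() returns values keyed by the transformed value variables (e.g. s_log__ for a HalfNormal named s), i.e. a point in the sampling space, which matches the last comment in the thread.
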
Review thread on multivariate support:
- Do you want to raise NotImplementedError for op.ndim_supp > 0? Not sure if for those you would want one lambda per shape or per size; mentioning it because shape and size will always be different for multivariate RVs.
- It will not pass the dispatch anyway, so it is not needed.
- ndim_supp > 0 will never be supported, I think, but I'm not sure; maybe it will, e.g. Dirichlet, there are some reparameterizations with Gamma: https://stats.stackexchange.com/questions/548620/reparameterization-trick-for-the-dirichlet-distribution
- Per size for sure, 100%. I believe that if I work with a multivariate RV, it will be the only way to have one lambda per size (one per independent draw).
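
On the shape-versus-size question for multivariate RVs: size counts independent draws, while the draw's shape also includes the support dimension, so "one lambda per size" means one lambda per independent draw. A minimal sketch in pytensor (illustrative only, not part of the PR):

    import pytensor.tensor.random as ptr

    # Three independent draws of a 4-dimensional Dirichlet: size=(3,) counts the
    # draws, while the full shape also includes the support dimension of length 4.
    d = ptr.dirichlet([1.0, 1.0, 1.0, 1.0], size=(3,))
    print(tuple(map(int, d.shape.eval())))  # (3, 4)

A lambda defined per independent draw would therefore have shape (3,) here rather than (3, 4).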