Logit-normal distribution #2100
Conversation
```python
        Scale parameter (tau > 0).
        """

    def __init__(self, mu=0.5, tau=None, *args, **kwargs):
```
We switched to parameterizing by the sd by default.
See the other distribution unit tests, although I'm not sure there is a logit-normal in scipy?
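For context on the sd-by-default parameterization: pymc3 accepts either the precision `tau` or the standard deviation `sd` and derives one from the other via `tau = 1/sd**2`. A minimal sketch of that relationship (this paraphrases `get_tau_sd`; it is not the actual pymc3 implementation):

```python
def get_tau_sd_sketch(tau=None, sd=None):
    """Illustrative stand-in for pymc3's get_tau_sd: accept either the
    precision (tau) or the standard deviation (sd) and return both,
    defaulting to a standard normal when neither is given."""
    if tau is None and sd is None:
        sd = 1.0
    if sd is None:
        sd = tau ** -0.5   # sd from precision
    else:
        tau = sd ** -2.0   # precision from sd
    return tau, sd

tau, sd = get_tau_sd_sketch(sd=2.0)  # precision 0.25, sd 2.0
```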
Ping @denadai2
What's the status of this PR?
Inactive, it seems.
Unfortunately I didn't have time to pursue it. I'm trying to make pymc3 work a bit better with theano (float32 vs. float64).
@denadai2 No worries, maybe someone else will pick it up. float32 is a worthwhile endeavour!
I had a quick look and couldn't find a logit-normal in scipy. Does anyone have another solution?
@springcoil I guess that if you want the pdf you can do something like `stats.norm.pdf(logit(x), loc=0, scale=1) * 1/(x * (1-x))`, and if you want to generate random numbers, try `expit(stats.norm.rvs(loc=0, scale=1))`.
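The change-of-variables recipe above can be made concrete. A minimal numpy/scipy sketch (function names are illustrative, not part of scipy): the density of `expit(Z)` for normal `Z` is the normal pdf evaluated at `logit(x)` times the Jacobian `1/(x(1-x))`, and sampling is just `expit` applied to normal draws.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.special import logit, expit

def logitnormal_pdf(x, mu=0.0, sd=1.0):
    """Logit-normal pdf via change of variables from a normal."""
    return stats.norm.pdf(logit(x), loc=mu, scale=sd) / (x * (1 - x))

def logitnormal_rvs(mu=0.0, sd=1.0, size=None, random_state=None):
    """Sample by pushing normal draws through the inverse logit (expit)."""
    z = stats.norm.rvs(loc=mu, scale=sd, size=size, random_state=random_state)
    return expit(z)

# sanity checks: the density integrates to 1 on (0, 1),
# and samples land strictly inside the unit interval
area, _ = quad(logitnormal_pdf, 0.0, 1.0)
samples = logitnormal_rvs(size=10000, random_state=42)
```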
```diff
@@ -643,6 +644,47 @@ def logp(self, value):
                      tau > 0)


+class Logitnormal(PositiveContinuous):
```
I suppose it is unit continuous.
What is the status of the PR? This distribution is very practical and would be good to have.
Hi @denadai2, I took your PR and added a few lines to your code; the biggest differences are:
Do you want to continue from here with this PR? Tests are still missing.

```python
class LogitNormal(UnitContinuous):
    R"""
    Logit-Normal log-likelihood.

    .. math::

       f(x \mid \mu, \tau) =
           \frac{1}{x(1-x)} \sqrt{\frac{\tau}{2\pi}}
           \exp\left\{ -\frac{\tau}{2} (\operatorname{logit}(x)-\mu)^2 \right\}

    ========  =========================================================================
    Support   :math:`x \in (0, 1)`
    Mean      no analytical solution
    Variance  no analytical solution
    ========  =========================================================================

    .. plot::

        import matplotlib.pyplot as plt
        import numpy as np
        import scipy.stats as st
        from scipy.special import logit
        x = np.linspace(0, 1, 1000)
        fig, ax = plt.subplots()
        f = lambda mu, sd: st.norm.pdf(logit(x), loc=mu, scale=sd) * 1 / (x * (1 - x))
        plot_pdf = lambda a, b: ax.plot(x, f(a, b), label=r'$\mu$={0}, $\sigma$={1}'.format(a, b))
        plot_pdf(0, 0.3)
        plot_pdf(0, 1)
        plot_pdf(0, 2)
        plot_pdf(1, 1)
        plt.legend(loc='upper right', frameon=False)
        ax.set(xlim=[0, 1], ylim=[0, 6], xlabel='x', ylabel='f(x)')
        plt.show()

    Parameters
    ----------
    mu : float
        Location parameter.
    sd : float
        Scale parameter (sd > 0).
    tau : float
        Scale parameter (tau > 0).
    """

    def __init__(self, mu=0, sd=None, tau=None, **kwargs):
        self.mu = mu = tt.as_tensor_variable(mu)
        tau, sd = get_tau_sd(tau=tau, sd=sd)
        self.sd = tt.as_tensor_variable(sd)
        self.tau = tau = tt.as_tensor_variable(tau)
        self.median = pm.math.sigmoid(mu)

        assert_negative_support(sd, 'sd', 'LogitNormal')
        assert_negative_support(tau, 'tau', 'LogitNormal')

        super(LogitNormal, self).__init__(**kwargs)

    def random(self, point=None, size=None, repeat=None):
        mu, _, sd = draw_values([self.mu, self.tau, self.sd], point=point)
        return expit(generate_samples(stats.norm.rvs, loc=mu, scale=sd,
                                      dist_shape=self.shape,
                                      size=size))

    def logp(self, value):
        mu = self.mu
        tau = self.tau
        return bound(-0.5 * tau * (logit(value) - mu) ** 2
                     + 0.5 * tt.log(tau / (2. * np.pi))
                     - tt.log(value * (1 - value)),
                     value > 0, value < 1, tau > 0)
```
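Since scipy has no built-in logit-normal, the missing unit test could compare the closed-form `logp` expression against a reference density built from `scipy.stats.norm` plus the Jacobian term. A minimal numpy-only sketch (function names are illustrative; `tau = 1/sd**2` as in pymc3):

```python
import numpy as np
from scipy import stats
from scipy.special import logit

def reference_logp(value, mu=0.0, tau=1.0):
    """Closed-form log-density from the docstring, in plain numpy."""
    return (-0.5 * tau * (logit(value) - mu) ** 2
            + 0.5 * np.log(tau / (2.0 * np.pi))
            - np.log(value * (1.0 - value)))

def scipy_logp(value, mu=0.0, tau=1.0):
    """Same density via scipy's normal logpdf plus the logit Jacobian."""
    sd = 1.0 / np.sqrt(tau)
    return (stats.norm.logpdf(logit(value), loc=mu, scale=sd)
            - np.log(value * (1.0 - value)))

# the two formulations should agree on a grid of interior points
vals = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
max_diff = np.max(np.abs(reference_logp(vals, mu=1.0, tau=2.5)
                         - scipy_logp(vals, mu=1.0, tau=2.5)))
```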
@denadai2 any update on this?
Closed in favor of #2877.
I'm sorry, but I need some help finishing this... :(