Adapt and run blackbox likelihood tutorial #28
Conversation
Skimming over the notebook in this PR, it seems quite nice!
This is super thorough @OriolAbril, thanks a lot 😍
I left a few comments below -- hope it's useful!
AlexAndorra commented on 2021-01-24T11:15:54Z (via ReviewNB): "We can now check that the gradient Op works
AlexAndorra commented on 2021-01-24T11:15:55Z (via ReviewNB): Ideally, I would add a few lines explaining what to make of these summaries.
MarcoGorelli commented on 2021-01-29T12:13:10Z (via ReviewNB): the "custom distributions" link is dead.
Thanks for doing this. I too was unable to run the Cython version; your work here is much appreciated.
Ping @OriolAbril
I have this in mind, but as we won't add the
This is really cool!!
Hello, thanks for the awesome tutorial! I have a question about the `normal_gradients` function: I don't understand how it takes the gradient of the likelihood. Also, this comment is present in the function: "the derivatives are calculated using the central difference, using an iterative method to check that the values converge as step size decreases." However, I do not see how `normal_gradients` accomplishes this; there does not seem to be any iteration. Can you please explain how the `normal_gradients` function works? Thank you!
Hi, can you ask on https://discourse.pymc.io? That way it will be easier for other people with the same question to find, and you might also get answers faster, since more people follow the forum than this PR.
Hi Rahul, I couldn't find your post on discourse.pymc.io. I also had the same question, but after looking at the notebook again I understand what the function is doing. In contrast to the description, `normal_gradients` does not use central differences or any iteration: it evaluates the symbolic derivative of the log likelihood directly.

**Explanation of what I mean by symbolic derivative**

We are trying to calculate the gradient of the log likelihood, which has been defined by the following functions:

```python
def my_model(theta, x):
    m, c = theta
    return m * x + c


def my_loglike(theta, x, data, sigma):
    model = my_model(theta, x)
    return -(0.5 / sigma**2) * np.sum((data - model) ** 2)
```

Symbolically we could write this as

$$\log L(m, c) = -\frac{1}{2\sigma^2} \sum_i \bigl(d_i - (m x_i + c)\bigr)^2$$

This function is simple enough that we can calculate the gradient by hand, which is what `normal_gradients` does:

```python
def normal_gradients(theta, x, data, sigma):
    grads = np.empty(2)
    aux_vect = data - my_model(theta, x)  # /(2*sigma**2)
    grads[0] = np.sum(aux_vect * x)
    grads[1] = np.sum(aux_vect)
    return grads
```

If you link me the discourse post I am happy to repeat this answer there.
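To make the contrast with the docstring concrete, here is a minimal sketch comparing the hand-derived gradient of `my_loglike` (keeping the `1/sigma**2` factor explicit) against a numerical central-difference approximation. The helper names `analytic_gradients` and `central_diff_gradients` and the test values are made up for illustration, not taken from the notebook:

```python
import numpy as np


def my_model(theta, x):
    m, c = theta
    return m * x + c


def my_loglike(theta, x, data, sigma):
    model = my_model(theta, x)
    return -(0.5 / sigma**2) * np.sum((data - model) ** 2)


def analytic_gradients(theta, x, data, sigma):
    # hand-derived gradient of my_loglike with respect to (m, c),
    # keeping the 1/sigma**2 factor explicit
    resid = data - my_model(theta, x)
    return np.array([np.sum(resid * x), np.sum(resid)]) / sigma**2


def central_diff_gradients(theta, x, data, sigma, eps=1e-6):
    # numerical central-difference approximation, for comparison only
    grads = np.empty_like(theta, dtype=float)
    for i in range(theta.size):
        step = np.zeros_like(theta, dtype=float)
        step[i] = eps
        grads[i] = (
            my_loglike(theta + step, x, data, sigma)
            - my_loglike(theta - step, x, data, sigma)
        ) / (2 * eps)
    return grads


# made-up test data, for illustration only
rng = np.random.default_rng(0)
x = np.linspace(0.0, 9.0, 50)
sigma = 1.0
data = 0.4 * x + 3.0 + rng.normal(scale=sigma, size=x.size)
theta = np.array([0.4, 3.0])

print(analytic_gradients(theta, x, data, sigma))
print(central_diff_gradients(theta, x, data, sigma))  # should agree closely
```

The two printed gradients should agree to several decimal places, which is the sense in which the hand-derived expression replaces the central-difference scheme mentioned in the docstring.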
I was unable to run the Cython notebook due to Cython compilation issues, so I decided to create a NumPy version of the same concept instead.
The immediate goal was to test the observed/givens split, which seems to be on the right track. I then figured I could share this version too: even though I'm not much of a fan of the blackbox likelihood approach, it seems to be quite common.
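For context, the pattern the notebook revolves around is wrapping a plain NumPy log-likelihood in a Theano Op so PyMC3 can call it as a black box. Below is a minimal sketch of that pattern under the PyMC3/Theano versions this PR targets; it is illustrative only, not the PR's exact notebook code, the class and variable names are mine, and without a grad() implementation NUTS cannot be used (the notebook also shows how to add a gradient Op):

```python
import numpy as np
import pymc3 as pm
import theano.tensor as tt


def my_loglike(theta, x, data, sigma):
    # plain NumPy log-likelihood of a straight-line model
    m, c = theta
    return -(0.5 / sigma**2) * np.sum((data - (m * x + c)) ** 2)


class LogLike(tt.Op):
    # Theano Op wrapping a black-box (NumPy) log-likelihood
    itypes = [tt.dvector]  # parameter vector theta = (m, c)
    otypes = [tt.dscalar]  # scalar log-likelihood

    def __init__(self, loglike, data, x, sigma):
        self.loglike = loglike
        self.data = data
        self.x = x
        self.sigma = sigma

    def perform(self, node, inputs, outputs):
        (theta,) = inputs
        outputs[0][0] = np.array(self.loglike(theta, self.x, self.data, self.sigma))


# made-up data, for illustration only
x = np.linspace(0.0, 9.0, 50)
sigma = 1.0
data = 0.4 * x + 3.0 + np.random.default_rng(0).normal(scale=sigma, size=x.size)
logl = LogLike(my_loglike, data, x, sigma)

with pm.Model():
    m = pm.Uniform("m", lower=-10.0, upper=10.0)
    c = pm.Uniform("c", lower=-10.0, upper=10.0)
    theta = tt.as_tensor_variable([m, c])
    # add the black-box log-likelihood to the model's logp
    pm.Potential("likelihood", logl(theta))
```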