Optimizations for Det should also apply to SlogDet #1039

Closed
ricardoV94 opened this issue Oct 18, 2024 · 2 comments · Fixed by #1041

@ricardoV94
Member

Description

import pytensor
import pytensor.tensor as pt

x_diag = pt.vector("x_diag")
x = pt.diag(x_diag)
y = pt.log(pt.linalg.det(x))

pytensor.function([x_diag], y).dprint()
# Log [id A] 1
#  └─ Prod{axes=None} [id B] 0
#     └─ x_diag [id C]

_, y = pt.linalg.slogdet(x)
pytensor.function([x_diag], y).dprint(depth=3)
# SLogDet.1 [id A] 4
#  └─ AdvancedSetSubtensor [id B] 3
#     ├─ Alloc [id C] 2
#     ├─ x_diag [id D]
#     ├─ ARange{dtype='int64'} [id E] 1
#     └─ ARange{dtype='int64'} [id E] 1
#        └─ ···
@ricardoV94
Member Author

ricardoV94 commented Oct 19, 2024

We probably should have linalg.slogdet just return sign(det(x)), log(abs(det(x))) and only later specialize to the SlogDet Op.

Then we wouldn't need to worry about the two forms of Det during linalg rewrites.
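
For illustration only (a sketch of the idea, not the final rewrite), expressing slogdet through Det at the user level already lets the existing Det rewrites, such as det-of-diagonal, act on the graph:

import pytensor
import pytensor.tensor as pt

x_diag = pt.vector("x_diag")
x = pt.diag(x_diag)

# Build slogdet out of Det by hand; the existing rewrites that target Det
# (e.g. det of a diagonal matrix) can then simplify both outputs.
det = pt.linalg.det(x)
sign, logabsdet = pt.sign(det), pt.log(pt.abs(det))

pytensor.function([x_diag], [sign, logabsdet]).dprint()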

@ricardoV94
Member Author

Another point brought up by @jessegrabowski is that SlogDet doesn't have a grad implementation. One more reason to only introduce it as a specialization.
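
As a rough illustration (not from the thread): because Det does implement a gradient, the decomposed sign/log-abs form is already differentiable, whereas differentiating through the SLogDet Op is not currently supported.

import pytensor
import pytensor.tensor as pt

x = pt.matrix("x")

# Going through Det: Det has a grad, so this differentiates fine.
logabsdet = pt.log(pt.abs(pt.linalg.det(x)))
g = pytensor.grad(logabsdet, x)
pytensor.function([x], g)

# Doing the same through pt.linalg.slogdet would fail, since (as noted
# above) the SLogDet Op does not implement grad.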
