Expose matrix multiplication operations with conjugate transposes of the inputs #51750
Labels
- `enhancement`: Not as big of a feature, but technically not a bug. Should be easy to fix
- `module: complex`: Related to complex number support in PyTorch
- `module: linear algebra`: Issues related to specialized linear algebra operations in PyTorch; includes matrix multiply (matmul)
- `triaged`: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🚀 Feature
The BLAS libraries that PyTorch uses have an option to implicitly conjugate-transpose an argument (the 'h' option), but PyTorch has no bindings for it and does not expose a way to call it. This option can be useful for speeding up the backward pass through matrix multiplication ops, because we could potentially avoid materializing the conjugate.
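To make the motivation concrete, here is a minimal sketch (plain PyTorch, no new API; shapes and names are illustrative) of where the conjugate transpose appears in the backward of a complex matmul:

```python
import torch

a = torch.randn(128, 64, dtype=torch.complex64)
b = torch.randn(64, 32, dtype=torch.complex64)
grad_out = torch.randn(128, 32, dtype=torch.complex64)

# Backward formulas for out = a @ b with complex inputs
# (PyTorch's conjugate Wirtinger convention):
#   grad_a = grad_out @ b^H
#   grad_b = a^H @ grad_out
# Absent conjugate views, each .conj() below materializes a full
# conjugated copy before the GEMM runs; a BLAS call that accepts a
# conjugate-transpose flag could skip that allocation entirely.
grad_a = grad_out @ b.conj().t()
grad_b = a.conj().t() @ grad_out
```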
It's not fully clear how best to expose this to the user, or whether it should be exposed at all, as opposed to being called in the backward pass when necessary.
PyTorch sets the `t` argument to BLAS calls depending on the strides of the input matrices; `h` cannot be set independently, and it's possible to set it only if the physical memory layout corresponds to a transposed matrix, so the UX here is not very clear. This issue is to discuss what exposure we want.

Related: we have `dot` and `vdot` functions, where `vdot` does an implicit conjugate of an argument.

Also related: #45063, where some comments discuss the possibility of adding conjugate views.
cc @ezyang @anjali411 @dylanbespalko @mruberry @jianyuh @nikitaved @pearu @heitorschueroff @walterddr @IvanYashchuk