
BUG: Fix matmul with out= arrays #90


Closed
wants to merge 2 commits

Conversation

ev-br (Collaborator) commented Mar 19, 2023

fixes gh-85

matmul(x1, x2, out) does not broadcast x1, x2 against out, like other ufuncs do.
-    matmul = np.matmul
+    def setup_method(self):
+        self.matmul = np.matmul
ev-br (Collaborator, Author) commented:

This change is a bit curious. Storing ufuncs as class attributes seems to work fine in scripts and the interactive interpreter, but breaks down in combination with pytest:

the argument processing/normalization machinery picks up the extra self-like argument, which leads to

>           tensor = torch.as_tensor(obj)
E           RuntimeError: Could not infer dtype of TestMatmul

so there's some spooky action at a distance between the meta-machinery pytest runs and what we do here.
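For context, here is a minimal sketch (plain Python, illustrative names only, not the project's normalization code) of one mechanism that produces an extra self-like argument: a plain function stored as a class attribute is a descriptor, so looking it up through an instance yields a bound method and the instance is silently prepended to the arguments. NumPy's own np.matmul is a true np.ufunc and does not bind this way, but a compat-layer matmul written as a plain Python function does, which is consistent with the torch.as_tensor(TestMatmul) failure quoted above.

    import numpy as np

    def fake_matmul(x1, x2):
        # stand-in for a matmul implemented as a plain Python function
        # (a real np.ufunc object would not bind as a method)
        return np.matmul(x1, x2)

    class TestMatmul:
        matmul = fake_matmul  # stored as a class attribute

        def run(self):
            # attribute access through `self` returns a bound method,
            # so this actually calls fake_matmul(self, eye, eye)
            return self.matmul(np.eye(2), np.eye(2))

    # TestMatmul().run() raises:
    # TypeError: fake_matmul() takes 2 positional arguments but 3 were given

Assigning the callable inside setup_method, as the change above does, puts it in the instance dict at test time, so no method binding happens and the extra argument disappears.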

@@ -5974,7 +5981,7 @@ def test_out_contiguous(self):
         # test out non-contiguous
         out = np.ones((5, 2, 2), dtype=float)
         c = self.matmul(a, b, out=out[..., 0])
-        assert c.base is out
+        assert c._tensor._base is out._tensor  # FIXME: self.tensor (no underscore)
ev-br (Collaborator, Author) commented:

This will get cleaned up before merging, after the base of the stack gets in.

Note that NumPy does not support in-place __imatmul__, but PyTorch does, so we do too.
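A small illustration of the difference; the NumPy behaviour shown is that of the versions current at the time of this PR, and the commented-out torch lines are only a sketch:

    import numpy as np

    a = np.eye(3)
    b = np.eye(3)
    try:
        a @= b      # NumPy's ndarray rejects in-place matmul with a TypeError
    except TypeError:
        a = a @ b   # the out-of-place spelling NumPy suggests instead

    # torch.Tensor accepts the in-place form (per the note above), and so
    # does the torch-backed wrapper ndarray:
    #
    #   import torch
    #   t = torch.eye(3)
    #   u = torch.eye(3)
    #   t @= u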
lezcano (Collaborator) left a comment:

So, I think the broadcasting issue for matmul will simply go away when we implement the out= semantics by doing the broadcasting wrt. the out parameter after the computation rather than before. The other matmul fixes from this PR LGTM, so what about simply merging those and revisiting the out= behaviour later?
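For illustration, a minimal sketch of that suggestion, assuming a torch-backed implementation; the helper name is made up and this is not the project's actual code. The idea is to compute the product first, then broadcast the result against out and copy into it, so matmul needs no out=-related special-casing compared to other ufuncs:

    import torch

    def matmul_with_out(x1, x2, out=None):
        # sketch only: handle out= by broadcasting *after* the computation
        result = torch.matmul(torch.as_tensor(x1), torch.as_tensor(x2))
        if out is None:
            return result
        # broadcast the computed result up to out's shape and copy in place
        out.copy_(result.broadcast_to(out.shape))
        return out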

ev-br (Collaborator, Author) commented Mar 21, 2023

> The other matmul fixes from this PR LGTM, so what about simply merging those and revisiting the out= behaviour later?

Works for me :-). Merging this PR is blocked by the normalizations PR though, as this one is at the top of the stack.

ev-br mentioned this pull request Mar 23, 2023
ev-br (Collaborator, Author) commented Mar 23, 2023

Merged in #91

ev-br closed this Mar 23, 2023
ev-br deleted the matmul2 branch March 23, 2023 15:52