BUG: Fix matmul with out= arrays #90
Conversation
Fixes gh-85.

matmul(x1, x2, out) does not broadcast x1, x2 against out, like other ufuncs do.
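For context (not part of the original report): NumPy's documented ufunc out= handling lets the inputs broadcast against the shape of out, and the claim above is that the wrapper's matmul does not follow suit. A minimal sketch of that reference behaviour:

    import numpy as np

    # Regular ufuncs broadcast their inputs against the out array:
    out = np.zeros(3)
    np.add(1.0, 2.0, out=out)   # scalar inputs broadcast to out's shape (3,)
    print(out)                  # [3. 3. 3.]

    # The report is that matmul(x1, x2, out=...) does not apply the analogous
    # broadcasting of x1 and x2 against out.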
-    matmul = np.matmul
+    def setup_method(self):
+        self.matmul = np.matmul
This change is a bit curious. Storing ufuncs as class attributes seems to work ok in scripts/interactive interpreter, but breaks down in combination with pytest: the argument processing/normalization machinery picks up the extra self-like argument, which leads to

>       tensor = torch.as_tensor(obj)
E       RuntimeError: Could not infer dtype of TestMatmul

so there's some spooky action at a distance between the meta-stuff pytest is doing and what we do here.
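An aside (my illustration, not the thread's diagnosis; the comment above deliberately leaves the pytest interaction open): the diff moves the assignment from a class attribute into setup_method, i.e. onto the instance. A plain Python function stored as a class attribute is bound like a method when accessed through an instance, which is one way an extra self-like argument can sneak in:

    def matmul(x1, x2, out=None):
        """Stand-in for a torch_np-style wrapper that normalizes each operand."""
        return x1, x2, out

    class ClassAttr:
        matmul = matmul              # class attribute: bound like a method

    class InstanceAttr:
        def setup_method(self):
            self.matmul = matmul     # instance attribute: no implicit binding

    c = ClassAttr()
    print(c.matmul(1, 2))            # (<ClassAttr object ...>, 1, 2): the instance leaks in

    i = InstanceAttr()
    i.setup_method()
    print(i.matmul(1, 2))            # (1, 2, None): operands arrive unchanged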
@@ -5974,7 +5981,7 @@ def test_out_contiguous(self):
         # test out non-contiguous
         out = np.ones((5, 2, 2), dtype=float)
         c = self.matmul(a, b, out=out[..., 0])
-        assert c.base is out
+        assert c._tensor._base is out._tensor  # FIXME: self.tensor (no underscore)
This will get cleaned up before merging, after the base of the stack gets in.
Note that NumPy does not support in-place __imatmul__, but PyTorch does. So do we, then.
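A small illustration (mine, not from the thread; a square case is chosen so the result fits the input buffer):

    import torch

    a = torch.ones(2, 2)
    b = torch.eye(2)
    a @= b      # the in-place matmul statement is accepted for tensors; NumPy
                # (at the time of this PR) rejected a @= b on ndarrays with a TypeError
    print(a)    # still all ones: multiplying by the identity changes nothing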
So, I think the broadcasting issue for matmul will simply go away when we implement the out= semantics by doing the broadcasting wrt. the out parameter after the computation rather than before. The other matmul fixes from this PR LGTM, so what about simply merging those and revisiting the out= behaviour later?
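Not implemented in this PR; a rough sketch of the suggested approach (the torch-level helper and its name are my own), computing first and only then broadcasting the result against out:

    import torch

    def matmul_with_out(x1, x2, out=None):
        result = torch.matmul(x1, x2)   # broadcast x1 against x2 only
        if out is None:
            return result
        # Broadcast the finished result against out and write into it.
        out.copy_(result.broadcast_to(out.shape))
        return out

    a = torch.ones(3, 4)
    b = torch.ones(4, 5)
    out = torch.empty(2, 3, 5)          # extra leading batch dim supplied only by out
    matmul_with_out(a, b, out=out)      # (3, 5) result broadcast into (2, 3, 5)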
Works for me :-). Merging this PR is blocked by the normalizations PR though, as this one is at the top of the stack.
Merged in #91