Rebase #91

Merged
merged 7 commits into from
Mar 23, 2023

Conversation

ev-br
Collaborator

@ev-br ev-br commented Mar 23, 2023

ev-br added 7 commits March 23, 2023 17:38
The implementation is a bit simpler than NumPy's: we do not have the notion
of ufunc loop types (`np.add.types` etc.), so we simply cast the input tensors to
`result_type(dtype, out.dtype)` and ask PyTorch to do the computation in that
dtype.
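A minimal sketch of this casting rule, assuming PyTorch's `torch.result_type` / `torch.promote_types` as the promotion machinery (the helper name `_cast_for_out` is illustrative, not the PR's actual code):

```python
import torch

def _cast_for_out(x1, x2, out=None, dtype=None):
    # Illustrative sketch: instead of selecting a ufunc loop type as NumPy
    # does, promote everything to one target dtype and compute in it.
    target = torch.result_type(x1, x2)
    if dtype is not None:
        target = torch.promote_types(target, dtype)
    if out is not None:
        target = torch.promote_types(target, out.dtype)
    return x1.to(target), x2.to(target)

a = torch.tensor([1, 2, 3], dtype=torch.int32)
b = torch.tensor([0.5, 0.5, 0.5], dtype=torch.float64)
out = torch.empty(3, dtype=torch.float64)
x1, x2 = _cast_for_out(a, b, out=out)
torch.add(x1, x2, out=out)  # computed in float64, matching out.dtype
```

Here the int32 input is promoted to float64 before the add, so PyTorch never has to pick a mixed-dtype kernel.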
matmul(x1, x2, out) does not broadcast x1, x2 against out, unlike
the other ufuncs.
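For illustration, the corresponding shape contract in NumPy itself: a binary ufunc's `out` must match the broadcast shape of the inputs, while `matmul`'s `out` shape is fixed by the matrix-product shape alone (a sketch of the behavior being mirrored, not the PR's code):

```python
import numpy as np

# For an elementwise ufunc, the inputs broadcast and out takes that shape.
x1 = np.ones((3, 1))
x2 = np.ones((1, 4))
out = np.empty((3, 4))
np.add(x1, x2, out=out)  # (3, 1) and (1, 4) broadcast to (3, 4)

# For matmul, out's shape comes from the matrix product alone;
# the inputs are not broadcast against out.
a = np.ones((3, 4))
b = np.ones((4, 5))
out_mm = np.empty((3, 5))
np.matmul(a, b, out=out_mm)
```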
Note that NumPy does not support in-place `__imatmul__`, but PyTorch does, so we follow PyTorch here.
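A quick illustration of the in-place form with PyTorch tensors (whether `a @= b` dispatches to a true `__imatmul__` or falls back to `a = a @ b` is a Tensor-class detail; the values below assume only standard matmul semantics):

```python
import torch

a = torch.eye(3)
b = torch.full((3, 3), 2.0)

# In-place matrix multiply: NumPy's ndarray rejected `a @= b` at the time
# of this PR, while PyTorch tensors accept it.
a @= b

# eye(3) @ b == b, so `a` now holds b's values.
```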