ndarray dunders / binary ufuncs #17
Conversation
They will go at some point anyway.
Force-pushed from 71bb501 to 1acb5aa
Force-pushed from 69bfb8e to e96e08b
Force-pushed from e96e08b to d92b127
OK, I think this is it for now; this three-PR "stack" is no longer WIP.
+903 −1,551 I think the numbers speak for themselves!
I left a few things that I believe can be improved, but it looks mostly good!
    def __gt__(self, other):
-       return asarray(self._tensor > asarray(other).get())
+       return _ufunc_impl.greater(self, asarray(other))
Perhaps simpler to just call the functions in _binary_ufuncs? That way you could simply do

    import _binary_ufuncs as bfun
    __gt__ = bfun.greater
    ...
With a one-line helper function you could also golf all the in-place and reverse dunder methods this way.
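A minimal sketch of the helper idea, with toy stand-ins for the real modules (`Box` for ndarray, `add`/`greater` for functions in _binary_ufuncs; all names here are hypothetical, not from the PR):

```python
def _binary_method(ufunc):
    """Make a forward dunder (e.g. __add__) from a binary ufunc."""
    def method(self, other):
        return ufunc(self, other)
    return method

def _reflected_binary_method(ufunc):
    """Make a reflected dunder (e.g. __radd__): operands swapped."""
    def method(self, other):
        return ufunc(other, self)
    return method

def _val(x):
    # unwrap a Box, pass anything else through
    return x.value if isinstance(x, Box) else x

# Toy stand-ins for the functions in _binary_ufuncs:
def add(x, y):
    return Box(_val(x) + _val(y))

def greater(x, y):
    return _val(x) > _val(y)

class Box:  # toy stand-in for ndarray
    def __init__(self, value):
        self.value = value
    __add__ = _binary_method(add)
    __radd__ = _reflected_binary_method(add)
    __gt__ = _binary_method(greater)

assert (Box(2) + 3).value == 5   # forward
assert (3 + Box(2)).value == 5   # reflected
assert Box(3) > Box(2)
```

Each extra dunder then costs one assignment line rather than a full method body.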
Sadly, _binary_ufuncs has from _ndarray import asarray, so it's an import loop.
There has to be a way to untangle this. A (dirty) way of working around this for now would be to do a local import of the module within the class. But more generally, perhaps @rgommers has some ideas on how to work around these issues.
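To make the workaround concrete, here is a self-contained demo of why a deferred (local) import breaks the load-time cycle. The two throwaway modules are written to a temp directory; their names (`_nd_demo`, `_bfun_demo`) are hypothetical stand-ins for _ndarray and _binary_ufuncs:

```python
import os
import sys
import tempfile
import textwrap

tmp = tempfile.mkdtemp()

# _nd_demo imports _bfun_demo only *inside* the method, so nothing
# circular runs at module-load time.
with open(os.path.join(tmp, "_nd_demo.py"), "w") as f:
    f.write(textwrap.dedent("""
        class ndarray:
            def __init__(self, v):
                self.v = v
            def __gt__(self, other):
                # local import: by call time, _nd_demo is fully loaded,
                # so _bfun_demo's top-level import of asarray succeeds
                from _bfun_demo import greater
                return greater(self, other)

        def asarray(x):
            return x if isinstance(x, ndarray) else ndarray(x)
    """))

# _bfun_demo imports back into _nd_demo at top level -- this is the
# edge that would deadlock if both imports were at module scope.
with open(os.path.join(tmp, "_bfun_demo.py"), "w") as f:
    f.write(textwrap.dedent("""
        from _nd_demo import asarray

        def greater(x, y):
            return asarray(x).v > asarray(y).v
    """))

sys.path.insert(0, tmp)
from _nd_demo import ndarray

assert (ndarray(3) > ndarray(2)) is True
```

The cost, as noted, is an import statement inside every method that needs it, which is why it reads as a stopgap rather than a fix.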
Yes, I nearly did a local import; however, that feels almost as dirty, and I'd rather hammer on this until it works with reasonable coverage, then think through a general refactor once it does. A refactor is definitely in order (together with the asarray/_from_tensor_and_base rationalization we discussed in a parallel thread, a reasonable solution to the asarray imports in dtypes.py, etc.).
So, if I understand the big picture correctly, the main issue is:

1. asarray uses the ndarray constructor
2. ndarray uses functions, and uses asarray internally in its methods
3. those functions use asarray internally as well
Without trying to grasp the full detail here, and assuming that the solution to the circular import thing is not to throw everything into a single huge .py file, I suspect that the main problem is a lack of separation between public and private APIs. For the functions containing the main logic (item 3 above), if they were private and could assume that all inputs were already ndarray (or maybe better, all torch.Tensor), they wouldn't have to use asarray. The public API would then be a very thin layer with the correct signatures, wrap/unwrap of input arguments and return values, and forwarding everything else to the private API. The private API would never call back to the public API layer.
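The split described above might look roughly like this. A toy sketch only: plain Python values stand in for torch.Tensor, and the `_impl`/`_unwrap` names are hypothetical, not from the PR:

```python
# --- private layer: assumes all inputs are already raw values ---
# (in the real code these would be torch ops on torch.Tensors, and
# they never call asarray or anything else in the public layer)

def _greater_impl(x, y):
    return x > y

def _add_impl(x, y):
    return x + y

# --- public layer: thin shims that wrap/unwrap and forward ---

class ndarray:
    def __init__(self, tensor):
        self._tensor = tensor

def asarray(obj):
    return obj if isinstance(obj, ndarray) else ndarray(obj)

def _unwrap(obj):
    return asarray(obj)._tensor

def greater(x, y):
    # unwrap inputs, forward to the private API, wrap the result
    return asarray(_greater_impl(_unwrap(x), _unwrap(y)))

def add(x, y):
    return asarray(_add_impl(_unwrap(x), _unwrap(y)))

assert greater(ndarray(3), 2)._tensor is True
assert add(ndarray(3), 2)._tensor == 5
```

Because the dependency only points one way (public -> private), the import graph is a DAG and the asarray loop disappears.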
Thanks for the reviews and discussions! Am merging this to make iterating on the private/public split easier.
This is the third PR in a "stack": gh-12 -> gh-16 -> this one.