Merge wrapper and implementations #94
Conversation
The merge LGTM. We still need to merge the other files within `_detail`, but this first step looks good.
I took this chance to go through the implementations here and flag a few issues with them. Just minor things all throughout.
Style nit: very often you name a variable `result`, just to return it in the next line. I think it's cleaner and more concise to simply do `return torch.fn(...)` directly.
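For instance (a made-up function, just to illustrate the shape of the nit):

```python
import torch

# Flagged style: bind to `result`, then return it immediately
def flip_old(a):
    result = torch.flipud(a)
    return result

# Preferred: return the torch call directly
def flip(a):
    return torch.flipud(a)
```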
There are a number of places where we use methods rather than their `torch.fn` versions. It'd be good to be consistent all throughout. I flagged a few of these, but there's more (`roll`, `squeeze`, `reshape`, ...).
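Concretely, the kind of inconsistency being flagged (illustrative only):

```python
import torch

a = torch.arange(6)

b = a.reshape(2, 3)           # method form, flagged
b = torch.reshape(a, (2, 3))  # torch.fn form, consistent with the rest
```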
torch_np/_funcs.py (outdated)

```python
# XXX: this is a stub
if a is False:
    return a
```
What does this construction mean?
`if a is False: return False`?
Yeah, but `a` is an `ArrayLike`, when is it going to be the literal `False`?
OK, fair enough. Thanks for spotting this. Removed.
This uncovered a latent bug:

```python
@normalizer(return_on_failure=False)
def isscalar(a: ArrayLike):
    # XXX: this is a stub
>       return a.numel() == 1
E       AttributeError: 'bool' object has no attribute 'numel'
```
It may be better to just implement this function without the normalizer and remove the `return_on_failure` flag, as you initially had?
All right. Here's what's going on: `isscalar(np.int)` (`np.int` is not a scalar!) fails to convert its argument to a tensor, and the `return_on_failure` code path feeds `False` into the body of the function. This is in fact requested by the normalizer argument three lines above, via `normalizer(return_on_failure=False)`. So, we can either:

- put back the `if a is False: return False` clause, plus a code comment so as not to confuse future readers
- rip out the `return_on_failure` code path and normalize manually (and we may need to normalize manually in other functions from the not-yet-implemented numpy APIs, who knows)

Your preference, @lezcano?
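For context, a rough sketch of how such a `return_on_failure` escape hatch could behave; the decorator name matches this thread, but the body below is an assumption for illustration, not the project's actual code:

```python
import torch

_NOTHING = object()  # hypothetical sentinel: "no fallback requested"

def normalizer(return_on_failure=_NOTHING):
    def deco(func):
        def wrapper(a):
            try:
                a = torch.as_tensor(a)    # normalize ArrayLike -> Tensor
            except Exception:
                if return_on_failure is _NOTHING:
                    raise
                a = return_on_failure     # e.g. False, hence `a is False` in the body
            return func(a)
        return wrapper
    return deco

@normalizer(return_on_failure=False)
def isscalar(a):
    if a is False:                        # the clause under discussion
        return False
    return a.numel() == 1
```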
Probably the simplest option is the second one, as this function can be implemented as

```python
def isscalar(a):
    try:
        return asarray(a).size == 1
    except Exception:
        return False
```

and it's very much one of a kind.
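With that version, the failing case from above just works (assuming `asarray` raises on inputs it cannot convert, as described earlier in this thread):

```python
isscalar(3.0)     # True: asarray(3.0).size == 1
isscalar(int)     # False: asarray raises, caught by the except
isscalar([1, 2])  # False: size == 2
```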
OK, `return_on_failure` is gone.
Force-pushed e75772d to f9f9ba4.
Missing two minor things. Approving this contingent on fixing those two.

This one got us -600 LOC, not too bad!
Force-pushed a126f71 to c81f677.
CI is green, the last two comments are addressed, merging.
Undo the split between decorated functions and torch implementations: the former are nothing but fall-throughs to the latter now that all argument pre/postprocessing is contained in normalization decorators.

While at it, merge `_funcs.py` and `_wrapper.py`; the split was purely technical. The diff looks scary, but this is mainly just shuffling things around and trimming a small amount of dead weight.
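A minimal sketch of the pattern being collapsed; the function names and decorator body are hypothetical and only illustrate the shape of the change:

```python
import torch

def normalizer(func):
    # hypothetical stand-in for the normalization decorator:
    # convert array-likes to tensors before calling the implementation
    def wrapper(a, *args, **kwargs):
        return func(torch.as_tensor(a), *args, **kwargs)
    return wrapper

# Before: a decorated fall-through in one file ...
def _flip_impl(a):            # ... to an implementation in another
    return torch.flipud(a)

@normalizer
def flip_before(a):
    return _flip_impl(a)      # no logic of its own

# After: the decorator sits directly on the torch-backed implementation
@normalizer
def flip(a):
    return torch.flipud(a)
```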