
MAINT: split remaining functions into normalizations and implementations #82


Merged: ev-br merged 5 commits into main from move_to_impl_take_2 on Mar 23, 2023

Conversation

ev-br (Collaborator) commented Mar 11, 2023

This finishes up moving all of the actual work to the torch level; the wrappers are now as thin as they get, AFAICS.

ev-br requested a review from lezcano on March 11, 2023 17:29
ev-br force-pushed the move_to_impl_take_2 branch from 92e80b4 to 7ca52f0 on March 11, 2023 17:51
lezcano (Collaborator) left a comment

A few minor points, otherwise it LGTM

Comment on lines 787 to 790
from ._ndarray import asarray

t = asarray(a).get()
return t.numel() == 1
lezcano (Collaborator):

This doesn't feel right. Even if this function is trivial, it should have a split like all the others.

ev-br (Collaborator, Author):

OK, done in 248c02b

To me, this feels squarely in "the cure is worse than the disease" territory, but hey.

lezcano (Collaborator):

So, to put this a bit more in context: making everything uniform in a strict way, even for simple ops like this one, comes from the following plan:

  • Split everything into preprocessing and the actual implementation.
  • Automate as much of the preprocessing as possible via decorators or any other method. This induces an API / impl split.
  • Have a look at how many exceptions we have left after this. If there are not that many, that means we were able to automate the preprocessing well enough and the API / impl split does not make sense anymore. Then the impl can be moved back into the API and the API part removed completely.

Note that this is just a 3-step program towards a large refactoring. As with any refactoring, this could have been done in one step, but if you do that you may find that step 3 above does not really hold and you cannot really do the refactoring. With this plan, if step 3 does not hold, we stay at step 2, which is still a better state than the initial one.
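
To make the plan concrete, here is a minimal sketch of the kind of split being discussed. It is purely illustrative and not code from this PR: the real decorator dispatches on type annotations such as ArrayLike, while this toy version simply normalizes every positional argument.

import functools

import torch


def normalize_array_like(x):
    # Normalization step: coerce an array-like Python object into a torch.Tensor.
    return x if isinstance(x, torch.Tensor) else torch.as_tensor(x)


def normalizer(func):
    # "API" layer machinery: normalize all positional arguments, then hand
    # plain torch.Tensors to the implementation.
    @functools.wraps(func)
    def wrapped(*args, **kwds):
        tensors = tuple(normalize_array_like(a) for a in args)
        return func(*tensors, **kwds)

    return wrapped


# impl layer: works purely on the torch level and never sees array-likes.
def is_scalar_impl(t: torch.Tensor) -> bool:
    return t.numel() == 1


# API layer: a thin, mechanically generated-looking wrapper.
@normalizer
def is_scalar(a):
    return is_scalar_impl(a)


print(is_scalar(3.0))        # True
print(is_scalar([1, 2, 3]))  # False

For a one-liner like is_scalar the wrapper clearly adds overhead, which is the "cure worse than the disease" concern above; the bet is that the same mechanical pattern pays off once it is applied uniformly across the whole namespace.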

ev-br (Collaborator, Author) commented Mar 12, 2023

ISTM we're on the same page on the big picture. The technical question going forward is that the normalizer decorator allows optional arguments here, which it did not in gh-70. I can either move that over there or leave it here at the top of the stack of gh-70 <- gh-80 <- gh-81 <- gh-82, whichever is easier to review.

lezcano (Collaborator):

I'm fine leaving it here, unless you really prefer to float it to gh-70.

lezcano (Collaborator):

Also, I believe you were right. After seeing the machinery necessary to accommodate this one function, I believe that the original solution was good enough. Sorry for the trouble. If you feel like it, feel free to revert.

On a different note, I believe we will want parameters to normalizer at some point. Let's discuss some possibilities in a week, once I'm back and I've reviewed gh-70.

ev-br (Collaborator, Author):

Agreed, we'll need parameters anyway, and since there is a solution now we may as well keep it. Let's discuss it when you're back; enjoy your vacation!

ev-br force-pushed the move_to_impl_take_2 branch 2 times, most recently from 80119ae to a0cb202 on March 12, 2023 08:49
ev-br force-pushed the impl_package branch 2 times, most recently from 4c2466a to d923fc8 on March 23, 2023 00:01
Base automatically changed from impl_package to main on March 23, 2023 00:06
ev-br force-pushed the move_to_impl_take_2 branch from cd778db to 78b64c6 on March 23, 2023 00:13
ev-br added 4 commits March 23, 2023 17:00
…eturn a value

On failure, @normalizer without arguments (the default) raises, while
@normalizer(return_on_failure=value) returns the specified `value`.

This is useful, e.g. for

@normalizer
def is_scalar(a: ArrayLike):
   ....

Consider is_scalar(int): the argument cannot be converted to a tensor, so
the function should return False. But the actual failure happens in the decorator,
so we cannot do a try/except in the body of the function!
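
A rough illustration of the mechanism described in this commit message (a sketch only, written under the assumption that the parameter is spelled return_on_failure; the actual decorator in the repo is more involved):

import functools

import torch

_sentinel = object()


def normalizer(func=None, *, return_on_failure=_sentinel):
    # Without arguments, a normalization failure propagates (raises).
    # With return_on_failure=<value>, the wrapper returns <value> instead,
    # because the failure happens before the function body ever runs.
    def decorate(func):
        @functools.wraps(func)
        def wrapped(*args, **kwds):
            try:
                tensors = tuple(torch.as_tensor(a) for a in args)
            except Exception:
                if return_on_failure is not _sentinel:
                    return return_on_failure
                raise
            return func(*tensors, **kwds)

        return wrapped

    return decorate if func is None else decorate(func)


@normalizer(return_on_failure=False)
def is_scalar(a):
    # is_scalar(int) never reaches this body: the type `int` cannot be
    # converted to a tensor, so the decorator returns False on our behalf.
    return a.numel() == 1


print(is_scalar(3.0))  # True
print(is_scalar(int))  # False
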
any(...) is bad for torch dynamo, so remove the check. The same check is done
internally in torch.quantile, so we only pay by getting a RuntimeError instead
of NumPy's ValueError.
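
To illustrate what is being traded here (a hypothetical reconstruction of the shape of the removed check, not the actual diff):

import torch


def quantile_with_explicit_check(a: torch.Tensor, q: torch.Tensor):
    # Data-dependent branching like this is hostile to torch dynamo
    # (it forces a graph break), which is why the explicit check goes away.
    if (q < 0).any() or (q > 1).any():
        raise ValueError("Quantiles must be in the range [0, 1]")
    return torch.quantile(a, q)


def quantile_without_check(a: torch.Tensor, q: torch.Tensor):
    # torch.quantile validates q internally, so out-of-range values still
    # fail, just with a RuntimeError instead of NumPy's ValueError.
    return torch.quantile(a, q)
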
ev-br force-pushed the move_to_impl_take_2 branch from 78b64c6 to 5fea60f on March 23, 2023 14:19
ev-br merged commit 37d6f4f into main on Mar 23, 2023
ev-br deleted the move_to_impl_take_2 branch on March 23, 2023 14:27