add wrappers for np.fft #123


Merged

merged 5 commits into main from fft on Apr 24, 2023
Conversation

@ev-br (Collaborator) commented Apr 22, 2023

This is mostly straightforward, as the pytorch API is nearly identical. The main difference seems to be that numpy.fft returns 64-bit transforms for all input dtypes. We now have set_default_dtype though, so we probably want the fft routines to react to it:

>>> a = np.arange(5, dtype=np.float32)
>>> np.fft.fft(a).dtype == np.complex128            # this is the (numpy) way
True
>>> np.set_default_dtype("pytorch")
>>> np.fft.fft(a).dtype == np.complex64
True
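
A minimal sketch of how a wrapper could honor that switch, assuming a hypothetical module-level _default_dtype flag standing in for the compat layer's actual set_default_dtype state (the PR's real implementation may differ):

import torch

_default_dtype = "numpy"  # hypothetical stand-in for the set_default_dtype state

def fft(a, n=None, axis=-1, norm=None):
    # Fall through to pytorch for the actual transform.
    result = torch.fft.fft(torch.as_tensor(a), n=n, dim=axis, norm=norm)
    if _default_dtype == "numpy" and result.dtype == torch.complex64:
        # numpy.fft always returns 64-bit transforms; upcast to match.
        result = result.to(torch.complex128)
    return result

Under _default_dtype = "pytorch", a float32 input keeps its complex64 transform; under "numpy", it is upcast to complex128.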

On the test diff, the tolerance was bumped from 1e-6 to 2e-5:

x = random(30) + 1j*random(30)
-assert_allclose(fft1(x), np.fft.fft(x), atol=1e-6)
-assert_allclose(fft1(x), np.fft.fft(x, norm="backward"), atol=1e-6)
+assert_allclose(fft1(x), np.fft.fft(x), atol=2e-5)
@ev-br (Collaborator, Author) commented Apr 23, 2023

Had to bump the tolerance. Not much we can do, I guess; this falls straight through to pytorch.
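
For a sense of scale, a standalone check (not from the PR) comparing pytorch's single-precision transform with numpy's double-precision one shows errors already of the order of the old atol=1e-6 for an input like the one in the test, explaining the looser 2e-5:

import numpy as np
import torch

rng = np.random.default_rng(0)
x = rng.random(30) + 1j * rng.random(30)

# numpy computes the FFT in double precision; pytorch keeps complex64
# for complex64 input, so the difference is single-precision roundoff.
ref = np.fft.fft(x)
got = torch.fft.fft(torch.from_numpy(x).to(torch.complex64)).numpy()
print(np.max(np.abs(got - ref)))  # of order 1e-6 here; larger in other cases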

@ev-br changed the title from WIP: np.fft to add wrappers for np.fft on Apr 23, 2023
@ev-br requested a review from lezcano on April 23, 2023 08:23
@lezcano (Collaborator) left a comment


Cool!

@ev-br merged commit aaabfda into main on Apr 24, 2023
@ev-br deleted the fft branch on April 24, 2023 06:05