
add ndarrays.flags.f_contiguous #62


Merged: 2 commits from f_contiguous into main on Feb 23, 2023

Conversation

@ev-br (Collaborator) commented on Feb 22, 2023:

Implement the f_contiguous flag, plus heuristics for the owndata and writeable flags.

# check if F contiguous
from itertools import accumulate

# cumulative products of the dimension sizes; shifting these right by one
# (prepend 1, drop the last) gives the element strides of a Fortran-ordered layout
f_strides = tuple(accumulate(list(self._tensor.shape), func=lambda x, y: x * y))
@ev-br (Collaborator, Author) commented on this snippet:
This is of course just torch.cumprod(self._tensor.shape, 0); my instinct from numpy usage is to avoid array operations for small tuples and the like.
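
For illustration, a complete, self-contained version of this cumulative-product check might look like the sketch below. It is an example under assumptions, not the exact code merged in this PR; the helper name is_f_contiguous and the free-function form (taking a plain torch.Tensor instead of reading self._tensor) are made up for the illustration.

import torch
from itertools import accumulate

def is_f_contiguous(t: torch.Tensor) -> bool:
    # A tensor is F-contiguous when its strides (in elements) are
    # (1, s0, s0*s1, ...): a leading 1 followed by the cumulative
    # products of all but the last dimension size.
    if t.ndim == 0:
        return t.is_contiguous()
    cumprods = tuple(accumulate(t.shape, func=lambda x, y: x * y))
    f_strides = (1,) + cumprods[:-1]
    # Caveat: numpy reports arrays with size-0 or size-1 dimensions as
    # contiguous regardless of their stored strides, so matching numpy
    # exactly would need extra special-casing here.
    return tuple(t.stride()) == f_strides

a = torch.zeros(3, 4)
print(is_f_contiguous(a))      # False: freshly allocated tensors are C-ordered
print(is_f_contiguous(a.t()))  # True: the transpose of a C-contiguous 2-d tensor is F-ordered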

@lezcano (Collaborator) left a review comment:

A simpler implementation in numpy terms would be x.T.is_contiguous(), using the terrible semantics numpy gave to the T attribute. Either one SGTM.
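
For comparison, a torch-level rendering of that transpose trick could look like the sketch below (again an illustration under assumptions, not code from this PR): it uses permute to reverse every axis, which is what numpy's .T does for n-d arrays, and then asks PyTorch whether the resulting view is (C-)contiguous.

import torch

def is_f_contiguous_via_transpose(t: torch.Tensor) -> bool:
    # Reversing all axes turns Fortran order into C order, so the
    # axis-reversed view is C-contiguous exactly when `t` is F-contiguous.
    reversed_dims = tuple(range(t.ndim - 1, -1, -1))
    return t.permute(reversed_dims).is_contiguous()

The one-line x.T.is_contiguous() form relies on numpy's .T semantics at the wrapper level, whereas this sketch stays at the torch level; that trade-off is what the next comment is about.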

@ev-br (Collaborator, Author) commented on Feb 23, 2023:

A simpler implementation in numpy would be x.T.is_contiguous(), using the terrible semantics numpy gave to the T attribute.

Exactly. I considered doing just that, but it would need to live at the wrapper level and rely on numpy semantics, and we're trying to push things down to the torch level, aren't we.

ev-br merged commit 6a86527 into main on Feb 23, 2023
ev-br deleted the f_contiguous branch on Feb 23, 2023, 09:03
@lezcano (Collaborator) commented on Feb 23, 2023:

If things are implemented in the wrapper, we can certainly use those to implement others, since they will ultimately redirect to PyTorch ops, so that's completely fine.
