The use of tuple in annotations is Python 3.9+ (#111)
Comments
typing.Tuple maybe?
I wanted to use the import as it's more future-proof.
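The version split behind this issue is easy to reproduce. A minimal sketch (illustrative only, not code from the repo): typing.Tuple is subscriptable on every Python version this project targets, while the builtin-generic form from PEP 585 only works on Python 3.9+.

```python
import sys
from typing import Tuple

# typing.Tuple is subscriptable on every supported Python version.
legacy = Tuple[int, int]
print(legacy)  # typing.Tuple[int, int]

# Builtin generics (PEP 585) need Python 3.9+; on 3.8, evaluating
# tuple[int, int] raises "TypeError: 'type' object is not subscriptable".
if sys.version_info >= (3, 9):
    modern = tuple[int, int]
    print(modern)  # tuple[int, int]
```

This is why the annotations crash at import time on 3.8: the subscription is evaluated eagerly unless annotation evaluation is postponed.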
That doesn't ring any bells, sorry. I'd have thought that this should "just work" with from __future__ import annotations.
From a cursory look, the string behavior seems to be what https://peps.python.org/pep-0563/ mandates. For one, […]

So we can maybe get away with simply changing the dict keys to be strings, not the types themselves: https://github.com/Quansight-Labs/numpy_pytorch_interop/blob/main/torch_np/_normalizations.py#L99

I'll take a closer look next week unless someone beats me to it.

EDIT: However, I'm really not sure what the intended direction of this whole annotation story is, or what the language-lawyer way of doing it would be. So if tweaking our usage is not very invasive, and if it can be made to work on Python 3.8 and above (this will need some CI), I'd take the pragmatic approach.

EDIT2: Or duplicate the string and class forms: […]
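A rough sketch of the string-keyed dispatch being proposed (the names ArrayLike, normalizers, and the toy normalization are stand-ins, not the repo's actual implementation): with from __future__ import annotations in effect, inspect.signature reports each annotation as its source string, so the lookup table can be keyed by strings.

```python
from __future__ import annotations

import functools
import inspect

# Hypothetical stand-in for the project's annotation type.
ArrayLike = object

def normalize_array_like(value, name=None):
    # Toy normalization: in the real project this converts to a tensor.
    return list(value)

# Under PEP 563 semantics, sig.parameters[...].annotation is the literal
# string "ArrayLike", so the dict keys are strings, not type objects.
normalizers = {
    "ArrayLike": normalize_array_like,
}

def normalizer(func):
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            annot = sig.parameters[name].annotation  # a string here
            norm = normalizers.get(annot)
            if norm is not None:
                bound.arguments[name] = norm(value, name)
        return func(*bound.args, **bound.kwargs)

    return wrapper

@normalizer
def concatenate(seq: ArrayLike):
    return seq

print(concatenate((1, 2, 3)))  # [1, 2, 3]
```

Because the strings are never evaluated, this dispatch works identically on Python 3.8 and 3.9+, which is the pragmatic win being discussed.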
Changing the dict to strings and adding the from __future__ import annotations import […]
Just dumping it here for now: the diff below makes tests pass locally for me with Python 3.8 and PyTorch 2.0.0.

diff --git a/torch_np/_funcs_impl.py b/torch_np/_funcs_impl.py
index 53b69bb..cdf7e7b 100644
--- a/torch_np/_funcs_impl.py
+++ b/torch_np/_funcs_impl.py
@@ -5,6 +5,7 @@ pytorch tensors.
"""
# Contents of this module ends up in the main namespace via _funcs.py
# where type annotations are used in conjunction with the @normalizer decorator.
+from __future__ import annotations
import builtins
import math
diff --git a/torch_np/_normalizations.py b/torch_np/_normalizations.py
index d275744..2d08de7 100644
--- a/torch_np/_normalizations.py
+++ b/torch_np/_normalizations.py
@@ -1,5 +1,7 @@
""" "Normalize" arguments: convert array_likes to tensors, dtypes to torch dtypes and so on.
"""
+from __future__ import annotations
+
import functools
import operator
import typing
@@ -97,15 +99,15 @@ def normalize_outarray(arg, name=None):
normalizers = {
- ArrayLike: normalize_array_like,
- Optional[ArrayLike]: normalize_optional_array_like,
- Sequence[ArrayLike]: normalize_seq_array_like,
- Optional[NDArray]: normalize_ndarray,
- Optional[OutArray]: normalize_outarray,
- NDArray: normalize_ndarray,
- DTypeLike: normalize_dtype,
- SubokLike: normalize_subok_like,
- AxisLike: normalize_axis_like,
+ 'ArrayLike': normalize_array_like,
+ 'Optional[ArrayLike]': normalize_optional_array_like,
+ 'Sequence[ArrayLike]': normalize_seq_array_like,
+ 'Optional[NDArray]': normalize_ndarray,
+ 'Optional[OutArray]': normalize_outarray,
+ 'NDArray': normalize_ndarray,
+ 'DTypeLike': normalize_dtype,
+ 'SubokLike': normalize_subok_like,
+ 'AxisLike': normalize_axis_like,
}
diff --git a/torch_np/_ufuncs.py b/torch_np/_ufuncs.py
index 8695bb8..516a31e 100644
--- a/torch_np/_ufuncs.py
+++ b/torch_np/_ufuncs.py
@@ -1,3 +1,5 @@
+from __future__ import annotations
+
from typing import Optional
import torch
diff --git a/torch_np/linalg.py b/torch_np/linalg.py
index 1779297..9a46466 100644
--- a/torch_np/linalg.py
+++ b/torch_np/linalg.py
@@ -1,3 +1,6 @@
+from __future__ import annotations
+
+
import functools
import math
from typing import Sequence
diff --git a/torch_np/random.py b/torch_np/random.py
index 862357b..842c617 100644
--- a/torch_np/random.py
+++ b/torch_np/random.py
@@ -5,6 +5,8 @@ NumPy has strict guarantees on reproducibility etc; here we don't give any.
Q: default dtype is float64 in numpy
"""
+from __future__ import annotations
+
from math import sqrt
from typing import Optional
diff --git a/torch_np/tests/numpy_tests/core/test_dtype.py b/torch_np/tests/numpy_tests/core/test_dtype.py
index 892bb18..947835d 100644
--- a/torch_np/tests/numpy_tests/core/test_dtype.py
+++ b/torch_np/tests/numpy_tests/core/test_dtype.py
@@ -306,8 +306,8 @@ class TestClassGetItem:
assert np.dtype[Any]
-@pytest.mark.skipif(sys.version_info >= (3, 9), reason="Requires python 3.8")
+@pytest.mark.skipif(sys.version_info >= (3, 9), reason="Requires python 3.9")
def test_class_getitem_38() -> None:
match = "Type subscription requires python >= 3.9"
- with pytest.raises(TypeError, match=match):
+ with pytest.raises(TypeError): # , match=match):
np.dtype[Any]
diff --git a/torch_np/tests/numpy_tests/core/test_scalar_methods.py b/torch_np/tests/numpy_tests/core/test_scalar_methods.py
index fdbd1e8..4f6f1ce 100644
--- a/torch_np/tests/numpy_tests/core/test_scalar_methods.py
+++ b/torch_np/tests/numpy_tests/core/test_scalar_methods.py
@@ -179,7 +179,7 @@ class TestClassGetItem:
@pytest.mark.parametrize("cls", [np.number, np.complexfloating, np.int64])
def test_class_getitem_38(cls: Type[np.number]) -> None:
match = "Type subscription requires python >= 3.9"
- with pytest.raises(TypeError, match=match):
+ with pytest.raises(TypeError): #, match=match):
cls[Any]
See https://docs.python.org/3/library/typing.html#typing.Tuple. Accordingly, we should from __future__ import annotations.

Now, interestingly enough, when doing that in the impl files, the annotated type of a parameter, as returned by inspect.signature(func).parameters["param"].annotation, happens to be a string... Not sure what's going on there (perhaps @rgommers knows?), but we should test all this with Python 3.8, as it's the oldest Python that PyTorch supports.
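The string behavior noted above is straightforward to reproduce; a minimal demo (this runs on Python 3.8 as well, since the annotation is stored verbatim and never evaluated):

```python
from __future__ import annotations

import inspect

def f(x: tuple[int, int]) -> None:
    # With postponed evaluation (PEP 563), the annotation above is kept
    # as the source string "tuple[int, int]" and never executed, so it
    # cannot raise on 3.8.
    pass

annot = inspect.signature(f).parameters["x"].annotation
print(type(annot), annot)  # <class 'str'> tuple[int, int]
```

So the "string" is not a bug: it is exactly what postponed evaluation promises, which is why keying the normalizers dict by strings works.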