
The use of tuple in annotations is Python 3.9 #111


Closed
lezcano opened this issue Apr 10, 2023 · 6 comments · Fixed by #113

Comments

@lezcano (Collaborator) commented Apr 10, 2023

Subscripting the builtin tuple in annotations requires Python 3.9; see https://docs.python.org/3/library/typing.html#typing.Tuple. Accordingly, we should from __future__ import annotations. Now, interestingly enough, when doing that in the impl files, the annotation returned by inspect.signature(func).parameters["param"].annotation happens to be a string...
Not sure what's going on there (perhaps @rgommers knows?), but we should test all this with Python 3.8, as it's the oldest Python that PyTorch supports.
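
For reference, a minimal self-contained sketch of both observations (shape_of is a made-up function, not project code): on Python 3.8, subscripting the builtin tuple in an evaluated annotation raises TypeError at definition time, while the future import keeps the annotation as a string, which is exactly what inspect then reports.

from __future__ import annotations  # PEP 563: annotations are kept as strings

import inspect

# Without the future import, `tuple[int, ...]` raises
# "TypeError: 'type' object is not subscriptable" at definition time
# on Python 3.8; with it, the annotation is never evaluated.
def shape_of(x) -> tuple[int, ...]:
    return x.shape

print(repr(inspect.signature(shape_of).return_annotation))
# "'tuple[int, ...]'" -- a string, not a type object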

@ev-br (Collaborator) commented Apr 10, 2023

typing.Tuple maybe?

@lezcano (Collaborator, Author) commented Apr 10, 2023

I wanted to use the import as it's more future-proof: typing.Tuple is deprecated as of 3.9.
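
For comparison, the two spellings side by side (f and g are illustrative): typing.Tuple runs everywhere but is deprecated since 3.9, while the builtin generic needs Python 3.9 at runtime unless the annotation is deferred, either by quoting it or via the future import.

from typing import Tuple

def f() -> Tuple[int, int]:    # works on 3.8+, deprecated since 3.9
    return 1, 2

def g() -> "tuple[int, int]":  # the string defers evaluation, so this
    return 1, 2                # also works on 3.8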

@rgommers (Member) commented

That doesn't ring any bells, sorry. I'd have thought that this should "just work" with from __future__ import annotations.

@ev-br (Collaborator) commented Apr 11, 2023

From a cursory look, the string behavior seems to be what https://peps.python.org/pep-0563/ mandates. For one,

With this PEP, function and variable annotations will no longer be evaluated at definition time. Instead, a string form will be preserved in the respective __annotations__ dictionary. Static type checkers will see no difference in behavior, whereas tools using annotations at runtime will have to perform postponed evaluation.

So maybe we can get away with simply changing the dict keys to be strings rather than the types themselves: https://github.com/Quansight-Labs/numpy_pytorch_interop/blob/main/torch_np/_normalizations.py#L99
This guess is based on dropping into pdb inside normalizer and inspecting:

(Pdb) p func
<function empty at 0x7ffa327ec3a0>
(Pdb) p params['dtype']
<Parameter "dtype: 'DTypeLike' = <class 'float'>">
(Pdb) p params['dtype'].annotation
'DTypeLike'
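
For completeness: if we ever do need the type objects back, typing.get_type_hints evaluates the strings against the defining module's namespace. A minimal sketch, where DTypeLike is a stand-in alias rather than the project's real one:

from __future__ import annotations

import typing

DTypeLike = typing.Union[str, type]  # stand-in, not the real alias

def empty(shape, dtype: DTypeLike = float):
    ...

# Under PEP 563 the raw annotation is the bare string ...
print(empty.__annotations__["dtype"])         # DTypeLike
# ... and runtime consumers must resolve it explicitly:
print(typing.get_type_hints(empty)["dtype"])  # typing.Union[str, type]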

I'll take a closer look next week unless someone beats me to it.

EDIT: However, I'm really not sure what the intended direction of this whole annotation story is, or what the language-lawyer way to do it is. So if tweaking our usage is not very invasive, and if it can be made to work on Python 3.8 and above (this will need some CI), I'd take a pragmatic approach.

EDIT2:

changing the dict keys to be strings rather than the types themselves

Or duplicating string and class forms:

{'ArrayLike': normalize_array_like,
 ArrayLike: normalize_array_like,
...
}
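
With a dual-keyed table like that, the lookup side stays trivial. A sketch using the names from torch_np/_normalizations.py (so a fragment, not a standalone script; find_normalizer is a made-up helper): under PEP 563 the annotation arrives as the string form, without the future import it arrives as the type/alias itself, and either one is a key.

normalizers = {
    "ArrayLike": normalize_array_like,
    ArrayLike: normalize_array_like,
    "Optional[ArrayLike]": normalize_optional_array_like,
    Optional[ArrayLike]: normalize_optional_array_like,
    # ... remaining entries, each in both forms
}

def find_normalizer(annotation):
    # the annotation is either the string ('ArrayLike') or the
    # type/alias object itself; both are keys in the table above
    return normalizers.get(annotation)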

@lezcano (Collaborator, Author) commented Apr 11, 2023

Changing the dict keys to strings and adding from __future__ import annotations to all the files that use the annotations seems like the right approach. I might send a PR later this week with this change and a few other proposals, if I find the time.

@ev-br (Collaborator) commented Apr 11, 2023

Just dumping it here for now: the diff below makes the tests pass locally for me with Python 3.8 and PyTorch 2.0.0.

diff --git a/torch_np/_funcs_impl.py b/torch_np/_funcs_impl.py
index 53b69bb..cdf7e7b 100644
--- a/torch_np/_funcs_impl.py
+++ b/torch_np/_funcs_impl.py
@@ -5,6 +5,7 @@ pytorch tensors.
 """
 # Contents of this module ends up in the main namespace via _funcs.py
 # where type annotations are used in conjunction with the @normalizer decorator.
+from __future__ import annotations
 
 import builtins
 import math
diff --git a/torch_np/_normalizations.py b/torch_np/_normalizations.py
index d275744..2d08de7 100644
--- a/torch_np/_normalizations.py
+++ b/torch_np/_normalizations.py
@@ -1,5 +1,7 @@
 """ "Normalize" arguments: convert array_likes to tensors, dtypes to torch dtypes and so on.
 """
+from __future__ import annotations
+
 import functools
 import operator
 import typing
@@ -97,15 +99,15 @@ def normalize_outarray(arg, name=None):
 
 
 normalizers = {
-    ArrayLike: normalize_array_like,
-    Optional[ArrayLike]: normalize_optional_array_like,
-    Sequence[ArrayLike]: normalize_seq_array_like,
-    Optional[NDArray]: normalize_ndarray,
-    Optional[OutArray]: normalize_outarray,
-    NDArray: normalize_ndarray,
-    DTypeLike: normalize_dtype,
-    SubokLike: normalize_subok_like,
-    AxisLike: normalize_axis_like,
+    'ArrayLike': normalize_array_like,
+    'Optional[ArrayLike]': normalize_optional_array_like,
+    'Sequence[ArrayLike]': normalize_seq_array_like,
+    'Optional[NDArray]': normalize_ndarray,
+    'Optional[OutArray]': normalize_outarray,
+    'NDArray': normalize_ndarray,
+    'DTypeLike': normalize_dtype,
+    'SubokLike': normalize_subok_like,
+    'AxisLike': normalize_axis_like,
 }
 
 
diff --git a/torch_np/_ufuncs.py b/torch_np/_ufuncs.py
index 8695bb8..516a31e 100644
--- a/torch_np/_ufuncs.py
+++ b/torch_np/_ufuncs.py
@@ -1,3 +1,5 @@
+from __future__ import annotations
+
 from typing import Optional
 
 import torch
diff --git a/torch_np/linalg.py b/torch_np/linalg.py
index 1779297..9a46466 100644
--- a/torch_np/linalg.py
+++ b/torch_np/linalg.py
@@ -1,3 +1,6 @@
+from __future__ import annotations
+
+
 import functools
 import math
 from typing import Sequence
diff --git a/torch_np/random.py b/torch_np/random.py
index 862357b..842c617 100644
--- a/torch_np/random.py
+++ b/torch_np/random.py
@@ -5,6 +5,8 @@ NumPy has strict guarantees on reproducibility etc; here we don't give any.
 Q: default dtype is float64 in numpy
 
 """
+from __future__ import annotations
+
 from math import sqrt
 from typing import Optional
 
diff --git a/torch_np/tests/numpy_tests/core/test_dtype.py b/torch_np/tests/numpy_tests/core/test_dtype.py
index 892bb18..947835d 100644
--- a/torch_np/tests/numpy_tests/core/test_dtype.py
+++ b/torch_np/tests/numpy_tests/core/test_dtype.py
@@ -306,8 +306,8 @@ class TestClassGetItem:
         assert np.dtype[Any]
 
 
-@pytest.mark.skipif(sys.version_info >= (3, 9), reason="Requires python 3.8")
+@pytest.mark.skipif(sys.version_info >= (3, 9), reason="Requires python 3.9")
 def test_class_getitem_38() -> None:
     match = "Type subscription requires python >= 3.9"
-    with pytest.raises(TypeError, match=match):
+    with pytest.raises(TypeError):  # , match=match):
         np.dtype[Any]
diff --git a/torch_np/tests/numpy_tests/core/test_scalar_methods.py b/torch_np/tests/numpy_tests/core/test_scalar_methods.py
index fdbd1e8..4f6f1ce 100644
--- a/torch_np/tests/numpy_tests/core/test_scalar_methods.py
+++ b/torch_np/tests/numpy_tests/core/test_scalar_methods.py
@@ -179,7 +179,7 @@ class TestClassGetItem:
 @pytest.mark.parametrize("cls", [np.number, np.complexfloating, np.int64])
 def test_class_getitem_38(cls: Type[np.number]) -> None:
     match = "Type subscription requires python >= 3.9"
-    with pytest.raises(TypeError, match=match):
+    with pytest.raises(TypeError):  #, match=match):
         cls[Any]
