
Commit 9e919c7

Changes due to new numpy scalar promotion rules
1. Changed the autocaster due to the new promotion rules. With "weak promotion" of Python types in NumPy 2.0, the statement `1.1 == np.asarray(1.1).astype('float32')` is True, whereas in NumPy 1.26 it was False. However, in NumPy 1.26, `1.1 == np.asarray([1.1]).astype('float32')` was True, so scalar and array behavior agree in NumPy 2.0 while they differed in NumPy 1.26 (illustrated in the snippet below). Essentially, in NumPy 2.0, when Python floats are used in operations with NumPy floats or arrays, the dtype of the NumPy operand is used (i.e. the Python value is treated as having the dtype of the NumPy object). To preserve the behavior of `NumpyAutocaster` from NumPy <= 1.26, I've added an explicit conversion of the value to a NumPy type using `np.asarray` in the check that decides which dtype to cast to.

2. Updates due to the new NumPy conversion rules for out-of-bounds Python ints. In NumPy 2.0, out-of-bounds Python ints are no longer converted automatically; an `OverflowError` is raised instead. For instance, converting 255 to int8 raises an error instead of returning -1. To explicitly force the conversion, we must use `np.asarray(value).astype(dtype)` rather than `np.asarray(value, dtype=dtype)`. The code in `TensorType.filter` has been changed to the newly recommended way to downcast (see the sketch after the `pytensor/tensor/type.py` diff below), and the error type caught by some tests has been changed from `TypeError` to `OverflowError`.
1 parent b349a9a commit 9e919c7
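A minimal sketch of the promotion change described in item 1 above; the expected results follow the commit message, and the exact behavior depends on which NumPy version is installed:

import numpy as np

# Python float compared with a float32 0-d array.
# NumPy 2.0 (NEP 50 "weak promotion"): 1.1 is treated as float32 here, so this is True.
# NumPy 1.26 (value-based promotion): the float32 value is promoted to float64
# (~1.10000002...), which differs from 1.1, so this was False.
print(1.1 == np.asarray(1.1).astype("float32"))

# Python float compared with a float32 1-d array.
# True under both NumPy 1.26 and 2.0, so scalar and array behavior now agree.
print(1.1 == np.asarray([1.1]).astype("float32"))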

4 files changed (+5, -3 lines)

pytensor/scalar/basic.py

Lines changed: 3 additions & 1 deletion

@@ -183,7 +183,9 @@ def __call__(self, x):

         for dtype in try_dtypes:
             x_ = np.asarray(x).astype(dtype=dtype)
-            if np.all(x == x_):
+            if np.all(
+                np.asarray(x) == x_
+            ):  # use np.asarray(x) to match TensorType.filter
                 break
         # returns either an exact x_==x, or the last cast x_
         return x_
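A rough illustration (not part of the commit) of why wrapping `x` in `np.asarray` restores the pre-2.0 autocaster behavior: once the Python float is a float64 array, weak promotion no longer applies, so the comparison with the float32 candidate happens at float64 precision, as it did in NumPy 1.26:

import numpy as np

x = 1.1  # Python float handed to NumpyAutocaster.__call__
x_ = np.asarray(x).astype("float32")  # candidate cast tried in the loop above

# Old check: under NumPy 2.0 weak promotion the Python float adopts float32,
# so the comparison succeeds and the loop would accept the float32 cast early.
print(np.all(x == x_))              # True on NumPy 2.0, False on 1.26

# New check: np.asarray(x) is a float64 array, so the comparison runs in
# float64 and the precision loss of the float32 cast is detected again.
print(np.all(np.asarray(x) == x_))  # False on both NumPy 1.26 and 2.0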

pytensor/tensor/type.py

Lines changed: 1 addition & 1 deletion

@@ -178,7 +178,7 @@ def filter(self, data, strict=False, allow_downcast=None) -> np.ndarray:
         else:
             if allow_downcast:
                 # Convert to self.dtype, regardless of the type of data
-                data = np.asarray(data, dtype=self.dtype)
+                data = np.asarray(data).astype(self.dtype)
                 # TODO: consider to pad shape with ones to make it consistent
                 # with self.broadcastable... like vector->row type thing
             else:
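A small sketch of the conversion-rule change behind this one-line diff, following item 2 of the commit message (the error wording may vary between NumPy releases):

import numpy as np

value, dtype = 255, "int8"

# NumPy 2.0: out-of-bounds Python ints are no longer converted silently;
# this raises OverflowError (NumPy 1.26 returned -1 here instead).
try:
    print(np.asarray(value, dtype=dtype))
except OverflowError as err:
    print("rejected:", err)

# Explicitly forcing the downcast still wraps around to -1, which is why
# TensorType.filter now uses np.asarray(data).astype(self.dtype).
print(np.asarray(value).astype(dtype))  # -1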

tests/compile/function/test_pfunc.py

Lines changed: 1 addition & 0 deletions

@@ -335,6 +335,7 @@ def test_allow_input_downcast_int(self):
         h = pfunc([a, b, c], (a + b + c))  # Default: allow_input_downcast=None
         # Everything here should behave like with False
         assert np.all(h([3], [6], 0) == 9)
+
         with pytest.raises(TypeError):
             h([3], np.array([6], dtype="int16"), 0)

tests/tensor/test_basic.py

Lines changed: 0 additions & 1 deletion

@@ -3198,7 +3198,6 @@ def test_autocast_custom():
     assert (dvector() + 1.1).dtype == "float64"
     assert (fvector() + np.float32(1.1)).dtype == "float32"
     assert (fvector() + np.float64(1.1)).dtype == "float64"
-    assert (fvector() + 1.1).dtype == config.floatX
     assert (lvector() + np.int64(1)).dtype == "int64"
     assert (lvector() + np.int32(1)).dtype == "int64"
     assert (lvector() + np.int16(1)).dtype == "int64"
