One of the changes in NumPy 2 is to the [behavior of type
promotion](https://numpy.org/devdocs/numpy_2_0_migration_guide.html#changes-to-numpy-data-type-promotion).
A possible negative impact of the changes is that some operations
involving scalar types can lead to lower precision, or even overflow.
For example, `uint8(100) + 200` previously (in NumPy < 2.0) produced a
`uint16` value, but now results in a `uint8` value and an overflow
_warning_ (not an error).
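For instance, the following snippet (assuming NumPy >= 2.0) reproduces
the warning:
```python
import numpy as np

# Under NumPy 2, the Python int 200 is treated as a "weak" scalar, so
# the result keeps the uint8 dtype and wraps: (100 + 200) % 256 == 44.
# NumPy emits "RuntimeWarning: overflow encountered in scalar add".
result = np.uint8(100) + 200
print(result, result.dtype)  # 44 uint8
```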
This can have an impact on Cirq. For example, simulator measurement
result values are `uint8`s, and in some places arrays of those values
are summed; this overflows whenever the sum exceeds 255. It would not
be appropriate to make the measurement values themselves wider than
`uint8`, so in cases like this, the proper solution is probably to
make sure that wherever the values are summed or otherwise numerically
manipulated, `uint16` or wider values are used, as sketched below.
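A hypothetical illustration of that fix (the arrays here are invented,
not actual Cirq code): widen the operands before accumulating, rather
than changing the stored measurement dtype.
```python
import numpy as np

# Hypothetical measurement records: one uint8 value per repetition.
a = np.full(10, 200, dtype=np.uint8)
b = np.full(10, 100, dtype=np.uint8)

wrapped = a + b                    # stays uint8; wraps silently to 44
widened = a.astype(np.uint16) + b  # promote first; correct value 300
```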
NumPy 2 offers a new option
(`np._set_promotion_state("weak_and_warn")`) to produce warnings where
data types are changed. Commit 6cf50eb adds a new command-line option
to our pytest framework, such that running
```bash
check/pytest --warn-numpy-data-promotion
```
will turn on this NumPy setting. Running `check/pytest` with this
option enabled revealed quite a lot of warnings. The present commit
changes code in places where those warnings were raised, in an effort
to eliminate as many of them as possible.
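For reference, the wiring for such a flag can be quite small. The
following is only a sketch of the idea using standard pytest hooks;
the actual implementation lives in commit 6cf50eb.
```python
# conftest.py (a sketch, not the actual Cirq conftest)
import numpy as np

def pytest_addoption(parser):
    # Register the opt-in flag; off by default.
    parser.addoption("--warn-numpy-data-promotion", action="store_true", default=False)

def pytest_configure(config):
    if config.getoption("--warn-numpy-data-promotion"):
        # Ask NumPy 2 to warn whenever weak promotion changes a result dtype.
        np._set_promotion_state("weak_and_warn")
```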
It is certainly the case that not all of the type promotion warnings
are meaningful. Unfortunately, I sometimes found it difficult to be
sure which ones _are_ meaningful, in part because Cirq's code has many
layers and uses ndarrays heavily, making the impact of a type demotion
(say, from `float64` to `float32`) hard for me to assess. In view of
this, I erred on the side of caution and tried to avoid losses of
precision. The principles followed in the changes are roughly the
following:
* Don't worry about warnings about promotion from `complex64` to
`complex128`, as this widening obviously does not reduce precision.
* If a warning involves an operation on an ndarray, change the code to
read the actual data type of the array's elements rather than assume a
specific data type. This is why some of the changes look like the
following, where the code reaches into an ndarray for the dtype of an
element and later uses that dtype's `type` attribute (the
corresponding scalar constructor) to cast another value (a standalone
illustration appears after this list):
```python
dtype = args.target_tensor.flat[0].dtype
# ...
args.target_tensor[subspace] *= dtype.type(x)
```
* In cases where the above was not possible, or where it was obvious
what the type must always be, the changes add type casts with
explicit types like `complex(x)` or `np.float64(x)`.
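To make the first pattern concrete outside of Cirq's apply-unitary
machinery, here is a self-contained sketch (the array and variable
names are invented for illustration), ending with the explicit-cast
fallback:
```python
import numpy as np

# Respect whatever precision the caller chose for the tensor.
tensor = np.zeros((2, 2), dtype=np.complex64)
dtype = tensor.flat[0].dtype       # complex64 here, but not hard-coded

x = 0.5j
tensor[0, :] += dtype.type(x)      # cast x to match; avoids promotion

# When the element dtype is unavailable or fixed by construction, an
# explicit cast such as complex(x) or np.float64(x) serves instead.
angle = np.float64(0.25)           # widen explicitly, never implicitly
```
In both cases the goal is the same: make the dtype of every scalar
operand explicit, so that the promotion rules cannot silently narrow
an intermediate result.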
It is likely that this approach resulted in some unnecessary
up-promotion of values and may have impacted run-time performance.
Some simple overall timing of `check/pytest` did not reveal a glaring
slowdown from the changes, but that doesn't mean real applications
won't be affected. Perhaps a future review can evaluate whether
speedups are possible.