
MNT: Bump dev pin on NumPy #60987


Merged: 16 commits, Apr 3, 2025
4 changes: 4 additions & 0 deletions asv_bench/benchmarks/indexing_engines.py
@@ -67,6 +67,10 @@ class NumericEngineIndexing:
     def setup(self, engine_and_dtype, index_type, unique, N):
         engine, dtype = engine_and_dtype

+        if index_type == "non_monotonic" and dtype in ["int16", "int8", "uint8"]:
+            # Values overflow
+            raise NotImplementedError
+
         if index_type == "monotonic_incr":
             if unique:
                 arr = np.arange(N * 3, dtype=dtype)
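The skip above guards against dtype overflow: the benchmark builds index values up to 3×N, which small integer dtypes cannot represent once N is large, so `np.arange` would silently wrap. A minimal illustration of the bounds involved (`N` here is a hypothetical benchmark size, not the value the suite uses):

```python
import numpy as np

N = 200_000  # hypothetical benchmark size

# int8 holds only -128..127; int16 tops out at 32767
assert np.iinfo(np.int8).max == 127
assert 3 * N > np.iinfo(np.int16).max

# np.arange would wrap for these dtypes, corrupting the benchmark data,
# hence the NotImplementedError skip for non-monotonic small-int cases.
```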
4 changes: 2 additions & 2 deletions doc/source/getting_started/comparison/comparison_with_r.rst
@@ -383,7 +383,7 @@ In Python, since ``a`` is a list, you can simply use list comprehension.

 .. ipython:: python

-    a = np.array(list(range(1, 24)) + [np.NAN]).reshape(2, 3, 4)
+    a = np.array(list(range(1, 24)) + [np.nan]).reshape(2, 3, 4)
     pd.DataFrame([tuple(list(x) + [val]) for x, val in np.ndenumerate(a)])

 meltlist
@@ -402,7 +402,7 @@ In Python, this list would be a list of tuples, so

 .. ipython:: python

-    a = list(enumerate(list(range(1, 5)) + [np.NAN]))
+    a = list(enumerate(list(range(1, 5)) + [np.nan]))
     pd.DataFrame(a)

 For more details and examples see :ref:`the Intro to Data Structures
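These doc edits track a NumPy 2.0 removal: the uppercase `np.NAN` and `np.NaN` aliases are gone, and `np.nan` is the only spelling. A quick sketch of the behavior (the major-version check is illustrative):

```python
import numpy as np

# np.nan is the canonical lowercase spelling and works on all versions
assert np.isnan(np.nan)

# The uppercase aliases were removed in NumPy 2.0
if int(np.__version__.split(".")[0]) >= 2:
    assert not hasattr(np, "NAN")
    assert not hasattr(np, "NaN")
```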
1 change: 1 addition & 0 deletions doc/source/user_guide/basics.rst
@@ -2063,6 +2063,7 @@ or a passed ``Series``), then it will be preserved in DataFrame operations. Furt
 different numeric dtypes will **NOT** be combined. The following example will give you a taste.

 .. ipython:: python
+    :okwarning:

     df1 = pd.DataFrame(np.random.randn(8, 1), columns=["A"], dtype="float32")
     df1
2 changes: 2 additions & 0 deletions doc/source/user_guide/enhancingperf.rst
@@ -171,6 +171,7 @@ can be improved by passing an ``np.ndarray``.
 In [4]: %%cython
    ...: cimport numpy as np
    ...: import numpy as np
+   ...: np.import_array()
    ...: cdef double f_typed(double x) except? -2:
    ...:     return x * (x - 1)
    ...: cpdef double integrate_f_typed(double a, double b, int N):
@@ -225,6 +226,7 @@ and ``wraparound`` checks can yield more performance.
    ...: cimport cython
    ...: cimport numpy as np
    ...: import numpy as np
+   ...: np.import_array()
    ...: cdef np.float64_t f_typed(np.float64_t x) except? -2:
    ...:     return x * (x - 1)
    ...: cpdef np.float64_t integrate_f_typed(np.float64_t a, np.float64_t b, np.int64_t N):
1 change: 1 addition & 0 deletions doc/source/whatsnew/v0.11.0.rst
@@ -73,6 +73,7 @@ Dtypes
 Numeric dtypes will propagate and can coexist in DataFrames. If a dtype is passed (either directly via the ``dtype`` keyword, a passed ``ndarray``, or a passed ``Series``), then it will be preserved in DataFrame operations. Furthermore, different numeric dtypes will **NOT** be combined. The following example will give you a taste.

 .. ipython:: python
+    :okwarning:

     df1 = pd.DataFrame(np.random.randn(8, 1), columns=['A'], dtype='float32')
     df1
2 changes: 1 addition & 1 deletion environment.yml
@@ -23,7 +23,7 @@ dependencies:

 # required dependencies
 - python-dateutil
-- numpy<2
+- numpy<3

 # optional dependencies
 - beautifulsoup4>=4.11.2
4 changes: 2 additions & 2 deletions pandas/compat/numpy/__init__.py
@@ -36,8 +36,8 @@
         r".*In the future `np\.long` will be defined as.*",
         FutureWarning,
     )
-    np_long = np.long  # type: ignore[attr-defined]
-    np_ulong = np.ulong  # type: ignore[attr-defined]
+    np_long = np.long
+    np_ulong = np.ulong
 except AttributeError:
     np_long = np.int_
     np_ulong = np.uint
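The hunk above can drop the `type: ignore` comments because `np.long` and `np.ulong` exist again as real attributes in NumPy 2.0 (as the platform C `long`/`unsigned long`), after being removed in 1.24. A version-agnostic sketch of the same fallback pattern, runnable on either major version:

```python
import numpy as np

try:
    # NumPy >= 2.0: np.long / np.ulong are the platform C long types
    np_long = np.long
    np_ulong = np.ulong
except AttributeError:
    # NumPy 1.24-1.26: the aliases are absent; fall back to np.int_ / np.uint
    np_long = np.int_
    np_ulong = np.uint

# Either branch yields signed/unsigned integer types
assert np.dtype(np_long).kind == "i"
assert np.dtype(np_ulong).kind == "u"
```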
3 changes: 2 additions & 1 deletion pandas/core/common.py
@@ -246,7 +246,8 @@ def asarray_tuplesafe(values: Iterable, dtype: NpDtype | None = None) -> ArrayLi
     with warnings.catch_warnings():
         # Can remove warning filter once NumPy 1.24 is min version
         if not np_version_gte1p24:
-            warnings.simplefilter("ignore", np.VisibleDeprecationWarning)
+            # np.VisibleDeprecationWarning only in np.exceptions in 2.0
+            warnings.simplefilter("ignore", np.VisibleDeprecationWarning)  # type: ignore[attr-defined]
         result = np.asarray(values, dtype=dtype)
 except ValueError:
     # Using try/except since it's more performant than checking is_list_like
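Context for the comment added in the hunk above: NumPy 2.0 keeps `VisibleDeprecationWarning` only under `np.exceptions` (a namespace introduced in 1.25), so the top-level lookup needs a `type: ignore` when checked against 2.x stubs. One version-robust way to resolve the class (a sketch, not pandas' actual code):

```python
import numpy as np

# Prefer np.exceptions (NumPy >= 1.25); fall back to the old top-level alias
VisibleDeprecationWarning = getattr(
    getattr(np, "exceptions", np), "VisibleDeprecationWarning"
)

# It is an ordinary warning category, usable with warnings filters
assert issubclass(VisibleDeprecationWarning, UserWarning)
```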
7 changes: 6 additions & 1 deletion pandas/core/internals/managers.py
@@ -572,7 +572,12 @@ def setitem(self, indexer, value) -> Self:
                 0, blk_loc, values
             )
             # first block equals values
-            self.blocks[0].setitem((indexer[0], np.arange(len(blk_loc))), value)
+            col_indexer: slice | np.ndarray
+            if isinstance(indexer[1], slice) and indexer[1] == slice(None):
+                col_indexer = slice(None)
+            else:
+                col_indexer = np.arange(len(blk_loc))
+            self.blocks[0].setitem((indexer[0], col_indexer), value)
             return self
         # No need to split if we either set all columns or on a single block
         # manager
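The new branch prefers `slice(None)` over an equivalent `np.arange` when every column is targeted, because a full slice stays on NumPy's basic-indexing path (a view, no allocation), whereas an integer array forces fancy indexing (a copy). A standalone sketch of the difference:

```python
import numpy as np

arr = np.arange(12).reshape(3, 4)

via_slice = arr[0, slice(None)]   # basic indexing: a view into arr
via_fancy = arr[0, np.arange(4)]  # fancy indexing: a fresh copy

assert np.shares_memory(via_slice, arr)
assert not np.shares_memory(via_fancy, arr)
```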
2 changes: 1 addition & 1 deletion pandas/tests/extension/date/array.py
@@ -113,7 +113,7 @@ def __init__(
         # error: "object_" object is not iterable
         obj = np.char.split(dates, sep="-")
-        for (i,), (y, m, d) in np.ndenumerate(obj):  # type: ignore[misc]
+        for (i,), (y, m, d) in np.ndenumerate(obj):
             self._year[i] = int(y)
             self._month[i] = int(m)
             self._day[i] = int(d)
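The loop above iterates `np.ndenumerate` over the object array produced by `np.char.split`: each element is a `[year, month, day]` list of strings that unpacks directly in the loop target. A self-contained sketch with hypothetical sample dates:

```python
import numpy as np

dates = np.array(["2024-01-15", "2025-03-02"])  # hypothetical input
obj = np.char.split(dates, sep="-")  # object array of [y, m, d] string lists

# ndenumerate yields ((index,), [y, m, d]) pairs for a 1-D array
parsed = [(int(y), int(m), int(d)) for (_i,), (y, m, d) in np.ndenumerate(obj)]
assert parsed == [(2024, 1, 15), (2025, 3, 2)]
```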
2 changes: 1 addition & 1 deletion requirements-dev.txt
@@ -14,7 +14,7 @@ pytest-localserver
 PyQt5>=5.15.9
 coverage
 python-dateutil
-numpy<2
+numpy<3

Review comment (Contributor): At what point do we stop numpy < 2 support?

Reply (Member Author): SPEC 0 says Q3 2025. I would suggest pandas 4.0, but perhaps it could be earlier. In any case, this doesn't seem like the best place to discuss this at length.

 beautifulsoup4>=4.11.2
 blosc
 bottleneck>=1.3.6