BUG: resampling DataFrame with DateTimeIndex with holes and uint64 columns leads to error on pandas==1.3.2 (not in 1.2.5) #43329
Comments
Hi @julienlmet, could you please confirm if you are getting the error on …?
Looks like ensure_int_or_float cast uint8 dtypes to int64, and failing to do that (when there are empty groups, as there are here) raises. So we need to restore that particular casting.
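The casting rule at issue can be illustrated with plain NumPy (a minimal sketch, not pandas-internal code): a "safe" cast from uint8 to int64 succeeds because every uint8 value fits in an int64, while the same cast from uint64 is refused because large values would overflow.

```python
import numpy as np

# uint8 values all fit in int64, so a "safe" cast is allowed.
arr8 = np.array([0, 1, 255], dtype="uint8")
print(arr8.astype("int64", casting="safe").dtype)  # int64

# uint64 values above 2**63 - 1 would overflow int64, so NumPy
# refuses the cast under the "safe" rule and raises TypeError.
arr64 = np.array([0, 1], dtype="uint64")
try:
    arr64.astype("int64", casting="safe")
except TypeError:
    print("safe cast from uint64 to int64 refused")
```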
Hi @debnathshoham, here is the traceback that I get (sorry, I should have given it when submitting the issue):

Changing milestone to 1.3.5.
Restoring the previous casting would fix the regression case in the OP, but there is an underlying issue (latent bug): we cannot (and could not in 1.2.5 and before) resample and aggregate a column with unsigned integer values when there are empty groups. Considering a simplified code sample (can maybe be used as the regression test):

import numpy as np
import pandas as pd
print(pd.__version__)
df = pd.DataFrame(
index=pd.date_range(start="2000-01-01", end="2000-01-03 23", freq="12H"),
columns=["x"],
data=[0, 1, 0] * 2,
dtype="uint8",
)
df = df.loc[(df.index < "2000-01-02") | (df.index > "2000-01-03"), :]
result = df.resample("D").max()
print(result)
expected = pd.DataFrame(
[1, np.nan, 0],
columns=["x"],
index=pd.date_range(start="2000-01-01", end="2000-01-03 23", freq="D"),
)
pd.testing.assert_frame_equal(result, expected)

This gives, on 1.2.5:

…

and on master:

…
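The underlying constraint here is that an empty resampling bin has to be filled with NaN, which only exists for floating dtypes, so aggregating an integer column over bins with holes forces an upcast. A minimal illustration with a signed integer column (my own sketch, not taken from the issue):

```python
import pandas as pd

# Two observations with a full day missing between them.
s = pd.Series(
    [1, 2],
    index=pd.to_datetime(["2000-01-01", "2000-01-03"]),
    dtype="int64",
)
out = s.resample("D").max()
print(out.dtype)         # float64: the empty 2000-01-02 bin becomes NaN
print(out.isna().sum())  # 1
```

For signed integers this upcast to float64 is always possible; for uint64 there is no lossless float fallback, which is why the unsigned case is the awkward one.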
I think this is the correct result, and this issue title is misleading; the title should probably be "…". Restoring ensure_int_or_float:

diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 815a0a2040..132d6c9610 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -111,6 +111,54 @@ def ensure_str(value: bytes | Any) -> str:
return value
+def ensure_int_or_float(arr: ArrayLike, copy: bool = False) -> np.ndarray:
+ """
+ Ensure that an dtype array of some integer dtype
+ has an int64 dtype if possible.
+ If it's not possible, potentially because of overflow,
+ convert the array to float64 instead.
+ Parameters
+ ----------
+ arr : array-like
+ The array whose data type we want to enforce.
+ copy: bool
+ Whether to copy the original array or reuse
+ it in place, if possible.
+ Returns
+ -------
+ out_arr : The input array cast as int64 if
+ possible without overflow.
+ Otherwise the input array cast to float64.
+ Notes
+ -----
+ If the array is explicitly of type uint64 the type
+ will remain unchanged.
+ """
+ # TODO: GH27506 potential bug with ExtensionArrays
+ try:
+ # error: No overload variant of "astype" of "ExtensionArray" matches
+ # argument types "str", "bool", "str"
+ return arr.astype( # type: ignore[call-overload]
+ "int64", copy=copy, casting="safe"
+ )
+ except TypeError:
+ pass
+ try:
+ # error: No overload variant of "astype" of "ExtensionArray" matches
+ # argument types "str", "bool", "str"
+ return arr.astype( # type:ignore[call-overload]
+ "uint64", copy=copy, casting="safe"
+ )
+ except TypeError:
+ if is_extension_array_dtype(arr.dtype):
+ # pandas/core/dtypes/common.py:168: error: Item "ndarray" of
+ # "Union[ExtensionArray, ndarray]" has no attribute "to_numpy" [union-attr]
+ return arr.to_numpy( # type: ignore[union-attr]
+ dtype="float64", na_value=np.nan
+ )
+ return arr.astype("float64", copy=copy)
+
+
def ensure_python_int(value: int | np.integer) -> int:
"""
Ensure that a value is a python int.
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 60c8851f05..bf4b219455 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -44,6 +44,7 @@ from pandas.core.dtypes.cast import (
from pandas.core.dtypes.common import (
ensure_float64,
ensure_int64,
+ ensure_int_or_float,
ensure_platform_int,
is_1d_only_ea_obj,
is_bool_dtype,
@@ -500,9 +501,7 @@ class WrappedCythonOp:
elif is_bool_dtype(dtype):
values = values.astype("int64")
elif is_integer_dtype(dtype):
- # e.g. uint8 -> uint64, int16 -> int64
- dtype_str = dtype.kind + "8"
- values = values.astype(dtype_str, copy=False)
+ values = ensure_int_or_float(values)
elif is_numeric:
if not is_complex_dtype(dtype):
values = ensure_float64(values)
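Stripped of the ExtensionArray branch and the type-ignore comments, the restored helper in the diff above reduces to the following standalone sketch (NumPy arrays only), which shows the intended dtype outcomes:

```python
import numpy as np

def ensure_int_or_float(arr: np.ndarray, copy: bool = False) -> np.ndarray:
    """Cast to int64 if safe, else uint64 if safe, else float64."""
    try:
        return arr.astype("int64", copy=copy, casting="safe")
    except TypeError:
        pass
    try:
        return arr.astype("uint64", copy=copy, casting="safe")
    except TypeError:
        return arr.astype("float64", copy=copy)

print(ensure_int_or_float(np.array([1], dtype="uint8")).dtype)   # int64
print(ensure_int_or_float(np.array([1], dtype="uint64")).dtype)  # uint64
print(ensure_int_or_float(np.array([1.5])).dtype)                # float64
```

Note the key difference from the 1.3.x behavior the diff replaces: uint8 is widened to int64 (which can represent NaN-bearing results after a later float upcast) instead of to uint64, while genuine uint64 input is left unsigned.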
@jbrockmendel any objections to #43329 (comment)? If not, I will open a PR to get this fixed for 1.3.5.
No objection, though I'd suggest something more narrowly targeted:

…
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
(optional) I have confirmed this bug exists on the master branch of pandas.
Note: Please read this guide detailing how to provide the necessary information for us to reproduce your bug.
Code Sample, a copy-pastable example
Problem description

With pandas==1.3.2, the above code block leads to "RuntimeError: empty group with uint64_t". It was not the case with pandas==1.1.0, for instance. Not an issue for me (problem solved by specifying dtype), but probably an issue to solve.

Expected Output
Given in code sample section
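The workaround the reporter alludes to (specifying a different dtype up front) can be sketched like this, reusing the reproducer's data but with a float column so the empty bin can hold NaN directly (my own illustration, not from the issue):

```python
import pandas as pd

df = pd.DataFrame(
    index=pd.date_range(start="2000-01-01", end="2000-01-03 23", freq="12h"),
    columns=["x"],
    data=[0, 1, 0] * 2,
    dtype="float64",  # instead of "uint8": NaN is representable, so no error
)
df = df.loc[(df.index < "2000-01-02") | (df.index > "2000-01-03"), :]
result = df.resample("D").max()
print(result)  # x: 1.0 for Jan 1, NaN for the empty Jan 2 bin, 0.0 for Jan 3
```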
Output of pd.show_versions()
INSTALLED VERSIONS
commit : 5f648bf
python : 3.8.5.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.72-microsoft-standard-WSL2
Version : #1 SMP Wed Oct 28 23:40:43 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.3.2
numpy : 1.21.2
pytz : 2021.1
dateutil : 2.8.2
pip : 21.2.4
setuptools : 57.4.0
Cython : None
pytest : 6.2.5
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 7.27.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : 3.4.3
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : 1.7.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None