Code Sample, a copy-pastable example if possible

```python
import numpy as np
from pandas.core.reshape.util import cartesian_product

dims = [np.arange(0, 22, dtype=np.int16) for i in range(12)] + [np.arange(15128, dtype=np.int16)]
cartesian_product(dims)
# ValueError: negative dimensions are not allowed
```
Problem description
This issue is similar to #9096: when performing a `groupby` over many columns with a large key space and invoking a reduction over one column, the solution is computed correctly, but an error occurs when constructing the result's index with `MultiIndex.from_product`. The error occurs in `cartesian_product` (`pandas/core/reshape/util.py`, lines 42 to 54 at commit 0c50950), where very large signed integer arithmetic overflows.

The overflow at this point can be averted by using `np.uintp` instead of `np.intp` as the dtype for `fromiter`. Unfortunately, the calculation of `b` is then converted to a `float64` automatically due to the real division at line 49, though this can be corrected by using floor division.
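For reference, a minimal sketch of the two suggested changes (assuming the shape of the `cartesian_product` internals at that commit; the statements are paraphrased from the `lenX`, `cumprodX`, and `b` names used there):

```python
import numpy as np

X = dims  # the list of key arrays from the code sample above

# Current behaviour: np.intp is signed, so the cumulative product
# wraps into negative values, and true division promotes b to float64.
lenX = np.fromiter((len(x) for x in X), dtype=np.intp)
cumprodX = np.cumproduct(lenX)
b = cumprodX[-1] / cumprodX        # float64, with a negative numerator

# Suggested changes: an unsigned dtype avoids the sign flip, and
# floor division keeps b integral.
lenX = np.fromiter((len(x) for x in X), dtype=np.uintp)
cumprodX = np.cumproduct(lenX)
b = cumprodX[-1] // cumprodX       # stays an unsigned integer
```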
Even worse, while the value of `b[i]` passed into `np.repeat` is then a very large unsigned integer, the call still seems to fail due to a sign error:
```python
np.repeat(np.array(range(22), dtype=np.int16), 454683593882591976)
```

```
ValueError                                Traceback (most recent call last)
<ipython-input-83-c5f3bb52f270> in <module>
----> 1 np.repeat(np.array(range(22), dtype=np.int16), 454683593882591976)

<__array_function__ internals> in repeat(*args, **kwargs)

...\lib\site-packages\numpy\core\fromnumeric.py in repeat(a, repeats, axis)
    479
    480     """
--> 481     return _wrapfunc(a, 'repeat', repeats, axis=axis)
    482
    483

...\lib\site-packages\numpy\core\fromnumeric.py in _wrapfunc(obj, method, *args, **kwds)
     59
     60     try:
---> 61         return bound(*args, **kwds)
     62     except TypeError:
     63         # A TypeError occurs if the object does have such a method in its

ValueError: negative dimensions are not allowed
```
Curiously enough, if you vary the array argument to `np.repeat` while holding `b[i]` constant, the error changes:
```python
np.repeat(np.array(range(22), dtype=np.int16), 454683593882591976)
# ValueError: negative dimensions are not allowed

np.repeat(np.array(range(20), dtype=np.int16), 454683593882591976)
# ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size.

np.repeat(np.array(range(2), dtype=np.int16), 454683593882591976)
# MemoryError: Unable to allocate array with shape (909367187765183952,) and data type int16
```
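The changing error seems to track which internal bound the request violates; a quick back-of-the-envelope check (assuming a 64-bit build, where the element count and `arr.size * arr.dtype.itemsize` must each fit in a signed 64-bit `npy_intp` before allocation is even attempted):

```python
>>> r = 454683593882591976
>>> 22 * r          # element count already exceeds 2**63 - 1 -> "negative dimensions"
10003039065417023472
>>> 20 * r * 2      # count fits, but the int16 byte size does not -> "array is too big"
18187343755303679040
>>> 2 * r * 2       # both fit, so allocation is attempted -> MemoryError
1818734375530367904
>>> 2**63 - 1       # maximum npy_intp / Py_ssize_t on a 64-bit build
9223372036854775807
```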
The issue, of course, is that I'm asking to create an array more than two times larger than the maximum value of `Py_ssize_t`, which would be impossible to index even if I had the memory to allocate.
There does not appear to be a complete solution within pandas, but the code leading into `numpy.repeat` still misbehaves.
Expected Output
The expected output would be the exceptionally large cross product of all of the dimensions described by the raw keys above, or else an error indicating that the key space is too large.
Since creating an array larger than is physically possible is out of the question, some sort of error checking or warning in the documentation would be helpful.
Would something like this be safe though?
```python
lenX = np.fromiter((len(x) for x in X), dtype=np.intp)
cumprodX = np.cumproduct(lenX)
a = np.roll(cumprodX, 1)
if np.any(cumprodX < 0):
    # signed overflow shows up as a negative cumulative product
    raise ValueError("Product space too large to allocate arrays!")
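For reference, a minimal check (a sketch, assuming a 64-bit `np.intp`) suggests the guard would indeed trip for the dims in the code sample above, since the wrapped product `22**12 * 15128` lands in the negative range of a signed 64-bit integer:

```python
import numpy as np

# The dims from the code sample above.
X = [np.arange(0, 22, dtype=np.int16) for i in range(12)] + [np.arange(15128, dtype=np.int16)]

lenX = np.fromiter((len(x) for x in X), dtype=np.intp)
cumprodX = np.cumproduct(lenX)   # 22**12 * 15128 wraps modulo 2**64
assert np.any(cumprodX < 0)      # so the proposed guard would raise here
```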
Output of pd.show_versions()
```
INSTALLED VERSIONS
commit : None
python : 3.6.7.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : None.None
pandas : 0.25.3
numpy : 1.17.3
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 45.0.0.post20200113
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.3
IPython : 7.11.1
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : 3.1.2
numexpr : 2.7.1
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : 1.3.1
sqlalchemy : None
tables : 3.6.1
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
```
In my case, I was grouping by categorical columns with NaNs. This significantly increased the size of the cartesian product. I was able to solve the issue by setting `observed=True` in the `groupby`.
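A minimal sketch of that workaround on toy data (hypothetical frame; `observed` is an existing `groupby` keyword):

```python
import pandas as pd

# Toy frame with categorical keys whose category sets are larger
# than the combinations that actually occur in the data.
df = pd.DataFrame({
    "a": pd.Categorical(["x", "y"], categories=["x", "y", "z"]),
    "b": pd.Categorical(["u", "u"], categories=["u", "v"]),
    "val": [1, 2],
})

# observed=False (the default) builds the full cartesian product of
# all categories for the result index; observed=True keeps only the
# combinations actually present, avoiding the blow-up.
full = df.groupby(["a", "b"], observed=False)["val"].sum()   # 6 index entries
seen = df.groupby(["a", "b"], observed=True)["val"].sum()    # 2 index entries
```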