BUG: Calling cartesian_product with a large combinatorial space overflows int64 and gives misleading errors #31355


Closed
mobiusklein opened this issue Jan 27, 2020 · 2 comments · Fixed by #36335
Labels
Error Reporting (Incorrect or improved errors from pandas) · good first issue

@mobiusklein
Contributor

Code Sample, a copy-pastable example if possible

import numpy as np
from pandas.core.reshape.util import cartesian_product

dims = [np.arange(0, 22, dtype=np.int16) for _ in range(12)] + [np.arange(15128, dtype=np.int16)]
cartesian_product(dims)
# ValueError: negative dimensions are not allowed

Problem description

This issue is similar to #9096. When performing a groupby over many columns with a large key space and invoking a reduction over one column, the result is computed correctly, but an error occurs when constructing the result's index with MultiIndex.from_product. The error occurs in cartesian_product, where arithmetic on very large signed integers overflows:

lenX = np.fromiter((len(x) for x in X), dtype=np.intp)
cumprodX = np.cumproduct(lenX)

a = np.roll(cumprodX, 1)
a[0] = 1

if cumprodX[-1] != 0:
    b = cumprodX[-1] / cumprodX
else:
    # if any factor is empty, the cartesian product is empty
    b = np.zeros_like(cumprodX)

return [
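
For context on what a and b end up doing here, a[i] acts as a tile count and b[i] as a repeat count for each input array; with small inputs the behaviour looks like this (my own illustrative sketch, not pandas code):

import numpy as np

X = [np.array([1, 2]), np.array([10, 20, 30])]
lenX = np.fromiter((len(x) for x in X), dtype=np.intp)  # [2, 3]
cumprodX = np.cumprod(lenX)                             # [2, 6]
a = np.roll(cumprodX, 1)
a[0] = 1                                                # tile counts:   [1, 2]
b = cumprodX[-1] // cumprodX                            # repeat counts: [3, 1]
# each output column is roughly np.tile(np.repeat(x, b[i]), a[i])
print(np.tile(np.repeat(X[0], b[0]), a[0]))  # [1 1 1 2 2 2]
print(np.tile(np.repeat(X[1], b[1]), a[1]))  # [10 20 30 10 20 30]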

The overflow at this point can be averted by using np.uintp instead of np.intp as the dtype for fromiter.
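
This is easy to confirm in isolation (a minimal sketch using the same lengths as in the example above):

import numpy as np

lens = (22,) * 12 + (15128,)
print(np.cumprod(np.fromiter(lens, dtype=np.intp))[-1])   # -8443705008292528128
print(np.cumprod(np.fromiter(lens, dtype=np.uintp))[-1])  # 10003039065417023488
# note: the uint64 result has also wrapped modulo 2**64; it is just no longer negative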

Current code cumprodX:

array([                  22,                  484,                10648,
                     234256,              5153632,            113379904,
                 2494357888,          54875873536,        1207269217792,
             26559922791424,      584318301411328,    12855002631049216,
       -8443705008292528128], dtype=int64)

With uintp cumprodX:

array([                  22,                  484,                10648,
                     234256,              5153632,            113379904,
                 2494357888,          54875873536,        1207269217792,
             26559922791424,      584318301411328,    12855002631049216,
       10003039065417023488], dtype=uint64)

Unfortunately, the calculation of b is automatically converted to float64 by the true division at line 49, though this can be corrected by using floor division.
Current code b:

array([4.54683594e+17, 2.06674361e+16, 9.39428913e+14, 4.27013142e+13,
       1.94096883e+12, 8.82258558e+10, 4.01026617e+09, 1.82284826e+08,
       8.28567391e+06, 3.76621542e+05, 1.71191610e+04, 7.78143681e+02,
       1.00000000e+00])

With floordiv b:

array([454683593882591976,  20667436085572362,    939428912980561,
           42701314226389,      1940968828472,        88225855839,
               4010266174,          182284826,            8285673,
                   376621,              17119,                778,
                        1], dtype=uint64)
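
For reference, the dtype difference between true division and floor division on these uint64 values (a sketch continuing from the uintp cumprodX above):

b_float = cumprodX[-1] / cumprodX    # true division up-casts to float64
b_floor = cumprodX[-1] // cumprodX   # floor division stays uint64
print(b_float.dtype, b_floor.dtype)  # float64 uint64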

Even worse, although b[i] is now a very large unsigned integer, passing it into np.repeat still seems to fail with a sign error:

np.repeat(np.array(range(22), dtype=np.int16), 454683593882591976)
ValueError                                Traceback (most recent call last)
<ipython-input-83-c5f3bb52f270> in <module>
----> 1 np.repeat(np.array(range(22), dtype=np.int16), 454683593882591976)

<__array_function__ internals> in repeat(*args, **kwargs)

...\lib\site-packages\numpy\core\fromnumeric.py in repeat(a, repeats, axis)
    479 
    480     """
--> 481     return _wrapfunc(a, 'repeat', repeats, axis=axis)
    482 
    483 

...\lib\site-packages\numpy\core\fromnumeric.py in _wrapfunc(obj, method, *args, **kwds)
     59 
     60     try:
---> 61         return bound(*args, **kwds)
     62     except TypeError:
     63         # A TypeError occurs if the object does have such a method in its

ValueError: negative dimensions are not allowed

Curiously enough, if you vary the array argument to np.repeat while holding b[i] constant, the error changes:

np.repeat(np.array(range(22), dtype=np.int16), 454683593882591976)
# ValueError: negative dimensions are not allowed
np.repeat(np.array(range(20), dtype=np.int16), 454683593882591976)
# ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size.
np.repeat(np.array(range(2), dtype=np.int16), 454683593882591976)
# MemoryError: Unable to allocate array with shape (909367187765183952,) and data type int16

The issue, of course, is that I'm asking to create an array more than two times larger than the maximum value of Py_ssize_t, which would be impossible to index even if I had the memory to allocate it.
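
Rough numbers for the first np.repeat call alone (my own back-of-the-envelope arithmetic, not from the traceback):

import numpy as np

n_elements = 22 * 454683593882591976                 # ~1.0e19 elements
n_bytes = n_elements * np.dtype(np.int16).itemsize   # ~2.0e19 bytes
print(np.iinfo(np.intp).max)                         # 9223372036854775807 on a 64-bit build
# the element count already exceeds Py_ssize_t, and the byte count exceeds it by more than 2x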

There does not appear to be a complete solution within pandas, but the code leading into numpy.repeat still misbehaves.

Expected Output

The expected output would be the exceptionally large cross-product of all of the dimensions described by the raw keys above, or an error indicating that the key space is too large.

Since creating an array larger than is physically possible is out of the question, some sort of error checking or warning in the documentation would be helpful.

Would something like this be safe though?

    lenX = np.fromiter((len(x) for x in X), dtype=np.intp)
    cumprodX = np.cumproduct(lenX)

    a = np.roll(cumprodX, 1)
    if np.any(cumprodX < 0):
        raise ValueError("Product space too large to allocate arrays!")
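
Alternatively (only a sketch of another option, not a proposed patch), the product could be computed with Python integers, which never overflow, and checked against the platform limit before any numpy work happens:

import numpy as np

def _check_product_size(X):
    # Python ints have arbitrary precision, so this product cannot wrap around.
    total = 1
    for x in X:
        total *= len(x)
    if total > np.iinfo(np.intp).max:
        raise ValueError("Product space too large to allocate arrays!")
    return total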

Output of pd.show_versions()

INSTALLED VERSIONS

commit : None
python : 3.6.7.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : None.None

pandas : 0.25.3
numpy : 1.17.3
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 45.0.0.post20200113
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.3
IPython : 7.11.1
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : 3.1.2
numexpr : 2.7.1
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : 1.3.1
sqlalchemy : None
tables : 3.6.1
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None

@TAJD
Contributor

TAJD commented Sep 6, 2020

take

@javierherrer

In my case, I was grouping by categorical columns with NaNs, which significantly increased the size of the cartesian product. I was able to solve the issue by setting observed=True in the groupby.
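
For anyone else hitting this through groupby, that workaround looks roughly like this (hypothetical column names):

import pandas as pd

df = pd.DataFrame({
    "a": pd.Categorical(["x", None, "y"], categories=["x", "y", "z"]),
    "b": pd.Categorical(["p", "q", None], categories=["p", "q", "r"]),
    "v": [1, 2, 3],
})

# observed=False builds the result index from every possible combination of
# categories; observed=True keeps only the combinations actually present.
df.groupby(["a", "b"], observed=True)["v"].sum()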
