DataFrame.sparse.from_spmatrix seems inefficient with large (but very sparse) matrices? #32196
Comments
In case this is helpful, I ran through the relevant lines individually: `sparrays = [SparseArray.from_spmatrix(data[:, i]) for i in range(data.shape[1])]` takes about 294 seconds (~4.9 minutes) to finish on my laptop. I think this is responsible for most (but not all) of the slowdown here; it looks like going through each column individually is just inefficient when there is a massive number of columns. After that line completes, the only really slow line is `result = DataFrame(data, index=index)`, which takes about 48 seconds to finish on my laptop.
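For reference, a self-contained sketch of the per-column construction pattern that comment is timing. The matrix sizes here are small, made-up values; the ~294 s figure quoted above was measured on a much larger matrix:

```python
import scipy.sparse
import pandas as pd
from pandas.arrays import SparseArray

# A small random sparse matrix standing in for the large one in the report.
data = scipy.sparse.random(100, 500, density=0.01, format="csr", random_state=0)

# One SparseArray per column: each data[:, i] slice of a CSR matrix is itself
# a non-trivial operation, so total cost grows badly with the column count.
sparrays = [SparseArray.from_spmatrix(data[:, i]) for i in range(data.shape[1])]
result = pd.DataFrame(dict(enumerate(sparrays)))
```

Even at this toy size, the loop does one sparse-matrix slice plus one array construction per column, which is the scaling behavior the timings above point at.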
The current implementation loops over the columns to build one `SparseArray` at a time. Quick experiment: constructing the arrays more directly, together with disabling unnecessary validation of the arrays and consolidation, avoids most of that cost.
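A sketch of what such an experiment might look like, assuming the idea is to build each `SparseArray` straight from the CSC buffers instead of slicing the matrix column by column. The function name, the sizes, and the use of the internal `IntIndex` class are assumptions of this sketch, not the actual patch:

```python
import numpy as np
import scipy.sparse
import pandas as pd
from pandas.arrays import SparseArray
from pandas._libs.sparse import IntIndex  # internal API; an assumption of this sketch

def from_spmatrix_fast(mat):
    """Build sparse columns from the CSC buffers directly, skipping the
    per-column matrix slicing (a sketch, not the actual pandas change)."""
    mat = mat.tocsc()   # one conversion; each column's data is now contiguous
    mat.sort_indices()
    n_rows, n_cols = mat.shape
    arrays = {}
    for i in range(n_cols):
        sl = slice(mat.indptr[i], mat.indptr[i + 1])
        idx = IntIndex(n_rows, mat.indices[sl].astype(np.int32))
        arrays[i] = SparseArray(mat.data[sl], sparse_index=idx, fill_value=0.0)
    return pd.DataFrame(arrays)

mat = scipy.sparse.random(200, 50, density=0.01, format="csr", random_state=0)
df = from_spmatrix_fast(mat)
```

This still loops over columns, but each iteration is only a cheap slice into the CSC `data`/`indices` buffers rather than a sparse-matrix slicing operation.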
@fedarko would you be interested in exploring this further and maybe doing a PR?
@jorisvandenbossche thank you for following up on this! I don't have sufficient time right now to do a PR, sorry.
I'll make a PR, @jorisvandenbossche.
Code Sample, a copy-pastable example if possible
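The original code sample is not preserved on this page; a minimal reproduction along these lines (matrix size and density are assumptions) shows the call being reported as slow:

```python
import scipy.sparse
import pandas as pd

# A large but very sparse random matrix (sizes are illustrative, not the
# reporter's actual data).
mat = scipy.sparse.random(1000, 1000, density=0.001, format="csr", random_state=0)

# The call under discussion: for very wide, very sparse matrices this was
# reported to take minutes at much larger sizes.
df = pd.DataFrame.sparse.from_spmatrix(mat)
print(df.sparse.density)  # stays ~0.001, i.e. the result is still sparse
```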
Problem description
It seems like the scipy CSR matrix is being converted to dense somewhere in `pd.DataFrame.sparse.from_spmatrix()`, which results in that function taking a large amount of time (on my laptop, at least). I think this is indicative of an efficiency problem, but if constructing the sparse DataFrame this way really is expected to take a huge amount of time, then I can close this issue. Thanks!
Output of `pd.show_versions()`
INSTALLED VERSIONS
commit : None
python : 3.8.1.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-76-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.0.1
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 45.2.0.post20200210
Cython : 0.29.15
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 7.12.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None