
DataFrame.sparse.from_spmatrix seems inefficient with large (but very sparse) matrices? #32196


Closed
fedarko opened this issue Feb 23, 2020 · 6 comments · Fixed by #32825
Labels: Performance (Memory or execution speed performance), Sparse (Sparse Data Type)

Comments

@fedarko

fedarko commented Feb 23, 2020

Code Sample, a copy-pastable example if possible

import pandas as pd
from scipy.sparse import csr_matrix
mil = 1000000
big_csr_diag_1s = csr_matrix((mil, mil), dtype="float")
# Following line takes around 15 seconds to run
big_csr_diag_1s.setdiag(1)
# At this point, big_csr_diag_1s is a completely sparse matrix whose only
# nonzero values are the 1s on its diagonal (there are 1 million of them;
# storing this in a sparse data structure shouldn't be *too* demanding).
# The following line runs for at least 5 minutes (I killed it after that point):
pd.DataFrame.sparse.from_spmatrix(big_csr_diag_1s)

Problem description

It seems like the scipy CSR matrix is being converted to dense somewhere inside pd.DataFrame.sparse.from_spmatrix(), which makes that function take a very long time (on my laptop, at least).

This seems indicative of an efficiency problem, but if constructing the sparse DataFrame this way really is expected to take this long, then I can close this issue. Thanks!
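
A minimal timing sketch (the sizes and the use of scipy.sparse.identity below are illustrative, not taken from the report): the number of stored values equals the width at every size, so growth in runtime here points at a per-column cost rather than at the number of nonzeros.

import time

import pandas as pd
from scipy.sparse import identity

for n in (1_000, 5_000, 10_000):
    mat = identity(n, format="csr")  # exactly n stored values at every size
    start = time.perf_counter()
    pd.DataFrame.sparse.from_spmatrix(mat)
    print(n, "columns:", round(time.perf_counter() - start, 2), "seconds")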

Output of pd.show_versions()

INSTALLED VERSIONS

commit : None
python : 3.8.1.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-76-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 1.0.1
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 45.2.0.post20200210
Cython : 0.29.15
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 7.12.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None

@fedarko
Author

fedarko commented Feb 23, 2020

In case this is helpful, I ran through the from_spmatrix() source code line-by-line with the example data above, and it looks like the following line:

sparrays = [SparseArray.from_spmatrix(data[:, i]) for i in range(data.shape[1])]

takes about 294 seconds (~4.9 minutes) to finish on my laptop. I think this is responsible for most (but not all) of the slowdown here -- it looks like going through each column individually is just inefficient when there is a massive number of columns.

After that line completes, the only really slow line is this one:

result = DataFrame(data, index=index)

... which takes about 48 seconds to finish on my laptop.
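
A self-contained sketch of that breakdown on a smaller matrix (the 5,000 x 5,000 identity matrix is illustrative, and SparseArray is imported from the public pandas.arrays namespace rather than via internals):

import time

import pandas as pd
from pandas.arrays import SparseArray
from scipy.sparse import identity

data = identity(5_000, format="csc")

# The per-column list comprehension, as in the from_spmatrix source
start = time.perf_counter()
sparrays = [SparseArray.from_spmatrix(data[:, i]) for i in range(data.shape[1])]
print("per-column loop:", round(time.perf_counter() - start, 2), "s")

# The full conversion, for comparison
start = time.perf_counter()
pd.DataFrame.sparse.from_spmatrix(data)
print("full from_spmatrix:", round(time.perf_counter() - start, 2), "s")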

@jreback
Contributor

jreback commented Feb 23, 2020

A DataFrame is dense in columns and sparse in rows, so this is as expected; extremely wide frames are slow, but they are not very common.
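
A tiny illustration of that column-wise layout (the 3 x 3 identity matrix is purely for demonstration): each column of the resulting sparse DataFrame is backed by its own 1D SparseArray, which is why the work scales with the number of columns.

import pandas as pd
from scipy.sparse import identity

df = pd.DataFrame.sparse.from_spmatrix(identity(3, format="csr"))
print(df.dtypes)          # every column has its own Sparse[float64] dtype
print(type(df[0].array))  # ...each backed by a separate 1D SparseArray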

@jorisvandenbossche
Member

The current from_spmatrix implementation is indeed not very efficient, and I think there is a lot of room for improvement.
It will never be super fast (since we need to create a 1D sparse array for every column), but I think we can easily get a 5-10x improvement.

Quick experiment:

import pandas as pd

def convert_scipy_sparse(X):
    # Work from CSC so each column's values and row indices are a contiguous slice.
    X2 = X.tocsc()
    n_rows, n_columns = X2.shape
    data = X2.data
    indices = X2.indices
    indptr = X2.indptr
    dtype = pd.SparseDtype("float64", 0)
    arrays = []
    for i in range(n_columns):
        # Build each column's SparseArray directly from the CSC buffers instead
        # of slicing out a single-column scipy matrix per column.
        index = pd.core.arrays.sparse.IntIndex(n_rows, indices[indptr[i]:indptr[i + 1]])
        arr = pd.core.arrays.sparse.SparseArray._simple_new(data[indptr[i]:indptr[i + 1]], index, dtype)
        arrays.append(arr)
    return pd.DataFrame._from_arrays(arrays, columns=pd.Index(range(n_columns)), index=pd.Index(range(n_rows)))

Together with disabling unnecessary validation of the arrays and consolidation in _from_arrays (we should have a fastpath there), this gives me a 5x speedup for a 10k x 10k sparse matrix.
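
One possible way to exercise the sketch above against the current implementation, reusing pd and convert_scipy_sparse from the block above (the size mirrors the quoted 10k x 10k; the density and seed are arbitrary, and the measured speedup will depend on the machine and on whether the _from_arrays fastpath mentioned above is in place):

import time

from scipy.sparse import random as sparse_random

X = sparse_random(10_000, 10_000, density=0.0001, format="csr", random_state=0)

for label, func in [("from_spmatrix", pd.DataFrame.sparse.from_spmatrix),
                    ("convert_scipy_sparse", convert_scipy_sparse)]:
    start = time.perf_counter()
    func(X)
    print(label, round(time.perf_counter() - start, 2), "s")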

@jorisvandenbossche
Member

@fedarko would you be interested in further exploring this and maybe doing a PR?

@fedarko
Author

fedarko commented Mar 18, 2020

@jorisvandenbossche thank you for following up on this! I don't have sufficient time right now to do a PR, sorry.

@rth
Contributor

rth commented Mar 18, 2020

I'll make a PR, @jorisvandenbossche.
