PERF: optimize DataFrame.sparse.from_spmatrix performance #32825
Changes from 17 commits
@@ -228,14 +228,29 @@ def from_spmatrix(cls, data, index=None, columns=None):
         2    0.0  0.0  1.0
         """
         from pandas import DataFrame
+        from pandas._libs.sparse import IntIndex

         data = data.tocsc()
         index, columns = cls._prep_index(data, index, columns)
Review comment: I think the problem is that this […]
Review comment: OK, I see. Thanks for confirming that passing duplicate column names in an Index object is expected to work in […]
Review comment: That seems to be it, as doing […] You can add in here a […]
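For context, a minimal sketch (not from the PR; illustrative data only) of why the pre-existing implementation tolerated duplicate column labels: it built the frame under positional keys and only then assigned the user-supplied labels, and assigning a duplicated Index to .columns is allowed:

    import numpy as np
    import pandas as pd

    # Build under positional keys, then relabel -- mirrors the removed code path.
    sparrays = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 2.0])}
    result = pd.DataFrame(sparrays, index=[0, 1])
    result.columns = pd.Index(["a", "a"])  # duplicate labels are accepted here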
-        sparrays = [SparseArray.from_spmatrix(data[:, i]) for i in range(data.shape[1])]
-        data = dict(enumerate(sparrays))
-        result = DataFrame(data, index=index)
-        result.columns = columns
-        return result
+        n_rows, n_columns = data.shape
+        # We need to make sure indices are sorted, as we create
+        # IntIndex with no input validation (i.e. check_integrity=False).
+        # Indices may already be sorted in scipy in which case this adds
+        # a small overhead.
+        data.sort_indices()
Review comment: It might already be done in […]
Review comment: Maybe add a comment about that inline?
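A sketch of how the "might already be sorted" point could be made explicit, assuming a scipy CSC matrix: its has_sorted_indices flag can be consulted before sorting (scipy's own sort_indices() already short-circuits on that flag, so this is mostly about readability):

    import scipy.sparse as sp

    mat = sp.random(5, 3, density=0.5, format="csc")
    if not mat.has_sorted_indices:  # cheap flag check on CSC/CSR matrices
        mat.sort_indices()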
+        indices = data.indices
+        indptr = data.indptr
+        array_data = data.data
+        dtype = SparseDtype(array_data.dtype, 0)
+        arrays = []
+        for i in range(n_columns):
+            sl = slice(indptr[i], indptr[i + 1])
+            idx = IntIndex(n_rows, indices[sl], check_integrity=False)
+            arr = SparseArray._simple_new(array_data[sl], idx, dtype)
+            arrays.append(arr)
Review comment: FWIW, I also tried with a generator here to avoid pre-allocating all the arrays, but it doesn't really matter. Most of the remaining run time is in […]
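A sketch of the generator variant mentioned in that comment, reusing the locals from the patched function (array_data, indices, indptr, n_rows, n_columns, dtype); whether _from_arrays accepts a lazy iterable directly is not shown in the diff, so the sketch still materializes a list at the call site:

    # Hypothetical generator form of the column-building loop above.
    def _gen_columns():
        for i in range(n_columns):
            sl = slice(indptr[i], indptr[i + 1])
            idx = IntIndex(n_rows, indices[sl], check_integrity=False)
            yield SparseArray._simple_new(array_data[sl], idx, dtype)

    arrays = list(_gen_columns())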
+        return DataFrame._from_arrays(
+            arrays, columns=columns, index=index, verify_integrity=False
+        )

     def to_dense(self):
         """
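For reference (not part of the diff), the public entry point being optimized is exercised as in its docstring; the patch builds each SparseArray column directly from the CSC buffers instead of slicing the sparse matrix column by column:

    import pandas as pd
    import scipy.sparse as sp

    mat = sp.eye(3, format="csc")
    df = pd.DataFrame.sparse.from_spmatrix(mat, columns=["A", "B", "C"])
    print(df)
    #      A    B    C
    # 0  1.0  0.0  0.0
    # 1  0.0  1.0  0.0
    # 2  0.0  0.0  1.0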
Review comment: Could be tocsc(copy=False) for newer scipy versions, but the gain in performance is minimal anyway with respect to the total runtime.
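A sketch of that suggestion, hedged the same way the comment is: newer scipy accepts copy=False here, so a guarded call could skip an extra copy when the input is already CSC, with a fallback where the keyword is not supported:

    try:
        data = data.tocsc(copy=False)  # avoid copying when already CSC (newer scipy)
    except TypeError:
        data = data.tocsc()            # older scipy: tocsc() without the keyword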