PERF: concat slow, manual concat through reindexing enhances performance #50652
Comments
Your comparison function:

def compare_dataframes(df1, df2):
    return df1.equals(df2)

The ordering difference may be the reason for the different performance result, as ordering may be costly performance-wise. Can you redo your example so that the column ordering is the same as for pd.concat?
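As a small sketch of the ordering point (assuming the default sort=False and made-up toy frames): pd.concat keeps the union of columns in order of first appearance, so a manual concat has to reproduce that ordering for the comparison to be apples-to-apples.

import pandas as pd

df1 = pd.DataFrame([[1, 2]], columns=['b', 'a'])
df2 = pd.DataFrame([[3, 4]], columns=['c', 'a'])

# pd.concat unions the columns in order of first appearance
print(pd.concat([df1, df2]).columns.tolist())             # ['b', 'a', 'c']
# a sorted union gives a different column order
print(sorted(set(df1.columns) | set(df2.columns)))        # ['a', 'b', 'c']
# dict.fromkeys preserves first-appearance order and matches pd.concat
print(list(dict.fromkeys([*df1.columns, *df2.columns])))  # ['b', 'a', 'c']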
Ok, I've redone your example below, where the result dataframes are actually equal (the only code changes are where I've noted):

from itertools import product
import time
import numpy as np
import pandas as pd
NUM_ROWS = [100, 150, 200]
NUM_COLUMNS = [1000, 10000, 50000]
NUM_DFS = [3, 5, 7, 9]
test_cases = product(NUM_ROWS, NUM_COLUMNS, NUM_DFS)
def manual_concat(df_list):
    columns = [col for df in df_list for col in df.columns]
    columns = list(dict.fromkeys(columns))  # this line was changed!
    index = np.hstack([df.index.values for df in df_list])
    df_list = [df.reindex(columns=columns) for df in df_list]
    values = np.vstack([df.values for df in df_list])
    return pd.DataFrame(values, index=index, columns=columns)

def compare_dataframes(df1, df2, margin=1e-4):
    return df1.equals(df2)  # this line was changed!

def generate_dataframes(num_dfs, num_rows, num_cols, all_cols):
    df_list = []
    for i in range(num_dfs):
        index = ['i%d' % i for i in range(i*num_rows, (i+1)*num_rows)]
        columns = np.random.choice(all_cols, num_cols, replace=False)
        values = np.random.uniform(-100, 100, [num_rows, num_cols])
        df_list.append(pd.DataFrame(values, index=index, columns=columns))
    return df_list

for ti, t in enumerate(test_cases):
    num_rows = t[0]
    num_cols = t[1]
    num_dfs = t[2]
    num_all_cols = num_cols * num_dfs * 4 // 5
    all_cols = ['c%i' % i for i in range(num_all_cols)]
    df_list = generate_dataframes(num_dfs, num_rows, num_cols, all_cols)
    t1 = time.time()
    df_pandas = pd.concat(df_list)
    t2 = time.time()
    df_manual = manual_concat(df_list)
    t3 = time.time()
    print('Testcase %d' % (ti + 1))
    print('NUM_ROWS: %d, NUM_COLS: %d, NUM_DFS: %d' % (num_rows, num_cols, num_dfs))
    print('Pandas: %.2f' % (t2 - t1))
    print('Manual: %.2f' % (t3 - t2))
    print(compare_dataframes(df_pandas, df_manual))

It confirms your results:
Of course there may be reasons why the native solution is slower (various checks, a broader use case, etc.), but this does warrant an examination IMO. Can you ping back if you want to submit a PR for the issue? Otherwise I'll take a look.

ping @topper-123
Not sure if related, but we are struggling mightily to get decent performance out of pd.concat, even in the case where the frames are all entirely float64. Here is a somewhat simpler setup than the one suggested in this ticket.

Setup

import pandas as pd
import numpy as np
>>> pd.__version__
'1.5.3'
np.random.seed(0) # reproducible example
n, m = 100, 200
k = 10
frames0 = [
pd.DataFrame(np.random.uniform(size=(n, m)),
columns=[f'c{i:04d}' for i in range(m)])
for _ in range(k)
]
# one mask per frame: where mask is True: no change; False: NaN or remove
masks = [np.random.randint(0, 2, m, dtype=bool) for _ in range(k)]
# null out entire columns
frames1 = [
df.mul(np.where(mask, 1, np.nan))
for df, mask in zip(frames0, masks)
]
# remove entire columns, then re-index
frames2 = [
df.iloc[:, mask].reindex_like(df)
for df, mask in zip(frames0, masks)
]

Timings

The absolute timing values don't matter, but their ratios do. FWIW, that was run on a …

Using IPython's %timeit line-magic:

dt0 = %timeit -o pd.concat(frames0)
# 707 µs ± 862 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
dt1 = %timeit -o pd.concat(frames1)
# 886 µs ± 643 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
dt2 = %timeit -o pd.concat(frames2)
# 453 ms ± 3.62 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
>>> dt2.best / dt1.best
509.53007324284977
# and, ironically
dt3 = %timeit -o pd.concat([df.copy() for df in frames2])
# 2.31 ms ± 23.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Equalities:

>>> all([df1.equals(df2) for df1, df2 in zip(frames1, frames2)])
True
>>> pd.concat(frames1).equals(pd.concat(frames2))
True

With timeit:

import timeit
results = [
    timeit.Timer(
        'pd.concat(frames)',
        'import pandas as pd',
        globals={'frames': frames},
    ).autorange()
    for frames in [frames0, frames1, frames2]
]
results = [t / n for n, t in results]  # autorange() returns (loop_count, total_time)
>>> results[2] / results[1]
473.753

The results are similar for the versions I checked: 1.4.4, 1.5.2 and 1.5.3. However, for 1.4.2, the ratio seemed much better (next comment).
With pandas 1.4.2:

>>> pd.__version__
'1.4.2'
dt0 = %timeit -o pd.concat(frames0)
# 676 µs ± 6.12 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
dt1 = %timeit -o pd.concat(frames1)
# 674 µs ± 780 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
dt2 = %timeit -o pd.concat(frames2)
# 17.4 ms ± 146 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> dt2.best / dt1.best
25.59028179864691
# and, ironically
dt3 = %timeit -o pd.concat([df.copy() for df in frames2])
# 3.33 ms ± 21.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
We implemented a quick fix for homogeneous frames that is similar to the one proposed by @bckim1318:

from functools import reduce

def np_concat(frames):
    a = np.vstack([df.to_numpy() for df in frames])
    cols = frames[0].columns
    ix = reduce(lambda x, y: x.append(y), [df.index for df in frames])
    # this also gets the index name if they are all consistent
    return pd.DataFrame(a, index=ix, columns=cols)

assert np_concat([df for df in frames2]).equals(pd.concat(frames1))

dt4 = %timeit -o np_concat([df for df in frames2])
# 509 µs ± 3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

While it is much better in performance, it of course causes issues with heterogeneous frames. For instance:

df = pd.DataFrame({'a': [1, 7], 'b': ['foo', 'bar'], 'c': [1.2, 2.3]})
>>> vars(df)
{'_is_copy': None,
'_mgr': BlockManager
Items: Index(['a', 'b', 'c'], dtype='object')
Axis 1: RangeIndex(start=0, stop=2, step=1)
NumericBlock: slice(0, 1, 1), 1 x 2, dtype: int64
ObjectBlock: slice(1, 2, 1), 1 x 2, dtype: object
NumericBlock: slice(2, 3, 1), 1 x 2, dtype: float64, # 3 distinct blocks (nice)
'_item_cache': {},
'_attrs': {},
'_flags': <Flags(allows_duplicate_labels=True)>}

But a single NumPy array can only hold one dtype, so df.to_numpy() upcasts the mixed frame to object. And thus:

>>> vars(np_concat([df]))
{'_is_copy': None,
'_mgr': BlockManager
Items: Index(['a', 'b', 'c'], dtype='object')
Axis 1: RangeIndex(start=0, stop=2, step=1)
ObjectBlock: slice(0, 3, 1), 3 x 2, dtype: object, # single object block
'_item_cache': {},
'_attrs': {},
'_flags': <Flags(allows_duplicate_labels=True)>}

Which may affect downstream operations. That said, a plain double transpose already does the same:

>>> vars(df.T.T)
{'_is_copy': None,
'_mgr': BlockManager
Items: Index(['a', 'b', 'c'], dtype='object')
Axis 1: RangeIndex(start=0, stop=2, step=1)
ObjectBlock: slice(0, 3, 1), 3 x 2, dtype: object,
'_item_cache': {},
'_attrs': {},
'_flags': <Flags(allows_duplicate_labels=True)>}
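A quick sketch of that downstream effect, reusing the df and np_concat defined above (the printed dtypes are the expected values, not captured output):

# pd.concat keeps the original per-column dtypes
print(pd.concat([df]).dtypes.tolist())   # [dtype('int64'), dtype('O'), dtype('float64')]
# the numpy-based concat turns every column into object dtype
print(np_concat([df]).dtypes.tolist())   # [dtype('O'), dtype('O'), dtype('O')]

# numeric operations on the object-dtype result fall back to slower object paths
print(np_concat([df])[['a', 'c']].sum())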
Thanks for the testing @pdemarti, your examples are very clear. Shame that this is difficult to fix. Can you do a bisect on the pandas git repo to pinpoint the PR where this performance degradation happened? That would be the best next step towards fixing this. I will take a look at my PR over the coming days (#51419). You're of course also very welcome to take a look at / propose commits to that PR.
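In case it helps, here is a rough sketch of how such a bisect could be driven with git bisect run. The script name, the ratio threshold, and the assumption that pandas is rebuilt from the checked-out source at each step (e.g. an editable install) are all hypothetical, not an established workflow:

# bisect_concat.py -- hypothetical driver for `git bisect run python bisect_concat.py`
# exit 0 marks the checked-out commit as good, exit 1 as bad (the 100x threshold is arbitrary)
import sys
import timeit

import numpy as np
import pandas as pd

np.random.seed(0)
n, m, k = 100, 200, 10
frames0 = [pd.DataFrame(np.random.uniform(size=(n, m)),
                        columns=[f'c{i:04d}' for i in range(m)])
           for _ in range(k)]
masks = [np.random.randint(0, 2, m, dtype=bool) for _ in range(k)]
frames1 = [df.mul(np.where(mask, 1, np.nan)) for df, mask in zip(frames0, masks)]
frames2 = [df.iloc[:, mask].reindex_like(df) for df, mask in zip(frames0, masks)]

# per-call time of concat on the sliced/reindexed frames vs the NaN-filled frames
t_fast = min(timeit.repeat('pd.concat(frames1)', globals=globals(), number=5, repeat=3)) / 5
t_slow = min(timeit.repeat('pd.concat(frames2)', globals=globals(), number=5, repeat=3)) / 5
ratio = t_slow / t_fast
print(f'{pd.__version__}: ratio = {ratio:.1f}')
sys.exit(0 if ratio < 100 else 1)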
I did bisect, @topper-123: the first commit where the ratio above jumps from 31 to 690 is ad7dc56. The commit message indicates work in that area:

The commit immediately before that, 30a3b98, gives a ratio of 34.91, similar to the 1.4.2 result. Personally, I think the ratio should be close to 1.0. In fact, simply copying each frame before concatenating (the dt3 timing above) already avoids most of the slowdown.
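For reference, the workaround implied by the dt3 timing above can be wrapped in a small helper; this is only a sketch (the function name is made up), and the comment about block layout is a presumption rather than a verified explanation:

def concat_with_copy(frames):
    # copying each frame first appears to leave it with a simpler internal
    # block layout, after which pd.concat takes its fast path again
    return pd.concat([df.copy() for df in frames])

result = concat_with_copy(frames2)
assert result.equals(pd.concat(frames1))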
I've spent some time tracking this down, recording some preliminary observations here:
|
Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this issue exists on the latest version of pandas.
I have confirmed this issue exists on the main branch of pandas.
Reproducible Example
output:
Installed Versions
INSTALLED VERSIONS
commit : 8dab54d
python : 3.10.8.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : AMD64 Family 25 Model 33 Stepping 0, AuthenticAMD
byteorder : little
LC_ALL : None
LANG : None
LOCALE : Korean_Korea.949
pandas : 1.5.2
numpy : 1.24.1
pytz : 2022.7
dateutil : 2.8.2
setuptools : 63.2.0
pip : 22.3.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
brotli : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : None
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
zstandard : None
tzdata : None
Prior Performance
No response