DataFrame groupby.first() is much slower than groupby.nth(0) #19598


Closed

capelastegui opened this issue Feb 8, 2018 · 2 comments
Labels: Groupby, Performance

Comments

@capelastegui

Code Sample, a copy-pastable example if possible

import pandas as pd, numpy as np

df1 = pd.DataFrame({'c1': np.concatenate([np.arange(0, 10000), np.arange(0, 10000)]),
                    'c2': 1, 'c3': 'a'})

# groupby.nth(0) runs in ~11ms
%timeit x1 = df1.groupby(['c1']).nth(0, dropna='all')
# groupby.first() runs in ~700ms
%timeit x2 = df1.groupby(['c1']).first()

Problem description

groupby.first() takes much longer to run than groupby.nth(0, dropna='all'), even though the two operations should return the same result here. Is there any reason why the current implementation of first() shouldn't just be replaced with a call to nth()?

Output of pd.show_versions()

INSTALLED VERSIONS

commit: None
python: 2.7.11.final.0
python-bits: 64
OS: Darwin
OS-release: 16.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8

pandas: 0.22.0
pytest: None
pip: 9.0.1
setuptools: 36.5.0
Cython: 0.23.4
numpy: 1.14.0
scipy: 1.0.0
pyarrow: 0.8.0
xarray: None
IPython: 5.5.0
sphinx: 1.5.3
patsy: 0.4.1
dateutil: 2.6.1
pytz: 2017.3
blosc: None
bottleneck: 1.0.0
tables: None
numexpr: 2.5
feather: 0.4.0
matplotlib: 2.1.2
openpyxl: 2.4.1
xlrd: 0.9.4
xlwt: None
xlsxwriter: None
lxml: 3.4.4
bs4: None
html5lib: 1.0b10
sqlalchemy: 1.1.2
pymysql: None
psycopg2: 2.7.3.2 (dt dec pq3 ext lo64)
jinja2: 2.9.6
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None

@jreback
Contributor

jreback commented Feb 9, 2018

You would have to actually show the times, as this is not reproducible. .first() is cythonized and should actually be faster.

In [1]: import pandas as pd, numpy as np
   ...: df1 = pd.DataFrame({'c1': np.concatenate([np.arange(0, 10000), np.arange(0, 10000)]),
   ...:                     'c2': 1, 'c3': 'a'})
   ...:
   ...: # groupby.nth(0) runs in ~11ms
   ...: %timeit x1 = df1.groupby(['c1']).nth(0, dropna='all')
   ...: # groupby.first() runs in ~700ms
   ...: %timeit x2 = df1.groupby(['c1']).first()
   ...:
6.37 ms +- 154 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
2.26 ms +- 46.6 us per loop (mean +- std. dev. of 7 runs, 100 loops each)

You are probably seeing #19283, which is a regression fixed in 0.23.

@jreback jreback closed this as completed Feb 9, 2018
@jreback jreback added Groupby Performance Memory or execution speed performance labels Feb 9, 2018
@jreback jreback added this to the No action milestone Feb 9, 2018
@gaworecki5

I noticed this problem on a DataFrame of about 2 million rows. groupby.first() would not finish at all, while other aggregations like count() and sum() ran in a few seconds. I did not know about nth(0) until this thread, but it runs in about 20 seconds. Did anyone figure out the reason for this?
