Enhancingperf documentation updates #24807


Closed
smason opened this issue Jan 16, 2019 · 3 comments

smason commented Jan 16, 2019

I was going through the enhancingperf document recently and realised that this note:

Note: Loops like this would be extremely slow in Python, but in Cython looping over NumPy arrays is fast.

doesn't seem to be true any more. Specifically, I can implement apply_integrate_f in pure Python as:

import numpy as np

def apply_integrate_pyf(df):
    # integrate_f_typed is the Cython-typed function from the doc
    return np.fromiter((
        integrate_f_typed(*x) for x in zip(df['a'], df['b'], df['N'])
    ), float, len(df))

and get basically the same performance:

  • apply_integrate_f takes 1.27 ms
  • apply_integrate_f_wrap (with bounds checks disabled) takes 856 µs
  • my apply_integrate_pyf version takes 1.13 ms

(all of the above were run on my computer using Jupyter's %timeit, i.e. the mean of 7 runs)

This feels like a much nicer way of eliding the creation of all those Series objects.
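
For reference, here is roughly how the timings above can be reproduced (a sketch only: the DataFrame mirrors the sample frame from the enhancingperf document, and apply_integrate_f / integrate_f_typed are the Cython functions defined there):

import numpy as np
import pandas as pd

# Assumed test frame, modelled on the one in the enhancingperf docs:
# 1,000 rows with float columns 'a'/'b' and an integer column 'N'.
df = pd.DataFrame({
    'a': np.random.randn(1000),
    'b': np.random.randn(1000),
    'N': np.random.randint(100, 1000, 1000),
})

# In Jupyter/IPython:
# %timeit apply_integrate_f(df['a'].values, df['b'].values, df['N'].values)
# %timeit apply_integrate_pyf(df)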

I could submit a pull request if updating this document seems worthwhile; git blame says it has mostly received cosmetic changes for the last 6 years or so.

Output of pd.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.7.1.final.0
python-bits: 64
OS: Darwin
OS-release: 18.2.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8

pandas: 0.23.4
pytest: None
pip: 18.1
setuptools: 40.6.3
Cython: 0.29.2
numpy: 1.15.4
scipy: 1.2.0
pyarrow: None
xarray: None
IPython: 7.2.0
sphinx: None
patsy: None
dateutil: 2.7.5
pytz: 2018.9
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 3.0.2
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None

WillAyd commented Jan 16, 2019

There are certainly some updates that can be made in the doc, so sure, feel free to submit a PR. At the very least, the references to .values could be replaced with .to_numpy().
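
As a concrete sketch of that change (assuming the benchmark call as it currently appears in the doc), the invocation would go from:

apply_integrate_f(df['a'].values, df['b'].values, df['N'].values)

to:

apply_integrate_f(df['a'].to_numpy(), df['b'].to_numpy(), df['N'].to_numpy())

Both return plain ndarrays; .to_numpy() is just the recommended spelling since 0.24.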

With that said, I'm not sure the benchmark you've provided is all that relevant: Cython looping will definitely be much faster than iteration in Python. IIUC your benchmark only uses a very small DataFrame (1,000 records), so the difference may not be as apparent there, but on larger datasets Cython performance will certainly be much better.
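
To make that concrete, a hypothetical way to check the scaling (not a benchmark anyone has run in this thread) is to repeat the timing on a much larger frame:

# Hypothetical scaling check; setup as in the sketch above.
n = 1_000_000
big = pd.DataFrame({
    'a': np.random.randn(n),
    'b': np.random.randn(n),
    'N': np.random.randint(100, 1000, n),
})
# %timeit apply_integrate_f(big['a'].to_numpy(), big['b'].to_numpy(), big['N'].to_numpy())
# %timeit apply_integrate_pyf(big)

If the Python-level call overhead per row dominates, the gap between the two versions should widen as n grows.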

WillAyd added the Docs label Jan 16, 2019
WillAyd added this to the Contributions Welcome milestone Jan 16, 2019
WillAyd changed the title from "simplification of enhancingperf" to "Enhancingperf documentation updates" Jan 21, 2019

priyankaatgithub commented Feb 26, 2019

@WillAyd - I would like to work on this, please. It will be my first contribution to open source.

huizew added a commit to huizew/pandas that referenced this issue May 8, 2019
As suggested in pandas-dev#24807 (comment)

Replace `.values` with `.to_numpy()` in the benchmark demonstration code.
gfyoung pushed a commit that referenced this issue May 8, 2019
* DOC: Replace .values with .to_numpy() 

As suggested in #24807 (comment)

Replace `.values` with `.to_numpy()` in the benchmark demonstration code.

WillAyd commented May 8, 2019

Closed via #26313

WillAyd closed this as completed May 8, 2019