Update Performance Considerations section in docs #17303

Merged: 2 commits merged into pandas-dev:master on Oct 20, 2017

Conversation

rvernica (Contributor):

  • re-run all tests
  • add tests for feather and pickle
  • closes #xxxx
  • tests added / passed
  • passes git diff upstream/master -u -- "*.py" | flake8 --diff
  • whatsnew entry

* re-run all tests
* add tests for feather and pickle
@gfyoung added the Docs and Performance (Memory or execution speed) labels on Aug 22, 2017
@chris-b1 (Contributor):

If you'd like, it'd be nice to show the new parquet functionality here too. #15838
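
For reference, a minimal sketch of how parquet helpers could mirror the existing test_*_write / test_*_read pattern in this docs section. The file name and helper names are illustrative (not part of this PR), and a parquet engine such as pyarrow or fastparquet is assumed:

```python
import pandas as pd

def test_parquet_write(df):
    # DataFrame.to_parquet needs a parquet engine (pyarrow or fastparquet)
    df.to_parquet("test.parquet")

def test_parquet_read():
    return pd.read_parquet("test.parquet")
```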

@jorisvandenbossche (Member) left a comment:

Nice update!

@@ -5208,82 +5208,105 @@ easy conversion to and from pandas.
Performance Considerations
--------------------------

- This is an informal comparison of various IO methods, using pandas 0.13.1.
+ This is an informal comparison of various IO methods, using pandas 0.20.3.
Member:

Can you maybe put a stronger warning here that the timings are machine dependent and you should not look at small differences?

rvernica (Author):

Done

Data columns (total 2 columns):
A 1000000 non-null float64
B 1000000 non-null float64
dtypes: float64(2)
- memory usage: 22.9 MB
+ memory usage: 15.3 MB
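
For context, a minimal sketch of a frame with the shape shown in the info() output above (one million rows, two float64 columns); the exact construction used in the docs may differ, so treat the column contents as illustrative:

```python
import numpy as np
import pandas as pd

sz = 1000000  # one million rows, matching the info() output above
df = pd.DataFrame({"A": np.random.randn(sz), "B": np.random.randn(sz)})
df.info()  # two float64 columns, roughly 15 MB with a RangeIndex
```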

Writing

.. code-block:: ipython

In [14]: %timeit test_sql_write(df)
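
For readers without the full docs page at hand, one plausible shape for the SQL write helper being timed here, assuming an sqlite3 database; the actual helper in the docs may differ in its details:

```python
import os
import sqlite3

def test_sql_write(df):
    # Write the frame to a fresh sqlite database via DataFrame.to_sql
    if os.path.exists("test.sql"):
        os.remove("test.sql")
    sql_db = sqlite3.connect("test.sql")
    df.to_sql(name="test_table", con=sql_db)
    sql_db.close()
```
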
Contributor:

actually, we could make these an ipython block (so they would run)

Member:

I wouldn't do that, building the docs already takes a long time

Contributor:

we actually do this in various sections. And the incremental time is quite small here.

Member:

It's true we have them in some places, but I suppose those are much smaller timings.
The extra time here is not small: timing only the writing functions that already existed takes 1min30 on my laptop, and this PR adds more cases plus the reading side. So this would add maybe 3 to 5 minutes to the doc build, which is IMO not worth it.

In [34]: %timeit test_pickle_read()
5.75 ms ± 110 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [35]: %timeit test_pickle_read_compress()
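
For reference, a hedged sketch of what the pickle read/write helpers being timed above might look like; the file names and the choice of compression codec are illustrative:

```python
import pandas as pd

def test_pickle_write(df):
    df.to_pickle("test.pkl")

def test_pickle_read():
    return pd.read_pickle("test.pkl")

def test_pickle_write_compress(df):
    # to_pickle/read_pickle accept a compression argument (e.g. "xz", "gzip")
    df.to_pickle("test.pkl.compress", compression="xz")

def test_pickle_read_compress():
    return pd.read_pickle("test.pkl.compress", compression="xz")
```
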
Contributor:

The compression ones are pretty bogus because you are using random data. Maybe add a column of 1's and a column of strings or something to make compression not horrible.

rvernica (Author):

Done
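
A minimal sketch of the kind of change being suggested: mixing a constant column (or a repeated string column) into the random data so that compression has redundancy to exploit; column names and sizes are illustrative:

```python
import numpy as np
import pandas as pd

sz = 1000000
# One random float column plus one constant column compresses far better
# than two columns of pure random data.
df = pd.DataFrame({"A": np.random.randn(sz), "B": [1] * sz})
# A repeated string column is another option:
# df["C"] = ["foo"] * sz
```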

* Update all timings
* Clarify wording
codecov bot commented Sep 26, 2017:

Codecov Report

Merging #17303 into master will decrease coverage by 0.04%.
The diff coverage is n/a.


@@            Coverage Diff             @@
##           master   #17303      +/-   ##
==========================================
- Coverage   91.03%   90.99%   -0.05%     
==========================================
  Files         162      162              
  Lines       49569    49569              
==========================================
- Hits        45124    45103      -21     
- Misses       4445     4466      +21
Flag        Coverage Δ
#multiple   88.77% <ø> (-0.03%) ⬇️
#single     40.24% <ø> (-0.07%) ⬇️

Impacted Files                  Coverage Δ
pandas/io/gbq.py                25% <0%> (-58.34%) ⬇️
pandas/plotting/_converter.py   63.23% <0%> (-1.82%) ⬇️
pandas/core/frame.py            97.72% <0%> (-0.1%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update a4c4ede...1ecdf3f.

codecov bot commented Sep 26, 2017:

Codecov Report

Merging #17303 into master will increase coverage by 0.18%.
The diff coverage is n/a.


@@            Coverage Diff             @@
##           master   #17303      +/-   ##
==========================================
+ Coverage   91.03%   91.21%   +0.18%     
==========================================
  Files         162      163       +1     
  Lines       49569    49810     +241     
==========================================
+ Hits        45124    45435     +311     
+ Misses       4445     4375      -70
Flag        Coverage Δ
#multiple   89.01% <ø> (+0.21%) ⬆️
#single     40.32% <ø> (+0.01%) ⬆️

Impacted Files                    Coverage Δ
pandas/io/gbq.py                  25% <0%> (-58.34%) ⬇️
pandas/plotting/_tools.py         72.92% <0%> (-6.08%) ⬇️
pandas/plotting/_converter.py     63.38% <0%> (-1.67%) ⬇️
pandas/core/indexes/category.py   97.74% <0%> (-0.78%) ⬇️
pandas/core/dtypes/common.py      94.45% <0%> (-0.41%) ⬇️
pandas/tseries/frequencies.py     96.11% <0%> (-0.4%) ⬇️
pandas/io/parquet.py              65.38% <0%> (-0.37%) ⬇️
pandas/core/categorical.py        95.28% <0%> (-0.22%) ⬇️
pandas/compat/numpy/__init__.py   93.93% <0%> (-0.18%) ⬇️
pandas/io/excel.py                80.37% <0%> (-0.18%) ⬇️
... and 60 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update a4c4ede...1ecdf3f.

@jorisvandenbossche (Member) left a comment:

I think this was actually already good, so merging. Thanks!

@jorisvandenbossche jorisvandenbossche merged commit a441d23 into pandas-dev:master Oct 20, 2017
@jorisvandenbossche jorisvandenbossche added this to the 0.21.0 milestone Oct 20, 2017
@jorisvandenbossche (Member):

Although, as noted by @chris-b1, it would be nice to include read/to_parquet as well, which is in the release candidate. Follow-up PR always welcome!

yeemey pushed a commit to yeemey/pandas that referenced this pull request Oct 20, 2017
alanbato pushed a commit to alanbato/pandas that referenced this pull request Nov 10, 2017
No-Stream pushed a commit to No-Stream/pandas that referenced this pull request Nov 28, 2017
Labels: Docs, Performance (Memory or execution speed)
5 participants