
Commit 56a5d6f

Author: pilkibun (committed)
Merge remote-tracking branch 'origin/master' into list_of_namedtuples3
* origin/master: (52 commits)
  CLN: Prune unnecessary indexing code (#27576)
  CLN: get parts of Block.replace out of try/except (#27408)
  Groupby transform cleanups (#27467)
  TYPING: add type hints to pandas\io\formats\printing.py (#27579)
  TYPING: Partial typing of Categorical (#27318)
  REF: collect indexing methods (#27588)
  API: Add entrypoint for plotting (#27488)
  REF: implement module for shared constructor functions (#27551)
  CLN: more assorted cleanups (#27555)
  CLN: pandas\io\formats\format.py (#27577)
  TYPING: some type hints for core.dtypes.common (#27564)
  DEPR: remove .ix from tests/indexing/multiindex/test_ix.py (#27565)
  DEPR: remove .ix from tests/indexing/test_partial.py (#27566)
  TST: add regression test for slicing IntervalIndex MI level with scalars (#27572)
  DEPR: remove .ix from tests/indexing/multiindex/test_setitem.py (#27574)
  BUG: display.precision of negative complex numbers (#27511)
  Removed ABCs from pandas._typing (#27424)
  DEPR: remove .ix from tests/indexing/test_indexing.py (#27535)
  BUG: fix+test quantile with empty DataFrame, closes #23925 (#27436)
  BUG: maybe_convert_objects mixed datetimes and timedeltas (#27438)
  ...
2 parents 6c114f5 + ebcfee4 commit 56a5d6f


130 files changed: +2936 additions, -2821 deletions


.gitignore

Lines changed: 3 additions & 0 deletions
@@ -66,6 +66,9 @@ coverage_html_report
 # hypothesis test database
 .hypothesis/
 __pycache__
+# pytest-monkeytype
+monkeytype.sqlite3
+

 # OS generated files #
 ######################

.travis.yml

Lines changed: 0 additions & 1 deletion
@@ -1,4 +1,3 @@
-sudo: false
 language: python
 python: 3.5

Makefile

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ lint-diff:
     git diff upstream/master --name-only -- "*.py" | xargs flake8

 black:
-    black . --exclude '(asv_bench/env|\.egg|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|_build|buck-out|build|dist)'
+    black . --exclude '(asv_bench/env|\.egg|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|_build|buck-out|build|dist|setup.py)'

 develop: build
     python setup.py develop

ci/code_checks.sh

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ if [[ -z "$CHECK" || "$CHECK" == "lint" ]]; then
     black --version

     MSG='Checking black formatting' ; echo $MSG
-    black . --check --exclude '(asv_bench/env|\.egg|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|_build|buck-out|build|dist)'
+    black . --check --exclude '(asv_bench/env|\.egg|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|_build|buck-out|build|dist|setup.py)'
     RET=$(($RET + $?)) ; echo $MSG "DONE"

     # `setup.cfg` contains the list of error codes that are being ignored in flake8

ci/deps/travis-36-cov.yaml

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ dependencies:
   - xlsxwriter
   - xlwt
   # universal
-  - pytest>=4.0.2
+  - pytest
   - pytest-xdist
   - pytest-cov
   - pytest-mock

doc/source/development/extending.rst

Lines changed: 17 additions & 0 deletions
@@ -441,5 +441,22 @@ This would be more or less equivalent to:
 The backend module can then use other visualization tools (Bokeh, Altair,...)
 to generate the plots.

+Libraries implementing the plotting backend should use `entry points <https://setuptools.readthedocs.io/en/latest/setuptools.html#dynamic-discovery-of-services-and-plugins>`__
+to make their backend discoverable to pandas. The key is ``"pandas_plotting_backends"``. For example, pandas
+registers the default "matplotlib" backend as follows.
+
+.. code-block:: python
+
+   # in setup.py
+   setup(  # noqa: F821
+       ...,
+       entry_points={
+           "pandas_plotting_backends": [
+               "matplotlib = pandas:plotting._matplotlib",
+           ],
+       },
+   )
+
+
 More information on how to implement a third-party plotting backend can be found at
 https://github.com/pandas-dev/pandas/blob/master/pandas/plotting/__init__.py#L1.
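
Context for the change above (not part of the diff): once a backend is registered under the ``pandas_plotting_backends`` entry point, it can be selected through the ``plotting.backend`` option. A minimal sketch, assuming pandas >= 0.25 with matplotlib installed:

    import pandas as pd

    # Select a registered plotting backend by its entry-point name;
    # "matplotlib" is the default backend shipped with pandas.
    pd.set_option("plotting.backend", "matplotlib")

    df = pd.DataFrame({"x": [1, 2, 3], "y": [3, 1, 2]})
    df.plot(x="x", y="y")  # dispatched to the active backend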

doc/source/getting_started/basics.rst

Lines changed: 0 additions & 4 deletions
@@ -1422,8 +1422,6 @@ The :meth:`~DataFrame.rename` method also provides an ``inplace`` named
 parameter that is by default ``False`` and copies the underlying data. Pass
 ``inplace=True`` to rename the data in place.

-.. versionadded:: 0.18.0
-
 Finally, :meth:`~Series.rename` also accepts a scalar or list-like
 for altering the ``Series.name`` attribute.

@@ -2063,8 +2061,6 @@ Convert a subset of columns to a specified type using :meth:`~DataFrame.astype`.
    dft
    dft.dtypes

-.. versionadded:: 0.19.0
-
 Convert certain columns to a specific dtype by passing a dict to :meth:`~DataFrame.astype`.

 .. ipython:: python
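
The two features whose ``versionadded`` notes are removed above are plain pandas API; a small illustrative sketch (sample data made up here):

    import pandas as pd

    df = pd.DataFrame({"a": [1, 2], "b": [1.0, 2.0]})

    # rename with inplace=False (the default) returns a modified copy
    renamed = df.rename(columns={"a": "alpha"}, inplace=False)

    # astype accepts a dict mapping column -> dtype to convert a subset of columns
    converted = df.astype({"a": "float64", "b": "int32"})
    print(converted.dtypes)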

doc/source/getting_started/dsintro.rst

Lines changed: 0 additions & 2 deletions
@@ -251,8 +251,6 @@ Series can also have a ``name`` attribute:
 The Series ``name`` will be assigned automatically in many cases, in particular
 when taking 1D slices of DataFrame as you will see below.

-.. versionadded:: 0.18.0
-
 You can rename a Series with the :meth:`pandas.Series.rename` method.

 .. ipython:: python
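
For reference (not part of the diff), renaming a ``Series`` as the retained prose describes:

    import pandas as pd

    s = pd.Series([1, 2, 3], name="original")

    # Series.rename with a scalar returns a new Series with an updated .name
    s2 = s.rename("renamed")
    print(s.name, s2.name)  # original renamed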

doc/source/user_guide/advanced.rst

Lines changed: 3 additions & 18 deletions
@@ -810,15 +810,10 @@ values **not** in the categories, similarly to how you can reindex **any** panda
 Int64Index and RangeIndex
 ~~~~~~~~~~~~~~~~~~~~~~~~~

-.. warning::
-
-   Indexing on an integer-based Index with floats has been clarified in 0.18.0, for a summary of the changes, see :ref:`here <whatsnew_0180.float_indexers>`.
-
-:class:`Int64Index` is a fundamental basic index in pandas.
-This is an immutable array implementing an ordered, sliceable set.
-Prior to 0.18.0, the ``Int64Index`` would provide the default index for all ``NDFrame`` objects.
+:class:`Int64Index` is a fundamental basic index in pandas. This is an immutable array
+implementing an ordered, sliceable set.

-:class:`RangeIndex` is a sub-class of ``Int64Index`` added in version 0.18.0, now providing the default index for all ``NDFrame`` objects.
+:class:`RangeIndex` is a sub-class of ``Int64Index`` that provides the default index for all ``NDFrame`` objects.
 ``RangeIndex`` is an optimized version of ``Int64Index`` that can represent a monotonic ordered set. These are analogous to Python `range types <https://docs.python.org/3/library/stdtypes.html#typesseq-range>`__.

 .. _indexing.float64index:

@@ -880,16 +875,6 @@ In non-float indexes, slicing using floats will raise a ``TypeError``.
    In [1]: pd.Series(range(5))[3.5:4.5]
    TypeError: the slice start [3.5] is not a proper indexer for this index type (Int64Index)

-.. warning::
-
-   Using a scalar float indexer for ``.iloc`` has been removed in 0.18.0, so the following will raise a ``TypeError``:
-
-   .. code-block:: ipython
-
-      In [3]: pd.Series(range(5)).iloc[3.0]
-      TypeError: cannot do positional indexing on <class 'pandas.indexes.range.RangeIndex'> with these indexers [3.0] of <type 'float'>
-
-
 Here is a typical use-case for using this type of indexing. Imagine that you have a somewhat
 irregular timedelta-like indexing scheme, but the data is recorded as floats. This could, for
 example, be millisecond offsets.
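
Illustrative sketch of the behaviour the rewritten prose describes (not part of the diff); on the pandas versions this commit targets the explicit integer index prints as ``Int64Index``, while newer versions print ``Index`` with ``dtype='int64'``:

    import pandas as pd

    # The default index is a RangeIndex
    s = pd.Series([10, 20, 30])
    print(s.index)  # RangeIndex(start=0, stop=3, step=1)

    # An explicit integer index is materialized as an integer index
    s2 = pd.Series([10, 20, 30], index=[2, 4, 8])
    print(s2.index)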

doc/source/user_guide/categorical.rst

Lines changed: 0 additions & 2 deletions
@@ -834,8 +834,6 @@ See also the section on :ref:`merge dtypes<merging.dtypes>` for notes about pres
 Unioning
 ~~~~~~~~

-.. versionadded:: 0.19.0
-
 If you want to combine categoricals that do not necessarily have the same
 categories, the :func:`~pandas.api.types.union_categoricals` function will
 combine a list-like of categoricals. The new categories will be the union of
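
A minimal example of ``union_categoricals`` as the retained prose describes (sample data made up):

    import pandas as pd
    from pandas.api.types import union_categoricals

    a = pd.Categorical(["b", "c"])
    b = pd.Categorical(["a", "b"])

    # The result's categories are the union of the inputs' categories
    combined = union_categoricals([a, b])
    print(combined)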

doc/source/user_guide/computation.rst

Lines changed: 4 additions & 7 deletions
@@ -408,9 +408,7 @@ For some windowing functions, additional parameters must be specified:
 Time-aware rolling
 ~~~~~~~~~~~~~~~~~~

-.. versionadded:: 0.19.0
-
-New in version 0.19.0 are the ability to pass an offset (or convertible) to a ``.rolling()`` method and have it produce
+It is possible to pass an offset (or convertible) to a ``.rolling()`` method and have it produce
 variable sized windows based on the passed time window. For each time point, this includes all preceding values occurring
 within the indicated time delta.

@@ -893,10 +891,9 @@ Therefore, there is an assumption that :math:`x_0` is not an ordinary value
 but rather an exponentially weighted moment of the infinite series up to that
 point.

-One must have :math:`0 < \alpha \leq 1`, and while since version 0.18.0
-it has been possible to pass :math:`\alpha` directly, it's often easier
-to think about either the **span**, **center of mass (com)** or **half-life**
-of an EW moment:
+One must have :math:`0 < \alpha \leq 1`, and while it is possible to pass
+:math:`\alpha` directly, it's often easier to think about either the
+**span**, **center of mass (com)** or **half-life** of an EW moment:

 .. math::
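
Not part of the diff — a short sketch of the two features touched above (time-based rolling windows and ``ewm`` parameterization), with made-up sample data:

    import pandas as pd

    times = pd.date_range("2019-01-01", periods=5, freq="s")
    s = pd.Series(range(5), index=times, dtype="float64")

    # Offset-based window: all observations within the preceding 2 seconds
    print(s.rolling("2s").sum())

    # alpha can be passed directly, or implied via span / com / halflife
    print(s.ewm(alpha=0.5).mean())
    print(s.ewm(span=3).mean())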

doc/source/user_guide/enhancingperf.rst

Lines changed: 1 addition & 7 deletions
@@ -601,8 +601,6 @@ This allows for *formulaic evaluation*. The assignment target can be a
 new column name or an existing column name, and it must be a valid Python
 identifier.

-.. versionadded:: 0.18.0
-
 The ``inplace`` keyword determines whether this assignment will performed
 on the original ``DataFrame`` or return a copy with the new column.

@@ -630,8 +628,6 @@ new or modified columns is returned and the original frame is unchanged.
    df.eval('e = a - c', inplace=False)
    df

-.. versionadded:: 0.18.0
-
 As a convenience, multiple assignments can be performed by using a
 multi-line string.

@@ -652,9 +648,7 @@ The equivalent in standard Python would be
    df['a'] = 1
    df

-.. versionadded:: 0.18.0
-
-The ``query`` method gained the ``inplace`` keyword which determines
+The ``query`` method has a ``inplace`` keyword which determines
 whether the query modifies the original frame.

 .. ipython:: python
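
Illustrative only (not in the diff): the ``inplace`` keyword on ``eval``/``query`` that the retained prose refers to, with invented data:

    import pandas as pd

    df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

    # eval with inplace=True mutates df; with inplace=False it returns a copy
    df.eval("c = a + b", inplace=True)

    # query's inplace keyword works the same way
    filtered = df.query("a > 1", inplace=False)
    print(filtered)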

doc/source/user_guide/groupby.rst

Lines changed: 3 additions & 6 deletions
@@ -827,13 +827,10 @@ and that the transformed data contains no NAs.

 .. _groupby.transform.window_resample:

-New syntax to window and resample operations
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. versionadded:: 0.18.1
+Window and resample operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Working with the resample, expanding or rolling operations on the groupby
-level used to require the application of helper functions. However,
-now it is possible to use ``resample()``, ``expanding()`` and
+It is possible to use ``resample()``, ``expanding()`` and
 ``rolling()`` as methods on groupbys.

 The example below will apply the ``rolling()`` method on the samples of
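
A small sketch of window methods on a groupby, as the retained prose describes (sample data invented here):

    import pandas as pd

    df = pd.DataFrame({"A": ["x", "x", "x", "y", "y"],
                       "B": [1.0, 2.0, 3.0, 4.0, 5.0]})

    grouped = df.groupby("A")

    # rolling() and expanding() are available directly on the groupby object
    print(grouped.rolling(2).B.mean())
    print(grouped.expanding().B.sum())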

doc/source/user_guide/indexing.rst

Lines changed: 1 addition & 13 deletions
@@ -36,10 +36,6 @@ this area.
 should be avoided. See :ref:`Returning a View versus Copy
 <indexing.view_versus_copy>`.

-.. warning::
-
-   Indexing on an integer-based Index with floats has been clarified in 0.18.0, for a summary of the changes, see :ref:`here <whatsnew_0180.float_indexers>`.
-
 See the :ref:`MultiIndex / Advanced Indexing <advanced>` for ``MultiIndex`` and more advanced indexing documentation.

 See the :ref:`cookbook<cookbook.selection>` for some advanced strategies.

@@ -67,8 +63,6 @@ of multi-axis indexing.
 * A ``callable`` function with one argument (the calling Series or DataFrame) and
   that returns valid output for indexing (one of the above).

-  .. versionadded:: 0.18.1
-
   See more at :ref:`Selection by Label <indexing.label>`.

 * ``.iloc`` is primarily integer position based (from ``0`` to

@@ -85,8 +79,6 @@
 * A ``callable`` function with one argument (the calling Series or DataFrame) and
   that returns valid output for indexing (one of the above).

-  .. versionadded:: 0.18.1
-
   See more at :ref:`Selection by Position <indexing.integer>`,
   :ref:`Advanced Indexing <advanced>` and :ref:`Advanced
   Hierarchical <advanced.advanced_hierarchical>`.

@@ -538,8 +530,6 @@ A list of indexers where any element is out of bounds will raise an
 Selection by callable
 ---------------------

-.. versionadded:: 0.18.1
-
 ``.loc``, ``.iloc``, and also ``[]`` indexing can accept a ``callable`` as indexer.
 The ``callable`` must be a function with one argument (the calling Series or DataFrame) that returns valid output for indexing.

@@ -1105,9 +1095,7 @@ This is equivalent to (but faster than) the following.
    df2 = df.copy()
    df.apply(lambda x, y: x.where(x > 0, y), y=df['A'])

-.. versionadded:: 0.18.1
-
-Where can accept a callable as condition and ``other`` arguments. The function must
+``where`` can accept a callable as condition and ``other`` arguments. The function must
 be with one argument (the calling Series or DataFrame) and that returns valid output
 as condition and ``other`` argument.
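
Not part of the diff — a short sketch of callable indexers and callable ``where`` arguments, which the retained prose describes (sample data made up):

    import pandas as pd

    df = pd.DataFrame({"A": [1, -2, 3], "B": [4, 5, -6]})

    # Callable indexers receive the calling DataFrame and must return a valid indexer
    print(df.loc[lambda d: d["A"] > 0])
    print(df.iloc[lambda d: [0, 2]])

    # where() accepts callables for both the condition and the replacement values
    print(df.where(lambda d: d > 0, lambda d: -d))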

doc/source/user_guide/io.rst

Lines changed: 0 additions & 13 deletions
@@ -87,8 +87,6 @@ delim_whitespace : boolean, default False
     If this option is set to ``True``, nothing should be passed in for the
     ``delimiter`` parameter.

-    .. versionadded:: 0.18.1 support for the Python parser.
-
 Column and index locations and names
 ++++++++++++++++++++++++++++++++++++

@@ -298,7 +296,6 @@ compression : {``'infer'``, ``'gzip'``, ``'bz2'``, ``'zip'``, ``'xz'``, ``None``
     the ZIP file must contain only one data file to be read in.
     Set to ``None`` for no decompression.

-    .. versionadded:: 0.18.1 support for 'zip' and 'xz' compression.
     .. versionchanged:: 0.24.0 'infer' option added and set to default.
 thousands : str, default ``None``
     Thousands separator.

@@ -456,8 +453,6 @@ worth trying.
 Specifying categorical dtype
 ''''''''''''''''''''''''''''

-.. versionadded:: 0.19.0
-
 ``Categorical`` columns can be parsed directly by specifying ``dtype='category'`` or
 ``dtype=CategoricalDtype(categories, ordered)``.

@@ -2195,8 +2190,6 @@ With max_level=1 the following snippet normalizes until 1st nesting level of the
 Line delimited json
 '''''''''''''''''''

-.. versionadded:: 0.19.0
-
 pandas is able to read and write line-delimited json files that are common in data processing pipelines
 using Hadoop or Spark.

@@ -2494,16 +2487,12 @@ Specify values that should be converted to NaN:

    dfs = pd.read_html(url, na_values=['No Acquirer'])

-.. versionadded:: 0.19
-
 Specify whether to keep the default set of NaN values:

 .. code-block:: python

    dfs = pd.read_html(url, keep_default_na=False)

-.. versionadded:: 0.19
-
 Specify converters for columns. This is useful for numerical text data that has
 leading zeros. By default columns that are numerical are cast to numeric
 types and the leading zeros are lost. To avoid this, we can convert these

@@ -2515,8 +2504,6 @@ columns to strings.
    dfs = pd.read_html(url_mcc, match='Telekom Albania', header=0,
                       converters={'MNC': str})

-.. versionadded:: 0.19
-
 Use some combination of the above:

 .. code-block:: python
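
For reference (not in the diff): two of the io features whose ``versionadded`` notes are removed above, sketched with in-memory data:

    import io
    import pandas as pd

    csv_data = "col1,col2\na,1\nb,2\na,3\n"

    # Parse a column directly as a Categorical via dtype='category'
    df = pd.read_csv(io.StringIO(csv_data), dtype={"col1": "category"})
    print(df.dtypes)

    # Line-delimited JSON: one JSON document per line, read with lines=True
    jsonl = '{"a": 1, "b": 2}\n{"a": 3, "b": 4}\n'
    print(pd.read_json(io.StringIO(jsonl), lines=True))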

doc/source/user_guide/merging.rst

Lines changed: 0 additions & 4 deletions
@@ -819,8 +819,6 @@ The ``indicator`` argument will also accept string arguments, in which case the
 Merge dtypes
 ~~~~~~~~~~~~

-.. versionadded:: 0.19.0
-
 Merging will preserve the dtype of the join keys.

 .. ipython:: python

@@ -1386,8 +1384,6 @@ fill/interpolate missing data:
 Merging asof
 ~~~~~~~~~~~~

-.. versionadded:: 0.19.0
-
 A :func:`merge_asof` is similar to an ordered left-join except that we match on
 nearest key rather than equal keys. For each row in the ``left`` ``DataFrame``,
 we select the last row in the ``right`` ``DataFrame`` whose ``on`` key is less
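
A minimal ``merge_asof`` sketch matching the retained description (sample frames invented here; both must be sorted on the key):

    import pandas as pd

    trades = pd.DataFrame({
        "time": pd.to_datetime(["2019-01-01 09:00:01", "2019-01-01 09:00:03"]),
        "price": [100.0, 101.5],
    })
    quotes = pd.DataFrame({
        "time": pd.to_datetime(["2019-01-01 09:00:00", "2019-01-01 09:00:02"]),
        "bid": [99.5, 101.0],
    })

    # For each left row, match the last right row whose key is <= the left key
    print(pd.merge_asof(trades, quotes, on="time"))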

doc/source/user_guide/reshaping.rst

Lines changed: 0 additions & 6 deletions
@@ -254,8 +254,6 @@ values will be set to ``NaN``.
    df3
    df3.unstack()

-.. versionadded:: 0.18.0
-
 Alternatively, unstack takes an optional ``fill_value`` argument, for specifying
 the value of missing data.

@@ -486,8 +484,6 @@ not contain any instances of a particular category.
 Normalization
 ~~~~~~~~~~~~~

-.. versionadded:: 0.18.1
-
 Frequency tables can also be normalized to show percentages rather than counts
 using the ``normalize`` argument:

@@ -630,8 +626,6 @@ the prefix separator. You can specify ``prefix`` and ``prefix_sep`` in 3 ways:
    from_dict = pd.get_dummies(df, prefix={'B': 'from_B', 'A': 'from_A'})
    from_dict

-.. versionadded:: 0.18.0
-
 Sometimes it will be useful to only keep k-1 levels of a categorical
 variable to avoid collinearity when feeding the result to statistical models.
 You can switch to this mode by turn on ``drop_first``.
You can switch to this mode by turn on ``drop_first``.

doc/source/user_guide/style.ipynb

Lines changed: 0 additions & 4 deletions
@@ -6,10 +6,6 @@
    "source": [
     "# Styling\n",
     "\n",
-    "*New in version 0.17.1*\n",
-    "\n",
-    "<span style=\"color: red\">*Provisional: This is a new feature and still under development. We'll be adding features and possibly making breaking changes in future releases. We'd love to hear your feedback.*</span>\n",
-    "\n",
     "This document is written as a Jupyter Notebook, and can be viewed or downloaded [here](http://nbviewer.ipython.org/github/pandas-dev/pandas/blob/master/doc/source/style.ipynb).\n",
     "\n",
     "You can apply **conditional formatting**, the visual styling of a DataFrame\n",
