Commit 3b3e067

Merge remote-tracking branch 'upstream/master' into GH36360

2 parents: fa879b7 + 32d79ef
244 files changed, +12596 -8953 lines


.github/workflows/stale-pr.yml (+3 -3)

@@ -2,7 +2,7 @@ name: "Stale PRs"
 on:
   schedule:
     # * is a special character in YAML so you have to quote this string
-    - cron: "0 */6 * * *"
+    - cron: "0 0 * * *"
 
 jobs:
   stale:
@@ -11,8 +11,8 @@ jobs:
       - uses: actions/stale@v3
         with:
           repo-token: ${{ secrets.GITHUB_TOKEN }}
-          stale-pr-message: "This pull request is stale because it has been open for thirty days with no activity."
-          skip-stale-pr-message: true
+          stale-pr-message: "This pull request is stale because it has been open for thirty days with no activity. Please update or respond to this comment if you're still interested in working on this."
+          skip-stale-pr-message: false
           stale-pr-label: "Stale"
           exempt-pr-labels: "Needs Review,Blocked,Needs Discussion"
           days-before-stale: 30
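The cron edit above moves the stale-PR workflow from running every six hours to once a day. A quick sketch of the difference, assuming standard five-field cron semantics (minute, hour, day-of-month, month, day-of-week):

```python
# "0 */6 * * *" fires at minute 0 of every hour divisible by 6;
# "0 0 * * *" fires at minute 0 of hour 0 only.
old_hours = [h for h in range(24) if h % 6 == 0]  # matches */6 in the hour field
new_hours = [h for h in range(24) if h == 0]      # matches 0 in the hour field

print(old_hours)  # [0, 6, 12, 18] -> four runs per day
print(new_hours)  # [0] -> one run per day
```

So the sweep now runs a quarter as often, which is plenty for a 30-day staleness window.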

.pre-commit-config.yaml (+4 -0)

@@ -43,3 +43,7 @@ repos:
         entry: python -m scripts.generate_pip_deps_from_conda
         files: ^(environment.yml|requirements-dev.txt)$
         pass_filenames: false
+    - repo: https://github.com/asottile/yesqa
+      rev: v1.2.2
+      hooks:
+        - id: yesqa
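The new yesqa hook removes "# noqa" comments that no longer suppress any active flake8 error, which is exactly what several of the Python changes in this commit do by hand. A toy illustration of the idea (this string split is a sketch, not yesqa's actual implementation, which re-runs flake8 to decide whether the suppression is still needed):

```python
# Toy version of what yesqa does: drop a trailing bare "# noqa" comment
# from a line whose suppression is no longer needed. Real yesqa only strips
# comments that flake8 confirms are unnecessary; this strips unconditionally.
line = "import pandas._testing as tm  # noqa"
cleaned = line.split("  # noqa")[0]
print(cleaned)  # import pandas._testing as tm
```

Running the hook via pre-commit keeps such suppressions from accumulating as lint rules and code evolve.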

.travis.yml (+10 -4)

@@ -46,16 +46,16 @@ matrix:
   - env:
     - JOB="3.7" ENV_FILE="ci/deps/travis-37.yaml" PATTERN="(not slow and not network and not clipboard)"
 
-  - arch: arm64
-    env:
-      - JOB="3.7, arm64" PYTEST_WORKERS=1 ENV_FILE="ci/deps/travis-37-arm64.yaml" PATTERN="(not slow and not network and not clipboard and not arm_slow)"
-
   - env:
     - JOB="3.7, locale" ENV_FILE="ci/deps/travis-37-locale.yaml" PATTERN="((not slow and not network and not clipboard) or (single and db))" LOCALE_OVERRIDE="zh_CN.UTF-8" SQL="1"
     services:
       - mysql
       - postgresql
 
+  - arch: arm64
+    env:
+      - JOB="3.7, arm64" PYTEST_WORKERS=1 ENV_FILE="ci/deps/travis-37-arm64.yaml" PATTERN="(not slow and not network and not clipboard and not arm_slow)"
+
   - env:
     # Enabling Deprecations when running tests
     # PANDAS_TESTING_MODE="deprecate" causes DeprecationWarning messages to be displayed in the logs
@@ -65,6 +65,12 @@ matrix:
       - mysql
       - postgresql
 
+  allow_failures:
+    # Moved to allowed_failures 2020-09-29 due to timeouts https://github.com/pandas-dev/pandas/issues/36719
+    - arch: arm64
+      env:
+        - JOB="3.7, arm64" PYTEST_WORKERS=1 ENV_FILE="ci/deps/travis-37-arm64.yaml" PATTERN="(not slow and not network and not clipboard and not arm_slow)"
+
 
 before_install:
   - echo "before_install"

asv_bench/benchmarks/indexing.py (+1 -1)

@@ -191,7 +191,7 @@ def setup(self, index):
         }
         index = indexes[index]
         self.s = Series(np.random.rand(N), index=index)
-        self.indexer = [True, False, True, True, False] * 20000
+        self.indexer = np.random.randint(0, N, size=N)
 
     def time_take(self, index):
         self.s.take(self.indexer)
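The benchmark change above swaps a fixed, repeating True/False list for random integer positions. Series.take indexes by position, so an integer indexer spanning the whole range exercises it more realistically than a two-value pattern. A sketch of the new indexer's shape (the N here is hypothetical; the benchmark defines its own N elsewhere in the file):

```python
import numpy as np

N = 100_000  # hypothetical size; the benchmark's real N is set in its setup code
rng = np.random.RandomState(42)  # seeded only to make this sketch reproducible
indexer = rng.randint(0, N, size=N)

# A valid positional indexer for take(): integers, in range, length N
assert indexer.dtype.kind == "i"
assert indexer.min() >= 0 and indexer.max() < N
assert len(indexer) == N
```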

asv_bench/benchmarks/pandas_vb_common.py (+1 -1)

@@ -15,7 +15,7 @@
 
 # Compatibility import for the testing module
 try:
-    import pandas._testing as tm  # noqa
+    import pandas._testing as tm
 except ImportError:
     import pandas.util.testing as tm  # noqa
 
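The pandas_vb_common change keeps the try/except import-fallback pattern: prefer pandas._testing (pandas >= 1.0), and fall back to the older pandas.util.testing path. The same shape, sketched with stand-in module names (nonexistent_module_xyz is deliberately fake so the fallback branch actually runs):

```python
# Import-fallback pattern: try the preferred module path first, then a
# legacy path. The module names here are illustrative, not pandas code.
try:
    import nonexistent_module_xyz as tm  # preferred path (missing on purpose)
except ImportError:
    import json as tm  # legacy fallback that does exist

print(tm.__name__)  # json
```

Because the first import now refers to a module flake8 can resolve in modern pandas, its "# noqa" was droppable, while the fallback branch keeps its suppression.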

asv_bench/benchmarks/tslibs/offsets.py (+1 -1)

@@ -9,7 +9,7 @@
 from pandas import offsets
 
 try:
-    import pandas.tseries.holiday  # noqa
+    import pandas.tseries.holiday
 except ImportError:
     pass
 

ci/code_checks.sh (+1 -1)

@@ -335,7 +335,7 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then
     RET=$(($RET + $?)) ; echo $MSG "DONE"
 
     MSG='Doctests strings.py' ; echo $MSG
-    pytest -q --doctest-modules pandas/core/strings.py
+    pytest -q --doctest-modules pandas/core/strings/
     RET=$(($RET + $?)) ; echo $MSG "DONE"
 
     # Directories

codecov.yml (+4 -1)

@@ -1,7 +1,7 @@
 codecov:
   branch: master
 
-comment: off
+comment: false
 
 coverage:
   status:
@@ -11,3 +11,6 @@ coverage:
     patch:
       default:
         target: '50'
+
+github_checks:
+  annotations: false

doc/source/conf.py (+5 -5)

@@ -146,7 +146,7 @@
 # built documents.
 #
 # The short X.Y version.
-import pandas  # noqa: E402 isort:skip
+import pandas  # isort:skip
 
 # version = '%s r%s' % (pandas.__version__, svn_version())
 version = str(pandas.__version__)
@@ -441,14 +441,14 @@
 # Add custom Documenter to handle attributes/methods of an AccessorProperty
 # eg pandas.Series.str and pandas.Series.dt (see GH9322)
 
-import sphinx  # noqa: E402 isort:skip
-from sphinx.util import rpartition  # noqa: E402 isort:skip
-from sphinx.ext.autodoc import (  # noqa: E402 isort:skip
+import sphinx  # isort:skip
+from sphinx.util import rpartition  # isort:skip
+from sphinx.ext.autodoc import (  # isort:skip
     AttributeDocumenter,
     Documenter,
     MethodDocumenter,
 )
-from sphinx.ext.autosummary import Autosummary  # noqa: E402 isort:skip
+from sphinx.ext.autosummary import Autosummary  # isort:skip
 
 
 class AccessorDocumenter(MethodDocumenter):

doc/source/development/code_style.rst (+1 -1)

@@ -9,7 +9,7 @@ pandas code style guide
 .. contents:: Table of contents:
    :local:
 
-*pandas* follows the `PEP8 <https://www.python.org/dev/peps/pep-0008/>`_
+pandas follows the `PEP8 <https://www.python.org/dev/peps/pep-0008/>`_
 standard and uses `Black <https://black.readthedocs.io/en/stable/>`_
 and `Flake8 <https://flake8.pycqa.org/en/latest/>`_ to ensure a
 consistent code format throughout the project. For details see the

doc/source/development/contributing.rst (+3 -3)

@@ -155,7 +155,7 @@ Using a Docker container
 
 Instead of manually setting up a development environment, you can use `Docker
 <https://docs.docker.com/get-docker/>`_ to automatically create the environment with just several
-commands. Pandas provides a ``DockerFile`` in the root directory to build a Docker image
+commands. pandas provides a ``DockerFile`` in the root directory to build a Docker image
 with a full pandas development environment.
 
 **Docker Commands**
@@ -190,7 +190,7 @@ Note that you might need to rebuild the C extensions if/when you merge with upst
 Installing a C compiler
 ~~~~~~~~~~~~~~~~~~~~~~~
 
-Pandas uses C extensions (mostly written using Cython) to speed up certain
+pandas uses C extensions (mostly written using Cython) to speed up certain
 operations. To install pandas from source, you need to compile these C
 extensions, which means you need a C compiler. This process depends on which
 platform you're using.
@@ -1219,7 +1219,7 @@ This test shows off several useful features of Hypothesis, as well as
 demonstrating a good use-case: checking properties that should hold over
 a large or complicated domain of inputs.
 
-To keep the Pandas test suite running quickly, parametrized tests are
+To keep the pandas test suite running quickly, parametrized tests are
 preferred if the inputs or logic are simple, with Hypothesis tests reserved
 for cases with complex logic or where there are too many combinations of
 options or subtle interactions to test (or think of!) all of them.

doc/source/development/extending.rst (+18 -17)

@@ -34,7 +34,7 @@ decorate a class, providing the name of attribute to add. The class's
         @staticmethod
         def _validate(obj):
             # verify there is a column latitude and a column longitude
-            if 'latitude' not in obj.columns or 'longitude' not in obj.columns:
+            if "latitude" not in obj.columns or "longitude" not in obj.columns:
                 raise AttributeError("Must have 'latitude' and 'longitude'.")
 
         @property
@@ -50,8 +50,9 @@ decorate a class, providing the name of attribute to add. The class's
 
 Now users can access your methods using the ``geo`` namespace:
 
-    >>> ds = pd.DataFrame({'longitude': np.linspace(0, 10),
-    ...                    'latitude': np.linspace(0, 20)})
+    >>> ds = pd.Dataframe(
+    ...     {"longitude": np.linspace(0, 10), "latitude": np.linspace(0, 20)}
+    ... )
     >>> ds.geo.center
     (5.0, 10.0)
     >>> ds.geo.plot()
@@ -176,6 +177,7 @@ your ``MyExtensionArray`` class, as follows:
 
     from pandas.api.extensions import ExtensionArray, ExtensionScalarOpsMixin
 
+
     class MyExtensionArray(ExtensionArray, ExtensionScalarOpsMixin):
         pass
 
@@ -271,6 +273,7 @@ included as a column in a pandas DataFrame):
         def __arrow_array__(self, type=None):
             # convert the underlying array values to a pyarrow Array
             import pyarrow
+
             return pyarrow.array(..., type=type)
 
 The ``ExtensionDtype.__from_arrow__`` method then controls the conversion
@@ -347,7 +350,6 @@ Below example shows how to define ``SubclassedSeries`` and ``SubclassedDataFrame
 .. code-block:: python
 
     class SubclassedSeries(pd.Series):
-
         @property
         def _constructor(self):
             return SubclassedSeries
@@ -358,7 +360,6 @@ Below example shows how to define ``SubclassedSeries`` and ``SubclassedDataFrame
 
 
     class SubclassedDataFrame(pd.DataFrame):
-
         @property
         def _constructor(self):
             return SubclassedDataFrame
@@ -377,7 +378,7 @@ Below example shows how to define ``SubclassedSeries`` and ``SubclassedDataFrame
     >>> type(to_framed)
     <class '__main__.SubclassedDataFrame'>
 
-    >>> df = SubclassedDataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})
+    >>> df = SubclassedDataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})
     >>> df
        A  B  C
     0  1  4  7
@@ -387,7 +388,7 @@ Below example shows how to define ``SubclassedSeries`` and ``SubclassedDataFrame
     >>> type(df)
     <class '__main__.SubclassedDataFrame'>
 
-    >>> sliced1 = df[['A', 'B']]
+    >>> sliced1 = df[["A", "B"]]
     >>> sliced1
        A  B
     0  1  4
@@ -397,7 +398,7 @@ Below example shows how to define ``SubclassedSeries`` and ``SubclassedDataFrame
     >>> type(sliced1)
    <class '__main__.SubclassedDataFrame'>
 
-    >>> sliced2 = df['A']
+    >>> sliced2 = df["A"]
     >>> sliced2
     0    1
     1    2
@@ -422,39 +423,39 @@ Below is an example to define two original properties, "internal_cache" as a tem
     class SubclassedDataFrame2(pd.DataFrame):
 
         # temporary properties
-        _internal_names = pd.DataFrame._internal_names + ['internal_cache']
+        _internal_names = pd.DataFrame._internal_names + ["internal_cache"]
         _internal_names_set = set(_internal_names)
 
         # normal properties
-        _metadata = ['added_property']
+        _metadata = ["added_property"]
 
         @property
         def _constructor(self):
             return SubclassedDataFrame2
 
 .. code-block:: python
 
-    >>> df = SubclassedDataFrame2({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})
+    >>> df = SubclassedDataFrame2({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})
     >>> df
        A  B  C
     0  1  4  7
     1  2  5  8
     2  3  6  9
 
-    >>> df.internal_cache = 'cached'
-    >>> df.added_property = 'property'
+    >>> df.internal_cache = "cached"
+    >>> df.added_property = "property"
 
     >>> df.internal_cache
     cached
     >>> df.added_property
     property
 
     # properties defined in _internal_names is reset after manipulation
-    >>> df[['A', 'B']].internal_cache
+    >>> df[["A", "B"]].internal_cache
     AttributeError: 'SubclassedDataFrame2' object has no attribute 'internal_cache'
 
     # properties defined in _metadata are retained
-    >>> df[['A', 'B']].added_property
+    >>> df[["A", "B"]].added_property
     property
 
 .. _extending.plotting-backends:
@@ -468,7 +469,7 @@ one based on Matplotlib. For example:
 
 .. code-block:: python
 
-    >>> pd.set_option('plotting.backend', 'backend.module')
+    >>> pd.set_option("plotting.backend", "backend.module")
     >>> pd.Series([1, 2, 3]).plot()
 
 This would be more or less equivalent to:
@@ -499,4 +500,4 @@ registers the default "matplotlib" backend as follows.
 
 More information on how to implement a third-party plotting backend can be found at
-https://github.com/pandas-dev/pandas/blob/master/pandas/plotting/__init__.py#L1.
+https://github.com/pandas-dev/pandas/blob/master/pandas/plotting/__init__.py#L1.
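The extending.rst hunks above mostly normalize quoting in the accessor and subclassing examples. The registration mechanism those examples rely on can be sketched without pandas; the following is an illustrative reimplementation of the pattern, not pandas' actual register_dataframe_accessor (which also caches the accessor instance and warns on overrides):

```python
# Minimal sketch of the accessor-registration pattern shown in extending.rst.
# Frame and register_accessor are hypothetical stand-ins for pd.DataFrame and
# pd.api.extensions.register_dataframe_accessor.


class Frame:
    """Hypothetical stand-in for pd.DataFrame."""


def register_accessor(name):
    def decorator(accessor_cls):
        # Expose the accessor as a property so a fresh instance is built
        # on each attribute access, receiving the host object.
        setattr(Frame, name, property(lambda self: accessor_cls(self)))
        return accessor_cls

    return decorator


@register_accessor("geo")
class GeoAccessor:
    def __init__(self, obj):
        self._obj = obj

    @property
    def center(self):
        # Placeholder standing in for the latitude/longitude mean
        return (5.0, 10.0)


f = Frame()
print(f.geo.center)  # (5.0, 10.0)
```

The property indirection is what makes `f.geo.center` work: `f.geo` constructs the accessor around `f`, then `center` computes from the wrapped object.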

doc/source/development/maintaining.rst (+1 -1)

@@ -207,7 +207,7 @@ Only core team members can merge pull requests. We have a few guidelines.
 1. You should typically not self-merge your own pull requests. Exceptions include
    things like small changes to fix CI (e.g. pinning a package version).
 2. You should not merge pull requests that have an active discussion, or pull
-   requests that has any ``-1`` votes from a core maintainer. Pandas operates
+   requests that has any ``-1`` votes from a core maintainer. pandas operates
    by consensus.
 3. For larger changes, it's good to have a +1 from at least two core team members.

doc/source/development/roadmap.rst (+17 -14)

@@ -141,20 +141,6 @@ ways for users to apply their own Numba-jitted functions where pandas accepts us
 and in groupby and window contexts). This will improve the performance of
 user-defined-functions in these operations by staying within compiled code.
 
-
-Documentation improvements
---------------------------
-
-We'd like to improve the content, structure, and presentation of the pandas documentation.
-Some specific goals include
-
-* Overhaul the HTML theme with a modern, responsive design (:issue:`15556`)
-* Improve the "Getting Started" documentation, designing and writing learning paths
-  for users different backgrounds (e.g. brand new to programming, familiar with
-  other languages like R, already familiar with Python).
-* Improve the overall organization of the documentation and specific subsections
-  of the documentation to make navigation and finding content easier.
-
 Performance monitoring
 ----------------------
 
@@ -203,3 +189,20 @@ should be notified of the proposal.
 When there's agreement that an implementation
 would be welcome, the roadmap should be updated to include the summary and a
 link to the discussion issue.
+
+Completed items
+---------------
+
+This section records now completed items from the pandas roadmap.
+
+Documentation improvements
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+We improved the pandas documentation
+
+* The pandas community worked with others to build the `pydata-sphinx-theme`_,
+  which is now used for https://pandas.pydata.org/docs/ (:issue:`15556`).
+* :ref:`getting_started` contains a number of resources intended for new
+  pandas users coming from a variety of backgrounds (:issue:`26831`).
+
+.. _pydata-sphinx-theme: https://github.com/pandas-dev/pydata-sphinx-theme
