
Commit 708dd75

Merge remote-tracking branch 'upstream/master' into ea-repr

2 parents: 1b93bf0 + 3592a46

File tree

91 files changed: +2204 -1594 lines changed


ci/code_checks.sh (+19 -5)

@@ -9,16 +9,19 @@
 # In the future we may want to add the validation of docstrings and other checks here.
 #
 # Usage:
-#   $ ./ci/code_checks.sh          # run all checks
-#   $ ./ci/code_checks.sh lint     # run linting only
-#   $ ./ci/code_checks.sh patterns # check for patterns that should not exist
-#   $ ./ci/code_checks.sh doctests # run doctests
+#   $ ./ci/code_checks.sh               # run all checks
+#   $ ./ci/code_checks.sh lint          # run linting only
+#   $ ./ci/code_checks.sh patterns      # check for patterns that should not exist
+#   $ ./ci/code_checks.sh doctests      # run doctests
+#   $ ./ci/code_checks.sh dependencies  # check that dependencies are consistent

 echo "inside $0"
 [[ $LINT ]] || { echo "NOT Linting. To lint use: LINT=true $0 $1"; exit 0; }
-[[ -z "$1" || "$1" == "lint" || "$1" == "patterns" || "$1" == "doctests" ]] || { echo "Unknown command $1. Usage: $0 [lint|patterns|doctests]"; exit 9999; }
+[[ -z "$1" || "$1" == "lint" || "$1" == "patterns" || "$1" == "doctests" || "$1" == "dependencies" ]] \
+    || { echo "Unknown command $1. Usage: $0 [lint|patterns|doctests|dependencies]"; exit 9999; }

 source activate pandas
+BASE_DIR="$(dirname $0)/.."
 RET=0
 CHECK=$1

@@ -119,6 +122,10 @@ if [[ -z "$CHECK" || "$CHECK" == "patterns" ]]; then
     ! grep -R --include="*.py" --include="*.pyx" --include="*.rst" -E "\.\. (autosummary|contents|currentmodule|deprecated|function|image|important|include|ipython|literalinclude|math|module|note|raw|seealso|toctree|versionadded|versionchanged|warning):[^:]" ./pandas ./doc/source
     RET=$(($RET + $?)) ; echo $MSG "DONE"

+    MSG='Check that the deprecated `assert_raises_regex` is not used (`pytest.raises(match=pattern)` should be used instead)' ; echo $MSG
+    ! grep -R --exclude=*.pyc --exclude=testing.py --exclude=test_testing.py assert_raises_regex pandas
+    RET=$(($RET + $?)) ; echo $MSG "DONE"
+
     MSG='Check for modules that pandas should not import' ; echo $MSG
     python -c "
 import sys

@@ -172,4 +179,11 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then

 fi

+### DEPENDENCIES ###
+if [[ -z "$CHECK" || "$CHECK" == "dependencies" ]]; then
+    MSG='Check that requirements-dev.txt has been generated from environment.yml' ; echo $MSG
+    $BASE_DIR/scripts/generate_pip_deps_from_conda.py --compare
+    RET=$(($RET + $?)) ; echo $MSG "DONE"
+fi
+
 exit $RET
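
The new ``dependencies`` target delegates the actual comparison to ``scripts/generate_pip_deps_from_conda.py --compare``, which is not part of this diff. As a rough illustration of the idea only, here is a hypothetical, heavily simplified Python sketch: it assumes PyYAML is available, assumes every conda dependency is a plain ``name`` or ``name=version`` string, and ignores the ``pip:`` subsection and package-name remapping a real converter would need. A nonzero exit status is what lets the shell script above accumulate the failure into ``RET``.

    # Hypothetical sketch only -- not the actual pandas script.
    import sys
    import yaml

    def conda_to_pip(dep):
        # conda pins exact versions with a single "=", pip uses "==";
        # leave ">=", "<=", and unpinned names untouched
        if '=' in dep and not any(op in dep for op in ('==', '>=', '<=')):
            return dep.replace('=', '==')
        return dep

    def compare(conda_path='environment.yml', pip_path='requirements-dev.txt'):
        with open(conda_path) as f:
            env = yaml.safe_load(f)
        # keep only plain string entries; dicts (e.g. a "pip:" block) are skipped
        expected = sorted(conda_to_pip(d) for d in env['dependencies']
                          if isinstance(d, str))
        with open(pip_path) as f:
            found = sorted(line.strip() for line in f
                           if line.strip() and not line.startswith('#'))
        return expected == found

    if __name__ == '__main__':
        sys.exit(0 if compare() else 1)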

ci/environment-dev.yaml (-20)

This file was deleted.

ci/requirements-optional-conda.txt (-28)

This file was deleted.

ci/requirements_dev.txt (-16)

This file was deleted.

doc/source/contributing.rst (+3 -8)

@@ -170,7 +170,7 @@ We'll now kick off a three-step process:
 .. code-block:: none

    # Create and activate the build environment
-   conda env create -f ci/environment-dev.yaml
+   conda env create -f environment.yml
    conda activate pandas-dev

    # or with older versions of Anaconda:

@@ -180,9 +180,6 @@ We'll now kick off a three-step process:
    python setup.py build_ext --inplace -j 4
    python -m pip install -e .

-   # Install the rest of the optional dependencies
-   conda install -c defaults -c conda-forge --file=ci/requirements-optional-conda.txt
-
 At this point you should be able to import pandas from your locally built version::

    $ python  # start an interpreter

@@ -221,14 +218,12 @@ You'll need to have at least python3.5 installed on your system.
    . ~/virtualenvs/pandas-dev/bin/activate

    # Install the build dependencies
-   python -m pip install -r ci/requirements_dev.txt
+   python -m pip install -r requirements-dev.txt
+
    # Build and install pandas
    python setup.py build_ext --inplace -j 4
    python -m pip install -e .

-   # Install additional dependencies
-   python -m pip install -r ci/requirements-optional-pip.txt
-
 Creating a branch
 -----------------
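
After either install path, the doc's "import pandas from your locally built version" step can be verified in a couple of lines. This is a minimal sketch; the exact version string will vary by checkout:

    # Confirm the editable install points at your clone, not a released wheel
    import pandas

    print(pandas.__version__)  # a development version string
    print(pandas.__file__)     # should resolve inside your cloned repository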

doc/source/io.rst (+28 -1)

@@ -2861,7 +2861,13 @@ to be parsed.

    read_excel('path_to_file.xls', 'Sheet1', usecols=2)

-If `usecols` is a list of integers, then it is assumed to be the file column
+You can also specify a comma-delimited set of Excel columns and ranges as a string:
+
+.. code-block:: python
+
+   read_excel('path_to_file.xls', 'Sheet1', usecols='A,C:E')
+
+If ``usecols`` is a list of integers, then it is assumed to be the file column
 indices to be parsed.

 .. code-block:: python

@@ -2870,6 +2876,27 @@ indices to be parsed.

 Element order is ignored, so ``usecols=[0, 1]`` is the same as ``[1, 0]``.

+.. versionadded:: 0.24
+
+If ``usecols`` is a list of strings, it is assumed that each string corresponds
+to a column name provided either by the user in ``names`` or inferred from the
+document header row(s). Those strings define which columns will be parsed:
+
+.. code-block:: python
+
+   read_excel('path_to_file.xls', 'Sheet1', usecols=['foo', 'bar'])
+
+Element order is ignored, so ``usecols=['baz', 'joe']`` is the same as ``['joe', 'baz']``.
+
+.. versionadded:: 0.24
+
+If ``usecols`` is callable, the callable function will be evaluated against
+the column names, returning names where the callable function evaluates to ``True``.
+
+.. code-block:: python
+
+   read_excel('path_to_file.xls', 'Sheet1', usecols=lambda x: x.isalpha())
+
 Parsing Dates
 +++++++++++++
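
Taken together, the three new ``usecols`` forms can be exercised end to end. The sketch below is illustrative only: the file name and column names are made up, and it assumes an Excel writer/reader engine (e.g. openpyxl) is installed.

    import pandas as pd

    # Write a small sheet to read back
    df = pd.DataFrame({'foo': [1, 2], 'bar': [3, 4], 'baz2': [5, 6]})
    df.to_excel('tmp.xlsx', sheet_name='Sheet1', index=False)

    # 1. String of Excel letters and ranges: columns A and C
    pd.read_excel('tmp.xlsx', 'Sheet1', usecols='A,C')

    # 2. List of column names; order is ignored
    pd.read_excel('tmp.xlsx', 'Sheet1', usecols=['bar', 'foo'])

    # 3. Callable evaluated against each column name ('baz2' fails isalpha())
    pd.read_excel('tmp.xlsx', 'Sheet1', usecols=lambda name: name.isalpha())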

doc/source/reshaping.rst (+104 -6)

@@ -17,6 +17,8 @@ Reshaping and Pivot Tables
 Reshaping by pivoting DataFrame objects
 ---------------------------------------

+.. image:: _static/reshaping_pivot.png
+
 .. ipython::
    :suppress:

@@ -33,8 +35,7 @@ Reshaping by pivoting DataFrame objects

    In [3]: df = unpivot(tm.makeTimeDataFrame())

-Data is often stored in CSV files or databases in so-called "stacked" or
-"record" format:
+Data is often stored in so-called "stacked" or "record" format:

 .. ipython:: python

@@ -66,8 +67,6 @@ To select out everything for variable ``A`` we could do:

    df[df['variable'] == 'A']

-.. image:: _static/reshaping_pivot.png
-
 But suppose we wish to do time series operations with the variables. A better
 representation would be where the ``columns`` are the unique variables and an
 ``index`` of dates identifies individual observations. To reshape the data into

@@ -87,7 +86,7 @@ column:
 .. ipython:: python

    df['value2'] = df['value'] * 2
-   pivoted = df.pivot('date', 'variable')
+   pivoted = df.pivot(index='date', columns='variable')
    pivoted

 You can then select subsets from the pivoted ``DataFrame``:

@@ -99,6 +98,12 @@ You can then select subsets from the pivoted ``DataFrame``:
 Note that this returns a view on the underlying data in the case where the data
 are homogeneously-typed.

+.. note::
+   :func:`~pandas.pivot` will error with a ``ValueError: Index contains duplicate
+   entries, cannot reshape`` if the index/column pair is not unique. In this
+   case, consider using :func:`~pandas.pivot_table` which is a generalization
+   of pivot that can handle duplicate values for one index/column pair.
+
 .. _reshaping.stacking:

 Reshaping by stacking and unstacking

@@ -704,10 +709,103 @@ handling of NaN:
    In [3]: np.unique(x, return_inverse=True)[::-1]
    Out[3]: (array([3, 3, 0, 4, 1, 2]), array([nan, 3.14, inf, 'A', 'B'], dtype=object))

-
 .. note::
    If you just want to handle one column as a categorical variable (like R's factor),
    you can use ``df["cat_col"] = pd.Categorical(df["col"])`` or
    ``df["cat_col"] = df["col"].astype("category")``. For full docs on :class:`~pandas.Categorical`,
    see the :ref:`Categorical introduction <categorical>` and the
    :ref:`API documentation <api.categorical>`.
+
+Examples
+--------
+
+In this section, we will review frequently asked questions and examples. The
+column names and relevant column values are named to correspond with how this
+DataFrame will be pivoted in the answers below.
+
+.. ipython:: python
+
+   np.random.seed([3, 1415])
+   n = 20
+
+   cols = np.array(['key', 'row', 'item', 'col'])
+   df = cols + pd.DataFrame((np.random.randint(5, size=(n, 4)) // [2, 1, 2, 1]).astype(str))
+   df.columns = cols
+   df = df.join(pd.DataFrame(np.random.rand(n, 2).round(2)).add_prefix('val'))
+
+   df
+
+Pivoting with Single Aggregations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Suppose we wanted to pivot ``df`` such that the ``col`` values are columns,
+``row`` values are the index, and the mean of ``val0`` are the values. In
+particular, the resulting DataFrame should look like:
+
+.. code-block:: ipython
+
+   col   col0   col1   col2   col3  col4
+   row
+   row0  0.77  0.605    NaN  0.860  0.65
+   row2  0.13    NaN  0.395  0.500  0.25
+   row3   NaN  0.310    NaN  0.545   NaN
+   row4   NaN  0.100  0.395  0.760  0.24
+
+This solution uses :func:`~pandas.pivot_table`. Also note that
+``aggfunc='mean'`` is the default; it is included here to be explicit.
+
+.. ipython:: python
+
+   df.pivot_table(
+       values='val0', index='row', columns='col', aggfunc='mean')
+
+Note that we can also replace the missing values by using the ``fill_value``
+parameter.
+
+.. ipython:: python
+
+   df.pivot_table(
+       values='val0', index='row', columns='col', aggfunc='mean', fill_value=0)
+
+We can pass in other aggregation functions as well. For example,
+we can also pass in ``sum``.
+
+.. ipython:: python
+
+   df.pivot_table(
+       values='val0', index='row', columns='col', aggfunc='sum', fill_value=0)
+
+Another aggregation we can do is calculate the frequency in which the columns
+and rows occur together, a.k.a. "cross tabulation". To do this, we can pass
+``size`` to the ``aggfunc`` parameter.
+
+.. ipython:: python
+
+   df.pivot_table(index='row', columns='col', fill_value=0, aggfunc='size')
+
+Pivoting with Multiple Aggregations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+We can also perform multiple aggregations. For example, to perform both a
+``sum`` and ``mean``, we can pass in a list to the ``aggfunc`` argument.
+
+.. ipython:: python
+
+   df.pivot_table(
+       values='val0', index='row', columns='col', aggfunc=['mean', 'sum'])
+
+To aggregate over multiple value columns, we can pass in a list to the
+``values`` parameter.
+
+.. ipython:: python
+
+   df.pivot_table(
+       values=['val0', 'val1'], index='row', columns='col', aggfunc=['mean'])
+
+To subdivide over multiple columns we can pass in a list to the
+``columns`` parameter.
+
+.. ipython:: python
+
+   df.pivot_table(
+       values=['val0'], index='row', columns=['item', 'col'], aggfunc=['mean'])