Commit 505970e
Merge remote-tracking branch 'upstream/master' into index-ndarray-data
2 parents: a30bc02 + 3592a46
37 files changed: +986, -586 lines

ci/code_checks.sh  (+19, -5)
@@ -9,16 +9,19 @@
 # In the future we may want to add the validation of docstrings and other checks here.
 #
 # Usage:
-#   $ ./ci/code_checks.sh               # run all checks
-#   $ ./ci/code_checks.sh lint          # run linting only
-#   $ ./ci/code_checks.sh patterns      # check for patterns that should not exist
-#   $ ./ci/code_checks.sh doctests      # run doctests
+#   $ ./ci/code_checks.sh               # run all checks
+#   $ ./ci/code_checks.sh lint          # run linting only
+#   $ ./ci/code_checks.sh patterns      # check for patterns that should not exist
+#   $ ./ci/code_checks.sh doctests      # run doctests
+#   $ ./ci/code_checks.sh dependencies  # check that dependencies are consistent

 echo "inside $0"
 [[ $LINT ]] || { echo "NOT Linting. To lint use: LINT=true $0 $1"; exit 0; }
-[[ -z "$1" || "$1" == "lint" || "$1" == "patterns" || "$1" == "doctests" ]] || { echo "Unknown command $1. Usage: $0 [lint|patterns|doctests]"; exit 9999; }
+[[ -z "$1" || "$1" == "lint" || "$1" == "patterns" || "$1" == "doctests" || "$1" == "dependencies" ]] \
+    || { echo "Unknown command $1. Usage: $0 [lint|patterns|doctests|dependencies]"; exit 9999; }

 source activate pandas
+BASE_DIR="$(dirname $0)/.."
 RET=0
 CHECK=$1

@@ -119,6 +122,10 @@ if [[ -z "$CHECK" || "$CHECK" == "patterns" ]]; then
     ! grep -R --include="*.py" --include="*.pyx" --include="*.rst" -E "\.\. (autosummary|contents|currentmodule|deprecated|function|image|important|include|ipython|literalinclude|math|module|note|raw|seealso|toctree|versionadded|versionchanged|warning):[^:]" ./pandas ./doc/source
     RET=$(($RET + $?)) ; echo $MSG "DONE"

+    MSG='Check that the deprecated `assert_raises_regex` is not used (`pytest.raises(match=pattern)` should be used instead)' ; echo $MSG
+    ! grep -R --exclude=*.pyc --exclude=testing.py --exclude=test_testing.py assert_raises_regex pandas
+    RET=$(($RET + $?)) ; echo $MSG "DONE"
+
     MSG='Check for modules that pandas should not import' ; echo $MSG
     python -c "
     import sys
@@ -172,4 +179,11 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then

 fi

+### DEPENDENCIES ###
+if [[ -z "$CHECK" || "$CHECK" == "dependencies" ]]; then
+    MSG='Check that requirements-dev.txt has been generated from environment.yml' ; echo $MSG
+    $BASE_DIR/scripts/generate_pip_deps_from_conda.py --compare
+    RET=$(($RET + $?)) ; echo $MSG "DONE"
+fi
+
 exit $RET
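The new pattern check above enforces a migration from pandas' deprecated `assert_raises_regex` helper to pytest's built-in idiom. As a minimal sketch of the replacement pattern the check pushes contributors toward (the test name and the exception used here are hypothetical, chosen only for illustration):

    import pytest

    def test_rejects_bad_input():
        # pytest.raises(..., match=pattern) checks the exception message
        # against a regular expression, which is what the deprecated
        # assert_raises_regex helper used to do.
        with pytest.raises(ValueError, match="invalid literal"):
            int("not a number")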

ci/environment-dev.yaml  (file deleted, -20 lines)

ci/requirements-optional-conda.txt  (file deleted, -28 lines)

ci/requirements_dev.txt  (file deleted, -16 lines)
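These three files are superseded by the new top-level environment.yml (added below) and a generated requirements-dev.txt. The `dependencies` check added to ci/code_checks.sh calls scripts/generate_pip_deps_from_conda.py to keep the two in sync. A rough sketch of what such a comparison could look like, assuming PyYAML is available and a simplified one-to-one mapping between conda and pip specs; this is illustrative only, not the actual pandas script:

    import re
    import sys
    import yaml  # PyYAML, assumed available

    def conda_to_pip(dep):
        # Translate one conda spec to a pip spec (simplified); a few
        # package names differ between conda and PyPI.
        renames = {"pytables": "tables"}
        name = re.split(r"[=<>]", dep, maxsplit=1)[0]
        return dep.replace(name, renames.get(name, name), 1)

    with open("environment.yml") as f:
        env = yaml.safe_load(f)
    # Skip non-string entries (e.g. a nested pip section, if present).
    expected = sorted(conda_to_pip(d) for d in env["dependencies"]
                      if isinstance(d, str))

    with open("requirements-dev.txt") as f:
        actual = sorted(line.strip() for line in f if line.strip())

    if expected != actual:
        sys.exit("requirements-dev.txt is out of sync with environment.yml")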

doc/source/contributing.rst  (+3, -8)
@@ -170,7 +170,7 @@ We'll now kick off a three-step process:
 .. code-block:: none

    # Create and activate the build environment
-   conda env create -f ci/environment-dev.yaml
+   conda env create -f environment.yml
    conda activate pandas-dev

    # or with older versions of Anaconda:
@@ -180,9 +180,6 @@ We'll now kick off a three-step process:
    python setup.py build_ext --inplace -j 4
    python -m pip install -e .

-   # Install the rest of the optional dependencies
-   conda install -c defaults -c conda-forge --file=ci/requirements-optional-conda.txt
-
 At this point you should be able to import pandas from your locally built version::

    $ python  # start an interpreter
@@ -221,14 +218,12 @@ You'll need to have at least python3.5 installed on your system.
    . ~/virtualenvs/pandas-dev/bin/activate

    # Install the build dependencies
-   python -m pip install -r ci/requirements_dev.txt
+   python -m pip install -r requirements-dev.txt
+
    # Build and install pandas
    python setup.py build_ext --inplace -j 4
    python -m pip install -e .

-   # Install additional dependencies
-   python -m pip install -r ci/requirements-optional-pip.txt
-
 Creating a branch
 -----------------

doc/source/reshaping.rst  (+104, -6)
@@ -17,6 +17,8 @@ Reshaping and Pivot Tables
 Reshaping by pivoting DataFrame objects
 ---------------------------------------

+.. image:: _static/reshaping_pivot.png
+
 .. ipython::
    :suppress:

@@ -33,8 +35,7 @@ Reshaping by pivoting DataFrame objects

    In [3]: df = unpivot(tm.makeTimeDataFrame())

-Data is often stored in CSV files or databases in so-called "stacked" or
-"record" format:
+Data is often stored in so-called "stacked" or "record" format:

 .. ipython:: python

@@ -66,8 +67,6 @@ To select out everything for variable ``A`` we could do:

    df[df['variable'] == 'A']

-.. image:: _static/reshaping_pivot.png
-
 But suppose we wish to do time series operations with the variables. A better
 representation would be where the ``columns`` are the unique variables and an
 ``index`` of dates identifies individual observations. To reshape the data into
@@ -87,7 +86,7 @@ column:
 .. ipython:: python

    df['value2'] = df['value'] * 2
-   pivoted = df.pivot('date', 'variable')
+   pivoted = df.pivot(index='date', columns='variable')
    pivoted

 You can then select subsets from the pivoted ``DataFrame``:
@@ -99,6 +98,12 @@ You can then select subsets from the pivoted ``DataFrame``:
 Note that this returns a view on the underlying data in the case where the data
 are homogeneously-typed.

+.. note::
+   :func:`~pandas.pivot` will error with a ``ValueError: Index contains duplicate
+   entries, cannot reshape`` if the index/column pair is not unique. In this
+   case, consider using :func:`~pandas.pivot_table` which is a generalization
+   of pivot that can handle duplicate values for one index/column pair.
+
 .. _reshaping.stacking:

 Reshaping by stacking and unstacking
@@ -704,10 +709,103 @@ handling of NaN:
    In [3]: np.unique(x, return_inverse=True)[::-1]
    Out[3]: (array([3, 3, 0, 4, 1, 2]), array([nan, 3.14, inf, 'A', 'B'], dtype=object))

-
 .. note::
    If you just want to handle one column as a categorical variable (like R's factor),
    you can use ``df["cat_col"] = pd.Categorical(df["col"])`` or
    ``df["cat_col"] = df["col"].astype("category")``. For full docs on :class:`~pandas.Categorical`,
    see the :ref:`Categorical introduction <categorical>` and the
    :ref:`API documentation <api.categorical>`.
+
+Examples
+--------
+
+In this section, we will review frequently asked questions and examples. The
+column names and relevant column values are named to correspond with how this
+DataFrame will be pivoted in the answers below.
+
+.. ipython:: python
+
+   np.random.seed([3, 1415])
+   n = 20
+
+   cols = np.array(['key', 'row', 'item', 'col'])
+   df = cols + pd.DataFrame((np.random.randint(5, size=(n, 4)) // [2, 1, 2, 1]).astype(str))
+   df.columns = cols
+   df = df.join(pd.DataFrame(np.random.rand(n, 2).round(2)).add_prefix('val'))
+
+   df
+
+Pivoting with Single Aggregations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Suppose we wanted to pivot ``df`` such that the ``col`` values are columns,
+``row`` values are the index, and the mean of ``val0`` are the values. In
+particular, the resulting DataFrame should look like:
+
+.. code-block:: ipython
+
+   col   col0   col1   col2   col3  col4
+   row
+   row0  0.77  0.605    NaN  0.860  0.65
+   row2  0.13    NaN  0.395  0.500  0.25
+   row3   NaN  0.310    NaN  0.545   NaN
+   row4   NaN  0.100  0.395  0.760  0.24
+
+This solution uses :func:`~pandas.pivot_table`. Also note that
+``aggfunc='mean'`` is the default. It is included here to be explicit.
+
+.. ipython:: python
+
+   df.pivot_table(
+       values='val0', index='row', columns='col', aggfunc='mean')
+
+Note that we can also replace the missing values by using the ``fill_value``
+parameter.
+
+.. ipython:: python
+
+   df.pivot_table(
+       values='val0', index='row', columns='col', aggfunc='mean', fill_value=0)
+
+Also note that we can pass in other aggregation functions as well. For example,
+we can also pass in ``sum``.
+
+.. ipython:: python
+
+   df.pivot_table(
+       values='val0', index='row', columns='col', aggfunc='sum', fill_value=0)
+
+Another aggregation we can do is calculate the frequency in which the columns
+and rows occur together, a.k.a. "cross tabulation". To do this, we can pass
+``size`` to the ``aggfunc`` parameter.
+
+.. ipython:: python
+
+   df.pivot_table(index='row', columns='col', fill_value=0, aggfunc='size')
+
+Pivoting with Multiple Aggregations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+We can also perform multiple aggregations. For example, to perform both a
+``sum`` and ``mean``, we can pass in a list to the ``aggfunc`` argument.
+
+.. ipython:: python
+
+   df.pivot_table(
+       values='val0', index='row', columns='col', aggfunc=['mean', 'sum'])
+
+Note to aggregate over multiple value columns, we can pass in a list to the
+``values`` parameter.
+
+.. ipython:: python
+
+   df.pivot_table(
+       values=['val0', 'val1'], index='row', columns='col', aggfunc=['mean'])
+
+Note to subdivide over multiple columns we can pass in a list to the
+``columns`` parameter.
+
+.. ipython:: python
+
+   df.pivot_table(
+       values=['val0'], index='row', columns=['item', 'col'], aggfunc=['mean'])
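The ``.. note::`` added in this diff warns that :func:`~pandas.pivot` raises on a duplicate index/column pair while :func:`~pandas.pivot_table` aggregates it away. A small self-contained illustration of both behaviors (the data below is made up for the example, not taken from the patched docs):

    import pandas as pd

    df = pd.DataFrame({
        "date": ["2018-01-01", "2018-01-01", "2018-01-02"],
        "variable": ["A", "A", "B"],  # duplicate ("2018-01-01", "A") pair
        "value": [1.0, 2.0, 3.0],
    })

    # pivot cannot decide between the two values for the duplicated pair
    try:
        df.pivot(index="date", columns="variable", values="value")
    except ValueError as err:
        print(err)  # Index contains duplicate entries, cannot reshape

    # pivot_table aggregates the duplicates instead (mean by default)
    print(df.pivot_table(index="date", columns="variable", values="value"))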

doc/source/whatsnew/v0.24.0.txt  (+2)
@@ -247,6 +247,7 @@ Backwards incompatible API changes

 - A newly constructed empty :class:`DataFrame` with integer as the ``dtype`` will now only be cast to ``float64`` if ``index`` is specified (:issue:`22858`)
 - :meth:`Series.str.cat` will now raise if `others` is a `set` (:issue:`23009`)
+- Passing scalar values to :class:`DatetimeIndex` or :class:`TimedeltaIndex` will now raise ``TypeError`` instead of ``ValueError`` (:issue:`23539`)

 .. _whatsnew_0240.api_breaking.deps:

@@ -969,6 +970,7 @@ Deprecations
 - The class ``FrozenNDArray`` has been deprecated. When unpickling, ``FrozenNDArray`` will be unpickled to ``np.ndarray`` once this class is removed (:issue:`9031`)
 - Deprecated the `nthreads` keyword of :func:`pandas.read_feather` in favor of
   `use_threads` to reflect the changes in pyarrow 0.11.0. (:issue:`23053`)
+- Constructing a :class:`TimedeltaIndex` from data with ``datetime64``-dtyped data is deprecated, will raise ``TypeError`` in a future version (:issue:`23539`)

 .. _whatsnew_0240.deprecations.datetimelike_int_ops:
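To make the :issue:`23539` entry above concrete, a short sketch of the new behavior as described in the release note (illustrative only; the exact error message is not specified in the note):

    import pandas as pd

    # Scalars now raise TypeError (previously ValueError):
    try:
        pd.DatetimeIndex(pd.Timestamp("2018-01-01"))
    except TypeError as err:
        print(err)

    # Wrapping the scalar in a list-like works as before:
    print(pd.DatetimeIndex([pd.Timestamp("2018-01-01")]))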

environment.yml  (+53, new file)

@@ -0,0 +1,53 @@
+name: pandas-dev
+channels:
+  - defaults
+  - conda-forge
+dependencies:
+  # required
+  - NumPy
+  - python=3
+  - python-dateutil>=2.5.0
+  - pytz
+
+  # development
+  - Cython>=0.28.2
+  - flake8
+  - flake8-comprehensions
+  - flake8-rst
+  - hypothesis>=3.58.0
+  - isort
+  - moto
+  - pytest>=3.6
+  - setuptools>=24.2.0
+  - sphinx
+  - sphinxcontrib-spelling
+
+  # optional
+  - beautifulsoup4>=4.2.1
+  - blosc
+  - bottleneck>=1.2.0
+  - fastparquet>=0.1.2
+  - gcsfs
+  - html5lib
+  - ipython>=5.6.0
+  - ipykernel
+  - jinja2
+  - lxml
+  - matplotlib>=2.0.0
+  - nbsphinx
+  - numexpr>=2.6.1
+  - openpyxl
+  - pyarrow>=0.7.0
+  - pymysql
+  - pytables>=3.4.2
+  - pytest-cov
+  - pytest-xdist
+  - s3fs
+  - scipy>=0.18.1
+  - seaborn
+  - sqlalchemy
+  - statsmodels
+  - xarray
+  - xlrd
+  - xlsxwriter
+  - xlwt
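Once this environment is created (``conda env create -f environment.yml``, per the updated contributing.rst) and pandas has been built inside it, a quick sanity check that the required and optional dependencies resolved is (assuming the build steps above have been run):

    import pandas as pd

    # Prints the pandas version plus the detected versions of its
    # required and optional dependencies.
    pd.show_versions()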

pandas/core/arrays/datetimelike.py  (+6, -2)
@@ -124,8 +124,12 @@ def asi8(self):
         # do not cache or you'll create a memory leak
         return self._data.view('i8')

-    # ------------------------------------------------------------------
-    # Array-like Methods
+    # ----------------------------------------------------------------
+    # Array-Like / EA-Interface Methods
+
+    @property
+    def nbytes(self):
+        return self._data.nbytes

     @property
     def shape(self):
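The new ``nbytes`` property above fills in part of the array-like / EA-interface section by delegating to the backing ndarray, exactly as ``shape`` does. A standalone sketch of that delegation pattern, with a hypothetical stand-in class rather than the actual pandas mixin:

    import numpy as np

    class DatetimeLikeSketch:
        """Hypothetical stand-in: array-like properties delegate to the
        backing ndarray rather than computing anything themselves."""

        def __init__(self, values):
            self._data = np.asarray(values, dtype="datetime64[ns]")

        @property
        def nbytes(self):
            # Size in bytes of the underlying buffer, as in the commit.
            return self._data.nbytes

        @property
        def shape(self):
            return self._data.shape

    arr = DatetimeLikeSketch(["2018-01-01", "2018-01-02"])
    print(arr.nbytes)  # 16: two datetime64[ns] values, 8 bytes each
    print(arr.shape)   # (2,)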
