DOC: cleanup build warnings #14172

Merged (3 commits), Sep 7, 2016
136 changes: 11 additions & 125 deletions doc/source/dsintro.rst
@@ -935,134 +935,20 @@ method:
minor_axis=['a', 'b', 'c', 'd'])
panel.to_frame()


.. _dsintro.panel4d:

Panel4D (Experimental)
----------------------

.. warning::

In 0.19.0 ``Panel4D`` is deprecated and will be removed in a future version. The recommended way to represent these types of n-dimensional data are with the `xarray package <http://xarray.pydata.org/en/stable/>`__. Pandas provides a :meth:`~Panel4D.to_xarray` method to automate this conversion.

``Panel4D`` is a 4-Dimensional named container very much like a ``Panel``, but
having 4 named dimensions. It is intended as a test bed for more N-Dimensional named
containers.

- **labels**: axis 0, each item corresponds to a Panel contained inside
- **items**: axis 1, each item corresponds to a DataFrame contained inside
- **major_axis**: axis 2, it is the **index** (rows) of each of the
DataFrames
- **minor_axis**: axis 3, it is the **columns** of each of the DataFrames

``Panel4D`` is a sub-class of ``Panel``, so most methods that work on Panels are
applicable to Panel4D. The following methods are disabled:

- ``join , to_frame , to_excel , to_sparse , groupby``

Construction of Panel4D works in a very similar manner to a ``Panel``

From 4D ndarray with optional axis labels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. ipython:: python

p4d = pd.Panel4D(np.random.randn(2, 2, 5, 4),
labels=['Label1','Label2'],
items=['Item1', 'Item2'],
major_axis=pd.date_range('1/1/2000', periods=5),
minor_axis=['A', 'B', 'C', 'D'])
p4d
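``Panel4D`` itself no longer exists in modern pandas, but the same labelled 4-D data can be sketched with a 4-level ``MultiIndex``. The axis names below simply mirror the constructor call above; this mapping is an illustration, not an official migration path:

```python
import numpy as np
import pandas as pd

# The four Panel4D axes (labels, items, major_axis, minor_axis),
# rebuilt as a 4-level MultiIndex on a flat Series.
data = np.random.randn(2, 2, 5, 4)
index = pd.MultiIndex.from_product(
    [['Label1', 'Label2'],
     ['Item1', 'Item2'],
     pd.date_range('1/1/2000', periods=5),
     ['A', 'B', 'C', 'D']],
    names=['labels', 'items', 'major_axis', 'minor_axis'])
s = pd.Series(data.ravel(), index=index)

# Selecting one label/item pair recovers a DataFrame-shaped block,
# the analogue of slicing a Panel4D down to a DataFrame.
block = s.loc['Label1', 'Item1'].unstack('minor_axis')
```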


From dict of Panel objects
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. ipython:: python

data = { 'Label1' : pd.Panel({ 'Item1' : pd.DataFrame(np.random.randn(4, 3)) }),
'Label2' : pd.Panel({ 'Item2' : pd.DataFrame(np.random.randn(4, 2)) }) }
pd.Panel4D(data)

Note that the values in the dict need only be **convertible to Panels**.
Thus, they can be any of the other valid inputs to Panel as per above.

Slicing
~~~~~~~

Slicing works in a similar manner to a Panel. ``[]`` slices the first dimension.
``.ix`` allows you to slice arbitrarily and get back lower-dimensional objects:

.. ipython:: python

p4d['Label1']

4D -> Panel

.. ipython:: python

p4d.ix[:,:,:,'A']

4D -> DataFrame

.. ipython:: python

p4d.ix[:,:,0,'A']

4D -> Series

.. ipython:: python

p4d.ix[:,0,0,'A']

Transposing
~~~~~~~~~~~

A Panel4D can be rearranged using its ``transpose`` method (which does not make a
copy by default unless the data are heterogeneous):

.. ipython:: python

p4d.transpose(3, 2, 1, 0)
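The same axis permutation can be seen on a plain NumPy array (an analogy for the deprecated ``Panel4D``, not the class itself): ``transpose(3, 2, 1, 0)`` reverses the axis order, so a ``(2, 2, 5, 4)`` block becomes ``(4, 5, 2, 2)``.

```python
import numpy as np

# transpose with an explicit axis order returns a view with the
# axes permuted; no data are copied for a homogeneous ndarray.
arr = np.random.randn(2, 2, 5, 4)
flipped = arr.transpose(3, 2, 1, 0)
```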

.. _dsintro.panelnd:
+.. _dsintro.panel4d:

-PanelND (Experimental)
-----------------------
+Panel4D and PanelND (Deprecated)
+--------------------------------

.. warning::

-In 0.19.0 ``PanelND`` is deprecated and will be removed in a future version. The recommended way to represent these types of n-dimensional data are with the `xarray package <http://xarray.pydata.org/en/stable/>`__.
+In 0.19.0 ``Panel4D`` and ``PanelND`` are deprecated and will be removed in
+a future version. The recommended way to represent these types of
+n-dimensional data are with the
+`xarray package <http://xarray.pydata.org/en/stable/>`__.
+Pandas provides a :meth:`~Panel4D.to_xarray` method to automate
+this conversion.

PanelND is a module with a set of factory functions to enable a user to construct N-dimensional named
containers like Panel4D, with a custom set of axis labels. Thus a domain-specific container can easily be
created.

The following creates a Panel5D. A new panel type object must be sliceable into a lower dimensional object.
Here we slice to a Panel4D.

.. ipython:: python
:okwarning:

from pandas.core import panelnd
Panel5D = panelnd.create_nd_panel_factory(
klass_name = 'Panel5D',
orders = [ 'cool', 'labels','items','major_axis','minor_axis'],
slices = { 'labels' : 'labels', 'items' : 'items',
'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
slicer = pd.Panel4D,
aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
stat_axis = 2)

p5d = Panel5D(dict(C1 = p4d))
p5d

# print a slice of our 5D
p5d.ix['C1',:,:,0:3,:]

# transpose it
p5d.transpose(1,2,3,4,0)

# look at the shape & dim
p5d.shape
p5d.ndim
See the `docs of a previous version <http://pandas.pydata.org/pandas-docs/version/0.18.1/dsintro.html#panel4d-experimental>`__
for documentation on these objects.
5 changes: 3 additions & 2 deletions doc/source/install.rst
@@ -255,6 +255,7 @@ Optional Dependencies

* `matplotlib <http://matplotlib.org/>`__: for plotting
* For Excel I/O:

* `xlrd/xlwt <http://www.python-excel.org/>`__: Excel reading (xlrd) and writing (xlwt)
* `openpyxl <http://packages.python.org/openpyxl/>`__: openpyxl version 1.6.1
or higher (but lower than 2.0.0), or version 2.2 or higher, for writing .xlsx files (xlrd >= 0.9.0)
@@ -296,8 +297,8 @@ Optional Dependencies
<html-gotchas>`. It explains issues surrounding the installation and
usage of the above three libraries
* You may need to install an older version of `BeautifulSoup4`_:
-- Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
-32-bit Ubuntu/Debian
+Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and 32-bit
+Ubuntu/Debian
* Additionally, if you're using `Anaconda`_ you should definitely
read :ref:`the gotchas about HTML parsing libraries <html-gotchas>`

36 changes: 4 additions & 32 deletions doc/source/sparse.rst
@@ -9,7 +9,7 @@
import pandas as pd
import pandas.util.testing as tm
np.set_printoptions(precision=4, suppress=True)
-options.display.max_rows = 15
+pd.options.display.max_rows = 15

**********************
Sparse data structures
@@ -90,38 +90,10 @@ can be converted back to a regular ndarray by calling ``to_dense``:
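The ``to_dense`` round trip mentioned in the hunk context can be sketched with the sparse API of a newer pandas than this PR targets (``pd.arrays.SparseArray`` is an assumption about the modern spelling):

```python
import numpy as np
import pandas as pd

# SparseArray stores only the non-fill values (here, the non-NaN ones);
# converting back to an ndarray recovers the full dense data.
dense_in = np.array([1.0, np.nan, np.nan, 2.0])
sparr = pd.arrays.SparseArray(dense_in)
dense_out = np.asarray(sparr)
```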
SparseList
----------

-.. note:: The ``SparseList`` class has been deprecated and will be removed in a future version.
+The ``SparseList`` class has been deprecated and will be removed in a future version.
See the `docs of a previous version <http://pandas.pydata.org/pandas-docs/version/0.18.1/sparse.html#sparselist>`__
for documentation on ``SparseList``.

``SparseList`` is a list-like data structure for managing a dynamic collection
of SparseArrays. To create one, simply call the ``SparseList`` constructor with
a ``fill_value`` (defaulting to ``NaN``):

.. ipython:: python

spl = pd.SparseList()
spl

The two important methods are ``append`` and ``to_array``. ``append`` can
accept scalar values or any 1-dimensional sequence:

.. ipython:: python
:suppress:

.. ipython:: python

spl.append(np.array([1., np.nan, np.nan, 2., 3.]))
spl.append(5)
spl.append(sparr)
spl

As you can see, all of the contents are stored internally as a list of
memory-efficient ``SparseArray`` objects. Once you've accumulated all of the
data, you can call ``to_array`` to get a single ``SparseArray`` with all the
data:

.. ipython:: python

spl.to_array()

SparseIndex objects
-------------------
6 changes: 2 additions & 4 deletions doc/source/timeseries.rst
@@ -1219,7 +1219,7 @@ objects.
ts.shift(1)

The shift method accepts an ``freq`` argument which can accept a
-``DateOffset`` class or other ``timedelta``-like object or also a :ref:`offset alias <timeseries.alias>`:
+``DateOffset`` class or other ``timedelta``-like object or also a :ref:`offset alias <timeseries.offset_aliases>`:

.. ipython:: python
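A runnable sketch of the ``freq`` behaviour described above (the series ``ts`` here is a stand-in for the one in the elided example):

```python
import numpy as np
import pandas as pd

# With `freq`, shift moves the index rather than the data: the values
# keep their order and every timestamp slides forward two days.
ts = pd.Series(np.arange(5),
               index=pd.date_range('2000-01-01', periods=5, freq='D'))
shifted = ts.shift(2, freq='D')
```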

@@ -1494,7 +1494,7 @@ level of ``MultiIndex``, its name or location can be passed to the

.. ipython:: python

-df.resample(level='d').sum()
+df.resample('M', level='d').sum()
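The corrected call needs both a frequency string and the ``level`` name. A self-contained sketch, using ``'D'`` instead of the ``'M'`` above purely for illustration:

```python
import numpy as np
import pandas as pd

# Resampling on the datetime level of a MultiIndex: `level` names the
# index level that carries the dates; the frequency string is required.
idx = pd.MultiIndex.from_product(
    [['a', 'b'], pd.date_range('2000-01-01', periods=4, freq='D')],
    names=['key', 'd'])
df = pd.DataFrame({'x': np.arange(8)}, index=idx)
daily = df.resample('D', level='d').sum()   # sums across keys per day
```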


.. _timeseries.periods:
@@ -1630,8 +1630,6 @@ Period Dtypes
``PeriodIndex`` has a custom ``period`` dtype. This is a pandas extension
dtype similar to the :ref:`timezone aware dtype <timeseries.timezone_series>` (``datetime64[ns, tz]``).

.. _timeseries.timezone_series:

The ``period`` dtype holds the ``freq`` attribute and is represented with
``period[freq]`` like ``period[D]`` or ``period[M]``, using :ref:`frequency strings <timeseries.offset_aliases>`.
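A quick illustration of the ``period[freq]`` dtype:

```python
import pandas as pd

# A PeriodIndex carries a `period[freq]` extension dtype; monthly
# periods get dtype `period[M]`.
pi = pd.period_range('2000-01', periods=3, freq='M')
```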

2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.14.1.txt
@@ -156,7 +156,7 @@ Experimental
~~~~~~~~~~~~

- ``pandas.io.data.Options`` has a new method, ``get_all_data`` method, and now consistently returns a
-multi-indexed ``DataFrame``, see :ref:`the docs <remote_data.yahoo_options>`. (:issue:`5602`)
+multi-indexed ``DataFrame`` (:issue:`5602`)
- ``io.gbq.read_gbq`` and ``io.gbq.to_gbq`` were refactored to remove the
dependency on the Google ``bq.py`` command line client. This submodule
now uses ``httplib2`` and the Google ``apiclient`` and ``oauth2client`` API client
4 changes: 1 addition & 3 deletions doc/source/whatsnew/v0.15.1.txt
@@ -185,8 +185,6 @@ API changes
2014-11-22 call AAPL141122C00110000 1.02
2014-11-28 call AAPL141128C00110000 1.32

See the Options documentation in :ref:`Remote Data <remote_data.yahoo_options>`

.. _whatsnew_0151.datetime64_plotting:

- pandas now also registers the ``datetime64`` dtype in matplotlib's units registry
@@ -257,7 +255,7 @@ Enhancements

- Added support for 3-character ISO and non-standard country codes in :func:`io.wb.download()` (:issue:`8482`)

-- :ref:`World Bank data requests <remote_data.wb>` now will warn/raise based
+- World Bank data requests now will warn/raise based
on an ``errors`` argument, as well as a list of hard-coded country codes and
the World Bank's JSON response. In prior versions, the error messages
didn't look at the World Bank's JSON response. Problem-inducing input were
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.8.0.txt
@@ -59,7 +59,7 @@ Time series changes and improvements
aggregation functions, and control over how the intervals and result labeling
are defined. A suite of high performance Cython/C-based resampling functions
(including Open-High-Low-Close) have also been implemented.
-- Revamp of :ref:`frequency aliases <timeseries.alias>` and support for
+- Revamp of :ref:`frequency aliases <timeseries.offset_aliases>` and support for
**frequency shortcuts** like '15min', or '1h30min'
- New :ref:`DatetimeIndex class <timeseries.datetimeindex>` supports both fixed
frequency and irregular time
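Those frequency shortcuts still parse on modern pandas; for instance:

```python
import pandas as pd

# A frequency shortcut combines a count with a unit in one string.
rng = pd.date_range('2000-01-01', periods=4, freq='15min')
```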
2 changes: 1 addition & 1 deletion pandas/core/generic.py
@@ -3998,7 +3998,7 @@ def asfreq(self, freq, method=None, how=None, normalize=False):
converted : type of caller

To learn more about the frequency strings, please see `this link
-<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
+<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
"""
from pandas.tseries.resample import asfreq
return asfreq(self, freq, method=method, how=how, normalize=normalize)
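A small sketch of the ``asfreq`` call this docstring documents, conforming a daily series to an every-other-day grid:

```python
import numpy as np
import pandas as pd

# asfreq keeps only the timestamps that fall on the new frequency grid;
# here every other day of a 5-day daily series survives.
ts = pd.Series(np.arange(5.0),
               index=pd.date_range('2000-01-01', periods=5, freq='D'))
out = ts.asfreq('2D')
```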
32 changes: 20 additions & 12 deletions pandas/io/gbq.py
@@ -630,16 +630,20 @@ def read_gbq(query, project_id=None, index_col=None, col_order=None,
https://developers.google.com/api-client-library/python/apis/bigquery/v2

Authentication to the Google BigQuery service is via OAuth 2.0.

-By default "application default credentials" are used.
-
-.. versionadded:: 0.19.0
-
-If default application credentials are not found or are restrictive,
-user account credentials are used. In this case, you will be asked to
-grant permissions for product name 'pandas GBQ'.
-
-Service account credentials will be used to authenticate.
+- If "private_key" is not provided:
+
+  By default "application default credentials" are used.
+
+  .. versionadded:: 0.19.0
+
+  If default application credentials are not found or are restrictive,
+  user account credentials are used. In this case, you will be asked to
+  grant permissions for product name 'pandas GBQ'.
+
+- If "private_key" is provided:
+
+  Service account credentials will be used to authenticate.

Parameters
----------
@@ -747,16 +751,20 @@ def to_gbq(dataframe, destination_table, project_id, chunksize=10000,
https://developers.google.com/api-client-library/python/apis/bigquery/v2

Authentication to the Google BigQuery service is via OAuth 2.0.

-By default "application default credentials" are used.
-
-.. versionadded:: 0.19.0
-
-If default application credentials are not found or are restrictive,
-user account credentials are used. In this case, you will be asked to
-grant permissions for product name 'pandas GBQ'.
-
-Service account credentials will be used to authenticate.
+- If "private_key" is not provided:
+
+  By default "application default credentials" are used.
+
+  .. versionadded:: 0.19.0
+
+  If default application credentials are not found or are restrictive,
+  user account credentials are used. In this case, you will be asked to
+  grant permissions for product name 'pandas GBQ'.
+
+- If "private_key" is provided:
+
+  Service account credentials will be used to authenticate.

Parameters
----------