diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index b5ad681426b15..6063e3e8bce45 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -935,134 +935,20 @@ method:
minor_axis=['a', 'b', 'c', 'd'])
panel.to_frame()
-
-.. _dsintro.panel4d:
-
-Panel4D (Experimental)
-----------------------
-
-.. warning::
-
- In 0.19.0 ``Panel4D`` is deprecated and will be removed in a future version. The recommended way to represent these types of n-dimensional data are with the `xarray package `__. Pandas provides a :meth:`~Panel4D.to_xarray` method to automate this conversion.
-
-``Panel4D`` is a 4-Dimensional named container very much like a ``Panel``, but
-having 4 named dimensions. It is intended as a test bed for more N-Dimensional named
-containers.
-
- - **labels**: axis 0, each item corresponds to a Panel contained inside
- - **items**: axis 1, each item corresponds to a DataFrame contained inside
- - **major_axis**: axis 2, it is the **index** (rows) of each of the
- DataFrames
- - **minor_axis**: axis 3, it is the **columns** of each of the DataFrames
-
-``Panel4D`` is a sub-class of ``Panel``, so most methods that work on Panels are
-applicable to Panel4D. The following methods are disabled:
-
- - ``join , to_frame , to_excel , to_sparse , groupby``
-
-Construction of Panel4D works in a very similar manner to a ``Panel``
-
-From 4D ndarray with optional axis labels
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. ipython:: python
-
- p4d = pd.Panel4D(np.random.randn(2, 2, 5, 4),
- labels=['Label1','Label2'],
- items=['Item1', 'Item2'],
- major_axis=pd.date_range('1/1/2000', periods=5),
- minor_axis=['A', 'B', 'C', 'D'])
- p4d
-
-
-From dict of Panel objects
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. ipython:: python
-
- data = { 'Label1' : pd.Panel({ 'Item1' : pd.DataFrame(np.random.randn(4, 3)) }),
- 'Label2' : pd.Panel({ 'Item2' : pd.DataFrame(np.random.randn(4, 2)) }) }
- pd.Panel4D(data)
-
-Note that the values in the dict need only be **convertible to Panels**.
-Thus, they can be any of the other valid inputs to Panel as per above.
-
-Slicing
-~~~~~~~
-
-Slicing works in a similar manner to a Panel. ``[]`` slices the first dimension.
-``.ix`` allows you to slice arbitrarily and get back lower dimensional objects
-
-.. ipython:: python
-
- p4d['Label1']
-
-4D -> Panel
-
-.. ipython:: python
-
- p4d.ix[:,:,:,'A']
-
-4D -> DataFrame
-
-.. ipython:: python
-
- p4d.ix[:,:,0,'A']
-
-4D -> Series
-
-.. ipython:: python
-
- p4d.ix[:,0,0,'A']
-
-Transposing
-~~~~~~~~~~~
-
-A Panel4D can be rearranged using its ``transpose`` method (which does not make a
-copy by default unless the data are heterogeneous):
-
-.. ipython:: python
-
- p4d.transpose(3, 2, 1, 0)
-
.. _dsintro.panelnd:
+.. _dsintro.panel4d:
-PanelND (Experimental)
-----------------------
+Panel4D and PanelND (Deprecated)
+--------------------------------
.. warning::
- In 0.19.0 ``PanelND`` is deprecated and will be removed in a future version. The recommended way to represent these types of n-dimensional data are with the `xarray package `__.
+   In 0.19.0 ``Panel4D`` and ``PanelND`` are deprecated and will be removed in
+   a future version. The recommended way to represent these types of
+   n-dimensional data is with the
+   `xarray package <http://xarray.pydata.org/en/stable/>`__.
+   Pandas provides a :meth:`~Panel4D.to_xarray` method to automate
+   this conversion.
-PanelND is a module with a set of factory functions to enable a user to construct N-dimensional named
-containers like Panel4D, with a custom set of axis labels. Thus a domain-specific container can easily be
-created.
-
-The following creates a Panel5D. A new panel type object must be sliceable into a lower dimensional object.
-Here we slice to a Panel4D.
-
-.. ipython:: python
- :okwarning:
-
- from pandas.core import panelnd
- Panel5D = panelnd.create_nd_panel_factory(
- klass_name = 'Panel5D',
- orders = [ 'cool', 'labels','items','major_axis','minor_axis'],
- slices = { 'labels' : 'labels', 'items' : 'items',
- 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
- slicer = pd.Panel4D,
- aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
- stat_axis = 2)
-
- p5d = Panel5D(dict(C1 = p4d))
- p5d
-
- # print a slice of our 5D
- p5d.ix['C1',:,:,0:3,:]
-
- # transpose it
- p5d.transpose(1,2,3,4,0)
-
- # look at the shape & dim
- p5d.shape
- p5d.ndim
+See the `docs of a previous version `__
+for documentation on these objects.
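Since ``Panel4D`` is deprecated in favor of xarray, one alternative within pandas itself is to flatten the extra named axes into a ``MultiIndex``. A minimal sketch (the axis names ``labels``/``items``/``major_axis`` are borrowed from ``Panel4D`` purely for illustration; xarray remains the recommended route for genuinely n-dimensional data):

```python
import numpy as np
import pandas as pd

# Flatten the first three named axes (labels, items, major_axis) into a
# MultiIndex and keep the fourth axis (minor_axis) as columns.
values = np.arange(2 * 2 * 3 * 4, dtype=float).reshape(2, 2, 3, 4)
index = pd.MultiIndex.from_product(
    [['Label1', 'Label2'],
     ['Item1', 'Item2'],
     pd.date_range('1/1/2000', periods=3)],
    names=['labels', 'items', 'major_axis'])
df = pd.DataFrame(values.reshape(-1, 4), index=index,
                  columns=['A', 'B', 'C', 'D'])

# Selecting one label drops a dimension, much like p4d['Label1'] did
sub = df.xs('Label1', level='labels')
```

Cross-sections via ``xs`` then play the role of the lower-dimensional slices that ``Panel4D`` returned.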
diff --git a/doc/source/install.rst b/doc/source/install.rst
index f8ee0542ea17e..6295e6f6cbb68 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -255,6 +255,7 @@ Optional Dependencies
* `matplotlib `__: for plotting
* For Excel I/O:
+
* `xlrd/xlwt `__: Excel reading (xlrd) and writing (xlwt)
* `openpyxl `__: openpyxl version 1.6.1
or higher (but lower than 2.0.0), or version 2.2 or higher, for writing .xlsx files (xlrd >= 0.9.0)
@@ -296,8 +297,8 @@ Optional Dependencies
`. It explains issues surrounding the installation and
usage of the above three libraries
* You may need to install an older version of `BeautifulSoup4`_:
- - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
- 32-bit Ubuntu/Debian
+ Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and 32-bit
+ Ubuntu/Debian
* Additionally, if you're using `Anaconda`_ you should definitely
read :ref:`the gotchas about HTML parsing libraries `
diff --git a/doc/source/sparse.rst b/doc/source/sparse.rst
index b6c5c15bc9081..d3f921f8762cc 100644
--- a/doc/source/sparse.rst
+++ b/doc/source/sparse.rst
@@ -9,7 +9,7 @@
import pandas as pd
import pandas.util.testing as tm
np.set_printoptions(precision=4, suppress=True)
- options.display.max_rows = 15
+ pd.options.display.max_rows = 15
**********************
Sparse data structures
@@ -90,38 +90,10 @@ can be converted back to a regular ndarray by calling ``to_dense``:
SparseList
----------
-.. note:: The ``SparseList`` class has been deprecated and will be removed in a future version.
+The ``SparseList`` class has been deprecated and will be removed in a future version.
+See the `docs of a previous version `__
+for documentation on ``SparseList``.
-``SparseList`` is a list-like data structure for managing a dynamic collection
-of SparseArrays. To create one, simply call the ``SparseList`` constructor with
-a ``fill_value`` (defaulting to ``NaN``):
-
-.. ipython:: python
-
- spl = pd.SparseList()
- spl
-
-The two important methods are ``append`` and ``to_array``. ``append`` can
-accept scalar values or any 1-dimensional sequence:
-
-.. ipython:: python
- :suppress:
-
-.. ipython:: python
-
- spl.append(np.array([1., np.nan, np.nan, 2., 3.]))
- spl.append(5)
- spl.append(sparr)
- spl
-
-As you can see, all of the contents are stored internally as a list of
-memory-efficient ``SparseArray`` objects. Once you've accumulated all of the
-data, you can call ``to_array`` to get a single ``SparseArray`` with all the
-data:
-
-.. ipython:: python
-
- spl.to_array()
SparseIndex objects
-------------------
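The accumulate-then-combine pattern that ``SparseList`` offered can be sketched with a plain Python list and a single ``SparseArray``. A minimal sketch, assuming a pandas version where ``pd.arrays.SparseArray`` is available (0.24+):

```python
import numpy as np
import pandas as pd

# SparseList's append/to_array pattern, rebuilt with a plain list:
# accumulate 1-D chunks, then concatenate into one SparseArray at the end.
chunks = [
    np.array([1.0, np.nan, np.nan, 2.0, 3.0]),
    np.array([5.0]),
    np.array([np.nan, np.nan]),
]
arr = pd.arrays.SparseArray(np.concatenate(chunks))  # fill_value defaults to NaN
dense = np.asarray(arr)  # back to a regular ndarray, like to_dense()
```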
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 36e492df29983..7ab97c6af3583 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -1219,7 +1219,7 @@ objects.
ts.shift(1)
The shift method accepts an ``freq`` argument which can accept a
-``DateOffset`` class or other ``timedelta``-like object or also a :ref:`offset alias `:
+``DateOffset`` class or other ``timedelta``-like object, or an :ref:`offset alias `:
.. ipython:: python
@@ -1494,7 +1494,7 @@ level of ``MultiIndex``, its name or location can be passed to the
.. ipython:: python
- df.resample(level='d').sum()
+ df.resample('M', level='d').sum()
.. _timeseries.periods:
@@ -1630,8 +1630,6 @@ Period Dtypes
``PeriodIndex`` has a custom ``period`` dtype. This is a pandas extension
dtype similar to the :ref:`timezone aware dtype ` (``datetime64[ns, tz]``).
-.. _timeseries.timezone_series:
-
The ``period`` dtype holds the ``freq`` attribute and is represented with
``period[freq]`` like ``period[D]`` or ``period[M]``, using :ref:`frequency strings `.
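The ``resample`` fix above works because ``resample`` requires a frequency as its first argument, with ``level`` only telling it which ``MultiIndex`` level holds the datetimes. A small sketch of the same shape (using a ``'D'`` frequency rather than the doc's ``'M'`` just to keep the example compact):

```python
import pandas as pd

# resample() needs a frequency plus, for a MultiIndex, which level holds
# the datetimes -- the same shape as df.resample('M', level='d') above.
idx = pd.MultiIndex.from_product(
    [['x', 'y'], pd.date_range('2000-01-01', periods=3)],
    names=['key', 'd'])
df = pd.DataFrame({'value': range(6)}, index=idx)
daily = df.resample('D', level='d').sum()  # sums across 'key' per day
```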
diff --git a/doc/source/whatsnew/v0.14.1.txt b/doc/source/whatsnew/v0.14.1.txt
index 84f2a77203c41..239d6c9c6e0d4 100644
--- a/doc/source/whatsnew/v0.14.1.txt
+++ b/doc/source/whatsnew/v0.14.1.txt
@@ -156,7 +156,7 @@ Experimental
~~~~~~~~~~~~
- ``pandas.io.data.Options`` has a new method, ``get_all_data`` method, and now consistently returns a
- multi-indexed ``DataFrame``, see :ref:`the docs `. (:issue:`5602`)
+ multi-indexed ``DataFrame`` (:issue:`5602`)
- ``io.gbq.read_gbq`` and ``io.gbq.to_gbq`` were refactored to remove the
dependency on the Google ``bq.py`` command line client. This submodule
now uses ``httplib2`` and the Google ``apiclient`` and ``oauth2client`` API client
diff --git a/doc/source/whatsnew/v0.15.1.txt b/doc/source/whatsnew/v0.15.1.txt
index a25e5a80b65fc..cd9298c74539a 100644
--- a/doc/source/whatsnew/v0.15.1.txt
+++ b/doc/source/whatsnew/v0.15.1.txt
@@ -185,8 +185,6 @@ API changes
2014-11-22 call AAPL141122C00110000 1.02
2014-11-28 call AAPL141128C00110000 1.32
- See the Options documentation in :ref:`Remote Data `
-
.. _whatsnew_0151.datetime64_plotting:
- pandas now also registers the ``datetime64`` dtype in matplotlib's units registry
@@ -257,7 +255,7 @@ Enhancements
- Added support for 3-character ISO and non-standard country codes in :func:`io.wb.download()` (:issue:`8482`)
-- :ref:`World Bank data requests ` now will warn/raise based
+- World Bank data requests now will warn/raise based
on an ``errors`` argument, as well as a list of hard-coded country codes and
the World Bank's JSON response. In prior versions, the error messages
didn't look at the World Bank's JSON response. Problem-inducing input were
diff --git a/doc/source/whatsnew/v0.8.0.txt b/doc/source/whatsnew/v0.8.0.txt
index cf6ac7c1e6ad2..4136c108fba57 100644
--- a/doc/source/whatsnew/v0.8.0.txt
+++ b/doc/source/whatsnew/v0.8.0.txt
@@ -59,7 +59,7 @@ Time series changes and improvements
aggregation functions, and control over how the intervals and result labeling
are defined. A suite of high performance Cython/C-based resampling functions
(including Open-High-Low-Close) have also been implemented.
-- Revamp of :ref:`frequency aliases ` and support for
+- Revamp of :ref:`frequency aliases ` and support for
**frequency shortcuts** like '15min', or '1h30min'
- New :ref:`DatetimeIndex class ` supports both fixed
frequency and irregular time
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 5a17401ea67b1..ea5dca32945e8 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3998,7 +3998,7 @@ def asfreq(self, freq, method=None, how=None, normalize=False):
converted : type of caller
To learn more about the frequency strings, please see `this link
- `__.
+`__.
"""
from pandas.tseries.resample import asfreq
return asfreq(self, freq, method=method, how=how, normalize=normalize)
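The docstring fixed above belongs to ``asfreq``, which reindexes a time series to a new fixed frequency. A small sketch of its behavior:

```python
import pandas as pd

# asfreq() reindexes to a new fixed frequency; gaps that the finer
# frequency introduces become NaN unless a fill method is given.
ts = pd.Series([0.0, 1.0, 2.0],
               index=pd.date_range('2000-01-01', periods=3, freq='2D'))
daily = ts.asfreq('D')                   # Jan 1-5; every other value is NaN
filled = ts.asfreq('D', method='ffill')  # forward-fill the gaps
```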
diff --git a/pandas/io/gbq.py b/pandas/io/gbq.py
index 068cfee2b2aa2..8f23e82daf2e3 100644
--- a/pandas/io/gbq.py
+++ b/pandas/io/gbq.py
@@ -630,16 +630,20 @@ def read_gbq(query, project_id=None, index_col=None, col_order=None,
https://developers.google.com/api-client-library/python/apis/bigquery/v2
Authentication to the Google BigQuery service is via OAuth 2.0.
+
- If "private_key" is not provided:
- By default "application default credentials" are used.
- .. versionadded:: 0.19.0
+ By default "application default credentials" are used.
+
+ .. versionadded:: 0.19.0
+
+ If default application credentials are not found or are restrictive,
+ user account credentials are used. In this case, you will be asked to
+ grant permissions for product name 'pandas GBQ'.
- If default application credentials are not found or are restrictive,
- user account credentials are used. In this case, you will be asked to
- grant permissions for product name 'pandas GBQ'.
- If "private_key" is provided:
- Service account credentials will be used to authenticate.
+
+ Service account credentials will be used to authenticate.
Parameters
----------
@@ -747,16 +751,20 @@ def to_gbq(dataframe, destination_table, project_id, chunksize=10000,
https://developers.google.com/api-client-library/python/apis/bigquery/v2
Authentication to the Google BigQuery service is via OAuth 2.0.
+
- If "private_key" is not provided:
- By default "application default credentials" are used.
- .. versionadded:: 0.19.0
+ By default "application default credentials" are used.
+
+ .. versionadded:: 0.19.0
+
+ If default application credentials are not found or are restrictive,
+ user account credentials are used. In this case, you will be asked to
+ grant permissions for product name 'pandas GBQ'.
- If default application credentials are not found or are restrictive,
- user account credentials are used. In this case, you will be asked to
- grant permissions for product name 'pandas GBQ'.
- If "private_key" is provided:
- Service account credentials will be used to authenticate.
+
+ Service account credentials will be used to authenticate.
Parameters
----------