
BUG: Frequency not set on empty series #14340

Closed
6 changes: 3 additions & 3 deletions .github/CONTRIBUTING.md
@@ -278,7 +278,7 @@ Please try to maintain backward compatibility. *pandas* has lots of users with l

Adding tests is one of the most common requests after code is pushed to *pandas*. Therefore, it is worth getting in the habit of writing tests ahead of time so this is never an issue.

Like many packages, *pandas* uses the [Nose testing system](http://nose.readthedocs.org/en/latest/index.html) and the convenient extensions in [numpy.testing](http://docs.scipy.org/doc/numpy/reference/routines.testing.html).
Like many packages, *pandas* uses the [Nose testing system](https://nose.readthedocs.io/en/latest/index.html) and the convenient extensions in [numpy.testing](http://docs.scipy.org/doc/numpy/reference/routines.testing.html).

#### Writing tests

@@ -323,7 +323,7 @@ Performance matters and it is worth considering whether your code has introduced
>
> The asv benchmark suite was translated from the previous framework, vbench, so many stylistic issues are likely a result of automated transformation of the code.

To use asv you will need either `conda` or `virtualenv`. For more details please check the [asv installation webpage](http://asv.readthedocs.org/en/latest/installing.html).
To use asv you will need either `conda` or `virtualenv`. For more details please check the [asv installation webpage](https://asv.readthedocs.io/en/latest/installing.html).

To install asv:

@@ -360,7 +360,7 @@ This command is equivalent to:

This will launch every test only once, display stderr from the benchmarks, and use your local `python` that comes from your `$PATH`.

Information on how to write a benchmark can be found in the [asv documentation](http://asv.readthedocs.org/en/latest/writing_benchmarks.html).
Information on how to write a benchmark can be found in the [asv documentation](https://asv.readthedocs.io/en/latest/writing_benchmarks.html).

#### Running the vbench performance test suite (phasing out)

4 changes: 2 additions & 2 deletions ci/prep_cython_cache.sh
@@ -3,8 +3,8 @@
ls "$HOME/.cache/"

PYX_CACHE_DIR="$HOME/.cache/pyxfiles"
pyx_file_list=`find ${TRAVIS_BUILD_DIR} -name "*.pyx"`
pyx_cache_file_list=`find ${PYX_CACHE_DIR} -name "*.pyx"`
pyx_file_list=`find ${TRAVIS_BUILD_DIR} -name "*.pyx" -o -name "*.pxd"`
pyx_cache_file_list=`find ${PYX_CACHE_DIR} -name "*.pyx" -o -name "*.pxd"`

CACHE_File="$HOME/.cache/cython_files.tar"

2 changes: 1 addition & 1 deletion ci/submit_cython_cache.sh
@@ -2,7 +2,7 @@

CACHE_File="$HOME/.cache/cython_files.tar"
PYX_CACHE_DIR="$HOME/.cache/pyxfiles"
pyx_file_list=`find ${TRAVIS_BUILD_DIR} -name "*.pyx"`
pyx_file_list=`find ${TRAVIS_BUILD_DIR} -name "*.pyx" -o -name "*.pxd"`

rm -rf $CACHE_File
rm -rf $PYX_CACHE_DIR
14 changes: 7 additions & 7 deletions doc/source/basics.rst
@@ -1794,18 +1794,18 @@ The following functions are available for one dimensional object arrays or scala

- :meth:`~pandas.to_datetime` (conversion to datetime objects)

.. ipython:: python
.. ipython:: python

import datetime
m = ['2016-07-09', datetime.datetime(2016, 3, 2)]
pd.to_datetime(m)
import datetime
m = ['2016-07-09', datetime.datetime(2016, 3, 2)]
pd.to_datetime(m)

- :meth:`~pandas.to_timedelta` (conversion to timedelta objects)

.. ipython:: python
.. ipython:: python

m = ['5us', pd.Timedelta('1day')]
pd.to_timedelta(m)
m = ['5us', pd.Timedelta('1day')]
pd.to_timedelta(m)

To force a conversion, we can pass in an ``errors`` argument, which specifies how pandas should deal with elements
that cannot be converted to desired dtype or object. By default, ``errors='raise'``, meaning that any errors encountered
2 changes: 1 addition & 1 deletion doc/source/conf.py
@@ -295,7 +295,7 @@
'python': ('http://docs.python.org/3', None),
'numpy': ('http://docs.scipy.org/doc/numpy', None),
'scipy': ('http://docs.scipy.org/doc/scipy/reference', None),
'py': ('http://pylib.readthedocs.org/en/latest/', None)
'py': ('https://pylib.readthedocs.io/en/latest/', None)
}
import glob
autosummary_generate = glob.glob("*.rst")
8 changes: 4 additions & 4 deletions doc/source/contributing.rst
@@ -360,7 +360,7 @@ follow the Numpy Docstring Standard (see above), but you don't need to install
this because a local copy of numpydoc is included in the *pandas* source
code.
`nbconvert <https://nbconvert.readthedocs.io/en/latest/>`_ and
`nbformat <http://nbformat.readthedocs.io/en/latest/>`_ are required to build
`nbformat <https://nbformat.readthedocs.io/en/latest/>`_ are required to build
the Jupyter notebooks included in the documentation.

If you have a conda environment named ``pandas_dev``, you can install the extra
@@ -490,7 +490,7 @@ Adding tests is one of the most common requests after code is pushed to *pandas*
it is worth getting in the habit of writing tests ahead of time so this is never an issue.

Like many packages, *pandas* uses the `Nose testing system
<http://nose.readthedocs.org/en/latest/index.html>`_ and the convenient
<https://nose.readthedocs.io/en/latest/index.html>`_ and the convenient
extensions in `numpy.testing
<http://docs.scipy.org/doc/numpy/reference/routines.testing.html>`_.

@@ -569,7 +569,7 @@ supports both python2 and python3.

To use all features of asv, you will need either ``conda`` or
``virtualenv``. For more details please check the `asv installation
webpage <http://asv.readthedocs.org/en/latest/installing.html>`_.
webpage <https://asv.readthedocs.io/en/latest/installing.html>`_.

To install asv::

@@ -624,7 +624,7 @@ This will display stderr from the benchmarks, and use your local
``python`` that comes from your ``$PATH``.

Information on how to write a benchmark and how to use asv can be found in the
`asv documentation <http://asv.readthedocs.org/en/latest/writing_benchmarks.html>`_.
`asv documentation <https://asv.readthedocs.io/en/latest/writing_benchmarks.html>`_.

.. _contributing.gbq_integration_tests:

2 changes: 1 addition & 1 deletion doc/source/cookbook.rst
@@ -877,7 +877,7 @@ The :ref:`Plotting <visualization>` docs.
<http://stackoverflow.com/questions/17891493/annotating-points-from-a-pandas-dataframe-in-matplotlib-plot>`__

`Generate Embedded plots in excel files using Pandas, Vincent and xlsxwriter
<http://pandas-xlsxwriter-charts.readthedocs.org/en/latest/introduction.html>`__
<https://pandas-xlsxwriter-charts.readthedocs.io/>`__

`Boxplot for each quartile of a stratifying variable
<http://stackoverflow.com/questions/23232989/boxplot-stratified-by-column-in-python-pandas>`__
6 changes: 0 additions & 6 deletions doc/source/dsintro.rst
@@ -41,12 +41,6 @@ categories of functionality and methods in separate sections.
Series
------

.. warning::

In 0.13.0 ``Series`` has internally been refactored to no longer sub-class ``ndarray``
but instead subclass ``NDFrame``, similarly to the rest of the pandas containers. This should be
a transparent change with only very limited API implications (See the :ref:`Internal Refactoring<whatsnew_0130.refactoring>`)

:class:`Series` is a one-dimensional labeled array capable of holding any data
type (integers, strings, floating point numbers, Python objects, etc.). The axis
labels are collectively referred to as the **index**. The basic method to create a Series is to call:
6 changes: 3 additions & 3 deletions doc/source/ecosystem.rst
@@ -145,7 +145,7 @@ API

`pandas-datareader <https://github.com/pydata/pandas-datareader>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``pandas-datareader`` is a remote data access library for pandas. ``pandas.io`` from pandas < 0.17.0 is now refactored/split-off to and importable from ``pandas_datareader`` (PyPI:``pandas-datareader``). Many/most of the supported APIs have at least a documentation paragraph in the `pandas-datareader docs <https://pandas-datareader.readthedocs.org/en/latest/>`_:
``pandas-datareader`` is a remote data access library for pandas. ``pandas.io`` from pandas < 0.17.0 is now refactored/split-off to and importable from ``pandas_datareader`` (PyPI:``pandas-datareader``). Many/most of the supported APIs have at least a documentation paragraph in the `pandas-datareader docs <https://pandas-datareader.readthedocs.io/en/latest/>`_:

The following data feeds are available:

@@ -170,7 +170,7 @@ PyDatastream is a Python interface to the
SOAP API to return indexed Pandas DataFrames or Panels with financial data.
This package requires valid credentials for this API (non free).

`pandaSDMX <http://pandasdmx.readthedocs.org>`__
`pandaSDMX <https://pandasdmx.readthedocs.io>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pandaSDMX is an extensible library to retrieve and acquire statistical data
and metadata disseminated in
@@ -215,7 +215,7 @@ dimensional arrays, rather than the tabular data for which pandas excels.
Out-of-core
-------------

`Dask <https://dask.readthedocs.org/en/latest/>`__
`Dask <https://dask.readthedocs.io/en/latest/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Dask is a flexible parallel computing library for analytics. Dask
2 changes: 1 addition & 1 deletion doc/source/install.rst
@@ -189,7 +189,7 @@ pandas is equipped with an exhaustive set of unit tests covering about 97% of
the codebase as of this writing. To run it on your machine to verify that
everything is working (and you have all of the dependencies, soft and hard,
installed), make sure you have `nose
<http://readthedocs.org/docs/nose/en/latest/>`__ and run:
<https://nose.readthedocs.io/en/latest/>`__ and run:

::

7 changes: 4 additions & 3 deletions doc/source/io.rst
@@ -1481,7 +1481,7 @@ function takes a number of arguments. Only the first is required.
- ``encoding``: a string representing the encoding to use if the contents are
non-ASCII, for python versions prior to 3
- ``line_terminator``: Character sequence denoting line end (default '\\n')
- ``quoting``: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL)
- ``quoting``: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). Note that if you have set a `float_format` then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-numeric
- ``quotechar``: Character used to quote fields (default '"')
- ``doublequote``: Control quoting of ``quotechar`` in fields (default True)
- ``escapechar``: Character used to escape ``sep`` and ``quotechar`` when
@@ -2639,8 +2639,8 @@ config options <options>` ``io.excel.xlsx.writer`` and
``io.excel.xls.writer``. pandas will fall back on `openpyxl`_ for ``.xlsx``
files if `Xlsxwriter`_ is not available.

.. _XlsxWriter: http://xlsxwriter.readthedocs.org
.. _openpyxl: http://openpyxl.readthedocs.org/
.. _XlsxWriter: https://xlsxwriter.readthedocs.io
.. _openpyxl: https://openpyxl.readthedocs.io/
.. _xlwt: http://www.python-excel.org

To specify which writer you want to use, you can pass an engine keyword
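Although the hunk above is cut off mid-sentence, the ``engine`` keyword it refers to can be sketched as follows. This is a hedged illustration, not part of the diff; the output file name is made up, and it assumes the ``xlsxwriter`` package is installed:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

# Explicitly pick the XlsxWriter engine; "openpyxl" works the same way.
# "example.xlsx" is an illustrative path, not one used anywhere in this PR.
df.to_excel("example.xlsx", engine="xlsxwriter")
```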
@@ -2775,6 +2775,7 @@ both on the writing (serialization), and reading (deserialization).
as an EXPERIMENTAL LIBRARY, the storage format may not be stable until a future release.

As a result of writing format changes and other issues:

+----------------------+------------------------+
| Packed with | Can be unpacked with |
+======================+========================+
2 changes: 1 addition & 1 deletion doc/source/r_interface.rst
@@ -17,7 +17,7 @@ rpy2 / R interface

In v0.16.0, the ``pandas.rpy`` interface has been **deprecated and will be
removed in a future version**. Similar functionality can be accessed
through the `rpy2 <http://rpy2.readthedocs.io/>`__ project.
through the `rpy2 <https://rpy2.readthedocs.io/>`__ project.
See the :ref:`updating <rpy.updating>` section for a guide to port your
code from the ``pandas.rpy`` to ``rpy2`` functions.

2 changes: 1 addition & 1 deletion doc/source/tutorials.rst
@@ -138,7 +138,7 @@ Modern Pandas
Excel charts with pandas, vincent and xlsxwriter
------------------------------------------------

- `Using Pandas and XlsxWriter to create Excel charts <http://pandas-xlsxwriter-charts.readthedocs.org/>`_
- `Using Pandas and XlsxWriter to create Excel charts <https://pandas-xlsxwriter-charts.readthedocs.io/>`_

Various Tutorials
-----------------
2 changes: 2 additions & 0 deletions doc/source/whatsnew.rst
@@ -18,6 +18,8 @@ What's New

These are new features and improvements of note in each release.

.. include:: whatsnew/v0.19.1.txt

.. include:: whatsnew/v0.19.0.txt

.. include:: whatsnew/v0.18.1.txt
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.14.0.txt
@@ -401,7 +401,7 @@ through SQLAlchemy (:issue:`2717`, :issue:`4163`, :issue:`5950`, :issue:`6292`).
All databases supported by SQLAlchemy can be used, such
as PostgreSQL, MySQL, Oracle, Microsoft SQL server (see documentation of
SQLAlchemy on `included dialects
<http://sqlalchemy.readthedocs.org/en/latest/dialects/index.html>`_).
<https://sqlalchemy.readthedocs.io/en/latest/dialects/index.html>`_).

The functionality of providing DBAPI connection objects will only be supported
for sqlite3 in the future. The ``'mysql'`` flavor is deprecated.
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.17.0.txt
@@ -141,7 +141,7 @@ as well as the ``.sum()`` operation.

Releasing of the GIL could benefit an application that uses threads for user interactions (e.g. QT_), or performing multi-threaded computations. A nice example of a library that can handle these types of computation-in-parallel is the dask_ library.

.. _dask: https://dask.readthedocs.org/en/latest/
.. _dask: https://dask.readthedocs.io/en/latest/
.. _QT: https://wiki.python.org/moin/PyQt

.. _whatsnew_0170.plot:
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.19.0.txt
@@ -1560,6 +1560,6 @@ Bug Fixes
- Bug in ``.to_string()`` when called with an integer ``line_width`` and ``index=False`` raises an UnboundLocalError exception because ``idx`` is referenced before assignment.
- Bug in ``eval()`` where the ``resolvers`` argument would not accept a list (:issue:`14095`)
- Bugs in ``stack``, ``get_dummies``, ``make_axis_dummies`` which don't preserve categorical dtypes in (multi)indexes (:issue:`13854`)
- ``PeridIndex`` can now accept ``list`` and ``array`` which contains ``pd.NaT`` (:issue:`13430`)
- ``PeriodIndex`` can now accept ``list`` and ``array`` which contains ``pd.NaT`` (:issue:`13430`)
- Bug in ``df.groupby`` where ``.median()`` returns arbitrary values if grouped dataframe contains empty bins (:issue:`13629`)
- Bug in ``Index.copy()`` where ``name`` parameter was ignored (:issue:`14302`)
48 changes: 48 additions & 0 deletions doc/source/whatsnew/v0.19.1.txt
@@ -0,0 +1,48 @@
.. _whatsnew_0191:

v0.19.1 (????, 2016)
---------------------

This is a minor bug-fix release from 0.19.0 and includes a large number of
bug fixes along with several new features, enhancements, and performance improvements.
We recommend that all users upgrade to this version.

Highlights include:


.. contents:: What's new in v0.19.1
:local:
:backlinks: none


.. _whatsnew_0191.performance:

Performance Improvements
~~~~~~~~~~~~~~~~~~~~~~~~







.. _whatsnew_0191.bug_fixes:

Bug Fixes
~~~~~~~~~




- Bug in localizing an ambiguous timezone when a boolean is passed (:issue:`14402`)








- Bug in ``pd.concat`` where names of the ``keys`` were not propagated to the resulting ``MultiIndex`` (:issue:`14252`)
- Bug in ``MultiIndex.set_levels`` where illegal level values were still set after raising an error (:issue:`13754`)
- Bug in ``asfreq``, where frequency wasn't set for empty Series (:issue:`14320`)
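As a rough sketch of the last entry above, which is the headline fix of this PR, the expected behaviour once the fix is in place looks roughly like this (an illustration, not a test taken from the changeset):

```python
import pandas as pd

# An empty Series indexed by an empty DatetimeIndex with no frequency.
s = pd.Series([], dtype="float64", index=pd.DatetimeIndex([]))

# With the fix, asfreq attaches the requested frequency to the index
# even though there are no rows to fill.
result = s.asfreq("H")
assert len(result) == 0
assert result.index.freqstr == "H"
```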
4 changes: 3 additions & 1 deletion pandas/core/frame.py
@@ -1345,7 +1345,9 @@ def to_csv(self, path_or_buf=None, sep=",", na_rep='', float_format=None,
The newline character or character sequence to use in the output
file
quoting : optional constant from csv module
defaults to csv.QUOTE_MINIMAL
defaults to csv.QUOTE_MINIMAL. If you have set a `float_format`
then floats are converted to strings and thus csv.QUOTE_NONNUMERIC
will treat them as non-numeric
quotechar : string (length 1), default '\"'
character used to quote fields
doublequote : boolean, default True
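A minimal sketch of the ``float_format``/``quoting`` interaction described a few lines up in the amended docstring (the expected output in the trailing comment is indicative, not copied from the PR):

```python
import csv
import pandas as pd

df = pd.DataFrame({"x": [1.23456, 2.0]})

# With a float_format the floats are rendered as strings before quoting is
# applied, so csv.QUOTE_NONNUMERIC quotes them like any other string field.
out = df.to_csv(float_format="%.2f", quoting=csv.QUOTE_NONNUMERIC, index=False)
print(out)
# Roughly:
# "x"
# "1.23"
# "2.00"
```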
35 changes: 25 additions & 10 deletions pandas/indexes/multi.py
@@ -116,12 +116,27 @@ def __new__(cls, levels=None, labels=None, sortorder=None, names=None,

return result

def _verify_integrity(self):
"""Raises ValueError if length of levels and labels don't match or any
label would exceed level bounds"""
def _verify_integrity(self, labels=None, levels=None):
"""

Parameters
----------
labels : optional list
Labels to check for validity. Defaults to current labels.
levels : optional list
Levels to check for validity. Defaults to current levels.

Raises
------
ValueError
* if length of levels and labels don't match or any label would
exceed level bounds
"""
# NOTE: Currently does not check, among other things, that cached
# nlevels matches nor that sortorder matches the actual sortorder.
labels, levels = self.labels, self.levels
labels = labels or self.labels
levels = levels or self.levels

if len(levels) != len(labels):
raise ValueError("Length of levels and labels must match. NOTE:"
" this index is in an inconsistent state.")
@@ -162,6 +177,9 @@ def _set_levels(self, levels, level=None, copy=False, validate=True,
new_levels[l] = _ensure_index(v, copy=copy)._shallow_copy()
new_levels = FrozenList(new_levels)

if verify_integrity:
self._verify_integrity(levels=new_levels)

names = self.names
self._levels = new_levels
if any(names):
@@ -170,9 +188,6 @@ def _set_labels(self, labels, level=None, copy=False, validate=True,
self._tuples = None
self._reset_cache()

if verify_integrity:
self._verify_integrity()

def set_levels(self, levels, level=None, inplace=False,
verify_integrity=True):
"""
@@ -268,13 +283,13 @@ def _set_labels(self, labels, level=None, copy=False, validate=True,
lab, lev, copy=copy)._shallow_copy()
new_labels = FrozenList(new_labels)

if verify_integrity:
self._verify_integrity(labels=new_labels)

self._labels = new_labels
self._tuples = None
self._reset_cache()

if verify_integrity:
self._verify_integrity()

def set_labels(self, labels, level=None, inplace=False,
verify_integrity=True):
"""
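The pandas/indexes/multi.py changes above move integrity checking in ``_set_levels`` and ``_set_labels`` ahead of the actual assignment. A hedged sketch of the user-visible effect (written against the 0.19-era API, where ``inplace`` and integer ``labels`` still exist) might look like this:

```python
import pandas as pd

mi = pd.MultiIndex.from_tuples([("a", 0), ("b", 1)])

try:
    # Only one value for level 0, but the labels reference two positions,
    # so integrity checking fails with a ValueError.
    mi.set_levels(["c"], level=0, inplace=True)
except ValueError:
    pass

# Because validation now runs before assignment, the original levels survive.
assert list(mi.levels[0]) == ["a", "b"]
```

Before this change, the error was raised only after the new levels had already been stored, leaving the index in the inconsistent state that the ``_verify_integrity`` error message warns about (:issue:`13754`).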