
DOC: clean up some doc-build warnings #12579


Closed
wants to merge 2 commits
4 changes: 4 additions & 0 deletions ci/build_docs.sh
@@ -36,6 +36,10 @@ if [ x"$DOC_BUILD" != x"" ]; then
echo ./make.py
./make.py

echo ########################
echo # Create and send docs #
echo ########################

cd /tmp/doc/build/html
git config --global user.email "[email protected]"
git config --global user.name "pandas-docs-bot"
1 change: 1 addition & 0 deletions doc/source/dsintro.rst
@@ -692,6 +692,7 @@ R package):

.. ipython:: python
:suppress:
:okwarning:

# restore GlobalPrintConfig
pd.reset_option('^display\.')
1 change: 1 addition & 0 deletions doc/source/options.rst
@@ -438,6 +438,7 @@ For instance:

.. ipython:: python
:suppress:
:okwarning:

pd.reset_option('^display\.')

2 changes: 1 addition & 1 deletion doc/source/release.rst
@@ -12,7 +12,7 @@
import matplotlib.pyplot as plt
plt.close('all')

options.display.max_rows=15
pd.options.display.max_rows=15
import pandas.util.testing as tm

*************
5 changes: 2 additions & 3 deletions doc/source/whatsnew/v0.18.0.txt
@@ -863,11 +863,10 @@ Previous API will work but deprecations

In [7]: r.iloc[0] = 5
ValueError: .resample() is now a deferred operation
use .resample(...).mean() instead of .resample(...)
assignment will have no effect as you are working on a copy
use .resample(...).mean() instead of .resample(...)

There is a situation where the new API can not perform all the operations when using original code.
This code is intending to resample every 2s, take the ``mean`` AND then take the ``min` of those results.
This code is intending to resample every 2s, take the ``mean`` AND then take the ``min`` of those results.

.. code-block:: python

14 changes: 9 additions & 5 deletions pandas/core/frame.py
@@ -1627,6 +1627,7 @@ def info(self, verbose=None, buf=None, max_cols=None, memory_usage=None,
human-readable units (base-2 representation).
null_counts : boolean, default None
Whether to show the non-null counts

- If None, then only show if the frame is smaller than
max_info_rows and max_info_columns.
- If True, always show counts.
@@ -4932,6 +4933,7 @@ def quantile(self, q=0.5, axis=0, numeric_only=True,
0 or 'index' for row-wise, 1 or 'columns' for column-wise
interpolation : {'linear', 'lower', 'higher', 'midpoint', 'nearest'}
.. versionadded:: 0.18.0

This optional parameter specifies the interpolation method to use,
when the desired quantile lies between two data points `i` and `j`:

@@ -4945,11 +4947,12 @@
Returns
-------
quantiles : Series or DataFrame
If ``q`` is an array, a DataFrame will be returned where the
index is ``q``, the columns are the columns of self, and the
values are the quantiles.
If ``q`` is a float, a Series will be returned where the
index is the columns of self and the values are the quantiles.

- If ``q`` is an array, a DataFrame will be returned where the
Contributor Author:
So @jorisvandenbossche, all of those nasty warnings just needed a blank line before AND after a list when it is embedded in a docstring. Very confusing message.

Member:
Yes, indeed, that's the rule for rst lists, and it also applies in docstrings. The exception is when the list is the only thing in the explanation of a parameter (no text before or after it in the entry); then no blank lines are needed (e.g. in this case it is normally not needed).

Contributor Author:
OK, I fixed it, but that seems a confusing rule; better to just put blank lines around lists in docstrings always, no? (I'm checking whether taking these out still produces warnings.) I was trying things, so I didn't always take out the added whitespace.

index is ``q``, the columns are the columns of self, and the
values are the quantiles.
- If ``q`` is a float, a Series will be returned where the
index is the columns of self and the values are the quantiles.

Examples
--------
@@ -4965,6 +4968,7 @@ def quantile(self, q=0.5, axis=0, numeric_only=True,
0.1 1.3 3.7
0.5 2.5 55.0
"""

self._check_percentile(q)
per = np.asarray(q) * 100
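
To illustrate the rule discussed in the review thread above, here is a minimal sketch (an editor's illustration, not code from this PR) of a numpydoc-style docstring in which the bulleted list is separated from the surrounding text by a blank line on each side, which is the layout that avoids the Sphinx warnings this PR cleans up:

    def describe_fill(method=None):
        """Toy function used only to show the docstring layout.

        Parameters
        ----------
        method : {None, 'pad', 'backfill'}, default None
            How to fill holes.

            * None: don't fill gaps
            * 'pad': propagate the last valid observation forward
            * 'backfill': use the next valid observation

            A blank line before and after the list keeps Sphinx happy.
        """
        return method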

24 changes: 14 additions & 10 deletions pandas/core/generic.py
@@ -2041,11 +2041,13 @@ def sort_index(self, axis=0, level=None, ascending=True, inplace=False,
method to use for filling holes in reindexed DataFrame.
Please note: this is only applicable to DataFrames/Series with a
monotonically increasing/decreasing index.
* default: don't fill gaps
* pad / ffill: propagate last valid observation forward to next
valid
* backfill / bfill: use next valid observation to fill gap
* nearest: use nearest valid observations to fill gap

* default: don't fill gaps
* pad / ffill: propagate last valid observation forward to next
valid
* backfill / bfill: use next valid observation to fill gap
* nearest: use nearest valid observations to fill gap

copy : boolean, default True
Return a new object, even if the passed indexes are the same
level : int or name
@@ -2265,11 +2267,13 @@ def _reindex_multi(self, axes, copy, fill_value):
axis : %(axes_single_arg)s
method : {None, 'backfill'/'bfill', 'pad'/'ffill', 'nearest'}, optional
Method to use for filling holes in reindexed DataFrame:
* default: don't fill gaps
* pad / ffill: propagate last valid observation forward to next
valid
* backfill / bfill: use next valid observation to fill gap
* nearest: use nearest valid observations to fill gap

* default: don't fill gaps
* pad / ffill: propagate last valid observation forward to next
valid
* backfill / bfill: use next valid observation to fill gap
* nearest: use nearest valid observations to fill gap

copy : boolean, default True
Return a new object, even if the passed indexes are the same
level : int or name
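
As a quick, hedged illustration of the fill options described in the reindex docstring above (an editor's sketch, not part of the diff):

    import pandas as pd

    s = pd.Series([1.0, 2.0, 3.0], index=[0, 2, 4])
    # pad/ffill: carry the last valid observation forward onto the new labels
    print(s.reindex(range(5), method='ffill'))
    # nearest: pick whichever existing label is closest to each new label
    print(s.reindex(range(5), method='nearest'))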
6 changes: 4 additions & 2 deletions pandas/core/series.py
@@ -1289,8 +1289,10 @@ def quantile(self, q=0.5, interpolation='linear'):
0 <= q <= 1, the quantile(s) to compute
interpolation : {'linear', 'lower', 'higher', 'midpoint', 'nearest'}
.. versionadded:: 0.18.0

This optional parameter specifies the interpolation method to use,
when the desired quantile lies between two data points `i` and `j`:

* linear: `i + (j - i) * fraction`, where `fraction` is the
fractional part of the index surrounded by `i` and `j`.
* lower: `i`.
@@ -1306,15 +1308,15 @@ def quantile(self, q=0.5, interpolation='linear'):

Examples
--------

>>> s = Series([1, 2, 3, 4])
>>> s.quantile(.5)
2.5
2.5
>>> s.quantile([.25, .5, .75])
0.25 1.75
0.50 2.50
0.75 3.25
dtype: float64

"""

self._check_percentile(q)
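
As a quick sketch of how the interpolation keyword described in this docstring changes the result when the desired quantile falls between two data points (editor's illustration, not part of the diff):

    import pandas as pd

    s = pd.Series([1, 2, 3, 4])
    # the 0.4 quantile falls between the values 2 and 3
    print(s.quantile(0.4))                            # linear (default): 2.2
    print(s.quantile(0.4, interpolation='lower'))     # 2
    print(s.quantile(0.4, interpolation='higher'))    # 3
    print(s.quantile(0.4, interpolation='midpoint'))  # 2.5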
2 changes: 2 additions & 0 deletions pandas/io/parsers.py
@@ -121,13 +121,15 @@ class ParserWarning(Warning):
If True, skip over blank lines rather than interpreting as NaN values
parse_dates : boolean or list of ints or names or list of lists or dict, \
default False

* boolean. If True -> try parsing the index.
* list of ints or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
each as a separate date column.
* list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as
a single date column.
* dict, e.g. {'foo' : [1, 3]} -> parse columns 1, 3 as date and call result
'foo'

Note: A fast-path exists for iso8601-formatted dates.
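
A hedged usage sketch of two of the parse_dates forms listed above (editor's illustration; the column names and data are made up):

    from io import StringIO
    import pandas as pd

    data = "y,m,d,value\n2016,1,1,10\n2016,1,2,11\n"
    # list of lists: combine columns 0, 1 and 2 into a single parsed date column
    df = pd.read_csv(StringIO(data), parse_dates=[[0, 1, 2]])
    # dict: the same combination, but the resulting column is named 'when'
    df = pd.read_csv(StringIO(data), parse_dates={'when': [0, 1, 2]})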
infer_datetime_format : boolean, default False
If True and parse_dates is enabled for a column, attempt to infer
7 changes: 7 additions & 0 deletions pandas/tseries/tools.py
@@ -190,6 +190,7 @@ def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False,
----------
arg : string, datetime, list, tuple, 1-d array, or Series
errors : {'ignore', 'raise', 'coerce'}, default 'raise'

- If 'raise', then invalid parsing will raise an exception
- If 'coerce', then invalid parsing will be set as NaT
- If 'ignore', then invalid parsing will return the input
@@ -201,10 +202,12 @@ def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False,
with day first (this is a known bug, based on dateutil behavior).
yearfirst : boolean, default False
Specify a date parse order if `arg` is str or its list-likes.

- If True parses dates with the year first, eg 10/11/12 is parsed as
2010-11-12.
- If both dayfirst and yearfirst are True, yearfirst takes precedence (same
as dateutil).

Warning: yearfirst=True is not strict, but will prefer to parse
with year first (this is a known bug, based on dateutil behavior).

@@ -214,14 +217,17 @@ def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False,
Return UTC DatetimeIndex if True (converting any tz-aware
datetime.datetime objects as well).
box : boolean, default True

- If True returns a DatetimeIndex
- If False returns ndarray of values.
format : string, default None
strftime to parse time, eg "%d/%m/%Y", note that "%f" will parse
all the way up to nanoseconds.
exact : boolean, True by default

- If True, require an exact format match.
- If False, allow the format to match anywhere in the target string.

unit : unit of the arg (D,s,ms,us,ns) denote the unit in epoch
(e.g. a unix timestamp), which is an integer/float number.
infer_datetime_format : boolean, default False
@@ -273,6 +279,7 @@ def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False,
datetime.datetime(1300, 1, 1, 0, 0)
>>> pd.to_datetime('13000101', format='%Y%m%d', errors='coerce')
NaT

"""
return _to_datetime(arg, errors=errors, dayfirst=dayfirst,
yearfirst=yearfirst,
13 changes: 9 additions & 4 deletions pandas/util/nosetester.py
@@ -123,7 +123,8 @@ def _get_custom_doctester(self):
return None

def _test_argv(self, label, verbose, extra_argv):
''' Generate argv for nosetest command
"""
Generate argv for nosetest command

Parameters
----------
@@ -138,7 +139,8 @@ def _test_argv(self, label, verbose, extra_argv):
-------
argv : list
command line arguments that will be passed to nose
'''
"""

argv = [__file__, self.package_path]
if label and label != 'full':
if not isinstance(label, string_types):
@@ -170,13 +172,15 @@ def test(self, label='fast', verbose=1, extra_argv=None,
Identifies the tests to run. This can be a string to pass to
the nosetests executable with the '-A' option, or one of several
special values. Special values are:

* 'fast' - the default - which corresponds to the ``nosetests -A``
option of 'not slow'.
* 'full' - fast (as above) and slow tests as in the
'no -A' option to nosetests - this is the same as ''.
* None or '' - run all tests.
* attribute_identifier - string passed directly to nosetests
as '-A'.

verbose : int, optional
Verbosity value for test outputs, in the range 1-10. Default is 1.
extra_argv : list, optional
@@ -191,14 +195,15 @@
This specifies which warnings to configure as 'raise' instead
of 'warn' during the test execution. Valid strings are:

- "develop" : equals ``(DeprecationWarning, RuntimeWarning)``
- "release" : equals ``()``, don't raise on any warnings.
- 'develop' : equals ``(DeprecationWarning, RuntimeWarning)``
- 'release' : equals ``()``, don't raise on any warnings.

Returns
-------
result : object
Returns the result of running the tests as a
``nose.result.TextTestResult`` object.

"""

# cap verbosity at 3 because nose becomes *very* verbose beyond that
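
A minimal sketch of driving this test runner with the parameters described in the docstring above. The test() keywords come from the diff; the constructor call is an assumption (it mirrors numpy's NoseTester and is assumed to accept a package path string):

    import pandas
    from pandas.util.nosetester import NoseTester

    # assumed constructor usage: point the tester at the installed package path
    tester = NoseTester(pandas.__path__[0])
    # run only the fast tests; escalate DeprecationWarning/RuntimeWarning to errors
    tester.test(label='fast', verbose=2, raise_warnings='develop')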
17 changes: 11 additions & 6 deletions setup.py
@@ -439,22 +439,25 @@ def pxd(name):
obj = Extension('pandas.%s' % name,
sources=sources,
depends=data.get('depends', []),
include_dirs=include)
include_dirs=include,
extra_compile_args=['-w'])

extensions.append(obj)


sparse_ext = Extension('pandas._sparse',
sources=[srcpath('sparse', suffix=suffix)],
include_dirs=[],
libraries=libraries)
libraries=libraries,
extra_compile_args=['-w'])

extensions.extend([sparse_ext])

testing_ext = Extension('pandas._testing',
sources=[srcpath('testing', suffix=suffix)],
include_dirs=[],
libraries=libraries)
libraries=libraries,
extra_compile_args=['-w'])

extensions.extend([testing_ext])

@@ -474,7 +477,8 @@ def pxd(name):
subdir='msgpack')],
language='c++',
include_dirs=['pandas/src/msgpack'] + common_include,
define_macros=macros)
define_macros=macros,
extra_compile_args=['-w'])
unpacker_ext = Extension('pandas.msgpack._unpacker',
depends=['pandas/src/msgpack/unpack.h',
'pandas/src/msgpack/unpack_define.h',
@@ -484,7 +488,8 @@ def pxd(name):
subdir='msgpack')],
language='c++',
include_dirs=['pandas/src/msgpack'] + common_include,
define_macros=macros)
define_macros=macros,
extra_compile_args=['-w'])
extensions.append(packer_ext)
extensions.append(unpacker_ext)

@@ -508,7 +513,7 @@ def pxd(name):
include_dirs=['pandas/src/ujson/python',
'pandas/src/ujson/lib',
'pandas/src/datetime'] + common_include,
extra_compile_args=['-D_GNU_SOURCE'])
extra_compile_args=['-D_GNU_SOURCE', '-w'])


extensions.append(ujson_ext)