Merged: 2 commits, Mar 31, 2021
4 changes: 2 additions & 2 deletions pandas/_libs/algos.pyx
@@ -933,10 +933,10 @@ def rank_1d(
* max: highest rank in group
* first: ranks assigned in order they appear in the array
* dense: like 'min', but rank always increases by 1 between groups
-ascending : boolean, default True
+ascending : bool, default True
False for ranks by high (1) to low (N)
na_option : {'keep', 'top', 'bottom'}, default 'keep'
-pct : boolean, default False
+pct : bool, default False
Compute percentage rank of data within each group
na_option : {'keep', 'top', 'bottom'}, default 'keep'
* keep: leave NA values where they are
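The `rank_1d` parameters cleaned up above mirror the public `Series.rank` API. As an illustrative sketch (not part of the diff), this is how `ascending`, `pct`, and `na_option` interact:

```python
import pandas as pd

s = pd.Series([3.0, 1.0, None, 2.0])

# ascending=True (default): smallest value gets rank 1; NaN keeps a NaN rank
print(s.rank().tolist())                 # [3.0, 1.0, nan, 2.0]

# pct=True: ranks are divided by the number of valid values (here 3)
print(s.rank(pct=True).tolist())

# na_option='top': NaN is ranked first instead of being left as NaN
print(s.rank(na_option="top").tolist())  # [4.0, 2.0, 1.0, 3.0]
```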
8 changes: 4 additions & 4 deletions pandas/_libs/groupby.pyx
@@ -402,9 +402,9 @@ def group_any_all(uint8_t[::1] out,
ordering matching up to the corresponding record in `values`
values : array containing the truth value of each element
mask : array indicating whether a value is na or not
-val_test : str {'any', 'all'}
+val_test : {'any', 'all'}
String object dictating whether to use any or all truth testing
-skipna : boolean
+skipna : bool
Flag to ignore nan values during truth testing

Notes
@@ -1083,10 +1083,10 @@ def group_rank(float64_t[:, ::1] out,
* max: highest rank in group
* first: ranks assigned in order they appear in the array
* dense: like 'min', but rank always increases by 1 between groups
-ascending : boolean, default True
+ascending : bool, default True
False for ranks by high (1) to low (N)
na_option : {'keep', 'top', 'bottom'}, default 'keep'
-pct : boolean, default False
+pct : bool, default False
Compute percentage rank of data within each group
na_option : {'keep', 'top', 'bottom'}, default 'keep'
* keep: leave NA values where they are
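`group_any_all` backs the groupby `any`/`all` reductions whose `val_test` and `skipna` parameters are documented above. From the public side the truth testing looks like this (illustrative sketch):

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b"], "v": [True, False, False]})

# val_test='any': True if any value in the group is truthy
print(df.groupby("g")["v"].any().tolist())  # [True, False]

# val_test='all': True only if every value in the group is truthy
print(df.groupby("g")["v"].all().tolist())  # [False, False]
```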
18 changes: 9 additions & 9 deletions pandas/_libs/hashtable_class_helper.pxi.in
@@ -523,15 +523,15 @@ cdef class {{name}}HashTable(HashTable):
any value "val" satisfying val != val is considered missing.
If na_value is not None, then _additionally_, any value "val"
satisfying val == na_value is considered missing.
-ignore_na : boolean, default False
+ignore_na : bool, default False
Whether NA-values should be ignored for calculating the uniques. If
True, the labels corresponding to missing values will be set to
na_sentinel.
mask : ndarray[bool], optional
If not None, the mask is used as indicator for missing values
(True = missing, False = valid) instead of `na_value` or
condition "val != val".
-return_inverse : boolean, default False
+return_inverse : bool, default False
Whether the mapping of the original array values to their location
in the vector of uniques should be returned.

@@ -625,7 +625,7 @@ cdef class {{name}}HashTable(HashTable):
----------
values : ndarray[{{dtype}}]
Array of values of which unique will be calculated
-return_inverse : boolean, default False
+return_inverse : bool, default False
Whether the mapping of the original array values to their location
in the vector of uniques should be returned.

@@ -906,11 +906,11 @@ cdef class StringHashTable(HashTable):
that is not a string is considered missing. If na_value is
not None, then _additionally_ any value "val" satisfying
val == na_value is considered missing.
-ignore_na : boolean, default False
+ignore_na : bool, default False
Whether NA-values should be ignored for calculating the uniques. If
True, the labels corresponding to missing values will be set to
na_sentinel.
-return_inverse : boolean, default False
+return_inverse : bool, default False
Whether the mapping of the original array values to their location
in the vector of uniques should be returned.

@@ -998,7 +998,7 @@ cdef class StringHashTable(HashTable):
----------
values : ndarray[object]
Array of values of which unique will be calculated
-return_inverse : boolean, default False
+return_inverse : bool, default False
Whether the mapping of the original array values to their location
in the vector of uniques should be returned.

@@ -1181,11 +1181,11 @@ cdef class PyObjectHashTable(HashTable):
any value "val" satisfying val != val is considered missing.
If na_value is not None, then _additionally_, any value "val"
satisfying val == na_value is considered missing.
-ignore_na : boolean, default False
+ignore_na : bool, default False
Whether NA-values should be ignored for calculating the uniques. If
True, the labels corresponding to missing values will be set to
na_sentinel.
-return_inverse : boolean, default False
+return_inverse : bool, default False
Whether the mapping of the original array values to their location
in the vector of uniques should be returned.

@@ -1251,7 +1251,7 @@ cdef class PyObjectHashTable(HashTable):
----------
values : ndarray[object]
Array of values of which unique will be calculated
-return_inverse : boolean, default False
+return_inverse : bool, default False
Whether the mapping of the original array values to their location
in the vector of uniques should be returned.
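The `ignore_na` and `return_inverse` behavior documented in these hash-table methods is what ultimately powers `pd.factorize`; a minimal public-API illustration (not part of the diff):

```python
import pandas as pd

# missing values are ignored for the uniques and labeled with the
# na_sentinel (-1) in the inverse mapping
codes, uniques = pd.factorize(["b", "a", None, "b"])

print(codes.tolist())  # [0, 1, -1, 0]
print(list(uniques))   # ['b', 'a']
```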
4 changes: 2 additions & 2 deletions pandas/_libs/reshape.pyx
@@ -48,13 +48,13 @@ def unstack(reshape_t[:, :] values, const uint8_t[:] mask,
Parameters
----------
values : typed ndarray
-mask : boolean ndarray
+mask : np.ndarray[bool]
stride : int
length : int
width : int
new_values : typed ndarray
result array
-new_mask : boolean ndarray
+new_mask : np.ndarray[bool]
result mask
"""
cdef:
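For context, this low-level `unstack` (taking `values` plus a boolean validity `mask`) backs the public reshape operation, which behaves like this (illustrative):

```python
import pandas as pd

s = pd.Series(
    [1, 2, 3, 4],
    index=pd.MultiIndex.from_product([["x", "y"], ["one", "two"]]),
)

# the inner index level is pivoted into columns
df = s.unstack()
print(df.loc["x", "two"])  # 2
```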
2 changes: 1 addition & 1 deletion pandas/_libs/tslibs/conversion.pyx
@@ -267,7 +267,7 @@ def ensure_timedelta64ns(arr: ndarray, copy: bool=True):
Parameters
----------
arr : ndarray
-copy : boolean, default True
+copy : bool, default True

Returns
-------
6 changes: 3 additions & 3 deletions pandas/_libs/tslibs/fields.pyx
@@ -635,9 +635,9 @@ def get_locale_names(name_type: str, locale: object = None):

Parameters
----------
-name_type : string, attribute of LocaleTime() in which to return localized
-names
-locale : string
+name_type : str
+Attribute of LocaleTime() in which to return localized names.
+locale : str

Returns
-------
2 changes: 1 addition & 1 deletion pandas/_libs/tslibs/timedeltas.pyx
@@ -540,7 +540,7 @@ cdef inline int64_t timedelta_as_neg(int64_t value, bint neg):
Parameters
----------
value : int64_t of the timedelta value
-neg : boolean if the a negative value
+neg : bool if the a negative value
"""
if neg:
return -value
2 changes: 1 addition & 1 deletion pandas/_testing/__init__.py
@@ -952,7 +952,7 @@ def get_op_from_name(op_name: str) -> Callable:

Parameters
----------
-op_name : string
+op_name : str
The op name, in form of "add" or "__add__".

Returns
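The docstring above describes a name-to-callable lookup. A hedged sketch of the idea using the stdlib `operator` module — the actual pandas implementation may differ in details such as reversed-op handling:

```python
import operator

def get_op_from_name(op_name: str):
    # "__add__" and "add" both resolve to operator.add once the
    # dunder underscores are stripped
    return getattr(operator, op_name.strip("_"))

print(get_op_from_name("__add__")(2, 3))  # 5
print(get_op_from_name("mul")(4, 5))      # 20
```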
6 changes: 3 additions & 3 deletions pandas/core/algorithms.py
@@ -959,7 +959,7 @@ def mode(values, dropna: bool = True) -> Series:
----------
values : array-like
Array over which to check for duplicate values.
-dropna : boolean, default True
+dropna : bool, default True
Don't consider counts of NaN/NaT.

.. versionadded:: 0.24.0
@@ -1025,9 +1025,9 @@ def rank(
- ``keep``: rank each NaN value with a NaN ranking
- ``top``: replace each NaN with either +/- inf so that they
there are ranked at the top
-ascending : boolean, default True
+ascending : bool, default True
Whether or not the elements should be ranked in ascending order.
-pct : boolean, default False
+pct : bool, default False
Whether or not to the display the returned rankings in integer form
(e.g. 1, 2, 3) or in percentile form (e.g. 0.333..., 0.666..., 1).
"""
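The `dropna` flag documented on `mode` controls whether missing values may themselves be the mode (illustrative, not part of the diff):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 2.0, None, None, None])

# dropna=True (default): NaN counts are ignored, so 2.0 (x2) wins
print(s.mode().tolist())  # [2.0]

# dropna=False: NaN occurs three times and becomes the mode
print(s.mode(dropna=False))
```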
2 changes: 1 addition & 1 deletion pandas/core/array_algos/take.py
@@ -76,7 +76,7 @@ def take_nd(
Axis to take from
fill_value : any, default np.nan
Fill value to replace -1 values with
-allow_fill : boolean, default True
+allow_fill : bool, default True
If False, indexer is assumed to contain no -1 values so no filling
will be done. This short-circuits computation of a mask. Result is
undefined if allow_fill == False and -1 is present in indexer.
2 changes: 1 addition & 1 deletion pandas/core/dtypes/cast.py
@@ -508,7 +508,7 @@ def maybe_upcast_putmask(result: np.ndarray, mask: np.ndarray) -> np.ndarray:
result : ndarray
The destination array. This will be mutated in-place if no upcasting is
necessary.
-mask : boolean ndarray
+mask : np.ndarray[bool]

Returns
-------
4 changes: 2 additions & 2 deletions pandas/core/frame.py
@@ -231,7 +231,7 @@
If 0 or 'index': apply function to each column.
If 1 or 'columns': apply function to each row.""",
"inplace": """
-inplace : boolean, default False
+inplace : bool, default False
If True, performs operation inplace and returns None.""",
"optional_by": """
by : str or list of str
@@ -251,7 +251,7 @@
you to specify a location to update with some value.""",
}

-_numeric_only_doc = """numeric_only : boolean, default None
+_numeric_only_doc = """numeric_only : bool or None, default None
Include only float, int, boolean data. If None, will attempt to use
everything, then use only numeric data
"""
2 changes: 1 addition & 1 deletion pandas/core/generic.py
@@ -181,7 +181,7 @@
"axes_single_arg": "int or labels for object",
"args_transpose": "axes to permute (int or label for object)",
"inplace": """
-inplace : boolean, default False
+inplace : bool, default False
If True, performs operation inplace and returns None.""",
"optional_by": """
by : str or list of str
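The shared `inplace` docstring states the usual contract: mutate the object and return None (illustrative):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})

# inplace=True performs the operation in place and returns None
out = df.rename(columns={"a": "b"}, inplace=True)
print(out)               # None
print(list(df.columns))  # ['b']
```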
6 changes: 3 additions & 3 deletions pandas/core/groupby/categorical.py
@@ -34,9 +34,9 @@ def recode_for_groupby(
Parameters
----------
c : Categorical
-sort : boolean
+sort : bool
The value of the sort parameter groupby was called with.
-observed : boolean
+observed : bool
Account only for the observed values

Returns
@@ -93,7 +93,7 @@ def recode_from_groupby(
Parameters
----------
c : Categorical
-sort : boolean
+sort : bool
The value of the sort parameter groupby was called with.
ci : CategoricalIndex
The codes / categories to recode
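`recode_for_groupby`'s `observed` flag corresponds to the public `groupby(..., observed=...)` argument on categorical keys (illustrative sketch):

```python
import pandas as pd

cat = pd.Categorical(["a", "a"], categories=["a", "b"])
df = pd.DataFrame({"c": cat, "v": [1, 2]})

# observed=True: only categories present in the data form groups
print(len(df.groupby("c", observed=True)["v"].sum()))   # 1

# observed=False: every declared category gets a group, even empty "b"
print(len(df.groupby("c", observed=False)["v"].sum()))  # 2
```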
2 changes: 1 addition & 1 deletion pandas/core/groupby/grouper.py
@@ -293,7 +293,7 @@ def _get_grouper(self, obj, validate: bool = True):
Parameters
----------
obj : the subject object
-validate : boolean, default True
+validate : bool, default True
if True, validate the grouper

Returns
4 changes: 2 additions & 2 deletions pandas/core/groupby/ops.py
@@ -817,8 +817,8 @@ class BinGrouper(BaseGrouper):
----------
bins : the split index of binlabels to group the item of axis
binlabels : the label list
-filter_empty : boolean, default False
-mutated : boolean, default False
+filter_empty : bool, default False
+mutated : bool, default False
indexer : a intp array

Examples
2 changes: 1 addition & 1 deletion pandas/core/indexing.py
@@ -2009,7 +2009,7 @@ def _align_series(self, indexer, ser: Series, multiindex_indexer: bool = False):
Indexer used to get the locations that will be set to `ser`.
ser : pd.Series
Values to assign to the locations specified by `indexer`.
-multiindex_indexer : boolean, optional
+multiindex_indexer : bool, optional
Defaults to False. Should be set to True if `indexer` was from
a `pd.MultiIndex`, to avoid unnecessary broadcasting.
2 changes: 1 addition & 1 deletion pandas/core/missing.py
@@ -789,7 +789,7 @@ def _interp_limit(invalid, fw_limit, bw_limit):

Parameters
----------
-invalid : boolean ndarray
+invalid : np.ndarray[bool]
fw_limit : int or None
forward limit to index
bw_limit : int or None
2 changes: 1 addition & 1 deletion pandas/core/resample.py
@@ -1272,7 +1272,7 @@ def _upsample(self, method, limit=None, fill_value=None):
"""
Parameters
----------
-method : string {'backfill', 'bfill', 'pad', 'ffill'}
+method : {'backfill', 'bfill', 'pad', 'ffill'}
Method for upsampling.
limit : int, default None
Maximum size gap to fill when reindexing.
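`_upsample`'s `method` and `limit` parameters map onto the public resample fill methods (illustrative):

```python
import pandas as pd

s = pd.Series(
    [1.0, 2.0],
    index=pd.date_range("2021-01-01", periods=2, freq="2D"),
)

# upsample from 2-day to daily frequency, forward-filling the gap
print(s.resample("D").ffill().tolist())  # [1.0, 1.0, 2.0]
```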
2 changes: 1 addition & 1 deletion pandas/core/series.py
@@ -162,7 +162,7 @@
"axes_single_arg": "{0 or 'index'}",
"axis": """axis : {0 or 'index'}
Parameter needed for compatibility with DataFrame.""",
-"inplace": """inplace : boolean, default False
+"inplace": """inplace : bool, default False
If True, performs operation inplace and returns None.""",
"unique": "np.ndarray",
"duplicated": "Series",
2 changes: 1 addition & 1 deletion pandas/core/sorting.py
@@ -278,7 +278,7 @@ def lexsort_indexer(
----------
keys : sequence of arrays
Sequence of ndarrays to be sorted by the indexer
-orders : boolean or list of booleans, optional
+orders : bool or list of booleans, optional
Determines the sorting order for each element in keys. If a list,
it must be the same length as keys. This determines whether the
corresponding element in keys should be sorted in ascending
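`lexsort_indexer` generalizes NumPy's lexicographic sort with per-key `orders`; the underlying idea, noting that `np.lexsort` treats its *last* key as the primary one:

```python
import numpy as np

surnames = np.array(["b", "a", "a"])
firsts = np.array(["x", "z", "y"])

# sort by surname first, then break ties with the first name
order = np.lexsort((firsts, surnames))
print(order.tolist())  # [2, 1, 0]
```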
6 changes: 3 additions & 3 deletions pandas/core/tools/datetimes.py
@@ -171,7 +171,7 @@ def _maybe_cache(
arg : listlike, tuple, 1-d array, Series
format : string
Strftime format to parse time
-cache : boolean
+cache : bool
True attempts to create a cache of converted values
convert_listlike : function
Conversion function to apply on dates
@@ -313,9 +313,9 @@ def _convert_listlike_datetimes(
error handing behaviors from to_datetime, 'raise', 'coerce', 'ignore'
infer_datetime_format : bool, default False
inferring format behavior from to_datetime
-dayfirst : boolean
+dayfirst : bool
dayfirst parsing behavior from to_datetime
-yearfirst : boolean
+yearfirst : bool
yearfirst parsing behavior from to_datetime
exact : bool, default True
exact format matching behavior from to_datetime
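The `dayfirst`/`yearfirst` flags carry the public `to_datetime` semantics into the listlike converter; the same ambiguous string parses differently depending on the flag (illustrative):

```python
import pandas as pd

# "01-02-2021": day 1 of February, or January 2?
print(pd.to_datetime("01-02-2021", dayfirst=True))   # 2021-02-01
print(pd.to_datetime("01-02-2021", dayfirst=False))  # 2021-01-02
```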
2 changes: 1 addition & 1 deletion pandas/io/clipboards.py
@@ -86,7 +86,7 @@ def to_clipboard(obj, excel=True, sep=None, **kwargs):  # pragma: no cover
Parameters
----------
obj : the object to write to the clipboard
-excel : boolean, defaults to True
+excel : bool, defaults to True
if True, use the provided separator, writing in a csv
format for allowing easy pasting into excel.
if False, write a string representation of the object
4 changes: 2 additions & 2 deletions pandas/io/common.py
@@ -565,9 +565,9 @@ def get_handle(
Passing compression options as keys in dict is now
supported for compression modes 'gzip' and 'bz2' as well as 'zip'.

-memory_map : boolean, default False
+memory_map : bool, default False
See parsers._parser_params for more information.
-is_text : boolean, default True
+is_text : bool, default True
Whether the type of the content passed to the file/buffer is string or
bytes. This is not the same as `"b" not in mode`. If a string content is
passed to a binary file/buffer, a wrapper is inserted.
2 changes: 1 addition & 1 deletion pandas/io/excel/_odfreader.py
@@ -23,7 +23,7 @@ class ODFReader(BaseExcelReader):

Parameters
----------
-filepath_or_buffer : string, path to be parsed or
+filepath_or_buffer : str, path to be parsed or
an open readable stream.
storage_options : dict, optional
passed to fsspec for appropriate URLs (see ``_get_filepath_or_buffer``)
2 changes: 1 addition & 1 deletion pandas/io/excel/_openpyxl.py
@@ -488,7 +488,7 @@ def __init__(

Parameters
----------
-filepath_or_buffer : string, path object or Workbook
+filepath_or_buffer : str, path object or Workbook
Object to be parsed.
storage_options : dict, optional
passed to fsspec for appropriate URLs (see ``_get_filepath_or_buffer``)
2 changes: 1 addition & 1 deletion pandas/io/excel/_xlrd.py
@@ -15,7 +15,7 @@ def __init__(self, filepath_or_buffer, storage_options: StorageOptions = None):

Parameters
----------
-filepath_or_buffer : string, path object or Workbook
+filepath_or_buffer : str, path object or Workbook
Object to be parsed.
storage_options : dict, optional
passed to fsspec for appropriate URLs (see ``_get_filepath_or_buffer``)