DOC: Fixed PR06 docstring errors in pandas.interval_range & pandas.util.hash_array #28760

Merged: 4 commits, Oct 7, 2019
2 changes: 1 addition & 1 deletion pandas/core/indexes/interval.py
@@ -1410,7 +1410,7 @@ def interval_range(
     Left bound for generating intervals
 end : numeric or datetime-like, default None
     Right bound for generating intervals
-periods : integer, default None
+periods : int, default None
     Number of periods to generate
 freq : numeric, string, or DateOffset, default None
     The length of each interval. Must be consistent with the type of start
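Not part of the diff, but for context: the `periods` parameter the docstring now types as `int` is used as a plain interval count. A minimal sketch, assuming a recent pandas:

```python
import pandas as pd

# periods is an int: the number of intervals to generate
idx = pd.interval_range(start=0, periods=4)

# Four unit-width intervals: (0, 1], (1, 2], (2, 3], (3, 4]
print(len(idx))  # 4
```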
26 changes: 13 additions & 13 deletions pandas/core/util/hashing.py
@@ -62,11 +62,11 @@ def hash_pandas_object(

 Parameters
 ----------
-index : boolean, default True
+index : bool, default True
     include the index in the hash (if Series/DataFrame)
-encoding : string, default 'utf8'
-    encoding for data & key when strings
-hash_key : string key to encode, default to _default_hash_key
+encoding : str, default 'utf8'
+    encoding for data & key when str
+hash_key : str key to encode, default to _default_hash_key
 categorize : bool, default True
     Whether to first categorize object arrays before hashing. This is more
     efficient when the array contains duplicate values.
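A brief illustration of `hash_pandas_object` with the parameters documented in this hunk (a usage sketch, not part of the PR):

```python
import pandas as pd

s = pd.Series(["a", "b", "c"])
# index=True (the default) folds the index into each row's hash
hashed = pd.util.hash_pandas_object(s, index=True, encoding="utf8")

# The result is a uint64 Series aligned with the input
print(hashed.dtype)  # uint64
```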
@@ -143,8 +143,8 @@ def hash_tuples(vals, encoding="utf8", hash_key=None):
 Parameters
 ----------
 vals : MultiIndex, list-of-tuples, or single tuple
-encoding : string, default 'utf8'
-hash_key : string key to encode, default to _default_hash_key
+encoding : str, default 'utf8'
+hash_key : str key to encode, default to _default_hash_key

 Returns
 -------
@@ -186,8 +186,8 @@ def hash_tuple(val, encoding="utf8", hash_key=None):
 Parameters
 ----------
 val : single tuple
-encoding : string, default 'utf8'
-hash_key : string key to encode, default to _default_hash_key
+encoding : str, default 'utf8'
+hash_key : str key to encode, default to _default_hash_key

 Returns
 -------
@@ -209,8 +209,8 @@ def _hash_categorical(c, encoding, hash_key):
 Parameters
 ----------
 c : Categorical
-encoding : string, default 'utf8'
-hash_key : string key to encode, default to _default_hash_key
+encoding : str, default 'utf8'
+hash_key : str key to encode, default to _default_hash_key

 Returns
 -------
@@ -246,9 +246,9 @@ def hash_array(vals, encoding="utf8", hash_key=None, categorize=True):
 Parameters
 ----------
 vals : ndarray, Categorical
-encoding : string, default 'utf8'
-    encoding for data & key when strings
-hash_key : string key to encode, default to _default_hash_key
+encoding : str, default 'utf8'
+    encoding for data & key when str
+hash_key : str key to encode, default to _default_hash_key
 categorize : bool, default True
     Whether to first categorize object arrays before hashing. This is more
     efficient when the array contains duplicate values.
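To round out the `hash_array` docstring changes, a minimal usage sketch (assuming the public `pandas.util.hash_array`; `categorize` only affects object-dtype inputs):

```python
import numpy as np
import pandas as pd

vals = np.array(["x", "y", "x"], dtype=object)
# categorize=True factorizes object arrays before hashing, which is
# more efficient when values repeat
out = pd.util.hash_array(vals, encoding="utf8", categorize=True)

# Deterministic: equal inputs hash to equal uint64 values
print(out[0] == out[2])  # True
```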