Sync Fork from Upstream Repo #156

Merged 7 commits on Mar 27, 2021
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v1.3.0.rst
@@ -423,7 +423,7 @@ Deprecations
- Using ``.astype`` to convert between ``datetime64[ns]`` dtype and :class:`DatetimeTZDtype` is deprecated and will raise in a future version, use ``obj.tz_localize`` or ``obj.dt.tz_localize`` instead (:issue:`38622`)
- Deprecated casting ``datetime.date`` objects to ``datetime64`` when used as ``fill_value`` in :meth:`DataFrame.unstack`, :meth:`DataFrame.shift`, :meth:`Series.shift`, and :meth:`DataFrame.reindex`, pass ``pd.Timestamp(dateobj)`` instead (:issue:`39767`)
- Deprecated :meth:`.Styler.set_na_rep` and :meth:`.Styler.set_precision` in favour of :meth:`.Styler.format` with ``na_rep`` and ``precision`` as existing and new input arguments respectively (:issue:`40134`, :issue:`40425`)
- - Deprecated allowing partial failure in :meth:`Series.transform` and :meth:`DataFrame.transform` when ``func`` is list-like or dict-like; will raise if any function fails on a column in a future version (:issue:`40211`)
+ - Deprecated allowing partial failure in :meth:`Series.transform` and :meth:`DataFrame.transform` when ``func`` is list-like or dict-like and raises anything but ``TypeError``; ``func`` raising anything but a ``TypeError`` will raise in a future version (:issue:`40211`)
- Deprecated support for ``np.ma.mrecords.MaskedRecords`` in the :class:`DataFrame` constructor, pass ``{name: data[name] for name in data.dtype.names}`` instead (:issue:`40363`)
- Deprecated the use of ``**kwargs`` in :class:`.ExcelWriter`; use the keyword argument ``engine_kwargs`` instead (:issue:`40430`)
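The changed entry above concerns ``transform`` with a list or dict of functions. A minimal pure-Python sketch of the deprecated "partial failure" semantics (the names below are hypothetical illustrations, not the pandas internals):

```python
import math

def transform_listlike(columns, funcs):
    # Sketch of the deprecated "partial failure" behaviour: apply each
    # function to each column and silently drop (column, func) pairs
    # that raise TypeError.  Per the deprecation note, a future pandas
    # version will raise instead of dropping.
    results = {}
    for name, values in columns.items():
        for func in funcs:
            try:
                results[(name, func.__name__)] = func(values)
            except TypeError:
                continue  # deprecated: silently skipped today
    if not results:
        raise ValueError("Transform function failed")
    return results

def sqrt_list(values):
    return [math.sqrt(v) for v in values]  # TypeError on strings

def doubled(values):
    return [v * 2 for v in values]  # works for numbers and strings

out = transform_listlike({"a": [1, 4, 9], "b": ["x", "y", "z"]},
                         [sqrt_list, doubled])
# ("b", "sqrt_list") is absent: sqrt fails on strings and is dropped.
```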

21 changes: 12 additions & 9 deletions pandas/_libs/algos.pyx
@@ -490,7 +490,7 @@ def nancorr_kendall(ndarray[float64_t, ndim=2] mat, Py_ssize_t minp=1) -> ndarray
int64_t total_discordant = 0
float64_t kendall_tau
int64_t n_obs
- const int64_t[:] labels_n
+ const intp_t[:] labels_n

N, K = (<object>mat).shape

@@ -499,7 +499,7 @@ def nancorr_kendall(ndarray[float64_t, ndim=2] mat, Py_ssize_t minp=1) -> ndarray

ranked_mat = np.empty((N, K), dtype=np.float64)
# For compatibility when calling rank_1d
- labels_n = np.zeros(N, dtype=np.int64)
+ labels_n = np.zeros(N, dtype=np.intp)

for i in range(K):
ranked_mat[:, i] = rank_1d(mat[:, i], labels_n)
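These hunks swap ``int64`` labels for ``intp``. ``np.intp`` is the platform's pointer-sized integer, the dtype NumPy itself uses for indexing, so producing indexers and labels with it up front avoids a cast on every indexing call. A quick NumPy-only illustration:

```python
import numpy as np

# np.intp is the C ssize_t / pointer-sized integer: 64-bit on 64-bit
# platforms, 32-bit on 32-bit ones.  Fancy indexing converts other
# integer dtypes to it internally, so intp indexers need no conversion.
labels = np.zeros(5, dtype=np.intp)
indexer = np.array([2, 0, 1], dtype=np.intp)
values = np.array([10.0, 20.0, 30.0])
taken = values[indexer]
```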
@@ -591,16 +591,17 @@ def validate_limit(nobs: int, limit=None) -> int:

@cython.boundscheck(False)
@cython.wraparound(False)
- def pad(ndarray[algos_t] old, ndarray[algos_t] new, limit=None):
+ def pad(ndarray[algos_t] old, ndarray[algos_t] new, limit=None) -> ndarray:
+     # -> ndarray[intp_t, ndim=1]
cdef:
Py_ssize_t i, j, nleft, nright
- ndarray[int64_t, ndim=1] indexer
+ ndarray[intp_t, ndim=1] indexer
algos_t cur, next_val
int lim, fill_count = 0

nleft = len(old)
nright = len(new)
- indexer = np.empty(nright, dtype=np.int64)
+ indexer = np.empty(nright, dtype=np.intp)
indexer[:] = -1

lim = validate_limit(nright, limit)
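Behaviourally, ``pad`` builds a forward-fill indexer over two sorted arrays. Ignoring the ``limit`` handling, a rough NumPy equivalent (a sketch, not the real Cython implementation) is:

```python
import numpy as np

def pad_indexer(old, new):
    # For each element of the sorted ``new``, the index of the most
    # recent ``old`` value that is <= it, or -1 when there is none.
    # The ``limit`` handling of the Cython version is omitted here.
    return np.searchsorted(old, new, side="right") - 1

old = np.array([1.0, 5.0, 10.0])
new = np.array([0.0, 2.0, 5.0, 11.0])
idx = pad_indexer(old, new)
```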
@@ -737,15 +738,16 @@
@cython.boundscheck(False)
@cython.wraparound(False)
def backfill(ndarray[algos_t] old, ndarray[algos_t] new, limit=None) -> ndarray:
+ # -> ndarray[intp_t, ndim=1]
cdef:
Py_ssize_t i, j, nleft, nright
- ndarray[int64_t, ndim=1] indexer
+ ndarray[intp_t, ndim=1] indexer
algos_t cur, prev
int lim, fill_count = 0

nleft = len(old)
nright = len(new)
- indexer = np.empty(nright, dtype=np.int64)
+ indexer = np.empty(nright, dtype=np.intp)
indexer[:] = -1

lim = validate_limit(nright, limit)
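``backfill`` is the mirror image: each ``new`` element maps to the next ``old`` value that is >= it. A sketch under the same assumptions (no ``limit`` handling):

```python
import numpy as np

def backfill_indexer(old, new):
    # Index of the first ``old`` value >= each ``new`` element, or -1
    # when every ``old`` value is smaller.  ``limit`` is omitted.
    idx = np.searchsorted(old, new, side="left")
    idx[idx == len(old)] = -1
    return idx

old = np.array([1.0, 5.0, 10.0])
new = np.array([0.0, 2.0, 5.0, 11.0])
idx = backfill_indexer(old, new)
```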
@@ -959,7 +961,7 @@ ctypedef fused rank_t:
@cython.boundscheck(False)
def rank_1d(
ndarray[rank_t, ndim=1] values,
- const int64_t[:] labels,
+ const intp_t[:] labels,
ties_method="average",
bint ascending=True,
bint pct=False,
@@ -971,7 +973,8 @@ def rank_1d(
Parameters
----------
values : array of rank_t values to be ranked
- labels : array containing unique label for each group, with its ordering
+ labels : np.ndarray[np.intp]
+     Array containing unique label for each group, with its ordering
matching up to the corresponding record in `values`. If not called
from a groupby operation, will be an array of 0's
ties_method : {'average', 'min', 'max', 'first', 'dense'}, default
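For reference, the ``ties_method='average'`` behaviour documented above can be sketched in plain Python (a hypothetical helper, not the Cython ``rank_1d``):

```python
def rank_average(values):
    # 1-based ranking with ties_method="average": tied values receive
    # the mean of the ranks they would otherwise occupy.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of values tied with values[order[i]]
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks
```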
4 changes: 2 additions & 2 deletions pandas/_libs/algos_take_helper.pxi.in
@@ -219,8 +219,8 @@ def take_2d_multi_{{name}}_{{dest}}(ndarray[{{c_type_in}}, ndim=2] values,
fill_value=np.nan):
cdef:
Py_ssize_t i, j, k, n, idx
- ndarray[int64_t] idx0 = indexer[0]
- ndarray[int64_t] idx1 = indexer[1]
+ ndarray[intp_t] idx0 = indexer[0]
+ ndarray[intp_t] idx1 = indexer[1]
{{c_type_out}} fv

n = len(idx0)
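What the templated ``take_2d_multi`` routines compute can be approximated in NumPy (a sketch under the assumption that -1 marks a missing index; the real code avoids the temporary and is generated for many dtype pairs):

```python
import numpy as np

def take_2d_multi(values, row_idx, col_idx, fill_value=np.nan):
    # Select rows then columns with the two indexers, writing
    # fill_value wherever either indexer is -1 (the "missing" marker).
    out = values[np.ix_(row_idx, col_idx)].astype(float)
    out[row_idx == -1, :] = fill_value
    out[:, col_idx == -1] = fill_value
    return out

values = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
row_idx = np.array([1, -1], dtype=np.intp)
col_idx = np.array([0, 2], dtype=np.intp)
out = take_2d_multi(values, row_idx, col_idx)
```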