
CLN: .values -> ._values #32778


Merged: 50 commits, Mar 26, 2020
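Throughout the diff below, the public `.values` accessor is swapped for the internal `._values`. A minimal sketch of the distinction (illustrative, not from the PR; `._values` is private pandas API): `.values` may convert to a plain NumPy array and drop extension-type information, while `._values` returns the backing array as-is.

```python
# Why pandas internals prefer `._values` over `.values`:
# for extension dtypes, `.values` coerces to a plain ndarray.
import numpy as np
import pandas as pd

s = pd.Series(pd.date_range("2020-01-01", periods=3, tz="UTC"))

dense = s.values    # ndarray of datetime64[ns]; the timezone is dropped
backed = s._values  # DatetimeArray; the tz-aware dtype is preserved

print(type(dense).__name__)   # ndarray
print(type(backed).__name__)  # DatetimeArray
print(backed.dtype)           # datetime64[ns, UTC]
```

For numpy-backed Series the two agree; the difference only shows up for extension dtypes, which is exactly where the internals want to avoid accidental densification.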
Commits (50; diff shown from 48):
0be1713  CLN: avoid _ndarray_values (jbrockmendel, Feb 28, 2020)
e9e5db8  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Feb 28, 2020)
953b04c  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Feb 29, 2020)
f70e15d  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Feb 29, 2020)
6197fbc  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Feb 29, 2020)
6db54bd  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 1, 2020)
4a1088e  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 2, 2020)
3a4e2a2  more _values (jbrockmendel, Mar 3, 2020)
b0b024b  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 3, 2020)
ed65b44  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 3, 2020)
c92858a  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 3, 2020)
ebaec34  _ndarray_values->asi8 (jbrockmendel, Mar 3, 2020)
c2d5b1b  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 3, 2020)
12ea11c  typo (jbrockmendel, Mar 3, 2020)
9f2b6ef  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 3, 2020)
f7805b8  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 3, 2020)
130c3e6  _ndarray_values->np.asarray (jbrockmendel, Mar 3, 2020)
5e8dcb6  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 3, 2020)
63b05c6  unnecessary values_from_object calls (jbrockmendel, Mar 3, 2020)
c9989e2  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 4, 2020)
92983f0  comments (jbrockmendel, Mar 4, 2020)
f32a1a0  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 4, 2020)
d4a420b  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 5, 2020)
35d3fdd  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 5, 2020)
65f005f  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 6, 2020)
201ca4e  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 6, 2020)
cf9b479  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 6, 2020)
d43348a  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 7, 2020)
ee3a5b0  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 7, 2020)
b0acf4c  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 7, 2020)
ef05d7d  move comments to collect branch (jbrockmendel, Mar 7, 2020)
d5b5730  values->_values (jbrockmendel, Mar 7, 2020)
be0d40a  values->_values (jbrockmendel, Mar 8, 2020)
dce37dd  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 9, 2020)
4fd7e62  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 11, 2020)
8fa2dc0  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 11, 2020)
8bcd1df  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 11, 2020)
a13b6dc  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 11, 2020)
61835a5  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 12, 2020)
3971791  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 12, 2020)
a49a3a6  checkpoint passing (jbrockmendel, Mar 13, 2020)
b9be012  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 13, 2020)
11020b9  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 15, 2020)
ed9ba9d  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 15, 2020)
df25c00  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 17, 2020)
f3a4d38  values-> _values (jbrockmendel, Mar 17, 2020)
6cb81b7  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 17, 2020)
dc58813  .values->._values (jbrockmendel, Mar 17, 2020)
01b98c2  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 24, 2020)
18e713a  Merge branch 'master' of https://github.com/pandas-dev/pandas into no… (jbrockmendel, Mar 25, 2020)
6 changes: 3 additions & 3 deletions pandas/core/algorithms.py
@@ -700,7 +700,7 @@ def value_counts(
 result = result.sort_index()

 # if we are dropna and we have NO values
-if dropna and (result.values == 0).all():
+if dropna and (result._values == 0).all():
 result = result.iloc[0:0]

 # normalizing is by len of all (regardless of dropna)
@@ -713,7 +713,7 @@
 # handle Categorical and sparse,
 result = Series(values)._values.value_counts(dropna=dropna)
 result.name = name
-counts = result.values
+counts = result._values

 else:
 keys, counts = _value_counts_arraylike(values, dropna)
@@ -823,7 +823,7 @@ def mode(values, dropna: bool = True) -> "Series":
 # categorical is a fast-path
 if is_categorical_dtype(values):
 if isinstance(values, Series):
-return Series(values.values.mode(dropna=dropna), name=values.name)
+return Series(values._values.mode(dropna=dropna), name=values.name)
 return values.mode(dropna=dropna)

 if dropna and needs_i8_conversion(values.dtype):
2 changes: 1 addition & 1 deletion pandas/core/arrays/datetimelike.py
@@ -899,7 +899,7 @@ def value_counts(self, dropna=False):
 index = Index(
 cls(result.index.view("i8"), dtype=self.dtype), name=result.index.name
 )
-return Series(result.values, index=index, name=result.name)
+return Series(result._values, index=index, name=result.name)

 def map(self, mapper):
 # TODO(GH-23179): Add ExtensionArray.map
2 changes: 1 addition & 1 deletion pandas/core/arrays/interval.py
@@ -153,7 +153,7 @@ class IntervalArray(IntervalMixin, ExtensionArray):
 def __new__(cls, data, closed=None, dtype=None, copy=False, verify_integrity=True):

 if isinstance(data, ABCSeries) and is_interval_dtype(data):
-data = data.values
+data = data._values

 if isinstance(data, (cls, ABCIntervalIndex)):
 left = data.left
4 changes: 2 additions & 2 deletions pandas/core/arrays/masked.py
@@ -244,11 +244,11 @@ def value_counts(self, dropna: bool = True) -> "Series":
 # TODO(extension)
 # if we have allow Index to hold an ExtensionArray
 # this is easier
-index = value_counts.index.values.astype(object)
+index = value_counts.index._values.astype(object)

 # if we want nans, count the mask
 if dropna:
-counts = value_counts.values
+counts = value_counts._values
 else:
 counts = np.empty(len(value_counts) + 1, dtype="int64")
 counts[:-1] = value_counts
2 changes: 1 addition & 1 deletion pandas/core/common.py
@@ -212,7 +212,7 @@ def asarray_tuplesafe(values, dtype=None):
 if not (isinstance(values, (list, tuple)) or hasattr(values, "__array__")):
 values = list(values)
 elif isinstance(values, ABCIndexClass):
-return values.values
+return values._values  # TODO: extract_array?

 if isinstance(values, list) and dtype in [np.object_, object]:
 return construct_1d_object_array_from_listlike(values)
4 changes: 2 additions & 2 deletions pandas/core/dtypes/missing.py
@@ -229,7 +229,7 @@ def _isna_ndarraylike(obj):
 if not is_extension:
 # Avoid accessing `.values` on things like
 # PeriodIndex, which may be expensive.
-values = getattr(obj, "values", obj)
+values = getattr(obj, "_values", obj)
 else:
 values = obj

@@ -270,7 +270,7 @@ def _isna_ndarraylike(obj):


 def _isna_ndarraylike_old(obj):
-values = getattr(obj, "values", obj)
+values = getattr(obj, "_values", obj)
 dtype = values.dtype

 if is_string_dtype(dtype):
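The comment in `_isna_ndarraylike` explains the motivation here: `.values` can be expensive on period-backed containers. A small illustration (assuming pandas is installed; behavior as of pandas 1.x): for a PeriodIndex, `.values` boxes every element into a `Period` object, whereas `._values` hands back the `PeriodArray` without any per-element work.

```python
# Illustrative sketch: `.values` on a PeriodIndex materializes an
# object-dtype ndarray of boxed Period scalars; `._values` does not.
import numpy as np
import pandas as pd

pi = pd.period_range("2020-01", periods=3, freq="M")

boxed = pi.values   # object-dtype ndarray; each element is a boxed Period
arr = pi._values    # PeriodArray backed by int64 ordinals, no boxing

print(type(boxed).__name__, boxed.dtype)  # ndarray object
print(type(arr).__name__)                 # PeriodArray
```

The boxing cost grows linearly with the index length, which is why the missing-value routines prefer the array-returning accessor.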
4 changes: 2 additions & 2 deletions pandas/core/generic.py
@@ -7066,7 +7066,7 @@ def asof(self, where, subset=None):

 return Series(np.nan, index=self.columns, name=where[0])

-locs = self.index.asof_locs(where, ~(nulls.values))
+locs = self.index.asof_locs(where, ~(nulls._values))

 # mask the missing
 missing = locs == -1
@@ -7225,7 +7225,7 @@ def _clip_with_scalar(self, lower, upper, inplace: bool_t = False):
 raise ValueError("Cannot use an NA value as a clip threshold")

 result = self
-mask = isna(self.values)
+mask = isna(self._values)

 with np.errstate(all="ignore"):
 if upper is not None:
2 changes: 1 addition & 1 deletion pandas/core/indexes/accessors.py
@@ -321,7 +321,7 @@ def __new__(cls, data: "Series"):
 orig.array,
 name=orig.name,
 copy=False,
-dtype=orig.values.categories.dtype,
+dtype=orig._values.categories.dtype,
 )

 if is_datetime64_dtype(data.dtype):
3 changes: 3 additions & 0 deletions pandas/core/indexes/category.py
@@ -243,8 +243,11 @@ def _simple_new(cls, values: Categorical, name: Label = None):

 @Appender(Index._shallow_copy.__doc__)
 def _shallow_copy(self, values=None, name: Label = no_default):
+name = self.name if name is no_default else name
+
 if values is not None:
 values = Categorical(values, dtype=self.dtype)
+
 return super()._shallow_copy(values=values, name=name)

 def _is_dtype_compat(self, other) -> bool:
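The `_shallow_copy` change above re-wraps incoming values as a `Categorical` carrying the index's own dtype. The underlying idiom (a simplified sketch, not the internal code path itself) is that constructing a `Categorical` with an existing `CategoricalDtype` preserves the full category set even when the new values only use a subset of it:

```python
# Re-wrapping plain values with an existing CategoricalDtype keeps
# the original categories (including unused ones).
import pandas as pd

ci = pd.CategoricalIndex(["a", "b", "a"], categories=["a", "b", "c"])

# New values mention only "b", but the dtype carries all three categories:
values = pd.Categorical(["b", "b"], dtype=ci.dtype)
print(values.categories.tolist())  # ['a', 'b', 'c']
```

This is why `_shallow_copy` can accept a bare array of values and still return an index with a dtype consistent with the original.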
2 changes: 1 addition & 1 deletion pandas/core/indexes/datetimes.py
@@ -287,7 +287,7 @@ def _is_dates_only(self) -> bool:
 """
 from pandas.io.formats.format import _is_dates_only

-return _is_dates_only(self.values) and self.tz is None
+return self.tz is None and _is_dates_only(self._values)

 def __reduce__(self):

4 changes: 2 additions & 2 deletions pandas/core/indexes/interval.py
@@ -1104,9 +1104,9 @@ def func(self, other, sort=sort):

 # GH 19101: ensure empty results have correct dtype
 if result.empty:
-result = result.values.astype(self.dtype.subtype)
+result = result._values.astype(self.dtype.subtype)
 else:
-result = result.values
+result = result._values

 return type(self).from_tuples(result, closed=self.closed, name=result_name)
4 changes: 2 additions & 2 deletions pandas/core/indexes/period.py
@@ -312,7 +312,7 @@ def _is_comparable_dtype(self, dtype: DtypeObj) -> bool:

 def _mpl_repr(self):
 # how to represent ourselves to matplotlib
-return self.astype(object).values
+return self.astype(object)._values

 @property
 def _formatter_func(self):
@@ -389,7 +389,7 @@ def asof_locs(self, where, mask: np.ndarray) -> np.ndarray:
 """
 where_idx = where
 if isinstance(where_idx, DatetimeIndex):
-where_idx = PeriodIndex(where_idx.values, freq=self.freq)
+where_idx = PeriodIndex(where_idx._values, freq=self.freq)
 elif not isinstance(where_idx, PeriodIndex):
 raise TypeError("asof_locs `where` must be DatetimeIndex or PeriodIndex")
 elif where_idx.freq != self.freq:
2 changes: 1 addition & 1 deletion pandas/core/ops/array_ops.py
@@ -50,7 +50,7 @@ def comp_method_OBJECT_ARRAY(op, x, y):
 y = y.astype(np.object_)

 if isinstance(y, (ABCSeries, ABCIndex)):
-y = y.values
+y = y._values

 if x.shape != y.shape:
 raise ValueError("Shapes must match", x.shape, y.shape)
2 changes: 1 addition & 1 deletion pandas/core/resample.py
@@ -1596,7 +1596,7 @@ def _get_period_bins(self, ax):
 def _take_new_index(obj, indexer, new_index, axis=0):

 if isinstance(obj, ABCSeries):
-new_values = algos.take_1d(obj.values, indexer)
+new_values = algos.take_1d(obj._values, indexer)
 return obj._constructor(new_values, index=new_index, name=obj.name)
 elif isinstance(obj, ABCDataFrame):
 if axis == 1:
8 changes: 4 additions & 4 deletions pandas/core/reshape/melt.py
@@ -105,12 +105,12 @@ def melt(
 if is_extension_array_dtype(id_data):
 id_data = concat([id_data] * K, ignore_index=True)
 else:
-id_data = np.tile(id_data.values, K)
+id_data = np.tile(id_data._values, K)
 mdata[col] = id_data

 mcolumns = id_vars + var_name + [value_name]

-mdata[value_name] = frame.values.ravel("F")
+mdata[value_name] = frame._values.ravel("F")
 for i, col in enumerate(var_name):
 # asanyarray will keep the columns as an Index
 mdata[col] = np.asanyarray(frame.columns._get_level_values(i)).repeat(N)
@@ -170,13 +170,13 @@ def lreshape(data: DataFrame, groups, dropna: bool = True, label=None) -> DataFrame:
 pivot_cols = []

 for target, names in zip(keys, values):
-to_concat = [data[col].values for col in names]
+to_concat = [data[col]._values for col in names]

 mdata[target] = concat_compat(to_concat)
 pivot_cols.append(target)

 for col in id_cols:
-mdata[col] = np.tile(data[col].values, K)
+mdata[col] = np.tile(data[col]._values, K)

 if dropna:
 mask = np.ones(len(mdata[pivot_cols[0]]), dtype=bool)
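The `melt` hunk above keeps two branches: extension-dtype columns are repeated via `concat`, while plain numpy-backed columns go through `np.tile` on the underlying array. A hedged sketch of why the branch exists (`K` here is a stand-in for the number of value columns being melted):

```python
# Tiling through numpy works for plain dtypes, but concat is needed
# to preserve extension dtypes such as nullable Int64.
import numpy as np
import pandas as pd

df = pd.DataFrame({"id": pd.array([1, 2], dtype="Int64"), "x": [1.0, 2.0]})
K = 2  # illustrative repeat count

tiled = np.tile(df["x"]._values, K)                   # plain float64 ndarray
kept = pd.concat([df["id"]] * K, ignore_index=True)   # stays Int64

print(tiled.tolist())   # [1.0, 2.0, 1.0, 2.0]
print(kept.dtype)       # Int64
```

Passing an extension-backed column through `np.tile` would instead coerce it (to object or float), losing the nullable dtype, which is what the `is_extension_array_dtype` check guards against.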
6 changes: 3 additions & 3 deletions pandas/core/reshape/merge.py
@@ -1347,7 +1347,7 @@ def _convert_to_mulitindex(index) -> MultiIndex:
 if isinstance(index, MultiIndex):
 return index
 else:
-return MultiIndex.from_arrays([index.values], names=[index.name])
+return MultiIndex.from_arrays([index._values], names=[index.name])

 # For multi-multi joins with one overlapping level,
 # the returned index if of type Index
@@ -1672,10 +1672,10 @@ def flip(xs) -> np.ndarray:

 # values to compare
 left_values = (
-self.left.index.values if self.left_index else self.left_join_keys[-1]
+self.left.index._values if self.left_index else self.left_join_keys[-1]
 )
 right_values = (
-self.right.index.values if self.right_index else self.right_join_keys[-1]
+self.right.index._values if self.right_index else self.right_join_keys[-1]
 )
 tolerance = self.tolerance
4 changes: 2 additions & 2 deletions pandas/core/reshape/pivot.py
@@ -456,10 +456,10 @@ def pivot(data: "DataFrame", index=None, columns=None, values=None) -> "DataFrame":
 if is_list_like(values) and not isinstance(values, tuple):
 # Exclude tuple because it is seen as a single column name
 indexed = data._constructor(
-data[values].values, index=index, columns=values
+data[values]._values, index=index, columns=values
 )
 else:
-indexed = data._constructor_sliced(data[values].values, index=index)
+indexed = data._constructor_sliced(data[values]._values, index=index)
 return indexed.unstack(columns)
8 changes: 4 additions & 4 deletions pandas/core/reshape/reshape.py
@@ -541,9 +541,9 @@ def factorize(index):
 )

 if frame._is_homogeneous_type:
-# For homogeneous EAs, frame.values will coerce to object. So
+# For homogeneous EAs, frame._values will coerce to object. So
 # we concatenate instead.
-dtypes = list(frame.dtypes.values)
+dtypes = list(frame.dtypes._values)
 dtype = dtypes[0]

 if is_extension_array_dtype(dtype):
@@ -554,11 +554,11 @@ def factorize(index):
 new_values = _reorder_for_extension_array_stack(new_values, N, K)
 else:
 # homogeneous, non-EA
-new_values = frame.values.ravel()
+new_values = frame._values.ravel()

 else:
 # non-homogeneous
-new_values = frame.values.ravel()
+new_values = frame._values.ravel()

 if dropna:
 mask = notna(new_values)
3 changes: 2 additions & 1 deletion pandas/core/series.py
@@ -1712,7 +1712,7 @@ def count(self, level=None):
 level_codes[mask] = cnt = len(lev)
 lev = lev.insert(cnt, lev._na_value)

-obs = level_codes[notna(self.values)]
+obs = level_codes[notna(self._values)]
 out = np.bincount(obs, minlength=len(lev) or None)
 return self._constructor(out, index=lev, dtype="int64").__finalize__(self)

@@ -2704,6 +2704,7 @@ def combine(self, other, func, fill_value=None) -> "Series":
 if is_categorical_dtype(self.dtype):
 pass
 elif is_extension_array_dtype(self.dtype):
+# TODO: can we do this for only SparseDtype?
 # The function can return something of any type, so check
 # if the type is compatible with the calling EA.
 new_values = try_cast_to_ea(self._values, new_values)
8 changes: 4 additions & 4 deletions pandas/core/strings.py
@@ -205,7 +205,7 @@ def _map_object(f, arr, na_mask=False, na_value=np.nan, dtype=object):
 return np.ndarray(0, dtype=dtype)

 if isinstance(arr, ABCSeries):
-arr = arr.values
+arr = arr._values  # TODO: extract_array?
 if not isinstance(arr, np.ndarray):
 arr = np.asarray(arr, dtype=object)
 if na_mask:
@@ -2034,8 +2034,8 @@ def __init__(self, data):
 self._is_categorical = is_categorical_dtype(data)
 self._is_string = data.dtype.name == "string"

-# .values.categories works for both Series/Index
-self._parent = data.values.categories if self._is_categorical else data
+# ._values.categories works for both Series/Index
+self._parent = data._values.categories if self._is_categorical else data
 # save orig to blow up categoricals to the right type
 self._orig = data
 self._freeze()
@@ -2236,7 +2236,7 @@ def _get_series_list(self, others):
 if isinstance(others, ABCSeries):
 return [others]
 elif isinstance(others, ABCIndexClass):
-return [Series(others.values, index=others)]
+return [Series(others._values, index=others)]
 elif isinstance(others, ABCDataFrame):
 return [others[x] for x in others]
 elif isinstance(others, np.ndarray) and others.ndim == 2:
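Two of the changed call sites carry a `TODO: extract_array?` note. `extract_array` is pandas' internal helper for unwrapping a Series or Index down to its backing array; a hedged sketch of what that TODO is pointing at (private API, imported from an internal module whose location and signature may differ across versions):

```python
# Sketch: extract_array generalizes the "give me the backing array"
# operation that `._values` performs at these call sites.
import pandas as pd
from pandas.core.construction import extract_array  # internal API

cat = pd.Series(["a", "b", "a"], dtype="category")

# For an extension-backed Series, the backing array comes out unchanged:
print(type(extract_array(cat)).__name__)  # Categorical
```

Swapping these call sites to `extract_array` would make the unwrapping explicit and uniform, which is presumably why the TODOs were left in the diff rather than resolved here.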
2 changes: 1 addition & 1 deletion pandas/core/window/common.py
@@ -296,7 +296,7 @@ def zsqrt(x):
 mask = x < 0

 if isinstance(x, ABCDataFrame):
-if mask.values.any():
+if mask._values.any():
 result[mask] = 0
 else:
 if mask.any():