
apply sometimes unexpectedly casts int64 series to objects #28773

Open
crew102 opened this issue Oct 3, 2019 · 7 comments
Labels
Apply (Apply, Aggregate, Transform, Map) · Bug · DataFrame (DataFrame data structure) · Dtype Conversions (Unexpected or buggy dtype conversions)

Comments

@crew102

crew102 commented Oct 3, 2019

Problem description

pandas.DataFrame.apply() seems to be converting series from int64 to object in some circumstances, and I'm not sure why. An example of the strange behavior I'm seeing is shown below, along with comments on what I expect to see versus what I actually see. Note: this issue was originally reported on SO here: https://stackoverflow.com/questions/58222263/unexpected-behavior-when-applying-function-to-all-columns-in-pandas-data-frame.

Code Sample, a copy-pastable example if possible

import pandas as pd
import numpy as np

df = pd.DataFrame({
    "col_1": [1, 2, 3],
    "col_2": ["hi", "there", "friend"]
})
print(df)
#>    col_1   col_2
#> 0      1      hi
#> 1      2   there
#> 2      3  friend
print(df.dtypes)
#> col_1     int64
#> col_2    object
#> dtype: object

# looks like np.issubdtype returns the expected result when calling the function
# on each series:
np.issubdtype(df.col_1, np.number)
#> True
np.issubdtype(df.col_2, np.number)
#> False

# but it doesn't return the expected result when using the apply function:
print(df.apply(lambda x: np.issubdtype(x, np.number)))
#> col_1    False
#> col_2    False
#> dtype: bool

# we can see that apply seems to be coercing the series to objects here:
print(df.apply(lambda x: x.dtype))
#> col_1    object
#> col_2    object
#> dtype: object

# what's also pretty weird is that I get the expected result when applying the
# replace_nulls() function below (e.g., median imputation is used if the series
# is a number, otherwise nulls are replaced with "MISSING"):
df = pd.DataFrame({
    "col_1": [1, 2, np.nan],
    "col_2": ["hi", "there", np.nan]
})

def replace_nulls(s):
    is_numeric = np.issubdtype(s, np.number)
    missing_value = s.median() if is_numeric else "MISSING"
    return np.where(s.isnull(), missing_value, s)

print(df.apply(replace_nulls))
#>    col_1    col_2
#> 0    1.0       hi
#> 1    2.0    there
#> 2    1.5  MISSING

Created on 2019-10-03 by the reprexpy package

Output of pd.show_versions()

INSTALLED VERSIONS

commit : None
python : 3.6.5.final.0
python-bits : 64
OS : Darwin
OS-release : 18.6.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_US.UTF-8
pandas : 0.25.1
numpy : 1.17.2
pytz : 2019.2
dateutil : 2.8.0
pip : 19.0.3
setuptools : 40.8.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.1
IPython : 7.8.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : 3.1.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : 1.3.1
sqlalchemy : None
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None

@WillAyd
Member

WillAyd commented Oct 3, 2019

Hmm, yeah, that does seem weird. Applying print shows the dtypes as object:

>>> df.apply(print)
0    1
1    2
2    3
Name: col_1, dtype: object
0        hi
1     there
2    friend
Name: col_2, dtype: object

So I think something is awry here with the underlying block management. @jbrockmendel might have some thoughts.

Just to confirm: if everything in the frame were a number, this would work:

>>> df['col_2'] = [4, 5, 6]
>>> df.apply(lambda x: np.issubdtype(x, np.number))
col_1    True
col_2    True
dtype: bool

Investigation and PRs are of course welcome.

@WillAyd added the Bug and DataFrame (DataFrame data structure) labels Oct 3, 2019
@WillAyd added this to the Contributions Welcome milestone Oct 3, 2019
@crew102
Author

crew102 commented Oct 4, 2019

Just to confirm: if everything in the frame were a number, this would work

OK, that's good to know and will help with debugging. Can you briefly describe what you mean by "block management"? I'd be happy to investigate this issue, though it'd be great to have a tip on where to look first.

@WillAyd
Member

WillAyd commented Oct 4, 2019

Hmm, I think this starts diverging here:

values = self.values

The problem with calling .values on a 2D object is that (in this case at least) it returns a 2D NumPy array, which must have a single, homogeneous dtype. The only dtype that can hold, say, 1 and "hello" is object, which is why all of these columns lose their dtype information.
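
To illustrate (just a quick sketch using the frame from the original report, not something from the thread itself):

>>> import pandas as pd
>>> df = pd.DataFrame({"col_1": [1, 2, 3], "col_2": ["hi", "there", "friend"]})
>>> df.values.dtype            # one 2D array for the whole frame, so one dtype for everything
dtype('O')
>>> df["col_1"].values.dtype   # a single column keeps its own dtype
dtype('int64')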

You might just have to iterate over the axis to maintain the dtype info, maybe building up a dict of results and returning from there at the end.
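
Something along these lines, just as an illustrative sketch (the helper name apply_columnwise is made up, and a real fix would also need to handle functions that return Series or arrays rather than scalars):

import numpy as np
import pandas as pd

def apply_columnwise(df, func):
    # Pass each column to func as a Series with its original dtype and
    # collect the per-column results in a dict keyed by column label.
    return pd.Series({label: func(col) for label, col in df.items()})

df = pd.DataFrame({"col_1": [1, 2, 3], "col_2": ["hi", "there", "friend"]})
print(apply_columnwise(df, lambda x: np.issubdtype(x, np.number)))
#> col_1     True
#> col_2    False
#> dtype: bool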

In any case, we'd certainly welcome investigation and a PR if you can make it all work.

@crew102
Author

crew102 commented Oct 4, 2019

Yeah, that definitely looks like the issue. I'll take a stab at a solution in the next few weeks or so.

@jbrockmendel
Member

I'm out of town until Tuesday, will take a look at this then.

@jbrockmendel added the Apply (Apply, Aggregate, Transform, Map) label Oct 16, 2019
@Reksbril
Contributor

@crew102 Are you still working on this, or could I take over the task?

@crew102
Author

crew102 commented Nov 21, 2019

Sorry, haven't had time to look into this. Yes, please take it over.

Reksbril pushed a commit to Reksbril/pandas that referenced this issue Dec 17, 2019
…pe (pandas-dev#28773)

DataFrame.apply was sometimes returning the wrong result when the passed
function depended on dtypes. The cause was retrieving DataFrame.values for
the whole DataFrame and applying the function to that: the values are a
single NumPy array, which has one dtype for all of the data, so objects in
the DataFrame were sometimes treated as if they shared a common type.
Notably, the problem only existed when applying the function along columns.

The implemented solution "cuts" the DataFrame by columns and applies the
function to each part as if it were a whole DataFrame. Afterwards, all of
the partial results are concatenated into the final result for the whole
DataFrame. The cuts are made as follows: the first column is taken, then we
iterate through the following columns and add them to the first cut as long
as their dtype is identical to that of the first column. The process is then
repeated for the rest of the DataFrame.
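
Roughly, the approach described in that commit message could be sketched like this (illustrative only, not the actual patch; the function names are made up):

import numpy as np
import pandas as pd

def split_by_dtype_runs(df):
    # Cut the frame into groups of consecutive columns that share a dtype.
    dtypes = list(df.dtypes)
    chunks, start = [], 0
    for i in range(1, len(dtypes) + 1):
        if i == len(dtypes) or dtypes[i] != dtypes[start]:
            chunks.append(df.iloc[:, start:i])
            start = i
    return chunks

def apply_by_chunks(df, func):
    # Each chunk is dtype-homogeneous, so applying within it doesn't upcast
    # to object; the per-chunk results are concatenated at the end.
    return pd.concat([chunk.apply(func) for chunk in split_by_dtype_runs(df)])

df = pd.DataFrame({"col_1": [1, 2, 3], "col_2": ["hi", "there", "friend"]})
print(apply_by_chunks(df, lambda x: np.issubdtype(x, np.number)))
#> col_1     True
#> col_2    False
#> dtype: bool

Chunking by dtype runs keeps a 2D path for homogeneous stretches of columns while avoiding the object upcast across mixed ones.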
Reksbril pushed a commit to Reksbril/pandas that referenced this issue Jan 6, 2020
In the new solution, existing machinery is used to apply the function column-wise and to recreate the final result.
@mroeschke added the Dtype Conversions (Unexpected or buggy dtype conversions) label Jul 21, 2021
@mroeschke removed this from the Contributions Welcome milestone Oct 13, 2022