I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
(optional) I have confirmed this bug exists on the master branch of pandas.
Note: Please read this guide detailing how to provide the necessary information for us to reproduce your bug.
Code Sample, a copy-pastable example
import pandas as pd

data = [[pd.NA, 10], [pd.NA, 10], [2, pd.NA]]
df = pd.DataFrame(data, columns=['A1', 'A2'])
print(df)
counts = df.groupby(['A1', 'A2'], dropna=False).size()
print(counts)
# If you uncomment the lines below, it works
# pk_file = 'tmp.df'
# counts.to_pickle(pk_file)
# counts = pd.read_pickle(pk_file)
idx = counts.index[0]
print(counts.loc[idx])  # KeyError: (2.0, nan)
Problem description
Unable to access the Series returned from groupby using an apparently valid key.
The line print(counts.loc[idx]) raises a KeyError.
If I uncomment the lines that store/load the data with pickle, it works and no KeyError is raised.
Expected Output
The Series returned from groupby should be accessible with loc using keys taken from its own index.
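A minimal sketch of the expected behaviour (my illustration, not part of the original report; the value 1 assumes the (2, NaN) group from the sample above, which is the label shown in the KeyError):

idx = counts.index[0]        # (2.0, nan), as reported in the KeyError above
assert counts.loc[idx] == 1  # the (2, NaN) group contains a single row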
Output of pd.show_versions()
commit : 2a7d332
python : 3.8.5.final.0
python-bits : 64
OS : Darwin
OS-release : 18.7.0
Version : Darwin Kernel Version 18.7.0: Thu Jun 18 20:50:10 PDT 2020; root:xnu-4903.278.43~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.1.2
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.1.1
setuptools : 47.1.0
Cython : 0.29.21
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : 3.3.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.5.2
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
hdou changed the title from "BUG: KeyError after groupby when index is multiindex and involves nan" to "BUG: KeyError after groupby().size() when index is multiindex and involves nan" on Sep 19, 2020.
Thanks for your report. It looks like the resulting Index from the groupby operation does not treat the np.nan values within the MultiIndex as missing values. This seems to be the issue here.
Thanks @phofl - agree. When you look at counts.index.codes before and after pickling, you get [[0, 1], [1, 0]] and [[0, -1], [-1, 0]] respectively. Closing as duplicate.
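For anyone landing here, a minimal sketch of that comparison (my own illustration, using an in-memory pickle round trip instead of the temp file from the original report; the codes shown are the values reported above for pandas 1.1.2):

import pickle
import pandas as pd

data = [[pd.NA, 10], [pd.NA, 10], [2, pd.NA]]
df = pd.DataFrame(data, columns=['A1', 'A2'])
counts = df.groupby(['A1', 'A2'], dropna=False).size()

# Codes straight from groupby: the NA positions still carry regular codes
print(counts.index.codes)        # [[0, 1], [1, 0]] as reported above
# After a pickle round trip the NA positions use the -1 missing-value sentinel
roundtripped = pickle.loads(pickle.dumps(counts))
print(roundtripped.index.codes)  # [[0, -1], [-1, 0]] as reported above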