Description
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] I have confirmed this bug exists on the master branch of pandas.
Reproducible Example
import pandas as pd
nan = float('nan')
df = pd.DataFrame(
    [[1, 2, 1],
     [1, 2, 2],
     [2, nan, 3],
     [2, nan, 4],
     [3, 3, 5]],
    columns=["a", "b", "c"],
)
gb = df.groupby(["a", "b"], dropna=False)
filtered = gb.filter(lambda x: len(x) > 1)
print(filtered)
# a b c
# 0 1 2.0 1
# 1 1 2.0 2
# Expecting:
# a b c
# 0 1 2.0 1
# 1 1 2.0 2
# 2 2 NaN 3
# 3 2 NaN 4
# Note that there is no problem with agg:
print()
agged = gb.agg(len)
print(agged)
# c
# a b
# 1 2.0 2
# 2 NaN 2
# 3 3.0 1
Issue Description
When using `groupby()` (even with `dropna=False`) and then applying a `filter`, rows whose group keys include a NaN are always dropped! Adding `dropna=False` to the `filter` call itself does not help either: it just fills those rows with NaNs.
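Until this is fixed, one possible workaround is to compute group sizes with `transform("size")`, which (at least on recent pandas versions) does respect `dropna=False`, and use the result as a boolean mask. A sketch, using the same `df` as above:

```python
import pandas as pd

nan = float("nan")
df = pd.DataFrame(
    [[1, 2, 1],
     [1, 2, 2],
     [2, nan, 3],
     [2, nan, 4],
     [3, 3, 5]],
    columns=["a", "b", "c"],
)

# transform("size") broadcasts each group's size back onto its rows,
# and with dropna=False the (2, NaN) group is kept rather than dropped.
sizes = df.groupby(["a", "b"], dropna=False)["c"].transform("size")
workaround = df[sizes > 1]
print(workaround)  # rows 0-3, including the (2, NaN) group
```

This avoids `filter` entirely, so it sidesteps the buggy code path.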
Expected Behavior
`NaN`s should be treated as distinct group keys (as `dropna=False` requests) rather than dropped automatically, so `filter` should return the `(2, NaN)` rows as well.
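For comparison, the grouping machinery itself already keeps the NaN group: `size()` on the same grouped object reports three groups, including `(2, NaN)`, which suggests the bug is specific to `filter`. A minimal check, same `df` as above:

```python
import pandas as pd

nan = float("nan")
df = pd.DataFrame(
    [[1, 2, 1],
     [1, 2, 2],
     [2, nan, 3],
     [2, nan, 4],
     [3, 3, 5]],
    columns=["a", "b", "c"],
)

# With dropna=False, size() reports the NaN group alongside the others:
# (1, 2.0) -> 2, (2, NaN) -> 2, (3, 3.0) -> 1
sizes = df.groupby(["a", "b"], dropna=False).size()
print(sizes)
```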
Installed Versions
INSTALLED VERSIONS
commit : 945c9ed
python : 3.9.4.final.0
python-bits : 64
OS : Linux
OS-release : 5.14.16-arch1-1
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.3.4
numpy : 1.21.4
pytz : 2021.3
dateutil : 2.8.2
pip : 21.3.1
setuptools : 58.3.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None