When (at least) one element in a MultiIndex contains a NaN, has_duplicates starts to behave strangely: in the example sketched below, I would expect has_duplicates to return False, because 102 is not the same as 101.
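The original snippet did not survive the page capture; here is a minimal sketch of the index being described (the exact construction is an assumption, only the values 101/102 and 3.5/NaN come from the issue text):

```python
import numpy as np
import pandas as pd

# Two distinct tuples: (101, 3.5) and (102, nan).
mi = pd.MultiIndex.from_arrays([[101, 102], [3.5, np.nan]])

# On pandas 0.12.0 this reportedly returns True, even though the
# first-level labels 101 and 102 already make the tuples distinct.
print(mi.has_duplicates)
```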
I would also expect it to return False for the MultiIndex

[(101, 3.5), (101, nan)]

since 3.5 != NaN, but this case is more debatable.
This is important because you can't call .unstack() on a Series with a MultiIndex for which has_duplicates is True, even if the MultiIndex has many levels and the levels containing the NaN(s) are not involved in the operation.
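A sketch of that failure mode, assuming a three-level index where only one level carries the NaN (the level names and values are hypothetical, and on current pandas versions the call may simply succeed):

```python
import numpy as np
import pandas as pd

mi = pd.MultiIndex.from_arrays(
    [[101, 102], ["a", "b"], [3.5, np.nan]],
    names=["id", "group", "weight"],
)
s = pd.Series([1.0, 2.0], index=mi)

# The NaN lives only in the "weight" level and "group" is NaN-free,
# yet on 0.12.0 has_duplicates is reportedly True and unstack refuses to run.
print(s.index.has_duplicates)
s.unstack("group")  # raises a duplicate-index error there
```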
This is with pandas 0.12.0
I appreciate that NaNs are not the best thing to index by! Sadly, the data I've been given includes NaNs in the index (presumably in an attempt to indicate that that level of the index is not relevant to the particular sample).
Here's a slightly more complicated example that is closer to my actual data:
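The snippet itself is missing from the capture; the following is a hypothetical reconstruction matching the description, with NaN marking a level that does not apply to a given sample (all names and values are invented):

```python
import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [
        ("exp1", "ctrl", np.nan),  # NaN: the "dose" level is not relevant here
        ("exp1", "drug", 0.5),
        ("exp2", "ctrl", np.nan),
        ("exp2", "drug", 1.0),
    ],
    names=["experiment", "condition", "dose"],
)
s = pd.Series([0.11, 0.42, 0.09, 0.55], index=idx)

print(s.index.has_duplicates)  # reportedly True on 0.12.0 despite unique tuples
s.unstack("condition")         # fails there even though "condition" has no NaNs
```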
If a (multi)index containing NaN is officially discouraged and causes trouble here and elsewhere (core.reshape._Unstacker also seems to struggle with NaN indexes), perhaps pandas should print a warning message when an index is created with NaN values?
you should just avoid NaN in MultiIndexes in general; I think a warning (controllable by an option) is a good idea. There are some very non-trivial issues here.
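Until such a warning exists in pandas, a user-side guard along these lines could approximate it (purely a sketch; warn_if_nan_index is not a pandas API):

```python
import warnings
import numpy as np
import pandas as pd

def warn_if_nan_index(index: pd.MultiIndex) -> pd.MultiIndex:
    # Hypothetical helper: emit a warning if any level of the
    # MultiIndex contains NaN values, then return the index unchanged.
    for i in range(index.nlevels):
        if pd.isnull(index.get_level_values(i)).any():
            warnings.warn("MultiIndex contains NaN values; "
                          "has_duplicates and reshaping may misbehave.")
            break
    return index

# Usage: wrap index construction at the boundary where data is loaded.
idx = warn_if_nan_index(pd.MultiIndex.from_arrays([[1, 2], [np.nan, 3.0]]))
```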