BUG: indexing empty pyarrow backed object returning corrupt object #51741
Conversation
pandas/core/arrays/arrow/array.py (outdated)

```diff
@@ -349,7 +349,7 @@ def __getitem__(self, item: PositionalIndexer):
             pa_dtype = pa.string()
         else:
             pa_dtype = self._dtype.pyarrow_dtype
-        return type(self)(pa.chunked_array([], type=pa_dtype))
+        return type(self)(pa.array([], type=pa_dtype))
```
A chunked array without chunks should in theory also work, so this might point to something else that is buggy?
Looking at the error in #51734, it might be that the type needs to be specified in the `chunked_array()` call in `_concat_same_type`.
That could also solve this, but imo we should rather avoid returning something here that creates these problems in the first place. When iterating over a chunked array without chunks you get an empty list, which makes determining the dtype tricky: we would have to implement upcasting logic when concatenating more than one object.
edit: forget what I said about upcasting…
> but imo we should rather avoid returning something here that creates these problems

Yes, but my point is that we should maybe rather ensure that this does not create these problems, because there are other ways such a chunked array can get created (e.g. coming directly from pyarrow).
A `ChunkedArray` itself also has a `type` attribute, so you don't need to get one chunk to get the type.
Yep, I missed `_concat_same_type`; makes sense when we only have one type.
Changed.
I can confirm that it solves the original issue.
```diff
@@ -1012,7 +1012,11 @@ def _concat_same_type(
         ArrowExtensionArray
         """
         chunks = [array for ea in to_concat for array in ea._data.iterchunks()]
-        arr = pa.chunked_array(chunks)
+        if to_concat[0].dtype == "string":
```
Is this needed specifically for `StringDtype("pyarrow")` and not `ArrowDtype(pa.string())`? If so, could you add a comment to that effect?
Yes, since `StringDtype` does not have a `pyarrow_dtype` attribute.
Added
# Conflicts:
#	pandas/tests/extension/test_arrow.py
Backport PR: BUG: indexing empty pyarrow backed object returning corrupt object (#51841)

Closes #51734: `pd.concat` fails with `GroupBy.head()` and `pd.StringDtype["pyarrow"]`.

Looks like an empty chunked array creates problems later on.