BUG: Fix using dtype with parse_dates in read_csv #34330
Conversation
)
expected = expected.astype({"b": np.datetime64})
df = parser.read_csv(StringIO(data), dtype="string", parse_dates=["b"])
tm.assert_frame_equal(df, expected)
Can you call this result instead of df? Also instead of astyping expected you should be able to set the types in the constructor.
Only one type is allowed in the constructor, so I'm not sure how I can set it there.
cc @gfyoung
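For reference, a self-contained sketch of the call the test fragment above exercises; the exact CSV data string is an assumption based on the expected values shown, and the behavior described in the comments assumes the fix from this PR is in place.

```python
from io import StringIO
import pandas as pd

# Assumed input data matching the expected frame in the test above.
data = "a,b\n1,2020-05-23 01:00:00\n"

result = pd.read_csv(StringIO(data), dtype="string", parse_dates=["b"])

# With the fix, 'a' keeps the requested string dtype while 'b' is parsed
# to datetime64 instead of failing or remaining as strings.
print(result.dtypes)
```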
expected = DataFrame(
    [["1", "2020-05-23 01:00:00"]], columns=["a", "b"], dtype="string"
)
expected = expected.astype({"b": np.datetime64})
this is very strange to astype this way, please use pd.to_datetime
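A minimal sketch of what the suggested construction could look like, building the string columns explicitly and converting 'b' with pd.to_datetime rather than astype(np.datetime64); the values are taken from the test fragment above.

```python
import pandas as pd

expected = pd.DataFrame({"a": ["1"], "b": ["2020-05-23 01:00:00"]})
expected["a"] = expected["a"].astype("string")
# Convert the date column with to_datetime instead of astype(np.datetime64).
expected["b"] = pd.to_datetime(expected["b"])
```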
@@ -1708,7 +1708,9 @@ def _convert_to_ndarrays(
result = {}
for c, values in dct.items():
conv_f = None if converters is None else converters.get(c, None)
if isinstance(dtypes, dict):
if values.dtype != object:
this is very odd to do as we already have a path for a single dtype, what are you trying to do here?
After loading values from the csv we have a dictionary mapping column names to numpy arrays with dtype=object. Then we convert the values that are supposed to be datetimes ('b' in the example). After that we want to change the types of the remaining columns, i.e. those that still have dtype=object. In that line we're skipping columns that already have a dtype set.
Use is_object_dtype(values.dtype) to do the check.
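A minimal standalone sketch of the skip logic being discussed (a hypothetical helper, not the actual pandas internals): columns that already have a concrete dtype after date parsing are left alone, and only the remaining object-dtype columns are cast to the requested dtype.

```python
import numpy as np
import pandas as pd
from pandas.api.types import is_object_dtype


def cast_remaining_columns(dct, dtype):
    """Cast only the still-object columns to the requested dtype."""
    result = {}
    for name, values in dct.items():
        if not is_object_dtype(values.dtype):
            # Already converted (e.g. by parse_dates): leave untouched.
            result[name] = values
        else:
            result[name] = pd.array(values, dtype=dtype)
    return result


# Usage: 'b' was already parsed to datetime64, so only 'a' is cast to string.
cols = {
    "a": np.array(["1"], dtype=object),
    "b": np.array(["2020-05-23 01:00:00"], dtype="datetime64[s]"),
}
out = cast_remaining_columns(cols, "string")
```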
@@ -3264,6 +3266,9 @@ def _make_date_converter(
):
def converter(*date_cols):
if date_parser is None:
date_cols = tuple(
    x if isinstance(x, np.ndarray) else x.to_numpy() for x in date_cols
)
this is better off done inside concat_date_cols, but what is the incoming data here in the example?
It's a tuple with a StringArray containing the dates from 'b'.
is it possible to move this to concat_date_cols as per @jreback's comment?
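A minimal sketch of the suggestion (a hypothetical helper, not the actual pandas concat_date_cols): normalize extension arrays such as StringArray to numpy arrays inside the concatenation helper rather than in the converter itself.

```python
import numpy as np
import pandas as pd


def concat_date_cols_sketch(date_cols):
    """Return a single object-dtype ndarray of date strings."""
    # Normalize extension arrays (e.g. StringArray) to numpy arrays here,
    # instead of doing the conversion in the date converter.
    arrays = [
        col if isinstance(col, np.ndarray) else col.to_numpy()
        for col in date_cols
    ]
    if len(arrays) == 1:
        return arrays[0].astype(object)
    # Join the string components of each row, as read_csv does when
    # parse_dates combines multiple columns.
    return np.array(
        [" ".join(map(str, row)) for row in zip(*arrays)], dtype=object
    )


# Usage with a StringArray column, as in the discussion above:
col_b = pd.array(["2020-05-23 01:00:00"], dtype="string")
dates = concat_date_cols_sketch((col_b,))
```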
@mproszewska Are you still interested in working on this? If so can you merge master?
Closing as stale, @mproszewska let us know if you'd like to reopen
I can go back to working on this. Could you reopen?
@mproszewska : Done
Hello @mproszewska! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found: There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2020-10-08 20:36:43 UTC
Some small comments from me. Also @mproszewska can you merge master once more?
seems not unreasonable, but this PR is stale.
black pandas
git diff upstream/master -u -- "*.py" | flake8 --diff