
read_excel() modifies provided types dict when accessing file with duplicate column #42508


Merged
merged 17 commits on Aug 4, 2021
Changes from 3 commits
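
For context, a minimal reproduction of the reported behavior. The file name and column layout below are illustrative; any sheet with a duplicated column header triggers it.

import pandas as pd

# Assume "dup_cols.xlsx" has a duplicated header, e.g. columns "a", "a", "b".
dtypes = {"a": str, "b": str}
pd.read_excel("dup_cols.xlsx", dtype=dtypes)

# Before this fix, the caller's dict could come back mutated: de-duplicating
# the repeated header may insert a mangled key such as "a.1" into it.
print(dtypes)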
31 changes: 6 additions & 25 deletions pandas/io/excel/_base.py
@@ -358,6 +358,10 @@ def read_excel(
    mangle_dupe_cols=True,
    storage_options: StorageOptions = None,
):
    kwargs = locals().copy()
    for each in kwargs:
        if isinstance(locals()[each], dict):
            kwargs[each] = locals()[each].copy()
Member

This is essentially trying to do a deepcopy?
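
(For context: locals().copy() on its own is only a shallow copy, so a nested mapping such as dtype would still be shared with the caller; the loop above copies those one level deep. A small illustration of the difference, not part of the PR:)

import copy

kwargs = {"dtype": {"a": str}}

shallow = kwargs.copy()        # new outer dict, same inner dict object
deep = copy.deepcopy(kwargs)   # new outer dict and a new inner dict

shallow["dtype"]["a"] = int    # leaks back into kwargs["dtype"]
assert kwargs["dtype"]["a"] is int
assert deep["dtype"]["a"] is str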

Member Author

Hi @mzeitlin11. Yes, it is. I tried to use deepcopy first, but it was failing multiple tests.

I got the idea of creating a copy of kwargs from read_csv (which as you mentioned doesn't produce this unnecessary side effect). So I thought it would be a good idea to keep it consistent.

# locals() should never be modified
kwds = locals().copy()
del kwds["filepath_or_buffer"]
del kwds["sep"]
kwds_defaults = _refine_defaults_read(
    dialect,
    delimiter,
    delim_whitespace,
    engine,
    sep,
    error_bad_lines,
    warn_bad_lines,
    on_bad_lines,
    names,
    prefix,
    defaults={"delimiter": ","},
)
kwds.update(kwds_defaults)
return _read(filepath_or_buffer, kwds)

Member

> I got the idea of creating a copy of kwargs from read_csv (which as you mentioned doesn't produce this unnecessary side effect). So I thought it would be a good idea to keep it consistent.

Makes sense. That case is a bit different though because kwargs itself was being modified. Here the issue is limited to the dtypes dict. If there's a clean way to copy (or not modify in place) the dtypes dict, I think that would be cleaner.
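
A narrower change along these lines might copy just the dtype mapping before it reaches the parser. This is a hypothetical sketch, not the diff that was eventually merged:

# inside read_excel, before the arguments are handed to io.parse (hypothetical)
if isinstance(dtype, dict):
    dtype = dtype.copy()  # protect the caller's dict from in-place modification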

Member Author

Sure. I will work along those lines. Thanks!

Member

Let me know if you run into any issues!


    should_close = False
    if not isinstance(io, ExcelFile):
@@ -369,32 +373,9 @@ def read_excel(
            "an ExcelFile - ExcelFile already has the engine set"
        )

    del kwargs["io"], kwargs["engine"], kwargs["storage_options"]
    try:
        data = io.parse(
            sheet_name=sheet_name,
            header=header,
            names=names,
            index_col=index_col,
            usecols=usecols,
            squeeze=squeeze,
            dtype=dtype,
            converters=converters,
            true_values=true_values,
            false_values=false_values,
            skiprows=skiprows,
            nrows=nrows,
            na_values=na_values,
            keep_default_na=keep_default_na,
            na_filter=na_filter,
            verbose=verbose,
            parse_dates=parse_dates,
            date_parser=date_parser,
            thousands=thousands,
            comment=comment,
            skipfooter=skipfooter,
            convert_float=convert_float,
            mangle_dupe_cols=mangle_dupe_cols,
        )
        data = io.parse(**kwargs)
    finally:
        # make sure to close opened file handles
        if should_close:
5 binary files not shown.
7 changes: 7 additions & 0 deletions pandas/tests/io/excel/test_readers.py
@@ -1278,6 +1278,13 @@ def test_ignore_chartsheets_by_int(self, request, read_ext):
        ):
            pd.read_excel("chartsheet" + read_ext, sheet_name=1)

    def test_dtype_dict(self, read_ext):
Member

Can you please leave a link to the relevant GitHub issue here? And maybe also make the name of this test more specific, e.g. in this case we care about the dtype argument not being modified when duplicate columns are present.

        filename = "test_common_headers" + read_ext
Contributor

Can you not use existing data, or simply do a round trip? I don't want to add even more files like this.

        dtype_dict = {"a": str, "b": str, "c": str}
        dtype_dict_copy = dtype_dict.copy()
        pd.read_excel(filename, dtype=dtype_dict)
        assert dtype_dict == dtype_dict_copy, "dtype dict changed"
Member

Can we also check that the resulting frame is as expected? (I know this is focusing on the dtypes dict, but we may as well also test the reading portion here, since it's unlikely we have great coverage for the dtypes dict with duplicate cols.)
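
Taken together, the comments above point toward a test shaped roughly like the one below. This is only a sketch: the test name, the placement of the GH reference, and the assumption that the fixture's "a"/"b"/"c" columns read back as object once str dtypes are requested are illustrative, not the merged code.

    def test_dtype_dict_not_mutated_with_dup_cols(self, read_ext):
        # GH#42508: read_excel must not modify the caller's dtype dict
        filename = "test_common_headers" + read_ext
        dtype_dict = {"a": str, "b": str, "c": str}
        dtype_dict_copy = dtype_dict.copy()
        result = pd.read_excel(filename, dtype=dtype_dict)
        assert dtype_dict == dtype_dict_copy, "dtype dict changed"
        # also sanity-check the reading portion; column names here are
        # assumptions about the fixture, not taken from the PR
        for col in dtype_dict:
            assert result[col].dtype == object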



class TestExcelFileRead:
    @pytest.fixture(autouse=True)