Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
import pandas as pd

idx = [0, 1, 2]
df = pd.DataFrame(0, index=idx, columns=idx)
print(df)
df.to_csv("Test.csv")
df2 = pd.read_csv("Test.csv", header=[0], index_col=[0])
print(df2)
# Test.csv (all values, including row and column indices, are integers):
# ,0,1,2
# 0,0,0,0
# 1,0,0,0
# 2,0,0,0

# row and column indices of the original df are integers
print(df.columns)
print(df.index)
# column indices of the df created from the csv are strings
print(df2.columns)
print(df2.index)
# Work-around:
df_temp = pd.read_csv('Test.csv', header=[0], index_col=[0])
df3 = pd.DataFrame(0, index=df_temp.index, columns=[])
for c in df_temp.columns:
    newc: int = int(c)
    df3[newc] = df_temp[c]
print(df3.columns)
Issue Description
Issue: read_csv() returns column indices that were created as integers as strings rather than integers, while the row index is returned as integers.
The programmer assumes the columns are integers because that is how the original df was created, and so gets a KeyError when running df[0]; only df['0'] now works.
FYI, the same problem occurs for MultiIndex dataframes: df[('0','0')] works but df[(0,0)] generates a KeyError.
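For reference, here is a minimal sketch of the MultiIndex case (the file name TestMulti.csv and the 2x2 column product are illustrative, not taken from the original report):

import pandas as pd

# Build a frame whose column labels are integer tuples, then round-trip it through csv.
cols = pd.MultiIndex.from_product([[0, 1], [0, 1]])
dfm = pd.DataFrame(0, index=[0, 1, 2], columns=cols)
dfm.to_csv("TestMulti.csv")

dfm2 = pd.read_csv("TestMulti.csv", header=[0, 1], index_col=[0])
print(dfm2[('0', '0')])   # works: the tuple labels come back as strings
# print(dfm2[(0, 0)])     # raises KeyError, even though dfm[(0, 0)] works on the original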
This is a difficult bug (unexpected feature?) to find because the difference is not evident when printing the dataframe, only when printing df.columns. The Variable Explorer in Spyder also displays 0, not '0', for the column indices, so I may cross-post this to Spyder.
For database processing, the use of integer column indices is rare, but for those who use the index value as part of a computation, such as X(i+1) = F(X(i)), having integer indices is convenient, if not important. The main point is that the row and column indices recreated from a csv file unexpectedly differ in type.
I would assume that the same behaviour could occur in other file importers.
Suggestions for removing the confusion (see the sketch after this list):
Add a parameter to read_csv() to select the dtype of the column (and row) index, just as there are parameters to select the dtype of the data in the columns.
Add a parameter to to_csv() to specify the dtype of the indices. This is probably the best solution since it will create csv files that are correctly formatted, e.g. strings have quotes around them while integers do not. That way read_csv() has a way to interpret the formatting.
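Until a parameter like that exists, the first suggestion can be approximated with a small wrapper. Note that read_csv_typed_axes, index_dtype and columns_dtype below are hypothetical names used only for this sketch, not pandas API:

import pandas as pd

def read_csv_typed_axes(path, index_dtype="int64", columns_dtype="int64", **kwargs):
    # Hypothetical helper: read the csv, then coerce the axis labels to the requested dtypes.
    df = pd.read_csv(path, **kwargs)
    df.index = df.index.astype(index_dtype)
    df.columns = df.columns.astype(columns_dtype)
    return df

df2 = read_csv_typed_axes("Test.csv", header=[0], index_col=[0])
print(df2.columns)  # Index([0, 1, 2], dtype='int64')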
Expected Behavior
See Issue Description and run the sample code: the column index of df2 is expected to come back as integers, matching the original df.
Installed Versions
INSTALLED VERSIONS
commit : e86ed37
python : 3.11.5.final.0
python-bits : 64
OS : Linux
OS-release : 6.2.0-37-generic
Version : #38~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 2 18:01:13 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_CA.UTF-8
LOCALE : en_CA.UTF-8
pandas : 2.1.1
numpy : 1.24.3
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.0.0
pip : 23.2.1
Cython : None
pytest : 7.4.0
hypothesis : None
sphinx : 5.0.2
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.9.3
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.15.0
pandas_datareader : None
bs4 : 4.12.2
bottleneck : 1.3.5
dataframe-api-compat: None
fastparquet : None
fsspec : 2023.4.0
gcsfs : None
matplotlib : 3.7.2
numba : 0.57.1
numexpr : 2.8.4
odfpy : None
openpyxl : 3.0.10
pandas_gbq : None
pyarrow : 11.0.0
pyreadstat : None
pyxlsb : None
s3fs : 2023.4.0
scipy : 1.11.3
sqlalchemy : 1.4.39
tables : 3.8.0
tabulate : 0.8.10
xarray : 2023.6.0
xlrd : None
zstandard : 0.19.0
tzdata : 2023.3
qtpy : 2.2.0
pyqt5 : None
Tuhin,
Go at it. I haven't had time to look at the code.
---
Bruce
On 2024-04-01 19:19, Tuhin Sharma wrote:
@thinkerf can I work on it?
A csv file stores the header row as plain text with no type information, so for pd.read_csv() it becomes an ambiguous problem.
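For example, a minimal illustration of the ambiguity (the frames a and b are just for this sketch):

import pandas as pd

# These two frames differ only in the dtype of their column labels,
# yet they serialize to identical csv text, so read_csv has no way to
# recover which one was written.
a = pd.DataFrame(0, index=[0, 1, 2], columns=[0, 1, 2])
b = pd.DataFrame(0, index=[0, 1, 2], columns=["0", "1", "2"])
print(a.to_csv() == b.to_csv())  # True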
FWIW, I would use the following code snippet as a work-around:
import pandas as pd
import numpy as np
print("Work-around:")
df = pd.read_csv('Test.csv', header=[0], index_col=[0])
df.columns = df.columns.astype(np.int64)
print(df.columns)
Output would be as follows
Work-around:
Index([0, 1, 2], dtype='int64')
I agree with your suggestion, but I shall wait for a pandas team member to review this and give the go-ahead.
> Suggestions for removing the confusion:
> Add a parameter to read_csv() to select the dtype of the column (and row) index, just as there are parameters to select the dtype of the data in the columns.
> Add a parameter to to_csv() to specify the dtype of the indices. This is probably the best solution since it will create csv files that are correctly formatted, e.g. strings have quotes around them while integers do not. That way read_csv() has a way to interpret the formatting.