On concatenating dataframes: NoneType object is not iterable #17552
Comments
Normally, non-aligned indices work fine (you just get a lot of NaNs):
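A minimal sketch of that behavior (the frames and values are made up for illustration): column-wise concat aligns on the index, takes the union of the labels, and fills the non-overlapping positions with NaN.

```python
import pandas as pd

# Two frames whose indices do not overlap at all (illustrative data):
left = pd.DataFrame({"a": [1, 2]}, index=[0, 1])
right = pd.DataFrame({"b": [3, 4]}, index=[2, 3])

# axis=1 aligns on the index: the result uses the union of both
# indices, and positions missing from either frame become NaN.
result = pd.concat([left, right], axis=1)
print(result)
```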
So not sure what is different in the original example. @1kastner sidenote: I think you are misunderstanding what ignore_index does.
OK, so the cause is the fact that the index is uniform (a single value repeated), and depending on the dtype of the index, you get a different error message.
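For reference, the index in the original CSV has exactly that shape: one timestamp repeated for every row, so it is duplicated but uniform. A small sketch with made-up data:

```python
import pandas as pd

# One timestamp repeated for every row, as in the CSV in the report:
idx = pd.DatetimeIndex(["2016-01-01 00:00:00"] * 3)
df = pd.DataFrame({"temperature": [4.1, 1.7, 2.4]}, index=idx)

print(df.index.is_unique)   # the labels are duplicated...
print(df.index.nunique())   # ...but there is only one distinct value
```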
Closing this as a duplicate of #6963. @1kastner I think the reason you get this, as I said above, is a misunderstanding of what ignore_index does.

Duplicate of #6963
Thanks for the feedback. The ignore_index behavior was a bit confusing and surprising to me, so I will try to add some information to the documentation. The rest of the analysis seems valid, as the dataframe resulting from the get_dummies invocation has a different-looking index. The solution I chose was to add the lines

data_df.reset_index(inplace=True, drop=True)
df_hour.reset_index(inplace=True, drop=True)
cloud_cover_df.reset_index(inplace=True, drop=True)

just before the concatenation, from which I removed the ignore_index keyword. The complete code now looks like:

import io
import pandas
csv_file = io.StringIO("""datetime,cloudcover_eddh,dewpoint,dewpoint_eddh,humidity,humidity_eddh,lat,lon,precipitation_eddh,pressure_eddh,temperature,temperature_eddh,winddirection_eddh,windgust_eddh,windspeed_eddh
2016-01-01 00:00:00,CAVOC,4.1,3.0,100.0,94.0,53.5443,9.926839999999999,,1023.0,4.1,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,1.7,3.0,96.0,94.0,53.61297,9.98145,,1023.0,2.3,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,2.4,3.0,98.0,94.0,53.57735,10.09428,,1023.0,2.7,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,3.0,3.0,94.0,94.0,53.68849,10.1335,,1023.0,3.9,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,4.2,3.0,76.0,94.0,53.6608,10.06555,,1023.0,,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,3.0,3.0,100.0,94.0,53.43252,10.297989999999999,,1023.0,3.0,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,1.9,3.0,92.0,94.0,53.68937,10.13025,,1023.0,3.1,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,3.5,3.0,100.0,94.0,53.6344,9.966560000000001,,1023.0,3.5,3.0,160.0,0.0,7.2
2016-01-01 00:00:00,CAVOC,3.6,3.0,99.0,94.0,53.46402,9.89157,,1023.0,3.7,3.0,160.0,0.0,7.2""")
data_df = pandas.read_csv(csv_file, parse_dates=["datetime"], index_col="datetime")
cloud_cover_df = pandas.get_dummies(data_df.cloudcover_eddh, prefix="cloudcover_eddh")
df_hour = pandas.get_dummies(data_df.index.hour, prefix="hour")
data_df.reset_index(inplace=True, drop=True)
cloud_cover_df.reset_index(inplace=True, drop=True)
df_hour.reset_index(inplace=True, drop=True)
data_df = pandas.concat([
data_df,
df_hour,
cloud_cover_df
], axis=1)

That way I could enter all the dummies back into the dataframe and have all the columns I wanted, i.e.:
It might be a special case that I want to drop the index (the column called "datetime"), but overall it can be said that get_dummies is not that convenient for the described situation.
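The index mismatch the analysis points at can be seen directly (a sketch with made-up data): get_dummies keeps the index when given a Series, but returns a frame with a fresh default RangeIndex when given a bare array such as df.index.hour.

```python
import pandas as pd

idx = pd.DatetimeIndex(["2016-01-01 00:00:00"] * 3)
df = pd.DataFrame({"cloudcover": ["CAVOC", "FEW", "CAVOC"]}, index=idx)

# get_dummies on a Series preserves the (duplicated) DatetimeIndex:
from_series = pd.get_dummies(df["cloudcover"], prefix="cc")

# get_dummies on a bare array of hours gets a new default RangeIndex,
# so it no longer lines up with the original frame's index:
from_array = pd.get_dummies(df.index.hour, prefix="hour")

print(from_series.index.equals(df.index))
print(from_array.index)
```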
Improvements to the docs are always welcome!
Another improvement, since I just want to add some columns and I know that there are duplicated indices, which does not work well with the logic of concatenating (which effectively joins on the index):

data_df = data_df.assign(**{column: cloud_cover_df[column] for column in cloud_cover_df.columns})
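A self-contained sketch of that assign-based approach (the frames and values are made up): because both frames carry the exact same index values, pandas' alignment reduces to a positional copy, and the duplicated labels cause no reindexing error.

```python
import pandas as pd

idx = pd.DatetimeIndex(["2016-01-01 00:00:00"] * 3)
data_df = pd.DataFrame({"temperature": [4.1, 1.7, 2.4]}, index=idx)
cloud_cover_df = pd.DataFrame(
    {"cc_CAVOC": [1, 1, 0], "cc_FEW": [0, 0, 1]}, index=idx
)

# assign() adds one column per dummy column; each Series shares the
# frame's identical index, so its values are taken over as-is.
data_df = data_df.assign(
    **{column: cloud_cover_df[column] for column in cloud_cover_df.columns}
)
print(data_df.columns.tolist())
```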
Code Sample, a copy-pastable example if possible
Problem description
I expect the three dataframes to be concatenated as desired. I know that the indices correspond, so I do not need pandas to check that for me. Instead I get the following error message:
Output of pd.show_versions()
INSTALLED VERSIONS
commit: None
python: 3.5.2.final.0
python-bits: 64
OS: Linux
OS-release: 4.10.0-33-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.20.3
pytest: None
pip: 9.0.1
setuptools: 36.0.1
Cython: None
numpy: 1.13.1
scipy: 0.17.0
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: None
tables: 3.2.2
numexpr: 2.6.2
feather: None
matplotlib: 1.5.1
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: 0.7.3
lxml: None
bs4: 4.4.1
html5lib: 0.999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.8
s3fs: None
pandas_gbq: None
pandas_datareader: None