PyTables: any limitation on the number of columns? #7653
Technically no, but the problem is #6245: you are exceeding the amount of metadata allowed per node (I think it's 64KB). #6245 would remove this limit. Side issue: having lots of columns is very inefficient for retrieval, as any query gets ALL the columns, so storing transposed (in this case) is MUCH better. Or split into multiple same-indexed tables (see `append_to_multiple`).
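The split-into-same-indexed-tables idea can be sketched with plain DataFrames; the actual `HDFStore.append_to_multiple`/`select_as_multiple` calls are shown only as comments since they need PyTables installed, and the frame and column names here are made up:

```python
import numpy as np
import pandas as pd

# Hypothetical wide frame standing in for the real many-column result.
df = pd.DataFrame(np.arange(6.0).reshape(2, 3), columns=["a", "b", "c"])

# Split by columns into pieces that share the same row index.
left = df[["a"]]
right = df[["b", "c"]]

# With PyTables installed, each piece could go into its own table, e.g.:
#   store.append_to_multiple({"t1": ["a"], "t2": None}, df, selector="t1")
#   result = store.select_as_multiple(["t1", "t2"], selector="t1")

# Rejoining on the shared index recovers the original frame.
rejoined = pd.concat([left, right], axis=1)
assert rejoined.equals(df)
```

Each table then carries only its own slice of the column metadata, which is what keeps every node under the per-node limit.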
Well, unfortunately it's a big refactoring job for me. I am building results in batches (the 2 rows were just for debugging) so as not to run out of memory, and then use store.append() to add to the previously saved results, so transposing will not work. append_to_multiple/select_as_multiple looks like a pain, but thanks for the tip anyway! Oh well...
You can try shortening the column names (painful, but maybe less so).
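A minimal sketch of the renaming idea (the names here are made up); keeping the mapping around lets you restore the original names after reading back:

```python
import pandas as pd

# Hypothetical frame with long, metadata-heavy column names.
df = pd.DataFrame([[1, 2]], columns=["very_long_descriptive_name_0",
                                     "very_long_descriptive_name_1"])

# Map each long name to a short code and keep the mapping for decoding later.
mapping = {name: f"c{i}" for i, name in enumerate(df.columns)}
short = df.rename(columns=mapping)
# short.columns is now ["c0", "c1"], shrinking the per-node metadata.

# After reading back, invert the mapping to restore the original names.
restored = short.rename(columns={v: k for k, v in mapping.items()})
assert restored.equals(df)
```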
I am trying to use stack() and running into the following problem:
Please show a copy-pastable example.
By the way, unstack(level=[1, 2]) works:
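For illustration, a small round trip on a hypothetical frame with two-level columns: stacking both column levels produces a long Series, and `unstack(level=[1, 2])` pivots those levels back out to restore the wide shape:

```python
import pandas as pd

# Hypothetical two-level column index standing in for the real data.
cols = pd.MultiIndex.from_product([["x", "y"], ["a", "b"]])
df = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]], columns=cols)

# stack() moves column levels into the row index (wide -> long).
s = df.stack([0, 1])

# unstack() pivots index levels 1 and 2 back into columns (long -> wide).
wide = s.unstack(level=[1, 2]).sort_index(axis=1)
assert wide.equals(df)
```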
if you do:
Marking as a bug.
I got the following error trying to do
for a DataFrame with 2 rows and 3202 columns:
Frames with 1402 columns saved just fine. Are there any limitations on the number of columns?
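As a rough illustration of why ~1400 columns fit but ~3200 do not: the column names themselves are part of the per-node metadata, so their serialized size grows linearly with column count. The 64KB-per-node cap and the 20-character names below are assumptions taken from this thread, not the exact HDF5 mechanism:

```python
import pickle

def meta_size(n_cols, name_len=20):
    """Approximate serialized size of a list of n_cols column names."""
    # Hypothetical fixed-length column names.
    names = [f"column_{i:04d}".ljust(name_len, "x") for i in range(n_cols)]
    return len(pickle.dumps(names))

print(meta_size(1402))  # stays under the assumed 64 * 1024 cap
print(meta_size(3202))  # pushes past it
```

With real (often longer) column names the 3202-column case overshoots by even more, which matches the append failing at 3202 columns but not at 1402.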