BUG: provide chunks with progressively numbered (default) indices #12289


Closed
toobaz wants to merge 1 commit into master from csvstate

Conversation

@toobaz (Member) commented Feb 11, 2016

closes #12185

Notice that the test I fix was indeed wrong; I had written that line as a workaround while waiting for this fix.
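
A minimal sketch of the behavior this PR targets (the data below is illustrative, not taken from the test suite): with a progressively numbered default index, the second chunk continues where the first one stopped, so concatenating the chunks reproduces the frame obtained from a single read.

    import pandas as pd
    from io import StringIO

    data = "a,b\n0,1\n2,3\n4,5\n6,7"

    full = pd.read_csv(StringIO(data))
    chunks = list(pd.read_csv(StringIO(data), chunksize=2))

    # Before the fix both chunks had index [0, 1]; with the fix the second
    # chunk is numbered [2, 3], so the concatenation matches the full read.
    print(chunks[1].index)
    assert pd.concat(chunks).index.equals(full.index)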

    new_rows = len(col_dict[columns[0]])
    index = RangeIndex(self._currow, self._currow + new_rows)
else:
    new_rows = len(index)
Contributor:

use new_rows = len(col_dict.values()[0]) (this can be outside the if) as I think this might fail with duplicate column names (add a test for that as well)

you don't need the else either
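
A hypothetical sketch of such a duplicate-column-names test (the data, and the use of pandas.util.testing as it existed at the time, are assumptions; the actual test added in the PR may differ):

    import pandas as pd
    import pandas.util.testing as tm
    from io import StringIO

    # Reading in chunks should give the same result as reading at once,
    # even when column names are duplicated (here two columns named "A").
    data = "A,A,B\n1,2,3\n4,5,6\n7,8,9"
    expected = pd.read_csv(StringIO(data))
    reader = pd.read_csv(StringIO(data), chunksize=1)
    tm.assert_frame_equal(pd.concat(reader), expected)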

Member Author:

OK for col_dict.values(), but I think we want to consider valid CSV files containing only an index, so the if...else is needed.

Member Author:

Oh, and by the way... isn't col_dict a dict?! Anyway I will add a test.

Contributor:

you still need the if; my point is that you only need to do this if it's None

Member Author:

OK, I see. My point then was that having a valid self._currow (updated just below) is valuable in itself. For instance, it is only a few lines of code away from enabling chunksize together with nrows.

@jreback jreback added the Bug, API Design, and IO CSV (read_csv, to_csv) labels Feb 12, 2016
@jreback (Contributor) commented Feb 12, 2016

This needs to use the parser data directly; if that is not available, it needs to be exposed. This makes it less complicated from a future reader's perspective.

@jreback (Contributor) commented Mar 12, 2016

can you rebase/update

@jreback (Contributor) commented May 7, 2016

can you rebase / update

@jreback jreback added this to the 0.19.0 milestone May 25, 2016
@jorisvandenbossche jorisvandenbossche modified the milestones: 0.20.0, 0.19.0 Jul 8, 2016
@jreback (Contributor) commented Jul 15, 2016

can you rebase and I'll have a look

    index = RangeIndex(self._currow, self._currow + new_rows)
else:
    new_rows = 0
    index = Index([])
Member Author:

An (empty) RangeIndex here breaks some comparisons with the output of the CParser with low_memory=True. Both should probably be fixed, but I'm not sure whether we need to first make RangeIndex(a, b).append(RangeIndex(b, c)) return a RangeIndex(a, c) (rather than an Int64Index, as happens currently).
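
A small illustration of the append behavior mentioned here (as observed around the time of this PR; newer pandas versions may already return a RangeIndex for this case):

    import pandas as pd

    left = pd.RangeIndex(0, 2)
    right = pd.RangeIndex(2, 4)

    # Appending two adjacent RangeIndexes gave an Int64Index([0, 1, 2, 3])
    # rather than RangeIndex(0, 4) at the time of this discussion.
    combined = left.append(right)
    print(type(combined))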

Contributor:

you would have to special-case the logic, but not a bad idea

Contributor:

don't specify the index here, let the pandas object creation logic do it.

@toobaz (Member Author) commented Jul 17, 2016

Rebased. There are certainly two problems left (the one you pointed out, that we should already use the available info inside the parser, and the one concerning the empty chunk I commented on above): I can take care of them, but I don't know when, so if you want the fix in 0.19.0 I suggest postponing them to future PRs.

@codecov-io commented Jul 17, 2016

Current coverage is 85.23% (diff: 100%)

Merging #12289 into master will increase coverage by <.01%

@@             master     #12289   diff @@
==========================================
  Files           140        140          
  Lines         50415      50423     +8   
  Methods           0          0          
  Messages          0          0          
  Branches          0          0          
==========================================
+ Hits          42970      42977     +7   
- Misses         7445       7446     +1   
  Partials          0          0          

Powered by Codecov. Last update 98c5b88...381e3b3

def test_read_chunksize_generated_index(self):
    # GH 12185
    reader = self.read_csv(StringIO(self.data1), chunksize=2)
    df = self.read_csv(StringIO(self.data1))
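
A plausible continuation of this test body (assuming the pd and tm aliases conventionally available in the parser test modules; the exact assertions merged in the PR may differ):

    # Concatenating the chunks should reproduce the full read, which only
    # holds if the default index is numbered progressively across chunks.
    tm.assert_frame_equal(pd.concat(reader), df)
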
Contributor:

check with an index_col as well

Member Author:

See test_read_chunksize and test_read_chunksize_named (hence the "generated_index" in the new test's name)... am I missing something?

Contributor:

this needs some more comprehensive tests. They should be in this same test method (even if they are slightly duplicated elsewhere). You are making a major change, so you need to exercise multiple cases.

@jreback jreback removed this from the 0.19.0 milestone Jul 24, 2016
@toobaz toobaz force-pushed the csvstate branch 2 times, most recently from 5b65dd1 to 2677e25, on July 25, 2016 13:26
@toobaz (Member Author) commented Jul 25, 2016

OK, test on index_col added, ready as far as I can tell.

@@ -289,6 +289,10 @@ Other enhancements

pd.Timestamp(year=2012, month=1, day=1, hour=8, minute=30)


- ``pd.read_csv()`` with the ``chunksize=`` option and implicit index now returns an index progressively numbered, rather than in repeated chunks (:issue:`12185`)
Contributor:

show an example of the before and the after; this is a major change. Put it in a separate sub-section.

@toobaz (Member Author) commented Jul 26, 2016

Hope the sub-section is fine... I was unable to generate the docs (probably my IPython's fault).

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When :func:`read_csv` is called with ``chunksize='n'`` and without specifying an index,
each chunk used to have an independently generated index from ``0`` to ``n``.
Member:

to be fully correct this would be 'n-1' I think?
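
For reference, a sketch of the kind of before/after example such a whatsnew sub-section could show (illustrative data; the example actually committed may differ):

    import pandas as pd
    from io import StringIO

    data = "A,B\n0,1\n2,3\n4,5\n6,7\n8,9"

    for chunk in pd.read_csv(StringIO(data), chunksize=2):
        print(chunk.index)

    # Previous behaviour: every chunk restarted at 0 (0..1, 0..1, 0).
    # New behaviour: the chunks are numbered 0..1, 2..3, 4.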

@jorisvandenbossche (Member) commented:
@toobaz The whatsnew explanation looks good I think! But can you move it a bit below to the API changes section? (it can be a subsection of that)

@jorisvandenbossche (Member) commented Jul 26, 2016

@toobaz Can you also take a look in the io.rst docs on the explanation about chunksize to see if there is something that should/could be changed? (edit: took a quick look in the io-chunking section http://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking, and I think this is good)

@toobaz (Member Author) commented Jul 26, 2016

@jorisvandenbossche: thanks for the comments, I should have fixed everything now. I also checked io.rst everywhere chunksize appears, and the index is never mentioned.

@toobaz (Member Author) commented Jul 26, 2016

Ahem... please kill https://travis-ci.org/pydata/pandas/builds/147442885 ...

@jorisvandenbossche (Member) commented:
@jreback this looks good to me

@jreback jreback added this to the 0.19.0 milestone Jul 29, 2016
@jreback jreback closed this in 5b0d947 Jul 29, 2016
@jreback (Contributor) commented Jul 29, 2016

thanks @toobaz

Successfully merging this pull request may close these issues.

read_csv() restarts index (if not loaded) at every chunk
4 participants