@@ -163,9 +163,6 @@ dtype : Type name or dict of column -> type, default ``None``
     (unsupported with ``engine='python'``). Use `str` or `object` together
     with suitable ``na_values`` settings to preserve and
     not interpret dtype.
-
-    .. versionadded:: 0.20.0 support for the Python parser.
-
 engine : {``'c'``, ``'python'``}
     Parser engine to use. The C engine is faster while the Python engine is
     currently more feature-complete.
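The ``dtype`` behaviour this hunk touches can be sketched as follows; a minimal example, assuming only ``pandas`` itself, showing ``dtype=str`` preserving values (here, leading zeros) under the Python parser:

```python
import io

import pandas as pd

# Sketch: keep a column as strings so leading zeros survive parsing.
# The dtype argument is accepted by both the C and the Python engine.
csv = io.StringIO("zip,city\n00501,Holtsville\n10001,New York")
df = pd.read_csv(csv, dtype={"zip": str}, engine="python")
print(df["zip"].tolist())  # → ['00501', '10001']
```

Without the ``dtype`` mapping, the ``zip`` column would be inferred as integer and the leading zeros lost.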
@@ -417,10 +414,6 @@ However, if you wanted for all the data to be coerced, no matter the type, then
 using the ``converters`` argument of :func:`~pandas.read_csv` would certainly be
 worth trying.
 
-.. versionadded:: 0.20.0 support for the Python parser.
-
-   The ``dtype`` option is supported by the 'python' engine.
-
 .. note::
    In some cases, reading in abnormal data with columns containing mixed dtypes
    will result in an inconsistent dataset. If you rely on pandas to infer the
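The ``converters`` approach referred to above can be sketched as follows; a minimal example, assuming only ``pandas``, where a converter coerces every value of a mixed-type column while a plain ``dtype`` cast could fail on the non-numeric entries:

```python
import io

import pandas as pd

# Sketch: a converter coerces all values in a mixed column to str,
# including the quoted entry that is not a number.
csv = io.StringIO("col_1\n1\n2\n'A'\n4.22")
df = pd.read_csv(csv, converters={"col_1": str})
print(df["col_1"].tolist())  # → ['1', '2', "'A'", '4.22']
```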
@@ -616,8 +609,6 @@ Filtering columns (``usecols``)
 The ``usecols`` argument allows you to select any subset of the columns in a
 file, either using the column names, position numbers or a callable:
 
-.. versionadded:: 0.20.0 support for callable `usecols` arguments
-
 .. ipython:: python
 
    data = 'a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz'
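The three ``usecols`` forms this hunk mentions (names, positions, callable) can be sketched together; a minimal example assuming only ``pandas``:

```python
import io

import pandas as pd

data = "a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz"

# Select columns by name, by position, or with a callable applied to each name.
by_name = pd.read_csv(io.StringIO(data), usecols=["b", "d"])
by_pos = pd.read_csv(io.StringIO(data), usecols=[0, 2])
by_func = pd.read_csv(io.StringIO(data), usecols=lambda x: x.upper() in ["A", "C"])
print(list(by_func.columns))  # → ['a', 'c']
```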
@@ -1447,8 +1438,6 @@ is whitespace).
    df = pd.read_fwf('bar.csv', header=None, index_col=0)
    df
 
-.. versionadded:: 0.20.0
-
 ``read_fwf`` supports the ``dtype`` parameter for specifying the types of
 parsed columns to be different from the inferred type.
@@ -2221,8 +2210,6 @@ For line-delimited json files, pandas can also return an iterator which reads in
 Table schema
 ''''''''''''
 
-.. versionadded:: 0.20.0
-
 `Table Schema`_ is a spec for describing tabular datasets as a JSON
 object. The JSON includes information on the field names, types, and
 other attributes. You can use the orient ``table`` to build
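The ``orient='table'`` output described here can be sketched as follows; a minimal example, assuming only ``pandas`` and the standard library, showing the embedded schema with field names and types:

```python
import json

import pandas as pd

# Sketch: orient="table" embeds a Table Schema alongside the data,
# describing each field's name and type.
df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
payload = json.loads(df.to_json(orient="table"))
print([f["name"] for f in payload["schema"]["fields"]])
```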
@@ -3071,8 +3058,6 @@ missing data to recover integer dtype:
 Dtype specifications
 ++++++++++++++++++++
 
-.. versionadded:: 0.20
-
 As an alternative to converters, the type for an entire column can
 be specified using the `dtype` keyword, which takes a dictionary
 mapping column names to types. To interpret data with
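The column-name-to-type dictionary described here can be sketched as follows. The hunk is from the Excel section, where this mapping is passed to ``read_excel``; the sketch uses ``read_csv`` only so it is self-contained (no Excel engine required) — the shape of the ``dtype`` dictionary is the same:

```python
import io

import numpy as np
import pandas as pd

# Sketch: a dictionary mapping column names to types. read_excel accepts
# the same dtype mapping; read_csv is used here for self-containment.
csv = io.StringIO("MyBools,MyInts\nTrue,5\nFalse,7")
df = pd.read_csv(csv, dtype={"MyBools": bool, "MyInts": np.int64})
print(df.dtypes)
```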
@@ -3345,8 +3330,6 @@ any pickled pandas object (or any other pickled object) from file:
 Compressed pickle files
 '''''''''''''''''''''''
 
-.. versionadded:: 0.20.0
-
 :func:`read_pickle`, :meth:`DataFrame.to_pickle` and :meth:`Series.to_pickle` can read
 and write compressed pickle files. The compression types of ``gzip``, ``bz2``, ``xz`` are supported for reading and writing.
 The ``zip`` file format only supports reading and must contain only one data file
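The compressed-pickle round trip described here can be sketched as follows; a minimal example, assuming only ``pandas`` and the standard library, using a temporary directory:

```python
import os
import tempfile

import pandas as pd

# Sketch: round-trip a frame through a gzip-compressed pickle.
df = pd.DataFrame({"a": range(3)})
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "frame.pkl.gz")
    df.to_pickle(path, compression="gzip")
    out = pd.read_pickle(path, compression="gzip")
print(out.equals(df))  # → True
```

With the default ``compression='infer'``, the ``.gz`` extension alone would select gzip for both writing and reading.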
@@ -4323,8 +4306,6 @@ control compression: ``complevel`` and ``complib``.
 - `bzip2 <http://bzip.org/>`_: Good compression rates.
 - `blosc <http://www.blosc.org/>`_: Fast compression and decompression.
 
-.. versionadded:: 0.20.2
-
 Support for alternative blosc compressors:
 
 - `blosc:blosclz <http://www.blosc.org/>`_ This is the
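The ``complevel``/``complib`` options this hunk covers can be sketched as follows; a minimal example assuming the optional PyTables dependency is available (the sketch skips itself when it is not), with ``'blosc:lz4'`` as one of the alternative blosc compressors:

```python
import importlib.util
import os
import tempfile

import pandas as pd

# Sketch: per-file compression settings for an HDF5 store.
# Requires the optional PyTables package; skipped when it is missing.
df = pd.DataFrame({"a": range(10)})
if importlib.util.find_spec("tables") is not None:
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "store.h5")
        df.to_hdf(path, key="df", complevel=9, complib="blosc:lz4")
        out = pd.read_hdf(path, "df")
        print(out.equals(df))
```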
@@ -4651,8 +4632,6 @@ Performance
 Feather
 -------
 
-.. versionadded:: 0.20.0
-
 Feather provides binary columnar serialization for data frames. It is designed to make reading and writing data
 frames efficient, and to make sharing data across data analysis languages easy.
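The Feather read/write cycle this section introduces can be sketched as follows; a minimal example assuming the optional ``pyarrow`` dependency is available (the sketch skips itself when it is not):

```python
import importlib.util
import os
import tempfile

import pandas as pd

# Sketch: write and read a Feather file.
# Requires the optional pyarrow package; skipped when it is missing.
df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
if importlib.util.find_spec("pyarrow") is not None:
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "frame.feather")
        df.to_feather(path)
        out = pd.read_feather(path)
        print(out.equals(df))
```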