diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py index 153899e023137..777a74f46d75e 100644 --- a/pandas/io/parsers/readers.py +++ b/pandas/io/parsers/readers.py @@ -101,7 +101,8 @@ By file-like object, we refer to objects with a ``read()`` method, such as a file handle (e.g. via builtin ``open`` function) or ``StringIO``. sep : str, default {_default_sep} - Delimiter to use. If ``sep=None``, the C engine cannot automatically detect + Character or regex pattern to treat as the delimiter. If ``sep=None``, the + C engine cannot automatically detect the separator, but the Python parsing engine can, meaning the latter will be used and automatically detect the separator from only the first valid row of the file by Python's builtin sniffer tool, ``csv.Sniffer``. @@ -111,9 +112,9 @@ to ignoring quoted data. Regex example: ``'\r\t'``. delimiter : str, optional Alias for ``sep``. -header : int, list of int, None, default 'infer' - Row number(s) to use as the column names, and the start of the - data. Default behavior is to infer the column names: if no ``names`` +header : int, Sequence of int, 'infer' or None, default 'infer' + Row number(s) containing column labels and marking the start of the + data (zero-indexed). Default behavior is to infer the column names: if no ``names`` are passed the behavior is identical to ``header=0`` and column names are inferred from the first line of the file, if column names are passed explicitly to ``names`` then the behavior is identical to @@ -125,20 +126,21 @@ parameter ignores commented lines and empty lines if ``skip_blank_lines=True``, so ``header=0`` denotes the first line of data rather than the first line of the file. -names : array-like, optional - List of column names to use. If the file contains a header row, +names : Sequence of Hashable, optional + Sequence of column labels to apply. If the file contains a header row, then you should explicitly pass ``header=0`` to override the column names. 
Duplicates in this list are not allowed. -index_col : int, str, sequence of int / str, or False, optional - Column(s) to use as the row labels of the :class:`~pandas.DataFrame`, either given as - string name or column index. If a sequence of ``int`` / ``str`` is given, a - :class:`~pandas.MultiIndex` is used. +index_col : Hashable, Sequence of Hashable or False, optional + Column(s) to use as row label(s), denoted either by column labels or column + indices. If a sequence of labels or indices is given, :class:`~pandas.MultiIndex` + will be formed for the row labels. Note: ``index_col=False`` can be used to force ``pandas`` to *not* use the first - column as the index, e.g. when you have a malformed file with delimiters at + column as the index, e.g., when you have a malformed file with delimiters at the end of each line. -usecols : list-like or callable, optional - Return a subset of the columns. If list-like, all elements must either +usecols : list of Hashable or Callable, optional + Subset of columns to select, denoted either by column labels or column indices. + If list-like, all elements must either be positional (i.e. integer indices into the document columns) or strings that correspond to column names provided either by the user in ``names`` or inferred from the document header row(s). If ``names`` are given, the document @@ -156,9 +158,9 @@ example of a valid callable argument would be ``lambda x: x.upper() in ['AAA', 'BBB', 'DDD']``. Using this parameter results in much faster parsing time and lower memory usage. -dtype : Type name or dict of column -> type, optional - Data type for data or columns. E.g., ``{{'a': np.float64, 'b': np.int32, - 'c': 'Int64'}}`` +dtype : dtype or dict of {{Hashable : dtype}}, optional + Data type(s) to apply to either the whole dataset or individual columns. 
+ E.g., ``{{'a': np.float64, 'b': np.int32, 'c': 'Int64'}}`` Use ``str`` or ``object`` together with suitable ``na_values`` settings to preserve and not interpret ``dtype``. If ``converters`` are specified, they will be applied INSTEAD @@ -176,18 +178,18 @@ .. versionadded:: 1.4.0 - The "pyarrow" engine was added as an *experimental* engine, and some features + The 'pyarrow' engine was added as an *experimental* engine, and some features are unsupported, or may not work correctly, with this engine. -converters : dict, optional - ``dict`` of functions for converting values in certain columns. Keys can either - be integers or column labels. +converters : dict of {{Hashable : Callable}}, optional + Functions for converting values in specified columns. Keys can either + be column labels or column indices. true_values : list, optional - Values to consider as ``True`` in addition to case-insensitive variants of "True". + Values to consider as ``True`` in addition to case-insensitive variants of 'True'. false_values : list, optional - Values to consider as ``False`` in addition to case-insensitive variants of "False". + Values to consider as ``False`` in addition to case-insensitive variants of 'False'. skipinitialspace : bool, default False Skip spaces after delimiter. -skiprows : list-like, int or callable, optional +skiprows : int, list of int or Callable, optional Line numbers to skip (0-indexed) or number of lines to skip (``int``) at the start of the file. @@ -198,7 +200,7 @@ Number of lines at bottom of file to skip (Unsupported with ``engine='c'``). nrows : int, optional Number of rows of file to read. Useful for reading pieces of large files. -na_values : scalar, str, list-like, or dict, optional +na_values : Hashable, Iterable of Hashable or dict of {{Hashable : Iterable}}, optional Additional strings to recognize as ``NA``/``NaN``. If ``dict`` passed, specific per-column ``NA`` values. 
By default the following values are interpreted as ``NaN``: '""" @@ -227,7 +229,7 @@ Indicate number of ``NA`` values placed in non-numeric columns. skip_blank_lines : bool, default True If ``True``, skip over blank lines rather than interpreting as ``NaN`` values. -parse_dates : bool or list of int or names or list of lists or dict, \ +parse_dates : bool, list of Hashable, list of lists or dict of {{Hashable : list}}, \ default False The behavior is as follows: @@ -258,7 +260,7 @@ keep_date_col : bool, default False If ``True`` and ``parse_dates`` specifies combining multiple columns then keep the original columns. -date_parser : function, optional +date_parser : Callable, optional Function to use for converting a sequence of string columns to an array of ``datetime`` instances. The default uses ``dateutil.parser.parser`` to do the conversion. ``pandas`` will try to call ``date_parser`` in three different ways, @@ -273,9 +275,9 @@ Use ``date_format`` instead, or read in as ``object`` and then apply :func:`~pandas.to_datetime` as-needed. date_format : str or dict of column -> format, optional - If used in conjunction with ``parse_dates``, will parse dates according to this - format. For anything more complex, - please read in as ``object`` and then apply :func:`~pandas.to_datetime` as-needed. + Format to use for parsing dates when used in conjunction with ``parse_dates``. + For anything more complex, please read in as ``object`` and then apply + :func:`~pandas.to_datetime` as-needed. .. versionadded:: 2.0.0 dayfirst : bool, default False @@ -305,50 +307,53 @@ .. versionchanged:: 1.4.0 Zstandard support. -thousands : str, optional - Thousands separator. -decimal : str, default '.' - Character to recognize as decimal point (e.g. use ',' for European data). +thousands : str (length 1), optional + Character acting as the thousands separator in numerical values. +decimal : str (length 1), default '.' 
+ Character to recognize as decimal point (e.g., use ',' for European data). lineterminator : str (length 1), optional - Character to break file into lines. Only valid with C parser. + Character used to denote a line break. Only valid with C parser. quotechar : str (length 1), optional - The character used to denote the start and end of a quoted item. Quoted + Character used to denote the start and end of a quoted item. Quoted items can include the ``delimiter`` and it will be ignored. -quoting : int or csv.QUOTE_* instance, default 0 - Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of - ``QUOTE_MINIMAL`` (0), ``QUOTE_ALL`` (1), ``QUOTE_NONNUMERIC`` (2) or - ``QUOTE_NONE`` (3). +quoting : {{0 or csv.QUOTE_MINIMAL, 1 or csv.QUOTE_ALL, 2 or csv.QUOTE_NONNUMERIC, \ +3 or csv.QUOTE_NONE}}, default csv.QUOTE_MINIMAL + Control field quoting behavior per ``csv.QUOTE_*`` constants. Default is + ``csv.QUOTE_MINIMAL`` (i.e., 0) which implies that only fields containing special + characters are quoted (e.g., characters defined in ``quotechar``, ``delimiter``, + or ``lineterminator``). doublequote : bool, default True When ``quotechar`` is specified and ``quoting`` is not ``QUOTE_NONE``, indicate whether or not to interpret two consecutive ``quotechar`` elements INSIDE a field as a single ``quotechar`` element. escapechar : str (length 1), optional - One-character string used to escape other characters. -comment : str, optional - Indicates remainder of line should not be parsed. If found at the beginning + Character used to escape other characters. +comment : str (length 1), optional + Character indicating that the remainder of line should not be parsed. + If found at the beginning of a line, the line will be ignored altogether. This parameter must be a single character. Like empty lines (as long as ``skip_blank_lines=True``), fully commented lines are ignored by the parameter ``header`` but not by ``skiprows``. 
For example, if ``comment='#'``, parsing ``#empty\\na,b,c\\n1,2,3`` with ``header=0`` will result in ``'a,b,c'`` being treated as the header. -encoding : str, optional, default "utf-8" +encoding : str, optional, default 'utf-8' Encoding to use for UTF when reading/writing (ex. ``'utf-8'``). `List of Python standard encodings `_ . .. versionchanged:: 1.2 - When ``encoding`` is ``None``, ``errors="replace"`` is passed to - ``open()``. Otherwise, ``errors="strict"`` is passed to ``open()``. - This behavior was previously only the case for ``engine="python"``. + When ``encoding`` is ``None``, ``errors='replace'`` is passed to + ``open()``. Otherwise, ``errors='strict'`` is passed to ``open()``. + This behavior was previously only the case for ``engine='python'``. .. versionchanged:: 1.3.0 ``encoding_errors`` is a new argument. ``encoding`` has no longer an influence on how encoding errors are handled. -encoding_errors : str, optional, default "strict" +encoding_errors : str, optional, default 'strict' How encoding errors are treated. `List of possible values `_ . @@ -360,7 +365,7 @@ ``skipinitialspace``, ``quotechar``, and ``quoting``. If it is necessary to override values, a ``ParserWarning`` will be issued. See ``csv.Dialect`` documentation for more details. -on_bad_lines : {{'error', 'warn', 'skip'}} or callable, default 'error' +on_bad_lines : {{'error', 'warn', 'skip'}} or Callable, default 'error' Specifies what to do upon encountering a bad line (a line with too many fields). Allowed values are : @@ -378,11 +383,11 @@ If the function returns ``None``, the bad line will be ignored. If the function returns a new ``list`` of strings with more elements than expected, a ``ParserWarning`` will be emitted while dropping extra elements. - Only supported when ``engine="python"`` + Only supported when ``engine='python'`` delim_whitespace : bool, default False Specifies whether or not whitespace (e.g. ``' '`` or ``'\\t'``) will be - used as the sep. 
Equivalent to setting ``sep='\\s+'``. If this option + is set to ``True``, nothing should be passed in for the ``delimiter`` parameter. low_memory : bool, default True @@ -396,7 +401,7 @@ If a filepath is provided for ``filepath_or_buffer``, map the file object directly onto memory and access the data directly from there. Using this option can improve performance because there is no longer any I/O overhead. -float_precision : str, optional +float_precision : {{'high', 'legacy', 'round_trip'}}, optional Specifies which converter the C engine should use for floating-point values. The options are ``None`` or ``'high'`` for the ordinary converter, ``'legacy'`` for the original lower precision ``pandas`` converter, and @@ -408,13 +413,14 @@ .. versionadded:: 1.2 -dtype_backend : {{"numpy_nullable", "pyarrow"}}, defaults to NumPy backed DataFrame - Which ``dtype_backend`` to use, e.g. whether a :class:`~pandas.DataFrame` should - have NumPy arrays, nullable ``dtypes`` are used for all ``dtypes`` that have a - nullable implementation when ``"numpy_nullable"`` is set, pyarrow is used for all - dtypes if ``"pyarrow"`` is set. +dtype_backend : {{'numpy_nullable', 'pyarrow'}}, defaults to NumPy backed DataFrame + Back-end data type applied to the resultant :class:`~pandas.DataFrame`. If + ``'numpy_nullable'`` is set, nullable ``dtypes`` backed by NumPy arrays are + used for all ``dtypes`` that + have a nullable implementation; if ``'pyarrow'`` is set, pyarrow is used for + all ``dtypes``. - The ``dtype_backends`` are still experimential. + The ``dtype_backends`` are still experimental. .. versionadded:: 2.0
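The reworded parameter descriptions above (``sep``, ``header``, ``decimal``, ``na_values``, ``dtype``) can be exercised together; a minimal sketch with illustrative column names and data:

```python
from io import StringIO

import pandas as pd

# Illustrative CSV: ';' delimiter, European decimal commas, and a sentinel NA value.
data = "a;b;c\n1,5;x;7\n2,5;y;N/A\n"

df = pd.read_csv(
    StringIO(data),
    sep=";",            # single-character delimiter, as documented above
    header=0,           # row 0 holds the column labels
    decimal=",",        # ',' recognized as the decimal point
    na_values=["N/A"],  # additional string recognized as NaN
    dtype={"a": "float64"},
)
print(df["a"].tolist())  # [1.5, 2.5]
print(df["c"].isna().tolist())  # [False, True]
```

Passing ``dtype_backend='numpy_nullable'`` to the same call would instead yield nullable extension dtypes, per the corrected ``dtype_backend`` description.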