From 5d14ce67209c632d136e3a988716fc2e7d1c3dba Mon Sep 17 00:00:00 2001
From: Marc Garcia
Date: Fri, 14 Dec 2018 18:58:38 +0000
Subject: [PATCH] DOC: Removing tailing whitespaces in .rst files

---
 doc/source/advanced.rst            |  4 +-
 doc/source/categorical.rst         | 62 +++++++++++++++---------------
 doc/source/comparison_with_sas.rst | 28 +++++++-------
 doc/source/computation.rst        | 24 ++++++------
 doc/source/enhancingperf.rst       |  8 ++--
 doc/source/gotchas.rst             | 46 +++++++++++-----------
 doc/source/overview.rst            |  8 ++--
 doc/source/timedeltas.rst          |  2 +-
 doc/source/timeseries.rst          | 20 +++++-----
 doc/source/tutorials.rst           |  4 +-
 10 files changed, 103 insertions(+), 103 deletions(-)

diff --git a/doc/source/advanced.rst b/doc/source/advanced.rst
index 6b30f0226ecff..1ed365d09152b 100644
--- a/doc/source/advanced.rst
+++ b/doc/source/advanced.rst
@@ -54,7 +54,7 @@ can think of ``MultiIndex`` as an array of tuples where each tuple is unique. A
 ``MultiIndex`` can be created from a list of arrays (using
 :meth:`MultiIndex.from_arrays`), an array of tuples (using
 :meth:`MultiIndex.from_tuples`), a crossed set of iterables (using
-:meth:`MultiIndex.from_product`), or a :class:`DataFrame` (using 
+:meth:`MultiIndex.from_product`), or a :class:`DataFrame` (using
 :meth:`MultiIndex.from_frame`). The ``Index`` constructor will attempt to return
 a ``MultiIndex`` when it is passed a list of tuples. The following examples
 demonstrate different ways to initialize MultiIndexes.
@@ -81,7 +81,7 @@ to use the :meth:`MultiIndex.from_product` method:
    iterables = [['bar', 'baz', 'foo', 'qux'], ['one', 'two']]
    pd.MultiIndex.from_product(iterables, names=['first', 'second'])
 
-You can also construct a ``MultiIndex`` from a ``DataFrame`` directly, using 
+You can also construct a ``MultiIndex`` from a ``DataFrame`` directly, using
 the method :meth:`MultiIndex.from_frame`. This is a complementary method to
 :meth:`MultiIndex.to_frame`.
 
diff --git a/doc/source/categorical.rst b/doc/source/categorical.rst
index 721e032b8bb92..1d25b8cfabb80 100644
--- a/doc/source/categorical.rst
+++ b/doc/source/categorical.rst
@@ -11,8 +11,8 @@ with R's ``factor``.
 
 `Categoricals` are a pandas data type corresponding to categorical variables in
 statistics. A categorical variable takes on a limited, and usually fixed,
-number of possible values (`categories`; `levels` in R). Examples are gender, 
-social class, blood type, country affiliation, observation time or rating via 
+number of possible values (`categories`; `levels` in R). Examples are gender,
+social class, blood type, country affiliation, observation time or rating via
 Likert scales.
 
 In contrast to statistical categorical variables, categorical data might have an order (e.g.
@@ -133,7 +133,7 @@ This conversion is likewise done column by column:
 Controlling Behavior
 ~~~~~~~~~~~~~~~~~~~~
 
-In the examples above where we passed ``dtype='category'``, we used the default 
+In the examples above where we passed ``dtype='category'``, we used the default
 behavior:
 
 1. Categories are inferred from the data.
@@ -170,8 +170,8 @@ are consistent among all columns.
     categories for each column, the ``categories`` parameter can be determined
     programmatically by ``categories = pd.unique(df.to_numpy().ravel())``.
 
-If you already have ``codes`` and ``categories``, you can use the 
-:func:`~pandas.Categorical.from_codes` constructor to save the factorize step 
+If you already have ``codes`` and ``categories``, you can use the
+:func:`~pandas.Categorical.from_codes` constructor to save the factorize step
 during normal constructor mode:
 
 .. ipython:: python
@@ -184,7 +184,7 @@ during normal constructor mode:
 Regaining Original Data
 ~~~~~~~~~~~~~~~~~~~~~~~
 
-To get back to the original ``Series`` or NumPy array, use 
+To get back to the original ``Series`` or NumPy array, use
 ``Series.astype(original_dtype)`` or ``np.asarray(categorical)``:
 
 .. ipython:: python
@@ -222,7 +222,7 @@ This information can be stored in a :class:`~pandas.api.types.CategoricalDtype`.
 The ``categories`` argument is optional, which implies that the actual
 categories should be inferred from whatever is present in the data when the
 :class:`pandas.Categorical` is created. The categories are assumed to be unordered
-by default. 
+by default.
 
 .. ipython:: python
@@ -277,7 +277,7 @@ All instances of ``CategoricalDtype`` compare equal to the string ``'category'``
 Description
 -----------
 
-Using :meth:`~DataFrame.describe` on categorical data will produce similar 
+Using :meth:`~DataFrame.describe` on categorical data will produce similar
 output to a ``Series`` or ``DataFrame`` of type ``string``.
 
 .. ipython:: python
@@ -292,9 +292,9 @@ output to a ``Series`` or ``DataFrame`` of type ``string``.
 Working with categories
 -----------------------
 
-Categorical data has a `categories` and a `ordered` property, which list their 
-possible values and whether the ordering matters or not. These properties are 
-exposed as ``s.cat.categories`` and ``s.cat.ordered``. If you don't manually 
+Categorical data has a `categories` and a `ordered` property, which list their
+possible values and whether the ordering matters or not. These properties are
+exposed as ``s.cat.categories`` and ``s.cat.ordered``. If you don't manually
 specify categories and ordering, they are inferred from the passed arguments.
 
 .. ipython:: python
@@ -314,7 +314,7 @@ It's also possible to pass in the categories in a specific order:
 
 .. note::
 
-    New categorical data are **not** automatically ordered. You must explicitly 
+    New categorical data are **not** automatically ordered. You must explicitly
     pass ``ordered=True`` to indicate an ordered ``Categorical``.
 
 
@@ -338,8 +338,8 @@ It's also possible to pass in the categories in a specific order:
 Renaming categories
 ~~~~~~~~~~~~~~~~~~~
 
-Renaming categories is done by assigning new values to the 
-``Series.cat.categories`` property or by using the 
+Renaming categories is done by assigning new values to the
+``Series.cat.categories`` property or by using the
 :meth:`~pandas.Categorical.rename_categories` method:
 
 
@@ -385,7 +385,7 @@ Categories must also not be ``NaN`` or a `ValueError` is raised:
 Appending new categories
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-Appending categories can be done by using the 
+Appending categories can be done by using the
 :meth:`~pandas.Categorical.add_categories` method:
 
 .. ipython:: python
@@ -397,8 +397,8 @@ Appending categories can be done by using the
 Removing categories
 ~~~~~~~~~~~~~~~~~~~
 
-Removing categories can be done by using the 
-:meth:`~pandas.Categorical.remove_categories` method. Values which are removed 
+Removing categories can be done by using the
+:meth:`~pandas.Categorical.remove_categories` method. Values which are removed
 are replaced by ``np.nan``.:
 
 .. ipython:: python
@@ -421,8 +421,8 @@ Removing unused categories can also be done:
 Setting categories
 ~~~~~~~~~~~~~~~~~~
 
-If you want to do remove and add new categories in one step (which has some 
-speed advantage), or simply set the categories to a predefined scale, 
+If you want to do remove and add new categories in one step (which has some
+speed advantage), or simply set the categories to a predefined scale,
 use :meth:`~pandas.Categorical.set_categories`.
 
 
@@ -618,10 +618,10 @@ When you compare two unordered categoricals with the same categories, the order
 Operations
 ----------
 
-Apart from :meth:`Series.min`, :meth:`Series.max` and :meth:`Series.mode`, the 
+Apart from :meth:`Series.min`, :meth:`Series.max` and :meth:`Series.mode`, the
 following operations are possible with categorical data:
 
-``Series`` methods like :meth:`Series.value_counts` will use all categories, 
+``Series`` methods like :meth:`Series.value_counts` will use all categories,
 even if some categories are not present in the data:
 
 .. ipython:: python
@@ -666,7 +666,7 @@ that only values already in `categories` can be assigned.
 Getting
 ~~~~~~~
 
-If the slicing operation returns either a ``DataFrame`` or a column of type 
+If the slicing operation returns either a ``DataFrame`` or a column of type
 ``Series``, the ``category`` dtype is preserved.
 
 .. ipython:: python
@@ -681,7 +681,7 @@ If the slicing operation returns either a ``DataFrame`` or a column of type
    df.loc["h":"j", "cats"]
    df[df["cats"] == "b"]
 
-An example where the category type is not preserved is if you take one single 
+An example where the category type is not preserved is if you take one single
 row: the resulting ``Series`` is of dtype ``object``:
 
 .. ipython:: python
@@ -702,7 +702,7 @@ of length "1".
 The is in contrast to R's `factor` function, where ``factor(c(1,2,3))[1]``
 returns a single value `factor`.
 
-To get a single value ``Series`` of type ``category``, you pass in a list with 
+To get a single value ``Series`` of type ``category``, you pass in a list with
 a single value:
 
 .. ipython:: python
@@ -756,7 +756,7 @@ That means, that the returned values from methods and properties on the accessor
 Setting
 ~~~~~~~
 
-Setting values in a categorical column (or ``Series``) works as long as the 
+Setting values in a categorical column (or ``Series``) works as long as the
 value is included in the `categories`:
 
 .. ipython:: python
@@ -836,9 +836,9 @@ Unioning
 
 .. versionadded:: 0.19.0
 
-If you want to combine categoricals that do not necessarily have the same 
+If you want to combine categoricals that do not necessarily have the same
 categories, the :func:`~pandas.api.types.union_categoricals` function will
-combine a list-like of categoricals. The new categories will be the union of 
+combine a list-like of categoricals. The new categories will be the union of
 the categories being combined.
 
 .. ipython:: python
@@ -887,8 +887,8 @@ using the ``ignore_ordered=True`` argument.
    b = pd.Categorical(["c", "b", "a"], ordered=True)
    union_categoricals([a, b], ignore_order=True)
 
-:func:`~pandas.api.types.union_categoricals` also works with a 
-``CategoricalIndex``, or ``Series`` containing categorical data, but note that 
+:func:`~pandas.api.types.union_categoricals` also works with a
+``CategoricalIndex``, or ``Series`` containing categorical data, but note that
 the resulting array will always be a plain ``Categorical``:
 
 .. ipython:: python
@@ -1179,8 +1179,8 @@ Setting the index will create a ``CategoricalIndex``:
 Side Effects
 ~~~~~~~~~~~~
 
-Constructing a ``Series`` from a ``Categorical`` will not copy the input 
-``Categorical``. This means that changes to the ``Series`` will in most cases 
+Constructing a ``Series`` from a ``Categorical`` will not copy the input
+``Categorical``. This means that changes to the ``Series`` will in most cases
 change the original ``Categorical``:
 
 .. ipython:: python
diff --git a/doc/source/comparison_with_sas.rst b/doc/source/comparison_with_sas.rst
index d24647df81808..fc12c8524d3bf 100644
--- a/doc/source/comparison_with_sas.rst
+++ b/doc/source/comparison_with_sas.rst
@@ -364,7 +364,7 @@ String Processing
 Length
 ~~~~~~
 
-SAS determines the length of a character string with the 
+SAS determines the length of a character string with the
 `LENGTHN `__ and `LENGTHC `__
 functions. ``LENGTHN`` excludes trailing blanks and ``LENGTHC`` includes trailing blanks.
 
@@ -378,7 +378,7 @@ functions. ``LENGTHN`` excludes trailing blanks and ``LENGTHC`` includes trailin
    run;
 
 Python determines the length of a character string with the ``len`` function.
-``len`` includes trailing blanks. Use ``len`` and ``rstrip`` to exclude 
+``len`` includes trailing blanks. Use ``len`` and ``rstrip`` to exclude
 trailing blanks.
 
 .. ipython:: python
@@ -390,9 +390,9 @@ trailing blanks.
 Find
 ~~~~
 
-SAS determines the position of a character in a string with the 
+SAS determines the position of a character in a string with the
 `FINDW `__ function.
-``FINDW`` takes the string defined by the first argument and searches for the first position of the substring 
+``FINDW`` takes the string defined by the first argument and searches for the first position of the substring
 you supply as the second argument.
 
 .. code-block:: sas
@@ -402,10 +402,10 @@ you supply as the second argument.
    put(FINDW(sex,'ale'));
    run;
 
-Python determines the position of a character in a string with the 
-``find`` function. ``find`` searches for the first position of the 
-substring. If the substring is found, the function returns its 
-position. Keep in mind that Python indexes are zero-based and 
+Python determines the position of a character in a string with the
+``find`` function. ``find`` searches for the first position of the
+substring. If the substring is found, the function returns its
+position. Keep in mind that Python indexes are zero-based and
 the function will return -1 if it fails to find the substring.
 
 .. ipython:: python
@@ -416,7 +416,7 @@ the function will return -1 if it fails to find the substring.
 Substring
 ~~~~~~~~~
 
-SAS extracts a substring from a string based on its position with the 
+SAS extracts a substring from a string based on its position with the
 `SUBSTR `__ function.
 
 .. code-block:: sas
@@ -427,7 +427,7 @@ SAS extracts a substring from a string based on its position with the
    run;
 
 With pandas you can use ``[]`` notation to extract a substring
-from a string by position locations. Keep in mind that Python 
+from a string by position locations. Keep in mind that Python
 indexes are zero-based.
 
 .. ipython:: python
@@ -439,7 +439,7 @@ Scan
 ~~~~
 
 The SAS `SCAN `__
-function returns the nth word from a string. The first argument is the string you want to parse and the 
+function returns the nth word from a string. The first argument is the string you want to parse and the
 second argument specifies which word you want to extract.
 
 .. code-block:: sas
@@ -452,10 +452,10 @@
    John Smith;
    Jane Cook;
    ;;;
-   run; 
+   run;
 
-Python extracts a substring from a string based on its text 
-by using regular expressions. There are much more powerful 
+Python extracts a substring from a string based on its text
+by using regular expressions. There are much more powerful
 approaches, but this just shows a simple approach.
 
 .. ipython:: python
diff --git a/doc/source/computation.rst b/doc/source/computation.rst
index e72662be7730b..95142a7b83435 100644
--- a/doc/source/computation.rst
+++ b/doc/source/computation.rst
@@ -13,9 +13,9 @@ Statistical Functions
 Percent Change
 ~~~~~~~~~~~~~~
 
-``Series``, ``DataFrame``, and ``Panel`` all have a method 
-:meth:`~DataFrame.pct_change` to compute the percent change over a given number 
-of periods (using ``fill_method`` to fill NA/null values *before* computing 
+``Series``, ``DataFrame``, and ``Panel`` all have a method
+:meth:`~DataFrame.pct_change` to compute the percent change over a given number
+of periods (using ``fill_method`` to fill NA/null values *before* computing
 the percent change).
 
 .. ipython:: python
@@ -35,7 +35,7 @@ the percent change).
 Covariance
 ~~~~~~~~~~
 
-:meth:`Series.cov` can be used to compute covariance between series 
+:meth:`Series.cov` can be used to compute covariance between series
 (excluding missing values).
 
 .. ipython:: python
@@ -44,7 +44,7 @@ Covariance
    s2 = pd.Series(np.random.randn(1000))
   s1.cov(s2)
 
-Analogously, :meth:`DataFrame.cov` to compute pairwise covariances among the 
+Analogously, :meth:`DataFrame.cov` to compute pairwise covariances among the
 series in the DataFrame, also excluding NA/null values.
 
 .. _computation.covariance.caveats:
@@ -87,7 +87,7 @@ Correlation
 ~~~~~~~~~~~
 
 Correlation may be computed using the :meth:`~DataFrame.corr` method.
-Using the ``method`` parameter, several methods for computing correlations are 
+Using the ``method`` parameter, several methods for computing correlations are
 provided:
 
 .. csv-table::
@@ -158,8 +158,8 @@ compute the correlation based on histogram intersection:
 
    frame.corr(method=histogram_intersection)
 
-A related method :meth:`~DataFrame.corrwith` is implemented on DataFrame to 
-compute the correlation between like-labeled Series contained in different 
+A related method :meth:`~DataFrame.corrwith` is implemented on DataFrame to
+compute the correlation between like-labeled Series contained in different
 DataFrame objects.
 
 .. ipython:: python
@@ -176,7 +176,7 @@ DataFrame objects.
 Data ranking
 ~~~~~~~~~~~~
 
-The :meth:`~Series.rank` method produces a data ranking with ties being 
+The :meth:`~Series.rank` method produces a data ranking with ties being
 assigned the mean of the ranks (by default) for the group:
 
 .. ipython:: python
@@ -185,8 +185,8 @@ assigned the mean of the ranks (by default) for the group:
    s['d'] = s['b'] # so there's a tie
   s.rank()
 
-:meth:`~DataFrame.rank` is also a DataFrame method and can rank either the rows 
-(``axis=0``) or the columns (``axis=1``). ``NaN`` values are excluded from the 
+:meth:`~DataFrame.rank` is also a DataFrame method and can rank either the rows
+(``axis=0``) or the columns (``axis=1``). ``NaN`` values are excluded from the
 ranking.
 
 .. ipython:: python
@@ -637,7 +637,7 @@ perform multiple computations on the data. These operations are similar to the :
    r = dfa.rolling(window=60, min_periods=1)
   r
 
-We can aggregate by passing a function to the entire DataFrame, or select a 
+We can aggregate by passing a function to the entire DataFrame, or select a
 Series (or multiple Series) via standard ``__getitem__``.
 
 .. ipython:: python
diff --git a/doc/source/enhancingperf.rst b/doc/source/enhancingperf.rst
index 429ff91d88e41..a4a96eea4d8e2 100644
--- a/doc/source/enhancingperf.rst
+++ b/doc/source/enhancingperf.rst
@@ -7,10 +7,10 @@ Enhancing Performance
 *********************
 
 In this part of the tutorial, we will investigate how to speed up certain
-functions operating on pandas ``DataFrames`` using three different techniques: 
-Cython, Numba and :func:`pandas.eval`. We will see a speed improvement of ~200 
-when we use Cython and Numba on a test function operating row-wise on the 
-``DataFrame``. Using :func:`pandas.eval` we will speed up a sum by an order of 
+functions operating on pandas ``DataFrames`` using three different techniques:
+Cython, Numba and :func:`pandas.eval`. We will see a speed improvement of ~200
+when we use Cython and Numba on a test function operating row-wise on the
+``DataFrame``. Using :func:`pandas.eval` we will speed up a sum by an order of
 ~2.
 
 .. _enhancingperf.cython:
diff --git a/doc/source/gotchas.rst b/doc/source/gotchas.rst
index 853e9e4bdf574..7d1ba865d551d 100644
--- a/doc/source/gotchas.rst
+++ b/doc/source/gotchas.rst
@@ -11,9 +11,9 @@ Frequently Asked Questions (FAQ)
 DataFrame memory usage
 ----------------------
 The memory usage of a ``DataFrame`` (including the index) is shown when calling
-the :meth:`~DataFrame.info`. A configuration option, ``display.memory_usage`` 
-(see :ref:`the list of options `), specifies if the 
-``DataFrame``'s memory usage will be displayed when invoking the ``df.info()`` 
+the :meth:`~DataFrame.info`. A configuration option, ``display.memory_usage``
+(see :ref:`the list of options `), specifies if the
+``DataFrame``'s memory usage will be displayed when invoking the ``df.info()``
 method.
 
 For example, the memory usage of the ``DataFrame`` below is shown
@@ -45,10 +45,10 @@ as it can be expensive to do this deeper introspection.
 By default the display option is set to ``True`` but can be explicitly
 overridden by passing the ``memory_usage`` argument when invoking ``df.info()``.
 
-The memory usage of each column can be found by calling the 
-:meth:`~DataFrame.memory_usage` method. This returns a ``Series`` with an index 
-represented by column names and memory usage of each column shown in bytes. For 
-the ``DataFrame`` above, the memory usage of each column and the total memory 
+The memory usage of each column can be found by calling the
+:meth:`~DataFrame.memory_usage` method. This returns a ``Series`` with an index
+represented by column names and memory usage of each column shown in bytes. For
+the ``DataFrame`` above, the memory usage of each column and the total memory
 usage can be found with the ``memory_usage`` method:
 
 .. ipython:: python
@@ -67,7 +67,7 @@ the ``index=False`` argument:
    df.memory_usage(index=False)
 
 The memory usage displayed by the :meth:`~DataFrame.info` method utilizes the
-:meth:`~DataFrame.memory_usage` method to determine the memory usage of a 
+:meth:`~DataFrame.memory_usage` method to determine the memory usage of a
 ``DataFrame`` while also formatting the output in human-readable units (base-2
 representation; i.e. 1KB = 1024 bytes).
 
@@ -78,8 +78,8 @@ See also :ref:`Categorical Memory Usage `.
 Using If/Truth Statements with pandas
 -------------------------------------
 
-pandas follows the NumPy convention of raising an error when you try to convert 
-something to a ``bool``. This happens in an ``if``-statement or when using the 
+pandas follows the NumPy convention of raising an error when you try to convert
+something to a ``bool``. This happens in an ``if``-statement or when using the
 boolean operations: ``and``, ``or``, and ``not``. It is not clear what the result
 of the following code should be:
 
@@ -88,7 +88,7 @@ of the following code should be:
    >>> if pd.Series([False, True, False]):
   ...     pass
 
-Should it be ``True`` because it's not zero-length, or ``False`` because there 
+Should it be ``True`` because it's not zero-length, or ``False`` because there
 are ``False`` values? It is unclear, so instead, pandas raises a ``ValueError``:
 
 .. code-block:: python
@@ -118,7 +118,7 @@ Below is how to check if any of the values are ``True``:
    ...     print("I am any")
   I am any
 
-To evaluate single-element pandas objects in a boolean context, use the method 
+To evaluate single-element pandas objects in a boolean context, use the method
 :meth:`~DataFrame.bool`:
 
 .. ipython:: python
@@ -215,15 +215,15 @@ arrays. For example:
   s2.dtype
 
 This trade-off is made largely for memory and performance reasons, and also so
-that the resulting ``Series`` continues to be "numeric". One possibility is to 
+that the resulting ``Series`` continues to be "numeric". One possibility is to
 use ``dtype=object`` arrays instead.
 
 ``NA`` type promotions
 ~~~~~~~~~~~~~~~~~~~~~~
 
-When introducing NAs into an existing ``Series`` or ``DataFrame`` via 
-:meth:`~Series.reindex` or some other means, boolean and integer types will be 
-promoted to a different dtype in order to store the NAs. The promotions are 
+When introducing NAs into an existing ``Series`` or ``DataFrame`` via
+:meth:`~Series.reindex` or some other means, boolean and integer types will be
+promoted to a different dtype in order to store the NAs. The promotions are
 summarized in this table:
 
 .. csv-table::
@@ -279,9 +279,9 @@ integer arrays to floating when NAs must be introduced.
 Differences with NumPy
 ----------------------
 
-For ``Series`` and ``DataFrame`` objects, :meth:`~DataFrame.var` normalizes by 
-``N-1`` to produce unbiased estimates of the sample variance, while NumPy's 
-``var`` normalizes by N, which measures the variance of the sample. Note that 
+For ``Series`` and ``DataFrame`` objects, :meth:`~DataFrame.var` normalizes by
+``N-1`` to produce unbiased estimates of the sample variance, while NumPy's
+``var`` normalizes by N, which measures the variance of the sample. Note that
 :meth:`~DataFrame.cov` normalizes by ``N-1`` in both pandas and NumPy.
 
 
@@ -289,8 +289,8 @@ Thread-safety
 -------------
 
 As of pandas 0.11, pandas is not 100% thread safe. The known issues relate to
-the :meth:`~DataFrame.copy` method. If you are doing a lot of copying of 
-``DataFrame`` objects shared among threads, we recommend holding locks inside 
+the :meth:`~DataFrame.copy` method. If you are doing a lot of copying of
+``DataFrame`` objects shared among threads, we recommend holding locks inside
 the threads where the data copying occurs.
 
 See `this link `__
@@ -300,7 +300,7 @@ Byte-Ordering Issues
 --------------------
 Occasionally you may have to deal with data that were created on a machine with
-a different byte order than the one on which you are running Python. A common 
+a different byte order than the one on which you are running Python. A common
 symptom of this issue is an error like:
 
 .. code-block:: python-traceback
@@ -311,7 +311,7 @@ symptom of this issue is an error like:
 
 To deal with this issue you should convert the underlying NumPy array to the native
-system byte order *before* passing it to ``Series`` or ``DataFrame`` 
+system byte order *before* passing it to ``Series`` or ``DataFrame``
 constructors using something similar to the following:
 
 .. ipython:: python
diff --git a/doc/source/overview.rst b/doc/source/overview.rst
index 351cc09c07cab..b3b6ae54978ba 100644
--- a/doc/source/overview.rst
+++ b/doc/source/overview.rst
@@ -6,7 +6,7 @@
 Package overview
 ****************
 
-:mod:`pandas` is an open source, BSD-licensed library providing high-performance, 
+:mod:`pandas` is an open source, BSD-licensed library providing high-performance,
 easy-to-use data structures and data analysis tools for the `Python `__
 programming language.
 
@@ -87,8 +87,8 @@ pandas community experts can answer through `Stack Overflow
 Community
 ---------
 
-pandas is actively supported today by a community of like-minded individuals around 
-the world who contribute their valuable time and energy to help make open source 
+pandas is actively supported today by a community of like-minded individuals around
+the world who contribute their valuable time and energy to help make open source
 pandas possible. Thanks to `all of our contributors `__.
 
 If you're interested in contributing, please
@@ -110,7 +110,7 @@ Development Team
 -----------------
 The list of the Core Team members and more detailed information can be found
 on the `people’s page `__ of the governance repo.
- 
+
 Institutional Partners
 ----------------------
 
diff --git a/doc/source/timedeltas.rst b/doc/source/timedeltas.rst
index 8c4928cd8165e..37cf6afcb96a3 100644
--- a/doc/source/timedeltas.rst
+++ b/doc/source/timedeltas.rst
@@ -365,7 +365,7 @@ Generating Ranges of Time Deltas
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Similar to :func:`date_range`, you can construct regular ranges of a ``TimedeltaIndex``
-using :func:`timedelta_range`. The default frequency for ``timedelta_range`` is 
+using :func:`timedelta_range`. The default frequency for ``timedelta_range`` is
 calendar day:
 
 .. ipython:: python
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 2a6249bef112b..c29b9593fa59d 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -147,7 +147,7 @@ For example:
 
    pd.Period('2012-05', freq='D')
 
-:class:`Timestamp` and :class:`Period` can serve as an index. Lists of 
+:class:`Timestamp` and :class:`Period` can serve as an index. Lists of
 ``Timestamp`` and ``Period`` are automatically coerced to :class:`DatetimeIndex`
 and :class:`PeriodIndex` respectively.
 
@@ -212,7 +212,7 @@ you can pass the ``dayfirst`` flag:
    can't be parsed with the day being first it will be parsed as if
   ``dayfirst`` were False.
 
-If you pass a single string to ``to_datetime``, it returns a single ``Timestamp``. 
+If you pass a single string to ``to_datetime``, it returns a single ``Timestamp``.
 ``Timestamp`` can also accept string input, but it doesn't accept string parsing
 options like ``dayfirst`` or ``format``, so use ``to_datetime`` if these are required.
 
@@ -247,7 +247,7 @@ This could also potentially speed up the conversion considerably.
 
    pd.to_datetime('12-11-2010 00:00', format='%d-%m-%Y %H:%M')
 
-For more information on the choices available when specifying the ``format`` 
+For more information on the choices available when specifying the ``format``
 option, see the Python `datetime documentation`_.
 
 .. _datetime documentation: https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior
@@ -467,7 +467,7 @@ Custom Frequency Ranges
    This functionality was originally exclusive to ``cdate_range``, which is
   deprecated as of version 0.21.0 in favor of ``bdate_range``. Note that
   ``cdate_range`` only utilizes the ``weekmask`` and ``holidays`` parameters
-   when custom business day, 'C', is passed as the frequency string. Support has 
+   when custom business day, 'C', is passed as the frequency string. Support has
    been expanded with ``bdate_range`` to work with any custom frequency string.
 
 .. versionadded:: 0.21.0
@@ -582,7 +582,7 @@ would include matching times on an included date:
    dft
   dft['2013']
 
-This starts on the very first time in the month, and includes the last date and 
+This starts on the very first time in the month, and includes the last date and
 time for the month:
 
 .. ipython:: python
@@ -656,7 +656,7 @@ A timestamp string with minute resolution (or more accurate), gives a scalar ins
    series_minute['2011-12-31 23:59']
   series_minute['2011-12-31 23:59:00']
 
-If index resolution is second, then the minute-accurate timestamp gives a 
+If index resolution is second, then the minute-accurate timestamp gives a
 ``Series``.
 
 .. ipython:: python
@@ -719,9 +719,9 @@ With no defaults.
 Truncating & Fancy Indexing
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-A :meth:`~DataFrame.truncate` convenience function is provided that is similar 
-to slicing. Note that ``truncate`` assumes a 0 value for any unspecified date 
-component in a ``DatetimeIndex`` in contrast to slicing which returns any 
+A :meth:`~DataFrame.truncate` convenience function is provided that is similar
+to slicing. Note that ``truncate`` assumes a 0 value for any unspecified date
+component in a ``DatetimeIndex`` in contrast to slicing which returns any
 partially matching dates:
 
 .. ipython:: python
@@ -805,7 +805,7 @@ There are several time/date properties that one can access from ``Timestamp`` or
    is_year_end,"Logical indicating if last day of year (defined by frequency)"
   is_leap_year,"Logical indicating if the date belongs to a leap year"
 
-Furthermore, if you have a ``Series`` with datetimelike values, then you can 
+Furthermore, if you have a ``Series`` with datetimelike values, then you can
 access these properties via the ``.dt`` accessor, as detailed in the section
 on :ref:`.dt accessors`.
 
diff --git a/doc/source/tutorials.rst b/doc/source/tutorials.rst
index c07319fff777b..0ea0e04f9a1b3 100644
--- a/doc/source/tutorials.rst
+++ b/doc/source/tutorials.rst
@@ -28,7 +28,7 @@ give you some concrete examples for getting started with pandas. These are
 examples with real-world data, and all the bugs and weirdness that entails.
 
 For the table of contents, see the `pandas-cookbook GitHub
-repository `_. 
+repository `_.
 
 Learn Pandas by Hernan Rojas
 ----------------------------
@@ -56,7 +56,7 @@ For more resources, please visit the main `repository `_.
 The source may be found in the GitHub repository
 `TomAugspurger/effective-pandas `_.