From 1e104cc04a7b03f259cd225d8aef0d5998cc5ee2 Mon Sep 17 00:00:00 2001 From: Joris Van den Bossche Date: Thu, 7 Apr 2016 23:07:57 +0200 Subject: [PATCH] DOC: fix code-block ipython highlighting --- doc/source/advanced.rst | 4 +-- doc/source/basics.rst | 4 +-- doc/source/computation.rst | 2 +- doc/source/enhancingperf.rst | 12 ++++----- doc/source/indexing.rst | 2 +- doc/source/io.rst | 10 +++---- doc/source/options.rst | 2 +- doc/source/release.rst | 2 +- doc/source/remote_data.rst | 12 ++++----- doc/source/timeseries.rst | 4 +-- doc/source/whatsnew/v0.10.0.txt | 2 +- doc/source/whatsnew/v0.12.0.txt | 2 +- doc/source/whatsnew/v0.13.0.txt | 8 +++--- doc/source/whatsnew/v0.14.0.txt | 4 +-- doc/source/whatsnew/v0.14.1.txt | 2 +- doc/source/whatsnew/v0.15.0.txt | 30 ++++++++++----------- doc/source/whatsnew/v0.15.1.txt | 12 ++++----- doc/source/whatsnew/v0.15.2.txt | 6 ++--- doc/source/whatsnew/v0.16.0.txt | 22 +++++++-------- doc/source/whatsnew/v0.16.1.txt | 2 +- doc/source/whatsnew/v0.17.0.txt | 26 +++++++++--------- doc/source/whatsnew/v0.18.0.txt | 48 ++++++++++++++++----------------- doc/source/whatsnew/v0.18.1.txt | 4 +-- doc/source/whatsnew/v0.9.1.txt | 2 +- 24 files changed, 112 insertions(+), 112 deletions(-) diff --git a/doc/source/advanced.rst index 4d1354a515b1c..ef2df3f925e6b 100644 --- a/doc/source/advanced.rst +++ b/doc/source/advanced.rst @@ -790,7 +790,7 @@ In float indexes, slicing using floats is allowed In non-float indexes, slicing using floats will raise a ``TypeError`` -.. code-block:: python +.. code-block:: ipython In [1]: pd.Series(range(5))[3.5] TypeError: the label [3.5] is not a proper indexer for this index type (Int64Index) @@ -802,7 +802,7 @@ In non-float indexes, slicing using floats will raise a ``TypeError`` Using a scalar float indexer for ``.iloc`` has been removed in 0.18.0, so the following will raise a ``TypeError`` - .. code-block:: python + .. code-block:: ipython In [3]: pd.Series(range(5)).iloc[3.0] TypeError: cannot do positional indexing on with these indexers [3.0] of diff --git a/doc/source/basics.rst index 1e30921e7248f..e3b0915cd571d 100644 --- a/doc/source/basics.rst +++ b/doc/source/basics.rst @@ -272,7 +272,7 @@ To evaluate single-element pandas objects in a boolean context, use the method .. code-block:: python - >>>if df: + >>> if df: ... Or @@ -352,7 +352,7 @@ objects of the same length: Trying to compare ``Index`` or ``Series`` objects of different lengths will raise a ValueError: -.. code-block:: python +.. code-block:: ipython In [55]: pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo', 'bar']) ValueError: Series lengths must match to compare diff --git a/doc/source/computation.rst index d247f79c00a46..59675e33e724b 100644 --- a/doc/source/computation.rst +++ b/doc/source/computation.rst @@ -236,7 +236,7 @@ These are created from methods on ``Series`` and ``DataFrame``. These objects provide tab-completion of the available methods and properties. -.. code-block:: python +.. code-block:: ipython In [14]: r. r.agg r.apply r.count r.exclusions r.max r.median r.name r.skew r.sum diff --git a/doc/source/enhancingperf.rst index b4b79a87f898a..a4db4b7c0d953 100644 --- a/doc/source/enhancingperf.rst +++ b/doc/source/enhancingperf.rst @@ -68,7 +68,7 @@ Here's the function in pure python:
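(For orientation: the hunk context above names ``integrate_f`` but elides its body. A minimal sketch of the pure-python version being timed, reconstructed from the surrounding doc text, so the exact body here is an assumption:)

.. code-block:: python

    def f(x):
        return x * (x - 1)

    def integrate_f(a, b, N):
        # naive Riemann-sum integration of f over [a, b] in N steps
        s = 0
        dx = (b - a) / N
        for i in range(N):
            s += f(a + i * dx)
        return s * dx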
We achieve our result by using ``apply`` (row-wise): -.. code-block:: python +.. code-block:: ipython In [7]: %timeit df.apply(lambda x: integrate_f(x['a'], x['b'], x['N']), axis=1) 10 loops, best of 3: 174 ms per loop @@ -125,7 +125,7 @@ is here to distinguish between function versions): to be using bleeding edge ipython for paste to play well with cell magics. -.. code-block:: python +.. code-block:: ipython In [4]: %timeit df.apply(lambda x: integrate_f_plain(x['a'], x['b'], x['N']), axis=1) 10 loops, best of 3: 85.5 ms per loop @@ -154,7 +154,7 @@ We get another huge improvement simply by providing type information: ...: return s * dx ...: -.. code-block:: python +.. code-block:: ipython In [4]: %timeit df.apply(lambda x: integrate_f_typed(x['a'], x['b'], x['N']), axis=1) 10 loops, best of 3: 20.3 ms per loop @@ -234,7 +234,7 @@ the rows, applying our ``integrate_f_typed``, and putting this in the zeros arra Loops like this would be *extremely* slow in python, but in Cython looping over numpy arrays is *fast*. -.. code-block:: python +.. code-block:: ipython In [4]: %timeit apply_integrate_f(df['a'].values, df['b'].values, df['N'].values) 1000 loops, best of 3: 1.25 ms per loop @@ -284,7 +284,7 @@ advanced cython techniques: ...: return res ...: -.. code-block:: python +.. code-block:: ipython In [4]: %timeit apply_integrate_f_wrap(df['a'].values, df['b'].values, df['N'].values) 1000 loops, best of 3: 987 us per loop @@ -348,7 +348,7 @@ Using ``numba`` to just-in-time compile your code. We simply take the plain pyth Note that we directly pass ``numpy`` arrays to the numba function. ``compute_numba`` is just a wrapper that provides a nicer interface by passing/returning pandas objects. -.. code-block:: python +.. code-block:: ipython In [4]: %timeit compute_numba(df) 1000 loops, best of 3: 798 us per loop diff --git a/doc/source/indexing.rst index 04b166dacf2b7..5afe69791bbdf 100644 --- a/doc/source/indexing.rst +++ b/doc/source/indexing.rst @@ -297,7 +297,7 @@ Selection By Label dfl = pd.DataFrame(np.random.randn(5,4), columns=list('ABCD'), index=pd.date_range('20130101',periods=5)) dfl - .. code-block:: python + .. code-block:: ipython In [4]: dfl.loc[2:3] TypeError: cannot do slice indexing on with these indexers [2] of diff --git a/doc/source/io.rst index 6b287a2eea532..351a7059b2739 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -4375,7 +4375,7 @@ Creating BigQuery Tables As of 0.15.2, the gbq module has a function :func:`~pandas.io.gbq.generate_bq_schema` which will produce the dictionary representation schema of the specified pandas DataFrame. -.. code-block:: python +.. code-block:: ipython In [10]: gbq.generate_bq_schema(df, default_type='STRING') @@ -4633,7 +4633,7 @@ Performance Considerations This is an informal comparison of various IO methods, using pandas 0.13.1. -.. code-block:: python +.. code-block:: ipython In [1]: df = DataFrame(randn(1000000,2),columns=list('AB')) @@ -4648,7 +4648,7 @@ This is an informal comparison of various IO methods, using pandas 0.13.1. Writing -.. code-block:: python +.. code-block:: ipython In [14]: %timeit test_sql_write(df) 1 loops, best of 3: 6.24 s per loop @@ -4670,7 +4670,7 @@ Writing Reading -.. code-block:: python +.. code-block:: ipython In [18]: %timeit test_sql_read() 1 loops, best of 3: 766 ms per loop @@ -4692,7 +4692,7 @@ Reading Space on disk (in bytes) -.. code-block:: python +.. code-block:: none 25843712 Apr 8 14:11 test.sql 24007368 Apr 8 14:11 test_fixed.hdf
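(The ``test_*`` helpers timed above are defined earlier in io.rst and are not part of this patch; a minimal sketch of the SQL pair, assuming sqlite3 and the file names shown. The other helpers follow the same shape:)

.. code-block:: python

    import os
    import sqlite3

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.randn(1000000, 2), columns=list('AB'))

    def test_sql_write(df):
        # recreate the database file on each run so timings stay comparable
        if os.path.exists('test.sql'):
            os.remove('test.sql')
        con = sqlite3.connect('test.sql')
        df.to_sql(name='test_table', con=con)
        con.close()

    def test_sql_read():
        con = sqlite3.connect('test.sql')
        result = pd.read_sql_query("select * from test_table", con)
        con.close()
        return result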
diff --git a/doc/source/options.rst index 98187d7be762e..d761d827006be 100644 --- a/doc/source/options.rst +++ b/doc/source/options.rst @@ -130,7 +130,7 @@ Setting Startup Options in python/ipython Environment Using startup scripts for the python/ipython environment to import pandas and set options makes working with pandas more efficient. To do this, create a .py or .ipy script in the startup directory of the desired profile. An example where the startup folder is in a default ipython profile can be found at: -.. code-block:: python +.. code-block:: none $IPYTHONDIR/profile_default/startup diff --git a/doc/source/release.rst index 3ae20e3202efc..715df2b6bd018 100644 --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -1521,7 +1521,7 @@ API Changes of the future import. You can use ``//`` and ``floordiv`` to do integer division. -.. code-block:: python +.. code-block:: ipython In [3]: arr = np.array([1, 2, 3, 4]) diff --git a/doc/source/remote_data.rst index 01eba8e826039..842fcb6896680 100644 --- a/doc/source/remote_data.rst +++ b/doc/source/remote_data.rst @@ -192,7 +192,7 @@ every world bank indicator is accessible. For example, if you wanted to compare the Gross Domestic Products per capita in constant dollars in North America, you would use the ``search`` function: -.. code-block:: python +.. code-block:: ipython In [1]: from pandas.io import wb @@ -207,7 +207,7 @@ constant dollars in North America, you would use the ``search`` function: Then you would use the ``download`` function to acquire the data from the World Bank's servers: -.. code-block:: python +.. code-block:: ipython In [3]: dat = wb.download(indicator='NY.GDP.PCAP.KD', country=['US', 'CA', 'MX'], start=2005, end=2008) @@ -230,7 +230,7 @@ Bank's servers: The resulting dataset is a properly formatted ``DataFrame`` with a hierarchical index, so it is easy to apply ``.groupby`` transformations to it: -.. code-block:: python +.. code-block:: ipython In [6]: dat['NY.GDP.PCAP.KD'].groupby(level=0).mean() Out[6]: @@ -243,7 +243,7 @@ index, so it is easy to apply ``.groupby`` transformations to it: Now imagine you want to compare GDP to the share of people with cellphone contracts around the world. -.. code-block:: python +.. code-block:: ipython In [7]: wb.search('cell.*%').iloc[:,:2] Out[7]: @@ -255,7 +255,7 @@ contracts around the world. Notice that this second search was much faster than the first one because ``pandas`` now has a cached list of available data series. -.. code-block:: python +.. code-block:: ipython In [13]: ind = ['NY.GDP.PCAP.KD', 'IT.MOB.COV.ZS'] In [14]: dat = wb.download(indicator=ind, country='all', start=2011, end=2011).dropna() @@ -273,7 +273,7 @@ Finally, we use the ``statsmodels`` package to assess the relationship between our two variables using ordinary least squares regression. Unsurprisingly, populations in rich countries tend to use cellphones at a higher rate: -.. code-block:: python +.. code-block:: ipython In [17]: import numpy as np In [18]: import statsmodels.formula.api as smf
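(Putting the remote_data hunks together, a sketch of the full workflow. The column renaming is an assumption, since the patch only shows the imports and the ``wb.download`` call, and ``pandas.io.wb`` later moved to the separate pandas-datareader package:)

.. code-block:: python

    import numpy as np
    import statsmodels.formula.api as smf
    from pandas.io import wb  # moved to pandas-datareader in later versions

    ind = ['NY.GDP.PCAP.KD', 'IT.MOB.COV.ZS']
    dat = wb.download(indicator=ind, country='all', start=2011, end=2011).dropna()
    dat.columns = ['gdp', 'cellphone']  # assumed renaming for the formula below

    # OLS regression of cellphone coverage on log GDP per capita
    print(smf.ols('cellphone ~ np.log(gdp)', data=dat).fit().summary())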
diff --git a/doc/source/timeseries.rst index 92b904bc683f4..b52612a857925 100644 --- a/doc/source/timeseries.rst +++ b/doc/source/timeseries.rst @@ -1487,7 +1487,7 @@ If ``Period`` freq is daily or higher (``D``, ``H``, ``T``, ``S``, ``L``, ``U``, p + timedelta(minutes=120) p + np.timedelta64(7200, 's') -.. code-block:: python +.. code-block:: ipython In [1]: p + Minute(5) Traceback @@ -1501,7 +1501,7 @@ If ``Period`` has other freqs, only the same ``offsets`` can be added. Otherwise p = Period('2014-07', freq='M') p + MonthEnd(3) -.. code-block:: python +.. code-block:: ipython In [1]: p + MonthBegin(3) Traceback diff --git a/doc/source/whatsnew/v0.10.0.txt index f409be7dd0f41..ce20de654ffd8 100644 --- a/doc/source/whatsnew/v0.10.0.txt +++ b/doc/source/whatsnew/v0.10.0.txt @@ -70,7 +70,7 @@ frequencies are unaffected. The prior defaults were causing a great deal of confusion for users, especially resampling data to daily frequency (which labeled the aggregated group with the end of the interval: the next day). -.. code-block:: python +.. code-block:: ipython In [1]: dates = pd.date_range('1/1/2000', '1/5/2000', freq='4h') diff --git a/doc/source/whatsnew/v0.12.0.txt index 4c7d799ec5202..c4188898bdf71 100644 --- a/doc/source/whatsnew/v0.12.0.txt +++ b/doc/source/whatsnew/v0.12.0.txt @@ -252,7 +252,7 @@ I/O Enhancements - Iterator support via ``read_hdf`` that automatically opens and closes the store when iteration is finished. This is only for *tables* - .. code-block:: python + .. code-block:: ipython In [25]: path = 'store_iterator.h5' diff --git a/doc/source/whatsnew/v0.13.0.txt index 8e3e8feebdaed..e8f2f54b873d6 100644 --- a/doc/source/whatsnew/v0.13.0.txt +++ b/doc/source/whatsnew/v0.13.0.txt @@ -80,7 +80,7 @@ API changes Integer division - .. code-block:: python + .. code-block:: ipython In [3]: arr = np.array([1, 2, 3, 4]) @@ -99,7 +99,7 @@ API changes True Division - .. code-block:: python + .. code-block:: ipython In [7]: pd.Series(arr) / pd.Series(arr2) # no future import required Out[7]: @@ -304,7 +304,7 @@ Float64Index API Change - Indexing on other index types is preserved (and positional fallback for ``[],ix``), with the exception that floating point slicing on indexes on non ``Float64Index`` will now raise a ``TypeError``. - .. code-block:: python + .. code-block:: ipython In [1]: Series(range(5))[3.5] TypeError: the label [3.5] is not a proper indexer for this index type (Int64Index) @@ -314,7 +314,7 @@ Float64Index API Change Using a scalar float indexer will be deprecated in a future version, but is allowed for now. - .. code-block:: python + .. code-block:: ipython In [3]: Series(range(5))[3.0] Out[3]: 3
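(To make the division change concrete, a small runnable sketch using only standard pandas/numpy APIs:)

.. code-block:: python

    import numpy as np
    import pandas as pd

    arr = np.array([1, 2, 3, 4])
    arr2 = np.array([5, 3, 2, 1])

    # Series division is "true" division and returns floats
    pd.Series(arr) / pd.Series(arr2)       # 0.2, 0.666..., 1.5, 4.0

    # integer (floor) division must be requested explicitly
    pd.Series(arr) // pd.Series(arr2)      # 0, 0, 1, 4
    pd.Series(arr).floordiv(pd.Series(arr2))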
diff --git a/doc/source/whatsnew/v0.14.0.txt index 67928af30bead..a91e0ab9e4961 100644 --- a/doc/source/whatsnew/v0.14.0.txt +++ b/doc/source/whatsnew/v0.14.0.txt @@ -170,7 +170,7 @@ API changes :ref:`Computing rolling pairwise covariances and correlations ` in the docs. - .. code-block:: python + .. code-block:: ipython In [1]: df = DataFrame(np.random.randn(10,4),columns=list('ABCD')) @@ -661,7 +661,7 @@ Deprecations - Indexers will warn ``FutureWarning`` when used with a scalar indexer and a non-floating point Index (:issue:`4892`, :issue:`6960`) - .. code-block:: python + .. code-block:: ipython # non-floating point indexes can only be indexed by integers / labels In [1]: Series(1,np.arange(5))[3.0] diff --git a/doc/source/whatsnew/v0.14.1.txt index 9e19161847327..84f2a77203c41 100644 --- a/doc/source/whatsnew/v0.14.1.txt +++ b/doc/source/whatsnew/v0.14.1.txt @@ -48,7 +48,7 @@ API changes offsets (BusinessMonthBegin, MonthEnd, BusinessMonthEnd, CustomBusinessMonthEnd, BusinessYearBegin, LastWeekOfMonth, FY5253Quarter, Easter): - .. code-block:: python + .. code-block:: ipython In [6]: from pandas.tseries import offsets diff --git a/doc/source/whatsnew/v0.15.0.txt index 3d992206cb426..df1171fb34486 100644 --- a/doc/source/whatsnew/v0.15.0.txt +++ b/doc/source/whatsnew/v0.15.0.txt @@ -112,7 +112,7 @@ This type is very similar to how ``Timestamp`` works for ``datetimes``. It is a ``Timedelta`` scalars (and ``TimedeltaIndex``) component fields are *not the same* as the component fields on a ``datetime.timedelta`` object. For example, ``.seconds`` on a ``datetime.timedelta`` object returns the total number of seconds combined between ``hours``, ``minutes`` and ``seconds``. In contrast, the pandas ``Timedelta`` breaks out hours, minutes, microseconds and nanoseconds separately. - .. code-block:: python + .. code-block:: ipython # Timedelta accessor In [9]: tds = Timedelta('31 days 5 min 3 sec') @@ -346,14 +346,14 @@ Rolling/Expanding Moments improvements s = Series([10, 11, 12, 13]) - .. code-block:: python + .. code-block:: ipython In [15]: rolling_min(s, window=10, min_periods=5) ValueError: min_periods (5) must be <= window (4) New behavior - .. code-block:: python + .. code-block:: ipython In [4]: pd.rolling_min(s, window=10, min_periods=5) Out[4]: @@ -375,7 +375,7 @@ Rolling/Expanding Moments improvements Prior behavior (note final value is ``NaN``): - .. code-block:: python + .. code-block:: ipython In [7]: rolling_sum(Series(range(4)), window=3, min_periods=0, center=True) Out[7]: @@ -387,7 +387,7 @@ Rolling/Expanding Moments improvements New behavior (note final value is ``5 = sum([2, 3, NaN])``): - .. code-block:: python + .. code-block:: ipython In [7]: rolling_sum(Series(range(4)), window=3, min_periods=0, center=True) Out[7]: @@ -407,7 +407,7 @@ Rolling/Expanding Moments improvements Behavior prior to 0.15.0: - .. code-block:: python + .. code-block:: ipython In [39]: rolling_window(s, window=3, win_type='triang', center=True) Out[39]: @@ -420,7 +420,7 @@ Rolling/Expanding Moments improvements New behavior - .. code-block:: python + .. code-block:: ipython In [10]: pd.rolling_window(s, window=3, win_type='triang', center=True) Out[10]: @@ -454,7 +454,7 @@ Rolling/Expanding Moments improvements s = Series([1, None, None, None, 2, 3]) - .. code-block:: python + .. code-block:: ipython In [51]: ewma(s, com=3., min_periods=2) Out[51]: @@ -468,7 +468,7 @@ Rolling/Expanding Moments improvements New behavior (note values start at index ``4``, the location of the 2nd (since ``min_periods=2``) non-empty value): - .. code-block:: python + .. code-block:: ipython In [2]: pd.ewma(s, com=3., min_periods=2) Out[2]:
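(The module-level ``pd.rolling_min``/``pd.ewma`` functions in these hunks were deprecated in 0.18.0 and later removed; for readers trying the examples on current pandas, a sketch of the method equivalents:)

.. code-block:: python

    import pandas as pd

    s = pd.Series([10, 11, 12, 13])

    # modern spelling of pd.rolling_min(s, window=10, min_periods=5);
    # with only 4 observations this returns all-NaN instead of raising
    s.rolling(window=10, min_periods=5).min()

    # modern spelling of pd.ewma(s, com=3., min_periods=2)
    s.ewm(com=3., min_periods=2).mean()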
@@ -492,7 +492,7 @@ Rolling/Expanding Moments improvements When ``ignore_na=True`` (which reproduces the pre-0.15.0 behavior), missing values are ignored in the weights calculation. (:issue:`7543`) - .. code-block:: python + .. code-block:: ipython In [7]: pd.ewma(Series([None, 1., 8.]), com=2.) Out[7]: @@ -547,7 +547,7 @@ Rolling/Expanding Moments improvements s = Series([1., 2., 0., 4.]) - .. code-block:: python + .. code-block:: ipython In [89]: ewmvar(s, com=2., bias=False) Out[89]: @@ -569,7 +569,7 @@ By comparison, the following 0.15.0 results have a ``NaN`` for entry ``0``, and the debiasing factors are decreasing (towards 1.25): - .. code-block:: python + .. code-block:: ipython In [14]: pd.ewmvar(s, com=2., bias=False) Out[14]: @@ -637,7 +637,7 @@ for more details): will have to be adapted to the following to keep the same behaviour: - .. code-block:: python + .. code-block:: ipython In [2]: pd.Categorical.from_codes([0,1,0,2,1], categories=['a', 'b', 'c']) Out[2]: @@ -747,7 +747,7 @@ Other notable API changes: Behavior prior to v0.15.0 - .. code-block:: python + .. code-block:: ipython # the original object @@ -1037,7 +1037,7 @@ Other: - ``Index.isin`` now supports a ``level`` argument to specify which index level to use for membership tests (:issue:`7892`, :issue:`7890`) - .. code-block:: python + .. code-block:: ipython In [1]: idx = MultiIndex.from_product([[0, 1], ['a', 'b', 'c']]) diff --git a/doc/source/whatsnew/v0.15.1.txt index 79efa2b278ae7..2a4104c2d5dc4 100644 --- a/doc/source/whatsnew/v0.15.1.txt +++ b/doc/source/whatsnew/v0.15.1.txt @@ -26,7 +26,7 @@ API changes previous behavior: - .. code-block:: python + .. code-block:: ipython In [6]: s.dt.hour Out[6]: @@ -57,7 +57,7 @@ API changes previous behavior: - .. code-block:: python + .. code-block:: ipython In [4]: df.groupby(ts, as_index=False).max() Out[4]: @@ -83,7 +83,7 @@ API changes previous behavior (excludes 1st column from output): - .. code-block:: python + .. code-block:: ipython In [4]: gr.apply(sum) Out[4]: @@ -108,7 +108,7 @@ API changes previous behavior: - .. code-block:: python + .. code-block:: ipython In [8]: s.loc[3.5:1.5] KeyError: 3.5 @@ -180,7 +180,7 @@ Enhancements previous behavior: - .. code-block:: python + .. code-block:: ipython In [7]: pd.concat(deque((df1, df2))) TypeError: first argument must be a list-like of pandas objects, you passed an object of type "deque" @@ -199,7 +199,7 @@ Enhancements previous behavior: - .. code-block:: python + .. code-block:: ipython # this was underreported in prior versions In [1]: dfi.memory_usage(index=True) diff --git a/doc/source/whatsnew/v0.15.2.txt index a2597757c3353..3a62ac38f7260 100644 --- a/doc/source/whatsnew/v0.15.2.txt +++ b/doc/source/whatsnew/v0.15.2.txt @@ -44,7 +44,7 @@ API changes whether they were "used" or not (see :issue:`8559` for the discussion). Previous behaviour was to return all categories: - .. code-block:: python + .. code-block:: ipython In [3]: cat = pd.Categorical(['a', 'b', 'a'], categories=['a', 'b', 'c']) @@ -81,7 +81,7 @@ API changes Old behavior: - .. code-block:: python + .. code-block:: ipython In [6]: data.y Out[6]: 2 @@ -102,7 +102,7 @@ API changes Old behavior: - .. code-block:: python + .. code-block:: ipython In [1]: s = pd.Series(np.arange(3), ['a', 'b', 'c']) Out[1]:
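(A quick illustration of the ``Categorical.from_codes`` call from the v0.15.0 hunk above, with the same arguments; standard pandas API:)

.. code-block:: python

    import pandas as pd

    # codes are positions into `categories`; -1 would mean NaN
    cat = pd.Categorical.from_codes([0, 1, 0, 2, 1], categories=['a', 'b', 'c'])
    list(cat)        # ['a', 'b', 'a', 'c', 'b']
    cat.categories   # Index(['a', 'b', 'c'], dtype='object')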
diff --git a/doc/source/whatsnew/v0.16.0.txt index a78d776403528..68a558a2b7fd0 100644 --- a/doc/source/whatsnew/v0.16.0.txt +++ b/doc/source/whatsnew/v0.16.0.txt @@ -225,7 +225,7 @@ So in v0.16.0, we are restoring the API to match that of ``datetime.timedelta``. Previous Behavior -.. code-block:: python +.. code-block:: ipython In [2]: t = pd.Timedelta('1 day, 10:11:12.100123') @@ -274,7 +274,7 @@ The behavior of a small sub-set of edge cases for using ``.loc`` has changed (: Previous Behavior - .. code-block:: python + .. code-block:: ipython In [4]: df.loc['2013-01-02':'2013-01-10'] KeyError: 'stop bound [2013-01-10] is not in the [index]' @@ -293,7 +293,7 @@ The behavior of a small sub-set of edge cases for using ``.loc`` has changed (: Previous Behavior - .. code-block:: python + .. code-block:: ipython In [8]: s.ix[-1.0:2] TypeError: the slice start value [-1.0] is not a proper indexer for this index type (Int64Index) @@ -315,7 +315,7 @@ The behavior of a small sub-set of edge cases for using ``.loc`` has changed (: New Behavior - .. code-block:: python + .. code-block:: ipython In [4]: df.loc[2:3] TypeError: Cannot do slice indexing on with keys @@ -332,7 +332,7 @@ Furthermore, previously you *could* change the ``ordered`` attribute of a Catego Previous Behavior -.. code-block:: python +.. code-block:: ipython In [3]: s = Series([0,1,2], dtype='category') @@ -394,14 +394,14 @@ Other API Changes Previously data was coerced to a common dtype before serialisation, which for example resulted in integers being serialised to floats: - .. code-block:: python + .. code-block:: ipython In [2]: pd.DataFrame({'i': [1,2], 'f': [3.0, 4.2]}).to_json() Out[2]: '{"f":{"0":3.0,"1":4.2},"i":{"0":1.0,"1":2.0}}' Now each column is serialised using its correct dtype: - .. code-block:: python + .. code-block:: ipython In [2]: pd.DataFrame({'i': [1,2], 'f': [3.0, 4.2]}).to_json() Out[2]: '{"f":{"0":3.0,"1":4.2},"i":{"0":1,"1":2}}' @@ -417,7 +417,7 @@ Other API Changes Previous Behavior - .. code-block:: python + .. code-block:: ipython In [2]: pd.Series([0,1,2,3], list('abcd')) | pd.Series([4,4,4,4], list('abcd')) Out[2]: @@ -430,7 +430,7 @@ Other API Changes New Behavior. If the input dtypes are integral, the output dtype is also integral and the output values are the result of the bitwise operation. - .. code-block:: python + .. code-block:: ipython In [2]: pd.Series([0,1,2,3], list('abcd')) | pd.Series([4,4,4,4], list('abcd')) Out[2]: @@ -445,7 +445,7 @@ Other API Changes Previous Behavior - .. code-block:: python + .. code-block:: ipython In [2]: p = pd.Series([0, 1]) @@ -478,7 +478,7 @@ Other API Changes Old behavior: - .. code-block:: python + .. code-block:: ipython In [4]: pd.to_datetime(['2000-01-31', '2000-02-28']).asof('2000-02') Out[4]: Timestamp('2000-01-31 00:00:00')
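(For the bitwise-operation change above, the integer results can be verified directly; standard pandas API:)

.. code-block:: python

    import pandas as pd

    s1 = pd.Series([0, 1, 2, 3], list('abcd'))
    s2 = pd.Series([4, 4, 4, 4], list('abcd'))

    s1 | s2   # 4, 5, 6, 7 (integral dtype preserved since 0.16.0)
    s1 & s2   # 0, 0, 0, 0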
diff --git a/doc/source/whatsnew/v0.16.1.txt index e1a58a443aa55..1a3b8319aeb59 100755 --- a/doc/source/whatsnew/v0.16.1.txt +++ b/doc/source/whatsnew/v0.16.1.txt @@ -287,7 +287,7 @@ The string representation of ``Index`` and its sub-classes has now been unified Previous Behavior -.. code-block:: python +.. code-block:: ipython In [2]: pd.Index(range(4),name='foo') Out[2]: Int64Index([0, 1, 2, 3], dtype='int64') diff --git a/doc/source/whatsnew/v0.17.0.txt index 92eafdac387fa..ef9785d25f014 100644 --- a/doc/source/whatsnew/v0.17.0.txt +++ b/doc/source/whatsnew/v0.17.0.txt @@ -102,7 +102,7 @@ This uses a new-dtype representation as well, that is very similar in look-and-f Previous Behavior: - .. code-block:: python + .. code-block:: ipython In [1]: pd.date_range('20130101',periods=3,tz='US/Eastern') Out[1]: DatetimeIndex(['2013-01-01 00:00:00-05:00', '2013-01-02 00:00:00-05:00', @@ -410,7 +410,7 @@ Other enhancements Previous Behavior: - .. code-block:: python + .. code-block:: ipython In [1] pd.concat([foo, bar, baz], 1) Out[1]: @@ -607,14 +607,14 @@ will raise rather than return the original input as in previous versions. (:issu Previous Behavior: -.. code-block:: python +.. code-block:: ipython In [2]: pd.to_datetime(['2009-07-31', 'asd']) Out[2]: array(['2009-07-31', 'asd'], dtype=object) New Behavior: -.. code-block:: python +.. code-block:: ipython In [3]: pd.to_datetime(['2009-07-31', 'asd']) ValueError: Unknown string format @@ -648,7 +648,7 @@ can parse, such as a quarterly string. Previous Behavior: -.. code-block:: python +.. code-block:: ipython In [1]: Timestamp('2012Q2') Traceback @@ -689,7 +689,7 @@ a ``ValueError``. This is to be consistent with the behavior of ``Series``. Previous Behavior: -.. code-block:: python +.. code-block:: ipython In [2]: pd.Index([1, 2, 3]) == pd.Index([1, 4, 5]) Out[2]: array([ True, False, False], dtype=bool) @@ -702,7 +702,7 @@ Previous Behavior: New Behavior: -.. code-block:: python +.. code-block:: ipython In [8]: pd.Index([1, 2, 3]) == pd.Index([1, 4, 5]) Out[8]: array([ True, False, False], dtype=bool) @@ -740,7 +740,7 @@ Boolean comparisons of a ``Series`` vs ``None`` will now be equivalent to compar Previous Behavior: -.. code-block:: python +.. code-block:: ipython In [5]: s==None TypeError: Could not compare type with Series @@ -784,15 +784,15 @@ Previous Behavior: df_with_missing -.. code-block:: python +.. code-block:: ipython - In [28]: + In [27]: df_with_missing.to_hdf('file.h5', 'df_with_missing', format='table', mode='w') - pd.read_hdf('file.h5', 'df_with_missing') + In [28]: pd.read_hdf('file.h5', 'df_with_missing') Out [28]: col1 col2 @@ -833,7 +833,7 @@ The ``display.precision`` option has been clarified to refer to decimal places ( Earlier versions of pandas would format floating point numbers to have one less decimal place than the value in ``display.precision``. -.. code-block:: python +.. code-block:: ipython In [1]: pd.set_option('display.precision', 2) @@ -987,7 +987,7 @@ Removal of prior version deprecations/changes Previously - .. code-block:: python + .. code-block:: ipython In [3]: df + df.A FutureWarning: TimeSeries broadcasting along DataFrame index by default is deprecated. diff --git a/doc/source/whatsnew/v0.18.0.txt index fac2b5e46398a..0d9d9bba8fa25 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -61,7 +61,7 @@ Window functions have been refactored to be methods on ``Series/DataFrame`` obje Previous Behavior: -.. code-block:: python +.. code-block:: ipython In [8]: pd.rolling_mean(df,window=3) FutureWarning: pd.rolling_mean is deprecated for DataFrame and will be removed in a future version, replace with @@ -92,7 +92,7 @@ These show a descriptive repr r with tab-completion of available methods and properties. -.. code-block:: python +.. code-block:: ipython In [9]: r. r.A r.agg r.apply r.count r.exclusions r.max r.median r.name r.skew r.sum @@ -151,7 +151,7 @@ This will now be the default constructed index for ``NDFrame`` objects, rather t Previous Behavior: -.. code-block:: python +.. code-block:: ipython In [3]: s = pd.Series(range(1000)) @@ -191,7 +191,7 @@ In v0.18.0, the ``expand`` argument was added to Currently the default is ``expand=None`` which gives a ``FutureWarning`` and uses ``expand=False``. To avoid this warning, please explicitly specify ``expand``. -.. code-block:: python +..
code-block:: ipython In [1]: pd.Series(['a1', 'b2', 'c3']).str.extract('[ab](\d)', expand=None) FutureWarning: currently extract(expand=None) means expand=False (return Index/Series/DataFrame) @@ -284,7 +284,7 @@ A new, friendlier ``ValueError`` is added to protect against the mistake of supp pd.Series(['a','b',np.nan,'c']).str.cat(sep=' ') pd.Series(['a','b',np.nan,'c']).str.cat(sep=' ', na_rep='?') -.. code-block:: python +.. code-block:: ipython In [2]: pd.Series(['a','b',np.nan,'c']).str.cat(' ') ValueError: Did you mean to supply a `sep` keyword? @@ -346,7 +346,7 @@ This change not only affects the display to the console, but also the output of Previous Behavior: -.. code-block:: python +.. code-block:: ipython In [2]: s = pd.Series([1,2,3], index=np.arange(3.)) @@ -382,7 +382,7 @@ When a DataFrame's slice is updated with a new slice of the same dtype, the dtyp Previous Behavior: -.. code-block:: python +.. code-block:: ipython In [5]: df = pd.DataFrame({'a': [0, 1, 1], 'b': pd.Series([100, 200, 300], dtype='uint32')}) @@ -418,7 +418,7 @@ When a DataFrame's integer slice is partially updated with a new slice of floats Previous Behavior: -.. code-block:: python +.. code-block:: ipython In [4]: df = pd.DataFrame(np.array(range(1,10)).reshape(3,3), columns=list('abc'), @@ -462,7 +462,7 @@ a pandas-like interface for > 2 ndim. (:issue:`11972`) See the `xarray full-documentation here `__. -.. code-block:: python +.. code-block:: ipython In [1]: p = Panel(np.arange(2*3*4).reshape(2,3,4)) @@ -574,7 +574,7 @@ to succeed. as opposed to -.. code-block:: python +.. code-block:: ipython In [3]: pd.Timestamp('19900315') + pd.Timestamp('19900315') TypeError: unsupported operand type(s) for +: 'Timestamp' and 'Timestamp' @@ -582,7 +582,7 @@ as opposed to However, when wrapped in a ``Series`` whose ``dtype`` is ``datetime64[ns]`` or ``timedelta64[ns]``, the ``dtype`` information is respected. -.. code-block:: python +.. code-block:: ipython In [1]: pd.Series([pd.NaT], dtype=' with these indexers [2.0] of diff --git a/doc/source/whatsnew/v0.18.1.txt b/doc/source/whatsnew/v0.18.1.txt index edbaeb65c45eb..82420b036075a 100644 --- a/doc/source/whatsnew/v0.18.1.txt +++ b/doc/source/whatsnew/v0.18.1.txt @@ -116,7 +116,7 @@ API changes Using ``.apply`` on groupby resampling ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Using ``apply`` on resampling groupby operations (using a ``pd.TimeGrouper``) now has the same output types as a similar ``apply`` on other groupby operations. (:issue:`11742`). +Using ``apply`` on resampling groupby operations (using a ``pd.TimeGrouper``) now has the same output types as similar ``apply`` calls on other groupby operations. (:issue:`11742`). .. ipython:: python @@ -125,7 +125,7 @@ Using ``apply`` on resampling groupby operations (using a ``pd.TimeGrouper``) no Previous behavior: -.. code-block:: python +.. code-block:: ipython In [1]: df.groupby(pd.TimeGrouper(key='date', freq='M')).apply(lambda x: x.value.sum()) Out[1]: diff --git a/doc/source/whatsnew/v0.9.1.txt b/doc/source/whatsnew/v0.9.1.txt index c803e063da843..51788f77a6f0f 100644 --- a/doc/source/whatsnew/v0.9.1.txt +++ b/doc/source/whatsnew/v0.9.1.txt @@ -112,7 +112,7 @@ API changes - Upsampling data with a PeriodIndex will result in a higher frequency TimeSeries that spans the original time window - .. code-block:: python + .. code-block:: ipython In [1]: prng = period_range('2012Q1', periods=2, freq='Q')
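(As a companion to the v0.9.1 note above, a sketch of PeriodIndex upsampling written against the modern API; ``resample`` semantics have changed since 0.9.1, so treat this as an approximation rather than the original behavior:)

.. code-block:: python

    import pandas as pd

    prng = pd.period_range('2012Q1', periods=2, freq='Q')
    s = pd.Series([1.0, 2.0], index=prng)

    # upsample quarterly periods to monthly: convert to timestamps,
    # then forward-fill each quarterly value across its months
    s.to_timestamp().resample('MS').ffill()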