.. _whatsnew_0140:
v0.14.0 (May ? , 2014)
----------------------
This is a major release from 0.13.1 and includes a small number of API changes, several new features,
enhancements, and performance improvements along with a large number of bug fixes. We recommend that all
users upgrade to this version.
Highlights include:
- MultiIndexing Using Slicers
- Joining a singly-indexed DataFrame with a multi-indexed DataFrame
- More consistency in groupby results and more flexible groupby specifications
API changes
~~~~~~~~~~~
- ``read_excel`` uses 0 as the default sheet (:issue:`6573`)
- ``iloc`` will now accept out-of-bounds indexers for slices, e.g. a value that exceeds the length of the object being
indexed. These will be excluded. This will make pandas conform more with python/numpy indexing of out-of-bounds
values. A single indexer / list of indexers that is out-of-bounds will still raise
``IndexError`` (:issue:`6296`, :issue:`6299`). This could result in an empty axis (e.g. an empty DataFrame being returned)
.. ipython:: python

   dfl = DataFrame(np.random.randn(5,2),columns=list('AB'))
   dfl
   dfl.iloc[:,2:3]
   dfl.iloc[:,1:3]
   dfl.iloc[4:6]

These are out-of-bounds selections

.. code-block:: python

   dfl.iloc[[4,5,6]]
   IndexError: positional indexers are out-of-bounds

   dfl.iloc[:,4]
   IndexError: single positional indexer is out-of-bounds
- The ``DataFrame.interpolate()`` ``downcast`` keyword default has been changed from ``infer`` to
``None``. This is to preserve the original dtype unless explicitly requested otherwise (:issue:`6290`).
- When converting a DataFrame to HTML it used to return `Empty DataFrame`. This special case has
been removed; instead, a header with the column names is returned (:issue:`6062`).
- allow a Series to utilize index methods depending on its index type, e.g. ``Series.year`` is now defined
for a Series with a ``DatetimeIndex`` or a ``PeriodIndex``; trying this on a non-supported Index type will
now raise a ``TypeError``. (:issue:`4551`, :issue:`4056`, :issue:`5519`)
The following are affected:
- ``date,time,year,month,day``
- ``hour,minute,second,weekofyear``
- ``week,dayofweek,dayofyear,quarter``
- ``microsecond,nanosecond,qyear``
- ``min(),max()``
- ``pd.infer_freq()``
.. ipython:: python

   s = Series(np.random.randn(5),index=tm.makeDateIndex(5))
   s
   s.year
   s.index.year
- More consistent behaviour for some groupby methods:
groupby ``head`` and ``tail`` now act more like ``filter`` rather than an aggregation:
.. ipython:: python

   df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=['A', 'B'])
   g = df.groupby('A')
   g.head(1) # filters DataFrame
   g.apply(lambda x: x.head(1)) # used to simply fall-through
groupby head and tail respect column selection:
.. ipython:: python

   g[['B']].head(1)
groupby ``nth`` now filters by default, with an optional ``dropna`` argument to ignore
NaN (to replicate the previous behaviour). See :ref:`the docs <groupby.nth>`.
.. ipython:: python

   df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=['A', 'B'])
   g = df.groupby('A')
   g.nth(0) # can also use negative ints
   g.nth(0, dropna='any') # similar to old behaviour
- Allow specification of a more complex groupby via ``pd.Grouper``, such as grouping
by a Time and a string field simultaneously. See :ref:`the docs <groupby.specify>`. (:issue:`3794`)
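For example, a minimal sketch of such a specification (assuming a DataFrame ``df`` with a datetime
column ``'Date'`` and a string column ``'Buyer'``; the column names are illustrative only):

.. code-block:: python

   # group by month (via the datetime column) and by the string column at the same time
   df.groupby([pd.Grouper(freq='1M', key='Date'), 'Buyer']).sum()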
- Local variable usage has changed in
:func:`pandas.eval`/:meth:`DataFrame.eval`/:meth:`DataFrame.query`
(:issue:`5987`). For the :class:`~pandas.DataFrame` methods, two things have
changed
- Column names are now given precedence over locals
- Local variables must be referred to explicitly. This means that even if
you have a local variable that is *not* a column you must still refer to
it with the ``'@'`` prefix.
- You can have an expression like ``df.query('@a < a')`` with no complaints
from ``pandas`` about ambiguity of the name ``a``.
- The top-level :func:`pandas.eval` function does not allow you to use the
``'@'`` prefix and provides you with an error message telling you so.
- ``NameResolutionError`` was removed because it isn't necessary anymore.
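A minimal sketch of the new referencing rules (the frame and variable names are illustrative only):

.. code-block:: python

   df = DataFrame({'a': np.random.randn(5), 'b': np.random.randn(5)})
   a = 2  # a local variable that shadows the column name 'a'

   # bare names refer to columns, so 'a' here is the column, not the local
   df.query('a < b')

   # the '@' prefix refers to the local variable explicitly
   df.query('@a < a')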
- ``concat`` will now concatenate mixed Series and DataFrames using the Series name
or numbering columns as needed (:issue:`2385`). See :ref:`the docs <merging.mixed_ndims>`
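For example, a minimal sketch (the data are illustrative only):

.. code-block:: python

   df = DataFrame(np.random.randn(3, 4), columns=list('abcd'))
   s = Series(np.random.randn(3), name='e')

   # the Series name becomes the new column name
   pd.concat([df, s], axis=1)

   # an unnamed Series gets a numbered column instead
   pd.concat([df, Series(np.random.randn(3))], axis=1)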
- Slicing and advanced/boolean indexing operations on ``Index`` classes will no
longer change type of the resulting index (:issue:`6440`)
.. ipython:: python

   i = pd.Index([1, 2, 3, 'a' , 'b', 'c'])
   i[[0,1,2]]
Previously, the above operation would return ``Int64Index``. If you'd like
to do this manually, use :meth:`Index.astype`
.. ipython:: python

   i[[0,1,2]].astype(np.int_)
- ``set_index`` no longer converts MultiIndexes to an Index of tuples. For example,
the old behavior returned an Index in this case (:issue:`6459`):
.. ipython:: python
   :suppress:

   from itertools import product
   tuples = list(product(('a', 'b'), ('c', 'd')))
   mi = MultiIndex.from_tuples(tuples)
   df_multi = DataFrame(np.random.randn(4, 2), index=mi)
   tuple_ind = pd.Index(tuples)

.. ipython:: python

   df_multi.index

   @suppress
   df_multi.index = tuple_ind

   # Old behavior, cast MultiIndex to an Index
   df_multi.set_index(df_multi.index)

   @suppress
   df_multi.index = mi

   # New behavior
   df_multi.set_index(df_multi.index)
This also applies when passing multiple indices to ``set_index``:
.. ipython:: python

   @suppress
   df_multi.index = tuple_ind

   # Old output, 2-level MultiIndex of tuples
   df_multi.set_index([df_multi.index, df_multi.index])

   @suppress
   df_multi.index = mi

   # New output, 4-level MultiIndex
   df_multi.set_index([df_multi.index, df_multi.index])
- The following keywords are now accepted by :meth:`DataFrame.plot(kind='bar')` and :meth:`DataFrame.plot(kind='barh')`.
- ``width``: Specify the bar width. In previous versions, a static value of 0.5 was passed to matplotlib and could not be overridden. (:issue:`6604`)
- ``align``: Specify the bar alignment. Default is ``center`` (different from matplotlib). In previous versions, pandas passed ``align='edge'`` to matplotlib and adjusted the locations to ``center`` itself, so the ``align`` keyword was not applied as expected. (:issue:`4525`)
- ``position``: Specify the relative alignment for the bar plot layout, from 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5 (center). (:issue:`6604`)
Because of the change to the default ``align`` value, the coordinates of bar plots are now located on integer values (0.0, 1.0, 2.0 ...). This is intended to make bar plots be located on the same coordinates as line plots. However, bar plots may differ unexpectedly when you manually adjust the bar locations or drawing area, such as with ``set_xlim``, ``set_ylim``, etc. In these cases, please modify your script to account for the new coordinates.
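For example, a minimal sketch using these keywords (the data are illustrative only):

.. code-block:: python

   df = DataFrame(np.random.rand(5, 3), columns=list('ABC'))

   # width sets the bar width, align the bar alignment, and position the
   # relative alignment of the bars within the layout
   df.plot(kind='bar', width=0.9, align='center', position=0.5)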
- ``pairwise`` keyword was added to the statistical moment functions
``rolling_cov``, ``rolling_corr``, ``ewmcov``, ``ewmcorr``,
``expanding_cov``, ``expanding_corr`` to allow the calculation of moving
window covariance and correlation matrices (:issue:`4950`). See
:ref:`Computing rolling pairwise covariances and correlations
<stats.moments.corr_pairwise>` in the docs.
.. ipython:: python

   df = DataFrame(np.random.randn(10,4),columns=list('ABCD'))
   covs = rolling_cov(df[['A','B','C']], df[['B','C','D']], 5, pairwise=True)
   covs[df.index[-1]]
- ``Series.iteritems()`` is now lazy (returns an iterator rather than a list). This was the documented behavior prior to 0.14. (:issue:`6760`)
- ``Panel.shift`` now uses ``NDFrame.shift``. It no longer drops the ``nan`` data and retains its original shape. (:issue:`4867`)
MultiIndexing Using Slicers
~~~~~~~~~~~~~~~~~~~~~~~~~~~
In 0.14.0 we added a new way to slice multi-indexed objects.
You can slice a multi-index by providing multiple indexers.
You can provide any of the selectors as if you are indexing by label, see :ref:`Selection by Label <indexing.label>`,
including slices, lists of labels, labels, and boolean indexers.
You can use ``slice(None)`` to select all the contents of *that* level. You do not need to specify all the
*deeper* levels, they will be implied as ``slice(None)``.
As usual, **both sides** of the slicers are included as this is label indexing.
See :ref:`the docs<indexing.mi_slicers>`
See also issues (:issue:`6134`, :issue:`4036`, :issue:`3057`, :issue:`2598`, :issue:`5641`)
.. warning::

   You should specify all axes in the ``.loc`` specifier, meaning the indexer for the **index** and
   for the **columns**. There are some ambiguous cases where the passed indexer could be mis-interpreted
   as indexing *both* axes, rather than into, say, the MultiIndex for the rows.

   You should do this:

   .. code-block:: python

      df.loc[(slice('A1','A3'),.....),:]

   rather than this:

   .. code-block:: python

      df.loc[(slice('A1','A3'),.....)]
.. warning::

   You will need to make sure that the selection axes are fully lexsorted!
.. ipython:: python

   def mklbl(prefix,n):
       return ["%s%s" % (prefix,i) for i in range(n)]

   index = MultiIndex.from_product([mklbl('A',4),
                                    mklbl('B',2),
                                    mklbl('C',4),
                                    mklbl('D',2)])
   columns = MultiIndex.from_tuples([('a','foo'),('a','bar'),
                                     ('b','foo'),('b','bah')],
                                    names=['lvl0', 'lvl1'])
   df = DataFrame(np.arange(len(index)*len(columns)).reshape((len(index),len(columns))),
                  index=index,
                  columns=columns).sortlevel().sortlevel(axis=1)
   df
Basic multi-index slicing using slices, lists, and labels.
.. ipython:: python

   df.loc[(slice('A1','A3'),slice(None), ['C1','C3']),:]
You can use a ``pd.IndexSlice`` to shortcut the creation of these slices
.. ipython:: python

   idx = pd.IndexSlice
   df.loc[idx[:,:,['C1','C3']],idx[:,'foo']]
It is possible to perform quite complicated selections using this method on multiple
axes at the same time.
.. ipython:: python

   df.loc['A1',(slice(None),'foo')]
   df.loc[idx[:,:,['C1','C3']],idx[:,'foo']]
Using a boolean indexer you can provide selection related to the *values*.
.. ipython:: python

   mask = df[('a','foo')]>200
   df.loc[idx[mask,:,['C1','C3']],idx[:,'foo']]
You can also specify the ``axis`` argument to ``.loc`` to interpret the passed
slicers on a single axis.
.. ipython:: python

   df.loc(axis=0)[:,:,['C1','C3']]
Furthermore, you can *set* the values using these methods
.. ipython:: python

   df2 = df.copy()
   df2.loc(axis=0)[:,:,['C1','C3']] = -10
   df2
You can use an alignable object on the right-hand side as well.
.. ipython:: python

   df2 = df.copy()
   df2.loc[idx[:,:,['C1','C3']],:] = df2*1000
   df2
Plotting With Errorbars
~~~~~~~~~~~~~~~~~~~~~~~
Plotting with error bars is now supported in the ``.plot`` method of ``DataFrame`` and ``Series`` objects (:issue:`3796`).
x and y errorbars are supported and can be supplied using the ``xerr`` and ``yerr`` keyword arguments to ``.plot()``. The error values can be specified using a variety of formats.
- As a ``DataFrame`` or ``dict`` of errors with one or more of the column names (or dictionary keys) matching one or more of the column names of the plotting ``DataFrame``, or matching the ``name`` attribute of the ``Series``
- As a ``str`` indicating which of the columns of the plotting ``DataFrame`` contain the error values
- As raw values (``list``, ``tuple``, or ``np.ndarray``). Must be the same length as the plotting ``DataFrame``/``Series``
Asymmetrical error bars are also supported, however raw error values must be provided in this case. For an ``M``-length ``Series``, an ``Mx2`` array should be provided indicating lower and upper (or left and right) errors. For an ``MxN`` ``DataFrame``, asymmetrical errors should be in an ``Mx2xN`` array.
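For example, a minimal sketch (the data and error values are illustrative only):

.. code-block:: python

   ix = ['a', 'b', 'c', 'd', 'e']
   df = DataFrame({'x': np.arange(5), 'y': np.arange(5) * 2}, index=ix)
   errors = DataFrame({'x': np.ones(5) * 0.2, 'y': np.ones(5) * 0.4}, index=ix)

   # symmetric error bars taken from a DataFrame whose column names match the plotted columns
   df.plot(kind='bar', yerr=errors)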
Prior Version Deprecations/Changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These are prior version deprecations that are taking effect as of 0.14.0.
- Remove ``column`` keyword from ``DataFrame.sort`` (:issue:`4370`)
- Remove ``precision`` keyword from :func:`set_eng_float_format` (:issue:`6641`)
- Remove ``force_unicode`` keyword from :meth:`DataFrame.to_string`,
:meth:`DataFrame.to_latex`, and :meth:`DataFrame.to_html`; these functions
encode in unicode by default (:issue:`6641`)
- Remove ``nanRep`` keyword from :meth:`DataFrame.to_csv` and
:meth:`DataFrame.to_string` (:issue:`6641`)
- Remove ``unique`` keyword from :meth:`HDFStore.select_column` (:issue:`6641`)
- Remove ``inferTimeRule`` keyword from :func:`Timestamp.offset` (:issue:`6641`)
- Remove ``name`` keyword from :func:`get_data_yahoo` and
:func:`get_data_google` (:issue:`6641`)
- Remove ``offset`` keyword from :class:`DatetimeIndex` constructor
(:issue:`6641`)
- Remove ``time_rule`` from several rolling-moment statistical functions, such
as :func:`rolling_sum` (:issue:`6641`)
Deprecations
~~~~~~~~~~~~
- The :func:`pivot_table`/:meth:`DataFrame.pivot_table` and :func:`crosstab` functions
now take arguments ``index`` and ``columns`` instead of ``rows`` and ``cols``. A
``FutureWarning`` is raised to alert that the old ``rows`` and ``cols`` arguments
will not be supported in a future release (:issue:`5505`)
- The :meth:`DataFrame.drop_duplicates` and :meth:`DataFrame.duplicated` methods
now take the argument ``subset`` instead of ``cols`` to better align with
:meth:`DataFrame.dropna`. A ``FutureWarning`` is raised to alert that the old
``cols`` argument will not be supported in a future release (:issue:`6680`)
- The :meth:`DataFrame.to_csv` and :meth:`DataFrame.to_excel` functions
now take the argument ``columns`` instead of ``cols``. A
``FutureWarning`` is raised to alert that the old ``cols`` argument
will not be supported in a future release (:issue:`6645`)
Enhancements
~~~~~~~~~~~~
- ``DataFrame.to_latex`` now takes a ``longtable`` keyword, which if ``True`` will return a table in a longtable environment. (:issue:`6617`)
- ``pd.read_clipboard`` will, if 'sep' is unspecified, try to detect data copied from a spreadsheet
and parse accordingly. (:issue:`6223`)
- ``plot(legend='reverse')`` will now reverse the order of legend labels for
most plot kinds. (:issue:`6014`)
- Hexagonal bin plots from ``DataFrame.plot`` with ``kind='hexbin'`` (:issue:`5478`)
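For example, a minimal sketch (the data are illustrative only):

.. code-block:: python

   df = DataFrame({'a': np.random.randn(1000), 'b': np.random.randn(1000)})
   df.plot(kind='hexbin', x='a', y='b')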
- Joining a singly-indexed DataFrame with a multi-indexed DataFrame (:issue:`3662`)
See :ref:`the docs<merging.join_on_mi>`. Joining multi-indexed DataFrames on both the left and right is not yet supported.
.. ipython:: python

   household = DataFrame(dict(household_id = [1,2,3],
                              male = [0,1,0],
                              wealth = [196087.3,316478.7,294750]),
                         columns = ['household_id','male','wealth']
                         ).set_index('household_id')
   household
   portfolio = DataFrame(dict(household_id = [1,2,2,3,3,3,4],
                              asset_id = ["nl0000301109","nl0000289783","gb00b03mlx29",
                                          "gb00b03mlx29","lu0197800237","nl0000289965",np.nan],
                              name = ["ABN Amro","Robeco","Royal Dutch Shell","Royal Dutch Shell",
                                      "AAB Eastern Europe Equity Fund","Postbank BioTech Fonds",np.nan],
                              share = [1.0,0.4,0.6,0.15,0.6,0.25,1.0]),
                         columns = ['household_id','asset_id','name','share']
                         ).set_index(['household_id','asset_id'])
   portfolio
   household.join(portfolio, how='inner')
- ``quotechar``, ``doublequote``, and ``escapechar`` can now be specified when
using ``DataFrame.to_csv`` (:issue:`5414`, :issue:`4528`)
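For example, a minimal sketch (the file name is illustrative only):

.. code-block:: python

   # escape embedded quote characters instead of doubling them
   df.to_csv('out.csv', quotechar='"', doublequote=False, escapechar='\\')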
- Added a ``to_julian_date`` function to ``Timestamp`` and ``DatetimeIndex``
to convert to the Julian Date used primarily in astronomy. (:issue:`4041`)
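For example, a minimal sketch:

.. code-block:: python

   pd.Timestamp('2014-05-01').to_julian_date()
   pd.date_range('2014-05-01', periods=3).to_julian_date()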
- ``DataFrame.to_stata`` will now check data for compatibility with Stata data types
and will upcast when needed. When it isn't possible to upcast losslessly, a warning
is raised (:issue:`6327`)
- ``DataFrame.to_stata`` and ``StataWriter`` will accept keyword arguments ``time_stamp``
and ``data_label`` which allow the time stamp and dataset label to be set when creating a
file. (:issue:`6545`)
- ``pandas.io.gbq`` now handles reading unicode strings properly. (:issue:`5940`)
Performance
~~~~~~~~~~~
- Performance improvements in DataFrame construction with certain offsets, by removing faulty caching
(e.g. ``MonthEnd``, ``BusinessMonthEnd``) (:issue:`6479`)
- Improve performance of ``CustomBusinessDay`` (:issue:`6584`)
- Improve performance of slice indexing on Series with string keys (:issue:`6341`, :issue:`6372`)
Experimental
~~~~~~~~~~~~
There are no experimental changes in 0.14.0
Bug Fixes
~~~~~~~~~
See :ref:`V0.14.0 Bug Fixes<release.bug_fixes-0.14.0>` for an extensive list of bugs that have been fixed in 0.14.0.
See the :ref:`full release notes
<release>` or issue tracker
on GitHub for a complete list of all API changes, Enhancements and Bug Fixes.