DOC: lrange, lzip --> list(range and list(zip #4450


Merged
1 commit merged on Aug 2, 2013
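For context, a minimal sketch of the equivalence this rename relies on. It assumes only that the ``pandas.compat`` helpers ``lrange``/``lzip`` are thin wrappers that materialize ``range()``/``zip()`` into lists (the compat imports in the diffs below suggest exactly that); this is an illustration, not code from the PR.

    # Python 3: range() and zip() return lazy objects, so the docs now
    # materialize them explicitly instead of leaning on the compat shims.
    arrays = [['bar', 'bar', 'baz', 'baz'],
              ['one', 'two', 'one', 'two']]

    idx = list(range(4))         # [0, 1, 2, 3], what lrange(4) produced
    tuples = list(zip(*arrays))  # [('bar', 'one'), ('bar', 'two'), ...], what lzip(*arrays) produced

    print(idx)
    print(tuples)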
2 changes: 1 addition & 1 deletion doc/source/basics.rst
@@ -1093,7 +1093,7 @@ By default integer types are ``int64`` and float types are ``float64``,

DataFrame([1, 2], columns=['a']).dtypes
DataFrame({'a': [1, 2]}).dtypes
-DataFrame({'a': 1 }, index=lrange(2)).dtypes
+DataFrame({'a': 1 }, index=list(range(2))).dtypes

NumPy, however, will choose *platform-dependent* types when creating arrays.
The following **WILL** result in ``int32`` on a 32-bit platform.
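The documented example for this warning is collapsed above, so the following is only a sketch of the behavior the sentence describes (assumption: the default integer dtype follows the platform's C long), not the docs' own code:

    import numpy as np

    # Typically int32 on 32-bit platforms (and historically on Windows),
    # int64 on 64-bit Linux/macOS.
    arr = np.array([1, 2])
    print(arr.dtype)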
4 changes: 2 additions & 2 deletions doc/source/faq.rst
@@ -86,7 +86,7 @@ life easier is missing. In that case you have several options:
return [x for x in self.columns if 'foo' in x]

pd.DataFrame.just_foo_cols = just_foo_cols # monkey-patch the DataFrame class
-df = pd.DataFrame([lrange(4)],columns= ["A","foo","foozball","bar"])
+df = pd.DataFrame([list(range(4))], columns=["A","foo","foozball","bar"])
df.just_foo_cols()
del pd.DataFrame.just_foo_cols # you can also remove the new method

@@ -259,7 +259,7 @@ using something similar to the following:

.. ipython:: python

-x = np.array(lrange(10), '>i4') # big endian
+x = np.array(list(range(10)), '>i4') # big endian
newx = x.byteswap().newbyteorder() # force native byteorder
s = Series(newx)

2 changes: 1 addition & 1 deletion doc/source/gotchas.rst
@@ -467,7 +467,7 @@ using something similar to the following:

.. ipython:: python

-x = np.array(lrange(10), '>i4') # big endian
+x = np.array(list(range(10)), '>i4') # big endian
newx = x.byteswap().newbyteorder() # force native byteorder
s = Series(newx)

6 changes: 3 additions & 3 deletions doc/source/groupby.rst
@@ -12,7 +12,7 @@
import matplotlib.pyplot as plt
plt.close('all')
options.display.mpl_style='default'
-from pandas.compat import lzip
+from pandas.compat import zip

*****************************
Group By: split-apply-combine
@@ -202,7 +202,7 @@ natural to group by one of the levels of the hierarchy.

arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
-tuples = lzip(*arrays)
+tuples = list(zip(*arrays))
tuples
index = MultiIndex.from_tuples(tuples, names=['first', 'second'])
s = Series(randn(8), index=index)
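As an aside (not part of this PR), a self-contained sketch of the level-based grouping the surrounding text refers to; the actual ``groupby`` call sits in the collapsed part of the hunk, so this is illustrative only:

    import numpy as np
    import pandas as pd

    # Rebuild the hierarchically indexed Series from the snippet above.
    arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
              ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
    index = pd.MultiIndex.from_tuples(list(zip(*arrays)), names=['first', 'second'])
    s = pd.Series(np.random.randn(8), index=index)

    print(s.groupby(level='first').sum())  # one aggregate per label in the first level
    print(s.groupby(level=0).sum())        # same grouping, level given by position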
@@ -236,7 +236,7 @@ Also as of v0.6, grouping with multiple levels is supported.
arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
['doo', 'doo', 'bee', 'bee', 'bop', 'bop', 'bop', 'bop'],
['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
-tuples = lzip(*arrays)
+tuples = list(zip(*arrays))
index = MultiIndex.from_tuples(tuples, names=['first', 'second', 'third'])
s = Series(randn(8), index=index)

14 changes: 7 additions & 7 deletions doc/source/indexing.rst
@@ -13,7 +13,7 @@
randn = np.random.randn
randint = np.random.randint
np.set_printoptions(precision=4, suppress=True)
-from pandas.compat import lrange, lzip
+from pandas.compat import range, zip

***************************
Indexing and Selecting Data
@@ -294,7 +294,7 @@ The ``.iloc`` attribute is the primary access method. The following are valid in

.. ipython:: python

-s1 = Series(np.random.randn(5),index=lrange(0,10,2))
+s1 = Series(np.random.randn(5),index=list(range(0,10,2)))
s1
s1.iloc[:3]
s1.iloc[3]
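A purely illustrative aside on what the snippet above demonstrates: ``.iloc`` is positional, so with labels built from ``list(range(0, 10, 2))`` the element at position 3 carries the label 6. A self-contained sketch:

    import numpy as np
    import pandas as pd

    s1 = pd.Series(np.random.randn(5), index=list(range(0, 10, 2)))  # labels 0, 2, 4, 6, 8

    print(s1.iloc[3])  # fourth element, selected by position
    print(s1.loc[6])   # the same element, selected by its label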
@@ -311,8 +311,8 @@ With a DataFrame
.. ipython:: python

df1 = DataFrame(np.random.randn(6,4),
-index=lrange(0,12,2),
-columns=lrange(0,8,2))
+index=list(range(0,12,2)),
+columns=list(range(0,8,2)))
df1

Select via integer slicing
@@ -787,7 +787,7 @@ numpy array. For instance,
.. ipython:: python

dflookup = DataFrame(np.random.rand(20,4), columns = ['A','B','C','D'])
-dflookup.lookup(lrange(0,10,2), ['B','C','A','B','D'])
+dflookup.lookup(list(range(0,10,2)), ['B','C','A','B','D'])

Setting values in mixed-type DataFrame
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -897,7 +897,7 @@ display:

.. ipython:: python

-index = Index(lrange(5), name='rows')
+index = Index(list(range(5)), name='rows')
columns = Index(['A', 'B', 'C'], name='cols')
df = DataFrame(np.random.randn(5, 3), index=index, columns=columns)
df
@@ -972,7 +972,7 @@ can think of ``MultiIndex`` as an array of tuples where each tuple is unique. A

arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
-tuples = lzip(*arrays)
+tuples = list(zip(*arrays))
tuples
index = MultiIndex.from_tuples(tuples, names=['first', 'second'])
s = Series(randn(8), index=index)
10 changes: 5 additions & 5 deletions doc/source/io.rst
@@ -1061,7 +1061,7 @@ Writing to a file, with a date index and a date column

dfj2 = dfj.copy()
dfj2['date'] = Timestamp('20130101')
-dfj2['ints'] = lrange(5)
+dfj2['ints'] = list(range(5))
dfj2['bools'] = True
dfj2.index = date_range('20130101',periods=5)
dfj2.to_json('test.json')
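A hedged companion to the snippet above showing the round trip back from disk; ``dfj`` is defined in a collapsed block, so a stand-in frame is used, and ``read_json`` is simply the documented counterpart of ``to_json``:

    import numpy as np
    import pandas as pd

    # Stand-in for dfj from the collapsed block above.
    dfj2 = pd.DataFrame(np.random.randn(5, 2), columns=['A', 'B'])
    dfj2['date'] = pd.Timestamp('20130101')
    dfj2['ints'] = list(range(5))
    dfj2['bools'] = True
    dfj2.index = pd.date_range('20130101', periods=5)

    dfj2.to_json('test.json')
    print(pd.read_json('test.json'))  # dtypes are re-inferred on load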
@@ -1156,7 +1156,7 @@ I like my string indices
.. ipython:: python

si = DataFrame(np.zeros((4, 4)),
-columns=lrange(4),
+columns=list(range(4)),
index=[str(i) for i in range(4)])
si
si.index
@@ -1741,7 +1741,7 @@ similar to how ``read_csv`` and ``to_csv`` work. (new in 0.11.0)

.. ipython:: python

-df_tl = DataFrame(dict(A=lrange(5), B=lrange(5)))
+df_tl = DataFrame(dict(A=list(range(5)), B=list(range(5))))
df_tl.to_hdf('store_tl.h5','table',append=True)
read_hdf('store_tl.h5', 'table', where = ['index>2'])

@@ -1863,7 +1863,7 @@ defaults to `nan`.
'int' : 1,
'bool' : True,
'datetime64' : Timestamp('20010102')},
-index=lrange(8))
+index=list(range(8)))
df_mixed.ix[3:5,['A', 'B', 'string', 'datetime64']] = np.nan

store.append('df_mixed', df_mixed, min_itemsize = {'values': 50})
@@ -2288,7 +2288,7 @@ Starting in 0.11, passing a ``min_itemsize`` dict will cause all passed columns

.. ipython:: python

-dfs = DataFrame(dict(A = 'foo', B = 'bar'),index=lrange(5))
+dfs = DataFrame(dict(A = 'foo', B = 'bar'),index=list(range(5)))
dfs

# A and B have a size of 30
6 changes: 3 additions & 3 deletions doc/source/missing_data.rst
@@ -363,7 +363,7 @@ Replace the '.' with ``nan`` (str -> str)

.. ipython:: python

-d = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']}
+d = {'a': list(range(4)), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']}
df = DataFrame(d)
df.replace('.', nan)

@@ -500,7 +500,7 @@ For example:
s = Series(randn(5), index=[0, 2, 4, 6, 7])
s > 0
(s > 0).dtype
-crit = (s > 0).reindex(lrange(8))
+crit = (s > 0).reindex(list(range(8)))
crit
crit.dtype

@@ -512,7 +512,7 @@ contains NAs, an exception will be generated:
.. ipython:: python
:okexcept:

-reindexed = s.reindex(lrange(8)).fillna(0)
+reindexed = s.reindex(list(range(8))).fillna(0)
reindexed[crit]

However, these can be filled in using **fillna** and it will work fine:
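The example that follows this sentence is collapsed; as a hedged sketch of the pattern it describes (fill the boolean mask before using it to index), not necessarily the docs' exact code:

    import numpy as np
    import pandas as pd

    s = pd.Series(np.random.randn(5), index=[0, 2, 4, 6, 7])
    crit = (s > 0).reindex(list(range(8)))           # reindexing introduces NaN into the mask
    reindexed = s.reindex(list(range(8))).fillna(0)

    # Filling the mask makes it purely boolean, so indexing works again.
    print(reindexed[crit.fillna(False)])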
6 changes: 3 additions & 3 deletions doc/source/reshaping.rst
@@ -12,7 +12,7 @@
randn = np.random.randn
np.set_printoptions(precision=4, suppress=True)
from pandas.tools.tile import *
-from pandas.compat import lzip
+from pandas.compat import zip

**************************
Reshaping and Pivot Tables
@@ -117,10 +117,10 @@ from the hierarchical indexing section:

.. ipython:: python

-tuples = lzip(*[['bar', 'bar', 'baz', 'baz',
+tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
'foo', 'foo', 'qux', 'qux'],
['one', 'two', 'one', 'two',
-'one', 'two', 'one', 'two']])
+'one', 'two', 'one', 'two']]))
index = MultiIndex.from_tuples(tuples, names=['first', 'second'])
df = DataFrame(randn(8, 2), index=index, columns=['A', 'B'])
df2 = df[:4]
2 changes: 1 addition & 1 deletion doc/source/visualization.rst
@@ -102,7 +102,7 @@ You can plot one column versus another using the `x` and `y` keywords in
plt.figure()

df3 = DataFrame(randn(1000, 2), columns=['B', 'C']).cumsum()
-df3['A'] = Series(lrange(len(df)))
+df3['A'] = Series(list(range(len(df))))

@savefig df_plot_xy.png
df3.plot(x='A', y='B')