pd.Series.map is unreasonably slow. #21278
Your results seem odd. In 0.23:
and in 0.22:
Note that we build a hashtable for indexing. Once it's built, these things are very cheap to do. Sure, you could take advantage of monotonic indexes, but there is a cost (in code complexity) and it is only sometimes worth it.
Nice! Warmup was the issue. Actually, performance looks very reasonable with this in mind: there's basically no difference between n=10 and n=100000000, and the cost to build the index is not bad. It also seems there's no benefit in calling sort_index.
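The warmup effect described above can be sketched as follows. This is a hypothetical illustration (the sizes and names are my own, not from the issue): pandas builds the hashtable for the lookup table's index lazily, so the first `.map` call pays the build cost and later calls reuse it.

```python
import time
import pandas as pd

n = 1_000_000
table = pd.Series(range(n), index=range(n))  # lookup table of size n
query = pd.Series([0, n // 2, n - 1])        # tiny query: 3 values

t0 = time.perf_counter()
first = query.map(table)    # first call: pandas builds the index hashtable
t1 = time.perf_counter()
second = query.map(table)   # second call: the hashtable is reused, lookups are cheap
t2 = time.perf_counter()

print(f"cold: {t1 - t0:.6f}s  warm: {t2 - t1:.6f}s")
```

On a typical run the "warm" time is a small fraction of the "cold" time, which is consistent with the maintainer's explanation.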
Code Sample, a copy-pastable example if possible
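The original snippet is not preserved in this page. A minimal sketch of the kind of benchmark the report describes (names and sizes are my assumption; the issue tried table sizes up to n=10000000) might look like:

```python
import timeit
import pandas as pd

# Hypothetical reconstruction: a lookup table of size n, and a
# tiny 3-value query mapped through it, much like a dict lookup.
n = 1_000_000
table = pd.Series(range(n), index=range(n))
query = pd.Series([1, n // 2, n - 2])

elapsed = timeit.timeit(lambda: query.map(table), number=1)
print(f"n={n}: {elapsed:.4f}s to map {len(query)} values")
```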
Problem description
The above tries to map 3 values through a lookup table, much like looking up values in a Python dictionary.
n is the size of the table (not the query).
At n=10000000 it takes (0.01/3) seconds per mapped value, which is shockingly slow.
Expected Output
Costs are currently growing with O(len(maptable)).
My series' index is sorted. My expectation is that pandas costs <<< O(len(maptable)) for this type of operation.
Costs should scale equal to or less than a trivial implementation: in other words, there shouldn't be huge differences between the timings at n=10 and n=10000000.
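For comparison, a trivial dict-based implementation (a sketch of my own, with illustrative sizes) does one hash probe per lookup, independent of the table size, so the timings at small and large n are essentially identical:

```python
n_small, n_big = 10, 1_000_000
small = {i: i for i in range(n_small)}
big = {i: i for i in range(n_big)}

keys_small = [0, n_small // 2, n_small - 1]
keys_big = [0, n_big // 2, n_big - 1]

# Each lookup is a single hash probe: O(1) per value, regardless of dict size.
out_small = [small[k] for k in keys_small]
out_big = [big[k] for k in keys_big]
```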
Output of pd.show_versions()
INSTALLED VERSIONS
commit: None
python: 2.7.6.final.0
python-bits: 64
OS: Linux
OS-release: 3.13.0-24-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None
pandas: 0.22.0
pytest: None
pip: 9.0.1
setuptools: 36.0.1
Cython: 0.27.3
numpy: 1.14.0
scipy: 1.0.0
pyarrow: None
xarray: None
IPython: 5.3.0
sphinx: None
patsy: 0.2.1
dateutil: 2.6.0
pytz: 2016.10
blosc: None
bottleneck: None
tables: 3.1.1
numexpr: 2.6.2
feather: None
matplotlib: 2.1.0
openpyxl: 1.7.0
xlrd: 0.9.2
xlwt: 0.7.5
xlsxwriter: None
lxml: None
bs4: None
html5lib: 0.999999999
sqlalchemy: 1.2.7
pymysql: None
psycopg2: 2.7.4 (dt dec pq3 ext lo64)
jinja2: 2.9.6
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None