BUG: Possible bug in rolling standard deviation calculation when using custom weights (pandas.core.window.rolling.Window.std) #53273
Comments
This looks like a duplicate of #53289. Can you confirm?
Thank you for the question @topper-123. I think these are separate issues. If I use my example with:

import pandas as pd
import numpy as np
np.random.seed(42)
df = pd.Series(np.random.normal(size=1200))
df_std = df.rolling(window=220).std(ddof=0)
df_std_np = df.rolling(window=220).apply(np.std)
df_custom = (df ** 2).rolling(window=220).mean()
df_custom = (df_custom - df.rolling(window=220).mean()**2).pow(0.5)

The linked issue suggests a problem with data types, as it involves floats spanning a wide range of magnitudes. Also, #53289 does not involve custom weights, so possibly different code paths are involved. In the latter, variance is calculated here:

pandas/pandas/_libs/window/aggregations.pyx, lines 415 to 416 (at 37ea63d)

while in this issue variance is calculated here:

pandas/pandas/_libs/window/aggregations.pyx, lines 1667 to 1668 (at 37ea63d)
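For anyone who wants to poke at the weighted code path directly, here is a minimal reference check (a sketch of my own, not code from the report: the gaussian win_type, its std parameter, and the window length are arbitrary choices). It recomputes each window's weighted standard deviation from scratch with numpy and compares it with what Window.std returns:

import numpy as np
import pandas as pd
from scipy.signal.windows import gaussian

np.random.seed(42)
x = pd.Series(np.random.normal(size=1200))
window, sigma = 220, 30.0  # hypothetical window length and gaussian width

# Weighted code path (roll_weighted_var), exercised via win_type
weighted = x.rolling(window=window, win_type="gaussian").std(std=sigma, ddof=0)

# Brute-force reference: recompute the weighted std of every window from scratch
w = gaussian(window, sigma)
reference = np.full(len(x), np.nan)
for i in range(window - 1, len(x)):
    chunk = x.iloc[i - window + 1 : i + 1].to_numpy()
    mean = np.average(chunk, weights=w)
    reference[i] = np.sqrt(np.average((chunk - mean) ** 2, weights=w))  # ddof=0 weighted std

# If the weighted path were numerically sound, the two series should agree closely
print(np.nanmax(np.abs(weighted.to_numpy() - reference)))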
Ok, thanks for looking into it.
Hey @WillAyd, @mroeschke and @jreback! I hope not to bother, and I am sorry for being so late to spot this, but why was PR #55153 closed? Is there a chance it could be reopened? I think @ihsansecer made a very compelling case for the fix, which may have been missed because of the fragmented nature of the PR comments.

It is true that the new algorithm scales differently than the current one. However, the current algorithm is incorrect for rolling variance estimates and cannot produce meaningful results, so it is hard to call this a regression. @ihsansecer also points out that it is unlikely that there is a better solution, and I have to agree after reviewing standard references online. Most of them, such as West (1979), address updating estimates when the observed data/weight tuples do not change and new tuples keep arriving, which is not the case in rolling calculations. In a rolling calculation with arbitrary weights, one needs to reweigh every observation once the window moves, so some nested looping needs to take place. In addition, scaling of the new algorithm, which is

Besides the issue of a more complex implementation of

Thank you in advance for taking the time to consider these points and to respond. Let me know if I missed something important that could be helpful for making progress with this issue. I really appreciate the work of the community and am happy to help to the best of my abilities.
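For context, this is my reading of the kind of update West (1979) describes (a rough sketch of the general idea, nothing taken from pandas): it folds a new (value, weight) pair into a running weighted mean/variance in O(1), but offers no cheap way to drop or reweight an old pair, which is exactly what a moving window with arbitrary weights requires.

def west_update(mean, s, sum_w, x, w):
    """Fold one new (x, w) pair into the running weighted mean/variance state.

    s accumulates sum_i w_i * (x_i - mean)**2, so the ddof=0 weighted
    variance after any number of updates is s / sum_w.
    """
    sum_w_new = sum_w + w
    mean_new = mean + (w / sum_w_new) * (x - mean)
    s_new = s + w * (x - mean) * (x - mean_new)
    return mean_new, s_new, sum_w_new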
Hey @jevkus, I've also noticed that PR #55153 was closed, and the explanation by @ihsansecer makes sense to me. It seems there isn't an O(n) online rolling method for weighted mean/variance. However, if there is some pattern in the weights, such as linear decay with weights [1, 2, 3, 4, ...], a faster O(n) method for the weighted mean might be possible, though it may not be numerically stable.
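To illustrate that special case (my own sketch; the function name and the convention that the newest value gets the largest weight are assumptions): with weights [1, 2, ..., w] the weighted numerator of each window can be rolled forward in O(1) from the plain rolling sum, so the whole pass stays O(n).

import numpy as np

def linear_weighted_rolling_mean(x, w):
    # Rolling mean with weights 1..w (oldest to newest); assumes len(x) >= w.
    x = np.asarray(x, dtype=float)
    out = np.full(len(x), np.nan)
    denom = w * (w + 1) / 2                    # constant sum of the weights 1..w
    s = x[:w].sum()                            # plain sum of the current window
    num = np.dot(np.arange(1, w + 1), x[:w])   # weighted numerator of the first window
    out[w - 1] = num / denom
    for t in range(w, len(x)):
        num = num - s + w * x[t]               # O(1) update of the weighted numerator
        s = s - x[t - w] + x[t]                # O(1) update of the plain rolling sum
        out[t] = num / denom
    return out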
@kaixiongg, the explanation makes sense to me too, and exponential weights in my example would fit this case. General applications of
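For what it is worth, if the weights really are exponential, the existing ewm machinery (which is O(n)) may already cover much of that use case, with the caveat that it uses an expanding set of weights rather than a fixed-length window. A rough sketch, with the halflife picked purely for illustration:

import numpy as np
import pandas as pd

s = pd.Series(np.random.normal(size=1200))
ewm_std = s.ewm(halflife=30).std()  # exponentially weighted std, computed in O(n)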
Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
Issue Description
Hey, Team! The series of standard deviations produced by the rolling calculation with custom weights does not look right and I hope that a visual demonstration illustrates that best.

The blue line is the result of the calculation above, and it is visible how, after window observations, the series no longer produces sensible results. Odd and strong changes in behavior appear at every multiple of the window length, which is even more suspicious. The orange series is what I would expect the result to be, and the calculation snippet that produced it is in the box below. The algorithm used for the expected behavior is known to be problematic, though.

Thank you for taking the time to consider my report, and please let me know if further details are required. Love your package and really hope that this helps make it even better.
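A minimal sketch of the kind of calculation being described (the gaussian win_type and its std parameter are illustrative assumptions on my part, not the original snippet; the unweighted manual check mirrors the df_custom calculation from the follow-up comment):

import numpy as np
import pandas as pd

np.random.seed(42)
df = pd.Series(np.random.normal(size=1200))

# Rolling std with custom (gaussian) weights -- the weighted code path under discussion
df_std_weighted = df.rolling(window=220, win_type="gaussian").std(std=60)

# Manual, unweighted rolling std for comparison
df_mean = df.rolling(window=220).mean()
df_std_manual = ((df ** 2).rolling(window=220).mean() - df_mean ** 2).pow(0.5)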
Expected Behavior
Installed Versions
INSTALLED VERSIONS
commit : 37ea63d
python : 3.9.16.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19044
machine : AMD64
processor : Intel64 Family 6 Model 141 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : English_United States.1252
pandas : 2.0.1
numpy : 1.22.4
pytz : 2023.3
dateutil : 2.8.2
setuptools : 66.0.0
pip : 23.0.1
Cython : 0.29.34
pytest : 7.3.1
hypothesis : 6.75.3
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.13.2
pandas_datareader: None
bs4 : 4.12.2
bottleneck : None
brotli : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.7.1
numba : None
numexpr : 2.8.4
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 12.0.0
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.7.1
snappy : None
sqlalchemy : None
tables : 3.8.0
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None