Any interest in a multi-dataset backtesting wrapper? #508
Labels: enhancement (New feature or request)
Comments
Definitely think this should be included.
I solved this a slightly different way:

```python
import pandas as pd
from backtesting import Backtest


class MultiBacktest(Backtest):
    def __init__(self, datasets, strategy, **kwargs):
        # Each dataset is expected to expose a `.data` OHLCV DataFrame;
        # a separate Backtest is attached to each one.
        for dataset in datasets:
            dataset.backtest = Backtest(
                dataset.data,
                strategy=strategy,
                **kwargs
            )
        self.datasets = datasets
        # Initialise the parent with the first dataset so Backtest.optimize()
        # has state to work with; run() is overridden below, so optimization
        # still aggregates across all datasets.
        super().__init__(datasets[0].data, strategy, **kwargs)

    def run(self, *args, **kwargs):
        results = [dataset.backtest.run(*args, **kwargs) for dataset in self.datasets]
        # Only numeric stats can be averaged across datasets
        aggregate = pd.DataFrame(results).mean(numeric_only=True)
        # mean() blows away the strategy instance, so restore it from the first run
        aggregate['_strategy'] = results[0]['_strategy']
        return aggregate

    def optimize(self, **kwargs):
        optimize_args = {
            "return_heatmap": True,
            **kwargs
        }
        return super().optimize(**optimize_args)
```

This takes the mean of the results across the backtests and returns the best.
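(For what it's worth, a hypothetical usage sketch of the wrapper above; the SimpleNamespace dataset objects, the SmaCross stand-in strategy, and the reuse of the bundled GOOG sample data are all illustrative assumptions, since the comment doesn't show how its dataset objects are built.)

```python
from types import SimpleNamespace

from backtesting import Strategy
from backtesting.lib import crossover
from backtesting.test import GOOG, SMA  # sample data and SMA helper bundled with backtesting.py


class SmaCross(Strategy):
    # Minimal stand-in strategy, just to exercise the wrapper
    def init(self):
        self.ma_fast = self.I(SMA, self.data.Close, 10)
        self.ma_slow = self.I(SMA, self.data.Close, 20)

    def next(self):
        if crossover(self.ma_fast, self.ma_slow):
            self.buy()
        elif crossover(self.ma_slow, self.ma_fast):
            self.position.close()


# The wrapper expects objects exposing a `.data` DataFrame; three copies of the
# bundled GOOG sample stand in for real per-asset data here.
datasets = [SimpleNamespace(name=f"asset_{i}", data=GOOG) for i in range(3)]

bt = MultiBacktest(datasets, SmaCross, cash=10_000, commission=.002)
print(bt.run())  # mean of each numeric stat across the three datasets
```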
Definitely think this should be included.
Thanks for sharing this. I had the same question in mind.
The OP example is so neat and lovely. I definitely agree this should be included. Probably in
kernc added a commit that referenced this issue on Feb 20, 2025: "Fixes #508 Thanks! Co-Authored-By: Mike Judge <[email protected]>"
I wanted to make a strategy that would work well against LOTS of cryptocurrencies, the idea being that maybe it wouldn't be as overfit as my usual optimization runs. It turned out to not actually be that hard, and I wondered whether, if it were cleaned up and tested, you'd like me to open a PR for inclusion into master.
Library Code (you might want to skip ahead to the example)
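(The original code block didn't survive the page extraction. Based on the behaviour described in the example below — per-dataset stats columns from run() and a combined heatmap from optimize() — a rough sketch of such a wrapper might look like this; the class shape and method signatures are assumptions, not the author's actual code.)

```python
import pandas as pd
from backtesting import Backtest


class MultiBacktest:
    """Run the same strategy over several datasets (illustrative sketch)."""

    def __init__(self, datasets, strategy, **kwargs):
        # `datasets` is assumed to be a mapping of name -> OHLCV DataFrame
        self._backtests = {name: Backtest(df, strategy, **kwargs)
                           for name, df in datasets.items()}

    def run(self, **kwargs):
        # One column of stats per dataset
        return pd.DataFrame({name: bt.run(**kwargs)
                             for name, bt in self._backtests.items()})

    def optimize(self, **kwargs):
        # One heatmap per dataset; rows are datasets, columns are parameter
        # combinations, so e.g. .quantile(0.25) collapses them into one heatmap
        heatmaps = {}
        for name, bt in self._backtests.items():
            _, heatmap = bt.optimize(**{**kwargs, "return_heatmap": True})
            heatmaps[name] = heatmap
        return pd.DataFrame(heatmaps).T
```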
Example
Let's define a simple strategy:
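(This code block was also lost in extraction. A sketch of what a simple strategy with the two parameters optimized below might look like — the breakout rules and the class name are guesses; only n_ma_window and n_previous_highs_window come from the post itself.)

```python
import pandas as pd
from backtesting import Strategy
from backtesting.test import SMA  # simple moving-average helper bundled with backtesting.py


def rolling_high(series, n):
    # Highest high of the previous n bars (shifted so the current bar is excluded)
    return pd.Series(series).rolling(n).max().shift(1)


class HighBreakout(Strategy):
    n_ma_window = 20
    n_previous_highs_window = 10

    def init(self):
        self.ma = self.I(SMA, self.data.Close, self.n_ma_window)
        self.prev_high = self.I(rolling_high, self.data.High, self.n_previous_highs_window)

    def next(self):
        price = self.data.Close[-1]
        if not self.position and price > self.ma[-1] and price > self.prev_high[-1]:
            # Price broke above both the moving average and the recent highs
            self.buy()
        elif self.position and price < self.ma[-1]:
            self.position.close()
```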
And let's fetch a whole lot of alt coin data. I kept the frames to a really short period of time just so it'd run fast, but in production I'd probably want to stretch these data windows to as much data as I could possibly get.
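(The data-fetching code is missing here too. One way to get a handful of coins into the name -> DataFrame shape the sketch wrapper expects, assuming yfinance as the data source — the post doesn't say which API was actually used.)

```python
import yfinance as yf  # assumed data source

symbols = ["ETH-USD", "ADA-USD", "DOGE-USD", "SOL-USD", "DOT-USD"]

datasets = {}
for symbol in symbols:
    # Deliberately short window so the example runs quickly, as noted above
    df = yf.Ticker(symbol).history(period="3mo", interval="1h")
    datasets[symbol] = df[["Open", "High", "Low", "Close", "Volume"]]
```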
Let's define our multi-backtest:
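(Again a sketch, reusing the hypothetical wrapper, strategy, and data from above; the cash and commission values are arbitrary.)

```python
bt = MultiBacktest(datasets, HighBreakout, cash=10_000, commission=.002)
stats = bt.run()
print(stats)  # one column of the familiar stats per symbol
```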
Which spits out our familiar stats, only with a column per dataset, which is pretty cool:
Now let's optimize for the best n_ma_window and n_previous_highs_window params:
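(The optimize call itself was lost in extraction; a sketch of what it might look like with the wrapper sketched above — the parameter ranges and the maximized metric are assumptions.)

```python
from backtesting.lib import plot_heatmaps

multi_heatmap = bt.optimize(
    n_ma_window=range(5, 50, 2),
    n_previous_highs_window=range(2, 20),
    maximize="Return [%]",
)

# Smash the per-dataset heatmaps down into a single, pessimistic one
combined = multi_heatmap.quantile(0.25)
plot_heatmaps(combined)
```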
Aside: you'll notice an interesting little bit in there, multi_heatmap.quantile(0.25), and that's how I'm smashing the multiple heatmaps down into one heatmap. You could swap in all sorts of different metrics like .mean() (for average results) or .min() (worst results) or .max() (best results). I found that the bottom 25th percentile was interestingly pessimistic, and interpreted it as meaning I want a score that 3/4 of the currencies I tested did better than.

Anyway, here's our 25th percentile graph.
Hovering around a little, it looks like 31, 5 is a good combo. Reasonably pessimistically, I could hope for +1.6% or better returns using those parameters.
Anyway, let me know if you'd like me to open a PR for it. We could call it MultiBacktest or something. It doesn't need to be quite as fanciful a name.