PERF: more realistic np datetime c benchmark #58165
Conversation
Thanks for the PR!
```diff
@@ -17,7 +17,8 @@ class TimeGetTimedeltaField:
     param_names = ["size", "field"]

     def setup(self, size, field):
-        arr = np.random.randint(0, 10, size=size, dtype="i8")
+        # 2 days in nanoseconds, scaled up to times e9 for runs with size=seconds
+        arr = np.random.randint(-2 * 86400 * 1_000_000_000, 0, size=size, dtype="i8")
```
Should we even be using random numbers in the benchmark? @DeaMariaLeon any thoughts on this?
If `from .pandas_vb_common import setup` is imported in this file, it should fix random number generation.
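For context, asv picks up a module-level `setup` function and runs it before each benchmark, and the one in `pandas_vb_common` seeds NumPy's global RNG. A minimal sketch of that convention (the concrete seed value here is an assumption, not necessarily what pandas uses):

```python
import numpy as np

# asv calls a module-level setup() before every benchmark run, so
# importing one that seeds the global RNG makes the "random" input
# data identical on every run.
def setup(*args, **kwargs):
    np.random.seed(1234)  # seed value assumed for illustration
```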
I would say there is a bit of leeway here. The timestamps that get parsed here should usually be sorted, so random inputs are actually not that realistic. Running the benchmark on my machine with prior sorting of the input `arr` gives 10-20% lower runtimes for the current main version and has a negligible impact with my new version. Running the benchmark with randomised inputs is more pessimistic and adds additional sensitivity to branch prediction issues in the code.
I think the most important thing is to cover the whole range from nanoseconds to days, randomised or not; otherwise the benchmark never exercises the great majority of the code, due to branch prediction.
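To make the two variants being compared concrete, here is a minimal sketch, assuming the same input range as the diff above (names like `arr_random` are illustrative, not from the PR):

```python
import numpy as np

SIZE = 10**6
NS_PER_DAY = 86_400 * 1_000_000_000  # nanoseconds per day

# randomised inputs: pessimistic, sensitive to branch prediction
arr_random = np.random.randint(-2 * NS_PER_DAY, 0, size=SIZE, dtype="i8")

# pre-sorted inputs: closer to typical already-sorted timestamp data
arr_sorted = np.sort(arr_random)
```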
Why are there only negative numbers on line 21, @dontgoto? I mean, the range of the generated numbers will go from -172800000000000 to 0. Is that what you meant?
@WillAyd it's probably obvious, but just in case: when the line `from .pandas_vb_common import setup` is added, we import a seed. With it, we generate the same numbers every time; it's not random any more.
To all: if the benchmark is going to be modified like that, wouldn't it be better to change its name? The new results won't have much to do with the historic ones. The historic results are used by conbench to detect regressions (it keeps the last 100 results).
> Why are there only negative numbers on line 21, @dontgoto? I mean, the range of the generated numbers will go from -172800000000000 to 0. Is that what you meant?
Exactly. Using negative numbers only is intentional here; `np_datetime.c` has additional logic that only gets triggered for negative inputs, and that logic is quite fickle performance-wise.
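As an illustration of why negative inputs take extra branches (this is a Python sketch of the usual correction pattern, not the actual C source): C integer division truncates toward zero, so splitting a negative epoch value into a unit count plus a non-negative remainder needs a fix-up branch that non-negative inputs never reach.

```python
def trunc_div(a: int, b: int) -> int:
    # C's integer division truncates toward zero; Python's // floors
    q = abs(a) // abs(b)
    return -q if (a < 0) != (b < 0) else q

def extract_unit(value: int, unit: int) -> tuple[int, int]:
    # split value into (quotient, non-negative remainder)
    div = trunc_div(value, unit)
    mod = value - div * unit
    if mod < 0:       # only reachable for negative inputs
        mod += unit
        div -= 1
    return div, mod

# extract_unit(5, 3)  -> (1, 2)
# extract_unit(-5, 3) -> (-2, 1): the correction branch fired
```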
> @WillAyd it's probably obvious, but just in case: when the line `from .pandas_vb_common import setup` is added, we import a seed. With it, we generate the same numbers every time; it's not random any more.
I like @mroeschke's suggestion of fixing the random numbers. I was doing my testing with fixed random numbers in the first place and it rules out weird edge cases due to ordering, but I only encountered those when using a very small input parameter range.
> To all: if the benchmark is going to be modified like that, wouldn't it be better to change its name? The new results won't have much to do with the historic ones. The historic results are used by conbench to detect regressions (it keeps the last 100 results).
I was wondering about the historical results in the linked issue. Just creating a new benchmark with the changes here seems to be a good solution. The benchmark itself has negligible runtime.
I pushed a change that keeps the old benchmark intact and instead introduces a new benchmark. I refrained from adding more benchmarks for the other cases (only positive values, random positive and negative values, ...) as I think they would only give marginal benefits in observability.
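One way to do that while leaving the old class and its result history untouched is to subclass it and only override `setup`. A sketch, assuming the existing `TimeGetTimedeltaField` class from the diff above; the new class name and the `i8data` attribute name are hypothetical, not necessarily what the PR uses:

```python
import numpy as np

class TimeGetTimedeltaFieldFullRange(TimeGetTimedeltaField):
    # hypothetical name; inherits the params and timing methods of the
    # existing benchmark and only swaps in the wider, negative range
    def setup(self, size, field):
        # negative values spanning nanoseconds up to two days
        self.i8data = np.random.randint(  # attribute name assumed
            -2 * 86_400 * 1_000_000_000, 0, size=size, dtype="i8"
        )
```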
Merged main and now CI passes again.
If I missed any suggestions, let me know; to me it seems I covered:
- Fixed the random numbers (to prevent potential RNG-related instabilities)
- Random numbers should be OK to use, since they are an easy way to cover code branches while not being a completely unrealistic usage of the function
- Created a new benchmark to keep the history of the current one
This pull request is stale because it has been open for thirty days with no activity. Please update and respond to this comment if you're still interested in working on this.
Thanks @dontgoto
See my comments in #57951. This change to the benchmark input range gives more coverage of the different logic branches in `np_datetime.c`. The benchmark should run about 1.8x as long (most likely a little bit machine-dependent) after merging this. Keeping this separate from my other PR #57988, which improves the `np_datetime.c` performance, to distinguish the benchmark impacts.
@WillAyd