PERF: more realistic np datetime c benchmark #58165


Conversation

@dontgoto (Contributor) commented Apr 5, 2024:

See my comments in #57951. This change to the benchmark's input range covers more of the logic branches in np_datetime.c. After merging, the benchmark should run about 1.8x as long (most likely somewhat machine dependent).

Keeping this separate from my other PR #57988, which improves np_datetime.c performance, to distinguish the benchmark impacts.

@WillAyd (Member) left a comment:


Thanks for the PR!

```diff
@@ -17,7 +17,8 @@ class TimeGetTimedeltaField:
     param_names = ["size", "field"]

     def setup(self, size, field):
-        arr = np.random.randint(0, 10, size=size, dtype="i8")
+        # 2 days in nanoseconds, scaled up to times e9 for runs with size=seconds
+        arr = np.random.randint(-2 * 86400 * 1_000_000_000, 0, size=size, dtype="i8")
```
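For context, the surrounding ASV benchmark class presumably looks roughly like the sketch below. Only `param_names` and the `setup` body come from the diff; the params list, the `self.i8data` attribute, and the `get_timedelta_field` call are assumptions about the rest of the file, shown only to make the change readable in isolation.

```python
import numpy as np

from pandas._libs.tslibs.fields import get_timedelta_field  # assumed import


class TimeGetTimedeltaField:
    # illustrative params; the real file defines its own sizes and fields
    params = [[10_000, 1_000_000], ["seconds", "microseconds", "nanoseconds"]]
    param_names = ["size", "field"]

    def setup(self, size, field):
        # 2 days in nanoseconds; only negative values, so np_datetime.c's
        # negative-input branches are exercised too
        arr = np.random.randint(-2 * 86400 * 1_000_000_000, 0, size=size, dtype="i8")
        self.i8data = arr

    def time_get_timedelta_field(self, size, field):
        # ASV times this method once per (size, field) combination
        get_timedelta_field(self.i8data, field)
```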
@WillAyd (Member) commented Apr 5, 2024:


Should we even be using random numbers in the benchmark? @DeaMariaLeon any thoughts on this?

@mroeschke (Member) commented:

If `from .pandas_vb_common import setup` is imported in this file, it should fix the random number generation.
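For reference, a minimal sketch of what importing such a shared setup hook amounts to, assuming pandas_vb_common seeds NumPy's global RNG (the module's actual contents may differ):

```python
import numpy as np


def setup(*args, **kwargs):
    # ASV picks up a module-level `setup` and runs it before each benchmark,
    # so pinning NumPy's global seed here makes every np.random.* draw
    # reproducible from run to run.
    np.random.seed(1234)
```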

@dontgoto (Contributor, Author) replied:

I would say there is a bit of leeway here. The timestamps parsed here would usually be sorted, so random inputs are actually not that realistic. Running the benchmark on my machine with the input arr pre-sorted gives 10-20% lower runtimes on current main and a negligible difference with my new version. Randomised inputs are more pessimistic and add extra sensitivity to branch-prediction effects in the code.

I think the most important thing is to cover the whole range from nanoseconds to days, randomised or not; otherwise, thanks to branch prediction, the benchmark never exercises the great majority of the code. (See the sketch below.)
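As an illustration of the two input regimes discussed above (a sketch, not part of the PR; the size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary fixed seed
TWO_DAYS_NS = 2 * 86400 * 1_000_000_000

# Values spanning the full -2 days .. 0 range in nanoseconds, so the
# conversion code hits its day/hour/minute/second/.../nanosecond branches
# instead of staying on a single fast path.
random_arr = rng.integers(-TWO_DAYS_NS, 0, size=1_000_000, dtype="i8")

# Pre-sorted variant: closer to real, ordered timestamp data and friendlier
# to the CPU's branch predictor (10-20% faster on main per the numbers above).
sorted_arr = np.sort(random_arr)
```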

@DeaMariaLeon (Member) commented Apr 6, 2024:

Why are there only negative numbers on line 21, @dontgoto? I mean, the range of the generated numbers will go from -172800000000000 to 0. Is that what you meant?

@WillAyd it's probably obvious, but just in case: when the line `from .pandas_vb_common import setup` is added, we import a seed. With it, we generate the same numbers every time, so it's not random any more.

To all: if the benchmark is going to be modified like that, wouldn't it be better to change its name? The new results won't have much to do with the historic ones, and the historic results are used by conbench to detect regressions (it keeps the last 100 results).

@dontgoto (Contributor, Author) commented Apr 7, 2024:

> Why are there only negative numbers on line 21, @dontgoto? I mean, the range of the generated numbers will go from -172800000000000 to 0. Is that what you meant?

Exactly. Using only negative numbers is intentional here: np_datetime.c has additional logic that only gets triggered for negative inputs, and that logic is quite fickle performance-wise (see the sketch below).
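To illustrate the kind of negative-only logic meant here (a hypothetical sketch, not the actual np_datetime.c source): C integer division truncates toward zero, so splitting a signed nanosecond count into fields needs a borrow branch that only fires for negative values.

```python
NS_PER_DAY = 86400 * 1_000_000_000


def trunc_div(a: int, b: int) -> int:
    # C integer division truncates toward zero (Python's // floors instead)
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b >= 0) else -q


def split_days_ns(value: int) -> tuple[int, int]:
    days = trunc_div(value, NS_PER_DAY)
    ns = value - days * NS_PER_DAY
    if ns < 0:  # this branch is only ever taken for negative inputs
        days -= 1
        ns += NS_PER_DAY  # borrow one day so the remainder is non-negative
    return days, ns


assert split_days_ns(-1) == (-1, NS_PER_DAY - 1)
assert split_days_ns(NS_PER_DAY + 5) == (1, 5)
```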

> @WillAyd it's probably obvious, but just in case: when the line `from .pandas_vb_common import setup` is added, we import a seed. With it, we generate the same numbers every time, so it's not random any more.

I like @mroeschke's suggestion of fixing the random numbers. I was doing my testing with fixed random numbers in the first place; it rules out weird edge cases due to ordering, though I only encountered those with a very small input parameter range.

> To all: if the benchmark is going to be modified like that, wouldn't it be better to change its name? The new results won't have much to do with the historic ones, and the historic results are used by conbench to detect regressions (it keeps the last 100 results).

I was wondering about the historical results in the linked issue. Just creating a new benchmark with the changes here seems like a good solution; the benchmark itself has negligible runtime.

I pushed a change that keeps the old benchmark intact and instead introduces a new benchmark (sketched below). I refrained from adding more benchmarks for the other cases (only positive values, random positive and negative values, ...) as I think they would give only marginal observability benefits.
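Schematically, the result looks like the following sketch. The new class's name is hypothetical (the merged PR may use a different one); the old benchmark keeps its name and body so its conbench history stays comparable.

```python
import numpy as np


class TimeGetTimedeltaField:
    # unchanged, so conbench's last-100-results history stays comparable
    param_names = ["size", "field"]

    def setup(self, size, field):
        arr = np.random.randint(0, 10, size=size, dtype="i8")
        self.i8data = arr


class TimeGetTimedeltaFieldNegative:
    # hypothetical name for the new benchmark added in this PR
    param_names = ["size", "field"]

    def setup(self, size, field):
        # 2 days in nanoseconds; full branch coverage incl. negative-input logic
        arr = np.random.randint(-2 * 86400 * 1_000_000_000, 0, size=size, dtype="i8")
        self.i8data = arr
```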

@dontgoto (Contributor, Author) commented:

Merged main and now CI passes again.

If I missed any suggestions, let me know; to me it seems I covered:

  • Fixed the random numbers (to prevent potential RNG-related instabilities)
  • Random numbers should be fine to use, since they are an easy way to cover code branches while not being a completely unrealistic usage of the function
  • Created a new benchmark to preserve the history of the current one

@github-actions (bot) commented:

This pull request is stale because it has been open for thirty days with no activity. Please update and respond to this comment if you're still interested in working on this.

@github-actions github-actions bot added the Stale label May 22, 2024
@mroeschke mroeschke added Benchmark Performance (ASV) benchmarks and removed Stale labels May 31, 2024
@mroeschke mroeschke added this to the 3.0 milestone May 31, 2024
@mroeschke mroeschke merged commit bce10b4 into pandas-dev:main May 31, 2024
50 checks passed
@mroeschke (Member) commented:

Thanks @dontgoto
