PERF: more realistic np datetime c benchmark #58165

Merged
Changes from 2 commits
3 changes: 2 additions & 1 deletion asv_bench/benchmarks/tslibs/fields.py
@@ -17,7 +17,8 @@ class TimeGetTimedeltaField:
    param_names = ["size", "field"]

    def setup(self, size, field):
-        arr = np.random.randint(0, 10, size=size, dtype="i8")
+        # 2 days in nanoseconds, scaled up to times e9 for runs with size=seconds
+        arr = np.random.randint(-2 * 86400 * 1_000_000_000, 0, size=size, dtype="i8")
Member
@WillAyd Apr 5, 2024

Should we even be using random numbers in the benchmark? @DeaMariaLeon any thoughts on this?

Member

If `from .pandas_vb_common import setup` is imported in this file, it should fix the random number generation.
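
For context, a minimal sketch of the mechanism behind this suggestion, assuming pandas_vb_common defines a module-level setup hook that seeds NumPy's global RNG (the seed value shown is illustrative):

```python
# asv_bench/benchmarks/pandas_vb_common.py (sketch of the relevant part)
import numpy as np


def setup(*args, **kwargs):
    # ASV runs this module-level hook before every benchmark in a file that
    # imports it, so the np.random.randint(...) call in
    # TimeGetTimedeltaField.setup becomes deterministic across runs.
    np.random.seed(1234)  # illustrative seed


# The benchmark file would then only need the (otherwise unused) import:
# from .pandas_vb_common import setup  # noqa: F401
```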

Contributor Author

I would say there is a bit of leeway here. The timestamps that get parsed here should usually be sorted, so random inputs are actually not that realistic. Running the benchmark on my machine with prior sorting of the input arr gives 10-20% lower runtimes for the current main version and a negligible impact with my new version. Running the benchmark with randomised inputs is more pessimistic and adds additional sensitivity to branch prediction issues in the code.

I think the most important thing is to cover the whole range from nanoseconds to days, randomised or not; otherwise, due to branch prediction, the benchmark never exercises the great majority of the code.
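
As an illustration of the sorted-versus-random comparison described above, a small standalone sketch (array size, seed, and variable names are illustrative, not taken from the benchmark suite) that builds an i8 input spanning the full nanoseconds-to-days range in both orderings:

```python
import numpy as np

size = 1_000_000
two_days_ns = 2 * 86400 * 1_000_000_000  # upper end of the range, in nanoseconds

rng = np.random.default_rng(1234)
random_order = rng.integers(-two_days_ns, 0, size=size, dtype="i8")
sorted_order = np.sort(random_order)  # closer to already-sorted, parsed timestamps

# Feeding sorted_order vs random_order into the benchmarked routine is how the
# 10-20% difference mentioned above can be probed; the random variant is the
# more pessimistic, branch-prediction-sensitive case.
```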

Member
@DeaMariaLeon Apr 6, 2024

Why are there only negative numbers on line 21, @dontgoto? I mean, the range of the generated numbers will go from -172800000000000 to 0. Is that what you meant?

@WillAyd it's probably obvious, but just in case: when the line `from .pandas_vb_common import setup` is added, we import a seed. With it, we generate the same numbers every time; they're not random any more.

To all: If the benchmark is going to be modified like that, wouldn't it be better to change its name? The new results won't have much to do with the historic ones. The historic results are used by conbench to detect regressions (it keeps the last 100 results).

Contributor Author
@dontgoto Apr 7, 2024

> Why are there only negative numbers on line 21, @dontgoto? I mean, the range of the generated numbers will go from -172800000000000 to 0. Is that what you meant?

Exactly. Using only negative numbers is intentional here; np_datetime.c has additional logic that only gets triggered for negative inputs, and that logic is quite fickle performance-wise.
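
For reference, the bound quoted above matches the expression in the diff; a quick check of the arithmetic:

```python
# -2 days expressed in nanoseconds: 2 * 86400 seconds/day * 1e9 ns/second
assert -2 * 86400 * 1_000_000_000 == -172_800_000_000_000
```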

> @WillAyd it's probably obvious, but just in case: when the line `from .pandas_vb_common import setup` is added, we import a seed. With it, we generate the same numbers every time; they're not random any more.

I like @mroeschke's suggestion of fixing the random numbers. I was doing my testing with fixed random numbers in the first place and it rules out weird edge cases due to ordering, but I only encountered those when using a very small input parameter range.

> To all: If the benchmark is going to be modified like that, wouldn't it be better to change its name? The new results won't have much to do with the historic ones. The historic results are used by conbench to detect regressions (it keeps the last 100 results).

I was wondering about the historical results in the linked issue. Just creating a new benchmark with the changes here seems to be a good solution. The benchmark itself has negligible runtime.

I pushed a change that keeps the old benchmark intact and instead introduces a new one. I refrained from adding more benchmarks for the other cases (only positive values, random positive and negative values, ...) as I think they would only give marginal benefits in observability.
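
A rough sketch of what such a separate benchmark could look like; the class name TimeGetTimedeltaFieldNegative, the parameter grid, and the assumption that it mirrors the existing benchmark's call to get_timedelta_field are illustrative, not necessarily what the pushed commit contains:

```python
import numpy as np

# assumed to match the existing import at the top of fields.py
from pandas._libs.tslibs.fields import get_timedelta_field


class TimeGetTimedeltaFieldNegative:  # hypothetical name for the new benchmark
    # illustrative parameter grid; the existing TimeGetTimedeltaField keeps its own
    params = [[10**6], ["seconds", "microseconds", "nanoseconds"]]
    param_names = ["size", "field"]

    def setup(self, size, field):
        # negative values only, spanning the full range from -2 days up to 0 ns,
        # so the sign-handling branches in np_datetime.c are actually exercised
        self.i8data = np.random.randint(
            -2 * 86400 * 1_000_000_000, 0, size=size, dtype="i8"
        )

    def time_get_timedelta_field(self, size, field):
        # assumed to mirror the call in the existing TimeGetTimedeltaField benchmark
        get_timedelta_field(self.i8data, field)
```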

Contributor Author

Merged main and now CI passes again.

If I missed any suggestions, let me know; to me it seems I covered:

  • Fixed the random numbers (to prevent potential RNG-related instabilities)
  • Random numbers should be OK to use since they are an easy way to cover code branches while not being a completely unrealistic usage of the function
  • Created a new benchmark to keep the history of the current one

        self.i8data = arr

    def time_get_timedelta_field(self, size, field):