This repository was archived by the owner on Apr 28, 2025. It is now read-only.

New benchmarks #220

Closed
@kpp

Description


For now, some PRs are blocked by the lack of good benchmarks (#169 and friends).

  • Shall we use criterion?
  • Could we just use the TSC to measure how long a single function takes over a small number N of runs on the same input? No need to: see https://bheisler.github.io/criterion.rs/book/faq.html#how-should-i-benchmark-small-functions
  • Do we really need to put benches in /crates/libm-bench? How about placing them in /benches/*.rs?
  • I don't know how to detect benchmark regressions in CI. Will maintainers check for regressions on their local machines, or is there a better way? There is: https://bheisler.github.io/criterion.rs/book/faq.html#how-should-i-run-criterionrs-benchmarks-in-a-ci-pipeline
  • Do we need benchmarks for average execution time? It depends; read the comments. Long story short: if a function takes roughly the same time for any input, we can use this kind of test to notice when an algorithm change suddenly makes some inputs take longer. However, we first need to identify which functions behave this way.
  • So ideally we would have, for each function, a set of inputs that gives a good picture of the function's performance characteristics and that we can use to compare different implementations. Such a set can grow over time.
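The "run a function a low N number of times on the same input" idea above can be sketched with nothing but the standard library (a minimal sketch; Criterion's `iter()` loop does the same thing much more robustly, with warmup and outlier analysis, which is why the FAQ recommends it). The `time_avg` helper and the inputs here are illustrative, not part of any existing crate:

```rust
use std::hint::black_box;
use std::time::Instant;

// Naive average-time measurement: call `f` on the same input N times
// and report nanoseconds per call. `black_box` keeps the compiler from
// optimizing the calls away.
fn time_avg<F: Fn(f64) -> f64>(f: F, input: f64, n: u32) -> f64 {
    let start = Instant::now();
    let mut sink = 0.0;
    for _ in 0..n {
        sink += f(black_box(input));
    }
    black_box(sink);
    start.elapsed().as_nanos() as f64 / n as f64
}

fn main() {
    // Hypothetical input set; a real one would grow to cover the
    // function's interesting ranges (subnormals, huge values, etc.).
    for &x in &[0.5, 2.0, 1e300] {
        let avg_ns = time_avg(f64::sqrt, x, 1_000_000);
        println!("sqrt({x:e}): {avg_ns:.2} ns/call");
    }
}
```

This is exactly the kind of measurement that is too noisy on its own for CI regression detection, which is the point of the Criterion FAQ links above.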
