
Benchmarking! #387

Open

Description

@theogf

Related to #386, I would like to open a discussion on evaluating the performance of basic functions.
Here are some facts and ideas.

Existing tools

  • PkgBenchmark.jl: A nice tool for creating a suite of benchmarks; it has functions for generating reports on performance variations, and can even produce a markdown report that could be posted on the PR (a minimal suite sketch follows this list)
  • NanoSoldier.jl: The tool used by Julia itself to evaluate the performance of the language. I don't believe it can be adapted easily to our setup, but I have not checked the details
  • Github action benchmark: Given a benchmark output, it will also create a report and can post comments directly
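
As a rough illustration of how adding a benchmark could look, here is a minimal sketch of a benchmark/benchmarks.jl file in the PkgBenchmark.jl convention (it looks for a top-level SUITE). The kernel choice, input sizes, and dimensionality below are placeholders, not a proposal:

```julia
# benchmark/benchmarks.jl -- minimal PkgBenchmark.jl suite sketch.
using BenchmarkTools
using KernelFunctions

const SUITE = BenchmarkGroup()
SUITE["kernelmatrix"] = BenchmarkGroup()

for n in (100, 1_000)  # placeholder input sizes
    x = ColVecs(randn(5, n))  # n five-dimensional points, column-major
    SUITE["kernelmatrix"]["SqExponentialKernel, n=$n"] =
        @benchmarkable kernelmatrix(SqExponentialKernel(), $x)
end
```

With such a file in place, PkgBenchmark.benchmarkpkg("KernelFunctions") runs the suite, and export_markdown turns the result into the kind of report that could be posted on a PR.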

Potential issues

  • Benchmarks are highly dependent on the machine used; if we run on GitHub-hosted runners, we might see large variance in the results depending on the time of day and so on (one mitigation is sketched after this list)
  • We cannot benchmark everything, which means we need to restrict ourselves to the most used functions, kernels, etc.
  • Adding benchmarks can be a lot of work; can we find a framework where adding new tests is smooth?
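
One way to mitigate the machine dependence is to always compare two revisions on the same runner and report ratios rather than absolute timings. PkgBenchmark.jl's judge does exactly this; the branch name below is a placeholder:

```julia
using PkgBenchmark

# Run the suite on both refs on the same machine and compare them;
# relative slowdowns are far more stable across machines than raw timings.
results = judge("KernelFunctions", "my-feature-branch", "master")
export_markdown("judgement.md", results)
```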

Other ideas

  • Not all PRs are performance-critical; we should be able to call whatever tool we use only when needed. Maybe run it every time on master, and at will for some PRs.
  • What do we want to benchmark? Only pairwise and kernelmatrix, or also the performance of their gradients? (A gradient benchmark sketch follows this list.)
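
If gradients are in scope, they can be benchmarked in the same framework by timing the pullback through an AD backend. A minimal sketch, assuming Zygote as the backend and treating the lengthscale as the differentiated parameter (both are assumptions, not a proposal):

```julia
using BenchmarkTools, KernelFunctions, Zygote

X = ColVecs(randn(5, 200))  # placeholder: 200 five-dimensional points

# A scalar objective, so the timing covers the full pullback through
# kernelmatrix and the kernel's lengthscale transform.
loss(ℓ) = sum(kernelmatrix(with_lengthscale(SqExponentialKernel(), ℓ), X))

@benchmark Zygote.gradient(loss, 1.0)
```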
