Benchmark your code easily with Tinybench, a simple, tiny and lightweight 10KB (2KB minified and gzipped) benchmarking library! You can run your benchmarks in multiple JavaScript runtimes. Tinybench is completely based on the Web APIs, with proper timing using `process.hrtime` or `performance.now`.
- Accurate and precise timing based on the environment
- Statistically analyzed latency and throughput values: standard deviation, margin of error, variance, percentiles, etc.
- Concurrency support
- `Event` and `EventTarget` compatible events
- No dependencies
In case you need more tiny libraries like tinypool or tinyspy, please consider submitting an RFC.
$ npm install -D tinybench
You can start benchmarking by instantiating the `Bench` class and adding benchmark tasks to it.
import { Bench } from 'tinybench'
const bench = new Bench({ name: 'simple benchmark', time: 100 })
bench
.add('faster task', () => {
console.log('I am faster')
})
.add('slower task', async () => {
await new Promise(resolve => setTimeout(resolve, 1)) // we wait 1ms :)
console.log('I am slower')
})
await bench.run()
console.log(bench.name)
console.table(bench.table())
// Output:
// simple benchmark
// ┌─────────┬───────────────┬────────────────────────────┬───────────────────────────┬──────────────────────┬─────────────────────┬─────────┐
// │ (index) │ Task name     │ Throughput average (ops/s) │ Throughput median (ops/s) │ Latency average (ns) │ Latency median (ns) │ Samples │
// ├─────────┼───────────────┼────────────────────────────┼───────────────────────────┼──────────────────────┼─────────────────────┼─────────┤
// │ 0       │ 'faster task' │ '102906 ± 0.89%'           │ '82217 ± 14'              │ '11909.14 ± 3.95%'   │ '12163.00 ± 2.00'   │ 8398    │
// │ 1       │ 'slower task' │ '988 ± 26.26%'             │ '710'                     │ '1379560.47 ± 6.72%' │ '1408552.00'        │ 73      │
// └─────────┴───────────────┴────────────────────────────┴───────────────────────────┴──────────────────────┴─────────────────────┴─────────┘
The `add` method accepts a task name and a task function, so it can benchmark it! This method returns a reference to the `Bench` instance, so it's possible to chain it to create another task for that instance.
Note that the task name should always be unique in an instance, because Tinybench stores the tasks based on their names in a `Map`.
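Because tasks are keyed by their names, a specific task can be looked up again after it has been added. A minimal sketch, assuming the `getTask` helper exposed on `Bench`:

```ts
// look up a previously added task by its unique name
// (returns undefined if no task with that name exists)
const fasterTask = bench.getTask('faster task')
console.log(fasterTask?.name)
```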
Also note that `tinybench` does not log any result by default. You can extract the relevant stats from `bench.tasks` or any other API after running the benchmark, and process them however you want.
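For example, a minimal sketch of reading the collected results directly from `bench.tasks` (the logging format here is illustrative):

```ts
// after bench.run() has completed, each task carries its statistics on `result`
for (const task of bench.tasks) {
  console.log(task.name, task.result)
}
```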
More usage examples can be found in the examples directory.
Both the `Task` and `Bench` classes extend the `EventTarget` object, so you can attach listeners to different types of events on each class instance using the universal `addEventListener` and `removeEventListener` methods.
// runs on each benchmark task's cycle
bench.addEventListener('cycle', (evt) => {
const task = evt.task!;
});
// runs only on this benchmark task's cycle
// (`task` here is a Task instance obtained from the bench)
task.addEventListener('cycle', (evt) => {
  const task = evt.task!;
});
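Listeners can be detached again with `removeEventListener`, as long as the same function reference is passed. A small sketch (the listener itself is illustrative):

```ts
const onCycle = () => {
  console.log('a task cycle finished')
}

bench.addEventListener('cycle', onCycle)
// ...later, when the listener is no longer needed
bench.removeEventListener('cycle', onCycle)
```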
If you want more accurate results for Node.js with `process.hrtime`, then import the `hrtimeNow` function from the library and pass it to the `Bench` options.
import { hrtimeNow } from 'tinybench'
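For example, a minimal sketch of wiring it up; this assumes the timing function is passed as the `now` option of the `Bench` constructor:

```ts
import { Bench, hrtimeNow } from 'tinybench'

// use process.hrtime-based timing instead of performance.now
const bench = new Bench({ name: 'simple benchmark', time: 100, now: hrtimeNow })
```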
It may make your benchmarks slower.
- When `mode` is set to `null` (default), concurrency is disabled.
- When `mode` is set to `'task'`, each task's iterations (calls of a task function) run concurrently.
- When `mode` is set to `'bench'`, different tasks within the bench run concurrently (concurrent cycles).

The mode is set through the `concurrency` property of the bench instance:
bench.threshold = 10 // The maximum number of concurrent tasks to run. Defaults to Number.POSITIVE_INFINITY.
bench.concurrency = 'task' // The concurrency mode to determine how tasks are run.
await bench.run()
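Putting it together, a minimal end-to-end sketch (the task names and dummy async workloads are illustrative):

```ts
import { Bench } from 'tinybench'

const bench = new Bench({ name: 'concurrent benchmark', time: 100 })

bench
  .add('task A', async () => {
    await new Promise(resolve => setTimeout(resolve, 1))
  })
  .add('task B', async () => {
    await new Promise(resolve => setTimeout(resolve, 1))
  })

bench.threshold = 4        // at most 4 concurrent calls at a time
bench.concurrency = 'task' // iterations of each task run concurrently

await bench.run()
console.table(bench.table())
```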
| Mohammad Bagher |
| --- |

| Uzlopak | poyoho |
| --- | --- |
Feel free to create issues/discussions and then PRs for the project!
Your sponsorship can make a huge difference in continuing our work in open source!