# Puppeteer Benchmark Tool

This project is a CLI to write, test, and benchmark versions of Puppeteer (and their respective Chrome binaries) against workloads you're interested in. By default, it comes with three basic test cases:

- PDF generation
- Screenshot generation
- Load/paint events

Tests are simple async functions that use the `perf_hooks` library to capture the events you're interested in. Feel free to fork and add your own!
```shell
# Install
$ npm i # add -g flag to install globally
$ npm run build

# Set up puppeteer versions
$ npx pptr-benchmark prepare 13 15 latest
# or install all versions
$ npx pptr-benchmark prepare-suite

# Run the tests and output the results to a JSON file
$ npx pptr-benchmark run test-cases/generate-pdf.js -r 2 --puppeteer-versions 13 15 latest --out results.json
$ npx pptr-benchmark run all -r 5 \
    --puppeteer-versions 13 15 latest \
    --case-url "http://example.com" \
    --out results.json
```
Or prepare and run the complete suite in one go:

```shell
npm run prepare-suite
npm run suite
```
## `run <case>`

Runs a specific benchmark test. `<case>` must be one of: `all` | `generate-pdf.js` | `make-screenshot.js` | `paint-events.js`.

```shell
# Will run PDF tests
npx pptr-benchmark run test-cases/generate-pdf.js
```
### `-r`

Number of test executions. Defaults to `5`.

```shell
# Will average and aggregate results based on 2 runs
npx pptr-benchmark run all -r 2
```
### `--puppeteer-versions`

Tests a list of Puppeteer versions, separated by commas or spaces. Defaults to `latest`.

```shell
# Will run all tests on pptr v13, v15 and latest
npx pptr-benchmark run all --puppeteer-versions 13 15 latest
```
### `--case-url`

URL that the test cases will navigate to. Defaults to `http://example.com/`.

```shell
# Will run all tests navigating to https://www.browserless.io/
npx pptr-benchmark run all --case-url https://www.browserless.io/
```
### `--out`

Writes JSON results to a file.

```shell
# Will run all tests and write the results to ./file.json
npx pptr-benchmark run all --out ./file.json
```
### `--temp-dir`

Writes test PDFs and screenshots to a directory.

```shell
# Will run all tests and save the PDF files to ./cache
npx pptr-benchmark run all --temp-dir ./cache
```
### `--generate-report`

Exports results as an HTML report.

```shell
# Will run all tests on pptr v16, v18 and latest and generate a report
npx pptr-benchmark run all --puppeteer-versions 16 18 latest --generate-report
```
Min and max values are highlighted in the HTML report.

```shell
# Will run all tests on pptr v16, v18 and latest and generate a highlighted report
npx pptr-benchmark run all --puppeteer-versions 16 18 latest --generate-report
```
### `--report-dir`

Writes the final HTML report to a directory.

```shell
# Will run all tests on pptr v16, v18 and latest and generate a highlighted report in ./cache
npx pptr-benchmark run all --puppeteer-versions 16 18 latest --generate-report --report-dir ./cache
```
### `--silent`

Turns off console output.

```shell
# Will only print the results table
npx pptr-benchmark run all --silent
```
## `prepare <versions>`

Downloads versions of Puppeteer.

```shell
# Will download pptr v13, v15 and latest
npx pptr-benchmark prepare 13 15 latest
```
## `prepare-suite`

Downloads and installs all major Puppeteer versions.

```shell
# Will download all major versions from v1 to latest
npx pptr-benchmark prepare-suite
```
## `suite`

Runs all tests with default arguments on all available Puppeteer versions. The easiest and most complete benchmark test.

```shell
# Will run all tests on every available puppeteer version
npx pptr-benchmark suite
```
You can read more about why we did this in our blog. The TL;DR is that we (browserless) heard a lot from our users about performance changes from version-to-version of Chrome, and wanted a way to programmatically see if a version change would introduce new latencies.
This CLI was born from that curiosity, and we wanted to open-source it to the community so that you can write and run your own performance benchmarks to track KPIs that you care about.
Eventually we'll publish the results of this suite to a static webpage that you can check on. This should give you a good sense of what to expect when upgrading. We'll work on adding newer tests as time goes on, but we found enough value in these initial few that we wanted to see what the community thought!
First, thanks! Please submit a PR and we'll follow up with you. If you have a bigger feature or want to do something drastic, please open an issue describing what you want to do first, so you don't do all that work before we've had a chance to discuss it.