Add performance tests #152


Merged 2 commits on May 18, 2022
3 changes: 3 additions & 0 deletions .github/workflows/testing.yml
@@ -63,3 +63,6 @@ jobs:
COVERALLS_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
make coveralls

- name: Check workability of benchmark tests
run: make bench-deps bench DURATION=1x COUNT=1
1 change: 1 addition & 0 deletions .gitignore
@@ -3,3 +3,4 @@
.idea/
work_dir*
.rocks
bench*
107 changes: 107 additions & 0 deletions CONTRIBUTING.md
@@ -45,6 +45,113 @@ and run it with the following flags:
golangci-lint run -E gofmt -D errcheck
```

## Benchmarking

### Quick start

To run all benchmark tests from the current branch, run:

```bash
make bench
```

To measure the performance difference between master and the current branch, run:

```bash
make bench-diff
```

Note: `benchstat` must be in `PATH`. If it is not, run:

```bash
export PATH="/home/${USER}/go/bin:${PATH}"
```

or

```bash
export PATH="${HOME}/go/bin:${PATH}"
```

### Customize benchmarking

Before running benchmarks or measuring performance degradation, install the benchmark dependencies:
```bash
make bench-deps BENCH_PATH=custom_path
```

Use the variable `BENCH_PATH` to specify the directory for benchmark artifacts.
It is set to `bench-dir` by default.

To run benchmark tests, call:
```bash
make bench DURATION=5s COUNT=7 BENCH_PATH=custom_path TEST_PATH=.
```

Use the variable `DURATION` to set the duration of the performance tests. It maps to the
Go testing [flag](https://pkg.go.dev/cmd/go#hdr-Testing_flags) `-benchtime` and accepts
either a duration in seconds (e.g., `5s`) or a count of iterations (e.g., `1000x`).
It is set to `3s` by default.

Use the variable `COUNT` to control the number of benchmark runs for each test.
It maps to the testing flag `-count` and is set to `5` by default.
Use higher values if the benchmark numbers are not stable.

Use the variable `TEST_PATH` to set the directory of the test files.
It is set to `./...` by default, so all benchmark tests in the project are run.

To measure performance degradation after changes in code, run:
```bash
make bench-diff BENCH_PATH=custom_path
```

Note: the variable `BENCH_PATH` is not intended to be used with absolute paths.

## Recommendations for achieving stable results

Before drawing any conclusions, verify that results are stable on the given host and estimate how large the noise is: run `make bench-diff` without any code changes, examine the report, and repeat several times.

Some suggestions for achieving the best results:

* Close all background applications, especially a web browser. Look at `top` (`htop`, `atop`, ...) and close anything that shows noticeable activity.
* Disable the cron daemon.
* Disable TurboBoost and set a fixed frequency.
* If you're using the `intel_pstate` frequency driver (it is usually the default):

Disable TurboBoost:

```shell
$ echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo
```

Set fixed frequency: it is not clear whether this is possible with `intel_pstate`.

* If you're using `acpi-cpufreq` driver:

Ensure you are actually not using `intel_pstate`:

```shell
$ grep -o 'intel_pstate=\w\+' /proc/cmdline
intel_pstate=disable
$ cpupower -c all frequency-info | grep driver:
driver: acpi-cpufreq
<...>
```

Disable TurboBoost:

```shell
$ echo 0 > /sys/devices/system/cpu/cpufreq/boost
```

Set fixed frequency:

```shell
$ cpupower -c all frequency-set -g userspace
$ cpupower -c all frequency-set -f 1.80GHz # adjust for your CPU
```


## Code review checklist

- Public API contains functions, variables, constants that are needed from
43 changes: 43 additions & 0 deletions Makefile
@@ -1,5 +1,18 @@
SHELL := /bin/bash
COVERAGE_FILE := coverage.out
MAKEFILE_PATH := $(abspath $(lastword $(MAKEFILE_LIST)))
PROJECT_DIR := $(patsubst %/,%,$(dir $(MAKEFILE_PATH)))
DURATION ?= 3s
COUNT ?= 5
BENCH_PATH ?= bench-dir
TEST_PATH ?= ${PROJECT_DIR}/...
BENCH_FILE := ${PROJECT_DIR}/${BENCH_PATH}/bench.txt
REFERENCE_FILE := ${PROJECT_DIR}/${BENCH_PATH}/reference.txt
BENCH_FILES := ${REFERENCE_FILE} ${BENCH_FILE}
BENCH_REFERENCE_REPO := ${BENCH_PATH}/go-tarantool
BENCH_OPTIONS := -bench=. -run=^Benchmark -benchmem -benchtime=${DURATION} -count=${COUNT}
GO_TARANTOOL_URL := https://github.com/tarantool/go-tarantool
GO_TARANTOOL_DIR := ${PROJECT_DIR}/${BENCH_PATH}/go-tarantool

.PHONY: clean
clean:
@@ -55,3 +68,33 @@ coverage:
coveralls: coverage
go get github.com/mattn/goveralls
goveralls -coverprofile=$(COVERAGE_FILE) -service=github

.PHONY: bench-deps
${BENCH_PATH} bench-deps:
@echo "Installing benchstat tool"
rm -rf ${BENCH_PATH}
mkdir ${BENCH_PATH}
go clean -testcache
cd ${BENCH_PATH} && git clone https://go.googlesource.com/perf && cd perf && go install ./cmd/benchstat
rm -rf ${BENCH_PATH}/perf

.PHONY: bench
${BENCH_FILE} bench: ${BENCH_PATH}
@echo "Running benchmark tests from the current branch"
go test ${TEST_PATH} ${BENCH_OPTIONS} 2>&1 \
| tee ${BENCH_FILE}
benchstat ${BENCH_FILE}

${GO_TARANTOOL_DIR}:
@echo "Cloning the repository into ${GO_TARANTOOL_DIR}"
[ ! -e ${GO_TARANTOOL_DIR} ] && git clone --depth=1 ${GO_TARANTOOL_URL} ${GO_TARANTOOL_DIR}

${REFERENCE_FILE}: ${GO_TARANTOOL_DIR}
@echo "Running benchmark tests from master to use the results in the bench-diff target"
cd ${GO_TARANTOOL_DIR} && git pull && go test ./... ${BENCH_OPTIONS} 2>&1 \
| tee ${REFERENCE_FILE}

bench-diff: ${BENCH_FILES}
@echo "Comparing performance between master and the current branch"
@echo "'old' is a version in master branch, 'new' is a version in a current branch"
benchstat ${BENCH_FILES} | grep -v pkg:
26 changes: 26 additions & 0 deletions config.lua
@@ -40,6 +40,31 @@ box.once("init", function()
})
st:truncate()

local s2 = box.schema.space.create('test_perf', {
id = 520,
temporary = true,
if_not_exists = true,
field_count = 3,
format = {
{name = "id", type = "unsigned"},
{name = "name", type = "string"},
{name = "arr1", type = "array"},
},
})
s2:create_index('primary', {type = 'tree', unique = true, parts = {1, 'unsigned'}, if_not_exists = true})
s2:create_index('secondary', {id = 5, type = 'tree', unique = false, parts = {2, 'string'}, if_not_exists = true})
local arr_data = {}
for i = 1,100 do
arr_data[i] = i
end
for i = 1,1000 do
s2:insert{
i,
'test_name',
arr_data,
}
end

--box.schema.user.grant('guest', 'read,write,execute', 'universe')
box.schema.func.create('box.info')
box.schema.func.create('simple_incr')
@@ -49,6 +74,7 @@ box.once("init", function()
box.schema.user.grant('test', 'execute', 'universe')
box.schema.user.grant('test', 'read,write', 'space', 'test')
box.schema.user.grant('test', 'read,write', 'space', 'schematest')
box.schema.user.grant('test', 'read,write', 'space', 'test_perf')
end)

local function func_name()
61 changes: 61 additions & 0 deletions tarantool_test.go
@@ -81,6 +81,7 @@ func BenchmarkClientSerial(b *testing.B) {
b.Errorf("No connection available")
}

b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err = conn.Select(spaceNo, indexNo, 0, 1, IterEq, []interface{}{uint(1111)})
if err != nil {
@@ -105,6 +106,7 @@ func BenchmarkClientSerialTyped(b *testing.B) {
}

var r []Tuple
b.ResetTimer()
for i := 0; i < b.N; i++ {
err = conn.SelectTyped(spaceNo, indexNo, 0, 1, IterEq, IntKey{1111}, &r)
if err != nil {
@@ -128,6 +130,7 @@ func BenchmarkClientFuture(b *testing.B) {
b.Error(err)
}

b.ResetTimer()
for i := 0; i < b.N; i += N {
var fs [N]*Future
for j := 0; j < N; j++ {
@@ -158,6 +161,7 @@ func BenchmarkClientFutureTyped(b *testing.B) {
b.Errorf("No connection available")
}

b.ResetTimer()
for i := 0; i < b.N; i += N {
var fs [N]*Future
for j := 0; j < N; j++ {
@@ -191,6 +195,7 @@ func BenchmarkClientFutureParallel(b *testing.B) {
b.Errorf("No connection available")
}

b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
exit := false
for !exit {
@@ -227,6 +232,7 @@ func BenchmarkClientFutureParallelTyped(b *testing.B) {
b.Fatal("No connection available")
}

b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
exit := false
for !exit {
@@ -266,6 +272,7 @@ func BenchmarkClientParallel(b *testing.B) {
b.Fatal("No connection available")
}

b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
_, err := conn.Select(spaceNo, indexNo, 0, 1, IterEq, []interface{}{uint(1111)})
@@ -307,6 +314,7 @@ func BenchmarkClientParallelMassive(b *testing.B) {
}
}()
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
wg.Add(1)
limit <- struct{}{}
@@ -344,6 +352,8 @@ func BenchmarkClientParallelMassiveUntyped(b *testing.B) {
}
}()
}

b.ResetTimer()
for i := 0; i < b.N; i++ {
wg.Add(1)
limit <- struct{}{}
@@ -352,6 +362,57 @@ func BenchmarkClientParallelMassiveUntyped(b *testing.B) {
close(limit)
}

func BenchmarkClientReplaceParallel(b *testing.B) {
conn, err := Connect(server, opts)
if err != nil {
b.Errorf("No connection available")
return
}
defer conn.Close()
spaceNo = 520

rSpaceNo, _, err := conn.Schema.ResolveSpaceIndex("test_perf", "secondary")
if err != nil {
b.Fatalf("Space is not resolved: %s", err.Error())
}

b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
_, err := conn.Replace(rSpaceNo, []interface{}{uint(1), "hello", []interface{}{}})
if err != nil {
b.Error(err)
}
}
})
}

func BenchmarkClientLargeSelectParallel(b *testing.B) {
conn, err := Connect(server, opts)
if err != nil {
b.Errorf("No connection available")
return
}
defer conn.Close()

schema := conn.Schema
rSpaceNo, rIndexNo, err := schema.ResolveSpaceIndex("test_perf", "secondary")
if err != nil {
b.Fatalf("symbolic space and index params not resolved")
}

offset, limit := uint32(0), uint32(1000)
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
_, err := conn.Select(rSpaceNo, rIndexNo, offset, limit, IterEq, []interface{}{"test_name"})
if err != nil {
b.Fatal(err)
}
}
})
}

///////////////////

func TestClient(t *testing.T) {