
Allow specifying number of iterations via --benchmark_min_time. #1525

Merged
35 commits merged on Feb 7, 2023
Changes from 14 commits
Commits (35)
04f53c4
Allow specifying number of iterations via --benchmark_min_time.
oontvoo Jan 11, 2023
7e2aeaa
forgot to change flag type to string
oontvoo Jan 11, 2023
0ba28c5
used tagged union instead of std::variant, which is not available pre…
oontvoo Jan 11, 2023
de4c454
update decl in benchmark_runner.h too
oontvoo Jan 11, 2023
fab9054
fixed errors
oontvoo Jan 11, 2023
6d98c38
refactor
oontvoo Jan 11, 2023
2d7206d
backward compat
oontvoo Jan 11, 2023
807322b
typo
oontvoo Jan 11, 2023
80c349f
use IterationCount type
oontvoo Jan 11, 2023
2432374
fixed test
oontvoo Jan 11, 2023
cacbc83
const_cast
oontvoo Jan 11, 2023
774dbae
ret type
oontvoo Jan 11, 2023
26455dc
remove extra _
oontvoo Jan 11, 2023
f64dc35
debug
oontvoo Jan 11, 2023
d145ff9
fixed bug from reporting that caused the new configs not to be includ…
oontvoo Jan 12, 2023
d7f8701
addressed review comments
oontvoo Jan 12, 2023
c37b2a1
restore unnecessary changes in test/BUILD
oontvoo Jan 12, 2023
48c7d82
fix float comparisons warnings from Release builds
oontvoo Jan 12, 2023
0d69ed0
clang format
oontvoo Jan 12, 2023
bce6413
fix visibility warning
oontvoo Jan 12, 2023
860b2b6
remove misc file
oontvoo Jan 12, 2023
3dc210d
removed backup files
oontvoo Jan 12, 2023
46b2e7b
addressed review comments
oontvoo Jan 13, 2023
7886a3c
fix shorten in warning
oontvoo Jan 13, 2023
9d17cb3
use suffix for existing min_time specs to silent warnings in tests
oontvoo Jan 13, 2023
35481c5
fix leaks
oontvoo Jan 13, 2023
2664a17
use default min-time value in flag decl for consistency
oontvoo Jan 17, 2023
e96b8c6
removed double kMinTimeDecl from benchmark.h
oontvoo Jan 17, 2023
f0c1285
dont need to preserve errno
oontvoo Jan 19, 2023
4a22d68
add death tests
oontvoo Jan 24, 2023
ee9f1c8
Add BENCHMARK_EXPORT to hopefully fix missing def errors
oontvoo Jan 24, 2023
245e160
only enable death tests in debug mode because bm_check is no-op in re…
oontvoo Jan 24, 2023
340c65b
guard death tests with additional support-check macros
oontvoo Jan 24, 2023
a902c7f
Add additional guard to prevent running in Release mode
oontvoo Jan 24, 2023
94a06c7
Merge branch 'main' into min_time
dmah42 Feb 7, 2023
3 changes: 2 additions & 1 deletion include/benchmark/benchmark.h
@@ -1081,11 +1081,12 @@ class BENCHMARK_EXPORT Benchmark {
Benchmark* MinWarmUpTime(double t);

// Specify the amount of iterations that should be run by this benchmark.
// This option overrides the `benchmark_min_time` flag.
// REQUIRES: 'n > 0' and `MinTime` has not been called on this benchmark.
//
// NOTE: This function should only be used when *exact* iteration control is
// needed and never to control or limit how long a benchmark runs, where
// `--benchmark_min_time=N` or `MinTime(...)` should be used instead.
// `--benchmark_min_time=<N>s` or `MinTime(...)` should be used instead.
Benchmark* Iterations(IterationCount n);

// Specify the amount of times to repeat this benchmark. This option overrides
19 changes: 14 additions & 5 deletions src/benchmark.cc
@@ -65,12 +65,21 @@ BM_DEFINE_bool(benchmark_list_tests, false);
// linked into the binary are run.
BM_DEFINE_string(benchmark_filter, "");

// Minimum number of seconds we should run benchmark before results are
// considered significant. For cpu-time based tests, this is the lower bound
// Specification of how long to run the benchmark.
//
// It can be either an exact number of iterations (specified as `<integer>x`),
// or a minimum number of seconds (specified as `<float>s`). If the latter
// format (i.e., min seconds) is used, the system may run the benchmark longer
// until the results are considered significant.
//
// For backward compatibility, the `s` suffix may be omitted, in which case,
// the specified number is interpreted as the number of seconds.
//
// For cpu-time based tests, this is the lower bound
// on the total cpu time used by all threads that make up the test. For
// real-time based tests, this is the lower bound on the elapsed time of the
// benchmark execution, regardless of number of threads.
BM_DEFINE_double(benchmark_min_time, 0.5);
BM_DEFINE_string(benchmark_min_time, "0.5s");

// Minimum number of seconds a benchmark should be run before results should be
// taken into account. This e.g can be necessary for benchmarks of code which
@@ -605,7 +614,7 @@ void ParseCommandLineFlags(int* argc, char** argv) {
if (ParseBoolFlag(argv[i], "benchmark_list_tests",
&FLAGS_benchmark_list_tests) ||
ParseStringFlag(argv[i], "benchmark_filter", &FLAGS_benchmark_filter) ||
ParseDoubleFlag(argv[i], "benchmark_min_time",
ParseStringFlag(argv[i], "benchmark_min_time",
&FLAGS_benchmark_min_time) ||
ParseDoubleFlag(argv[i], "benchmark_min_warmup_time",
&FLAGS_benchmark_min_warmup_time) ||
@@ -666,7 +675,7 @@ void PrintDefaultHelp() {
"benchmark"
" [--benchmark_list_tests={true|false}]\n"
" [--benchmark_filter=<regex>]\n"
" [--benchmark_min_time=<min_time>]\n"
" [--benchmark_min_time=`<integer>x` OR `<float>s` ]\n"
" [--benchmark_min_warmup_time=<min_warmup_time>]\n"
" [--benchmark_repetitions=<num_repetitions>]\n"
" [--benchmark_enable_random_interleaving={true|false}]\n"
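As the updated help text shows, the flag now accepts two formats plus a suffix-less legacy form. A minimal sketch of the accepted grammar (simplified from the PR's `ParseBenchMinTime`, not the exact implementation):

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Simplified model of the three accepted --benchmark_min_time spellings:
//   "<integer>x" -> run exactly that many iterations
//   "<float>s"   -> run for at least that many seconds
//   "<float>"    -> bare number, kept for backward compatibility (seconds)
struct MinTimeSpec {
  bool is_iters;   // true: read `iters`; false: read `seconds`
  long iters;
  double seconds;
};

MinTimeSpec ParseMinTimeFlag(const std::string& value) {
  MinTimeSpec spec{false, 0, 0.0};
  if (value.empty()) return spec;
  if (value.back() == 'x') {
    spec.is_iters = true;
    spec.iters = std::atol(value.substr(0, value.size() - 1).c_str());
  } else if (value.back() == 's') {
    spec.seconds = std::atof(value.substr(0, value.size() - 1).c_str());
  } else {
    // No suffix: interpret as seconds for backward compatibility.
    spec.seconds = std::atof(value.c_str());
  }
  return spec;
}
```

So `--benchmark_min_time=4x` requests exactly 4 iterations, while `--benchmark_min_time=2.5s` (or the legacy `2.5`) requests at least 2.5 seconds of running time.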
85 changes: 82 additions & 3 deletions src/benchmark_runner.cc
@@ -140,23 +140,102 @@ void RunInThread(const BenchmarkInstance* b, IterationCount iters,
manager->NotifyThreadComplete();
}

struct BenchTimeType {
enum { ITERS, TIME } tag;
union {
int iters;
double time;
};
};

BenchTimeType ParseBenchMinTime(const std::string& value) {
BenchTimeType ret;

if (value.empty()) {
ret.tag = BenchTimeType::TIME;
ret.time = 0.0;
return ret;
}

if (value.back() == 'x') {
std::string num_iters_str = value.substr(0, value.length() - 1);
int num_iters = std::atoi(num_iters_str.c_str());

// std::atoi doesn't provide useful error messages, so we do some
// sanity checks.
// Also check that the iters is not negative.
BM_CHECK(num_iters > 0 && !(num_iters == 0 && num_iters_str != "0"))
<< "Malformed iters value passed to --benchmark_min_time: `" << value
<< "`. Expected --benchmark_min_time=<integer>x.";

ret.tag = BenchTimeType::ITERS;
ret.iters = num_iters;
return ret;
}

std::string min_time_str;
if (value.back() != 's') {
BM_VLOG(2) << "Value passed to --benchmark_min_time should have a suffix. "
"Eg., `30s` for 30-seconds.";
Inline review (Collaborator): This makes it look like other suffixes (ms?) are supported
Reply (oontvoo, author): Yes, we could extend the suffixes support later, but for the first iteration, let's keep it simple. (And I'd like to get the interface in place first)

min_time_str = value;
} else {
min_time_str = value.substr(0, value.length() - 1);
}
double min_time = std::atof(min_time_str.c_str());

BM_CHECK(min_time > 0 &&
!(min_time == 0 && (min_time_str != "0" && min_time_str != "0.0")))
<< "Malformed seconds value passed to --benchmark_min_time: `" << value
<< "`. Expected --benchmark_min_time=<float>s.";

ret.tag = BenchTimeType::TIME;
ret.time = min_time;

return ret;
}

double GetMinTime(const benchmark::internal::BenchmarkInstance& b) {
if (!IsZero(b.min_time())) return b.min_time();
BenchTimeType iters_or_time = ParseBenchMinTime(FLAGS_benchmark_min_time);
// If the flag was used to specify number of iters, then return 0 for time.
if (iters_or_time.tag == BenchTimeType::ITERS) return 0;

return iters_or_time.time;
}

bool BenchMinTimeHasIters() {
return !FLAGS_benchmark_min_time.empty() &&
FLAGS_benchmark_min_time.back() == 'x';
}

IterationCount GetIters(const benchmark::internal::BenchmarkInstance& b) {
if (b.iterations() != 0) return b.iterations();

// We've already checked that this flag is currently used to pass
// iters but do a sanity check anyway.
BenchTimeType parsed = ParseBenchMinTime(FLAGS_benchmark_min_time);
assert(parsed.tag == BenchTimeType::ITERS);
return parsed.iters;
}

} // end namespace

BenchmarkRunner::BenchmarkRunner(
const benchmark::internal::BenchmarkInstance& b_,
BenchmarkReporter::PerFamilyRunReports* reports_for_family_)
: b(b_),
reports_for_family(reports_for_family_),
min_time(!IsZero(b.min_time()) ? b.min_time() : FLAGS_benchmark_min_time),
min_time(GetMinTime(b_)),
min_warmup_time((!IsZero(b.min_time()) && b.min_warmup_time() > 0.0)
? b.min_warmup_time()
: FLAGS_benchmark_min_warmup_time),
warmup_done(!(min_warmup_time > 0.0)),
repeats(b.repetitions() != 0 ? b.repetitions()
: FLAGS_benchmark_repetitions),
has_explicit_iteration_count(b.iterations() != 0),
has_explicit_iteration_count(b.iterations() != 0 ||
BenchMinTimeHasIters()),
pool(b.threads() - 1),
iters(has_explicit_iteration_count ? b.iterations() : 1),
iters(has_explicit_iteration_count ? GetIters(b_) : 1),
perf_counters_measurement(StrSplit(FLAGS_benchmark_perf_counters, ',')),
perf_counters_measurement_ptr(perf_counters_measurement.IsValid()
? &perf_counters_measurement
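The parser above relies on `std::atoi`/`std::atof`, which (as the in-diff comments note) report no errors, so the code follows up with `BM_CHECK` sanity checks. As a hedged alternative, `std::strtod` exposes the parse-stop position and range errors directly; this sketch is an illustration of that stricter option, not what the PR does:

```cpp
#include <cerrno>
#include <cstdlib>
#include <string>

// Parse a non-negative seconds value, rejecting empty input, trailing
// garbage ("1.2abc"), negative values, and out-of-range values.
bool ParseSeconds(const std::string& s, double* out) {
  if (s.empty()) return false;
  errno = 0;
  char* end = nullptr;
  const double v = std::strtod(s.c_str(), &end);
  if (errno == ERANGE) return false;              // overflow/underflow
  if (end != s.c_str() + s.size()) return false;  // didn't consume everything
  if (v < 0.0) return false;                      // negative min time is meaningless
  *out = v;
  return true;
}
```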
2 changes: 1 addition & 1 deletion src/benchmark_runner.h
@@ -25,7 +25,7 @@

namespace benchmark {

BM_DECLARE_double(benchmark_min_time);
BM_DECLARE_string(benchmark_min_time);
BM_DECLARE_double(benchmark_min_warmup_time);
BM_DECLARE_int32(benchmark_repetitions);
BM_DECLARE_bool(benchmark_report_aggregates_only);
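The header declaration must switch from `BM_DECLARE_double` to `BM_DECLARE_string` to stay in sync with the `BM_DEFINE_string` in benchmark.cc. A hedged sketch of how such declare/define macro pairs typically expand (names here are invented for illustration; benchmark's real macros live in its internal commandlineflags header and also apply export attributes):

```cpp
#include <string>

// MY_DECLARE_string goes in a header: it only announces the flag variable.
#define MY_DECLARE_string(name) extern std::string FLAG_##name;
// MY_DEFINE_string goes in exactly one .cc file: it allocates and
// initializes the variable. Declaration and definition types must match,
// which is why both sides of this PR change from double to string.
#define MY_DEFINE_string(name, default_val) std::string FLAG_##name = default_val;

MY_DECLARE_string(benchmark_min_time)           // header side
MY_DEFINE_string(benchmark_min_time, "0.5s")    // implementation side
```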
25 changes: 14 additions & 11 deletions test/BUILD
@@ -25,14 +25,17 @@ PER_SRC_COPTS = {
"donotoptimize_test.cc": ["-O3"],
}

TEST_ARGS = ["--benchmark_min_time=0.01"]
TEST_ARGS = ["--benchmark_min_time=0.01s"]

PER_SRC_TEST_ARGS = ({
PER_SRC_TEST_ARGS = {
"user_counters_tabular_test.cc": ["--benchmark_counters_tabular=true"],
"repetitions_test.cc": [" --benchmark_repetitions=3"],
"spec_arg_test.cc" : ["--benchmark_filter=BM_NotChosen"],
"spec_arg_verbosity_test.cc" : ["--v=42"],
})
"spec_arg_test.cc": ["--benchmark_filter=BM_NotChosen"],
"spec_arg_verbosity_test.cc": ["--v=42"],
"benchmark_min_time_flag_iters_test.cc": ["--benchmark_min_time=4x"],
# Backward compat test.
"benchmark_min_time_flag_time_test.cc": ["--benchmark_min_time=2"],
}

cc_library(
name = "output_test_helper",
copts = select({
"//:windows": [],
"//conditions:default": TEST_COPTS,
}) + PER_SRC_COPTS.get(test_src, []) ,
}) + PER_SRC_COPTS.get(test_src, []),
deps = [
":output_test_helper",
"//:benchmark",
"//:benchmark_internal_headers",
"@com_google_googletest//:gtest",
"@com_google_googletest//:gtest_main",
]
],
# FIXME: Add support for assembly tests to bazel.
# See Issue #556
# https://github.com/google/benchmark/issues/556
size = "small",
srcs = ["cxx03_test.cc"],
copts = TEST_COPTS + ["-std=c++03"],
target_compatible_with = select({
"//:windows": ["@platforms//:incompatible"],
"//conditions:default": [],
}),
deps = [
":output_test_helper",
"//:benchmark",
"//:benchmark_internal_headers",
"@com_google_googletest//:gtest",
"@com_google_googletest//:gtest_main",
],
target_compatible_with = select({
"//:windows": ["@platforms//:incompatible"],
"//conditions:default": [],
})
)

cc_test(
65 changes: 65 additions & 0 deletions test/benchmark_min_time_flag_iters_test.cc
@@ -0,0 +1,65 @@
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <string>
#include <vector>

#include "benchmark/benchmark.h"

// Tests that we can specify the number of iterations with
// --benchmark_min_time=<NUM>x.
namespace {

class TestReporter : public benchmark::ConsoleReporter {
public:
virtual bool ReportContext(const Context& context) BENCHMARK_OVERRIDE {
return ConsoleReporter::ReportContext(context);
};

virtual void ReportRuns(const std::vector<Run>& report) BENCHMARK_OVERRIDE {
assert(report.size() == 1);
iter_nums_.push_back(report[0].iterations);
ConsoleReporter::ReportRuns(report);
};

TestReporter() {}

virtual ~TestReporter() {}

const std::vector<int>& GetIters() const { return iter_nums_; }

private:
std::vector<int> iter_nums_;
};

} // end namespace

static void BM_MyBench(benchmark::State& state) {
for (auto s : state) {
}
}
BENCHMARK(BM_MyBench);

int main(int argc, char** argv) {
// Verify that argv specify --benchmark_min_time=4x
bool found = false;
for (int i = 0; i < argc; ++i) {
if (strcmp("--benchmark_min_time=4x", argv[i]) == 0) {
found = true;
break;
}
}
assert(found);
benchmark::Initialize(&argc, argv);

TestReporter test_reporter;
const size_t returned_count =
benchmark::RunSpecifiedBenchmarks(&test_reporter, "BM_MyBench");
assert(returned_count == 1);

// Check the executed iters.
const std::vector<int> iters = test_reporter.GetIters();
assert(!iters.empty() && iters[0] == 4);
return 0;
}
82 changes: 82 additions & 0 deletions test/benchmark_min_time_flag_time_test.cc
@@ -0,0 +1,82 @@
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <string>
#include <vector>

#include "benchmark/benchmark.h"

// Tests that we can specify the min time with
// --benchmark_min_time=<NUM> (no suffix needed) OR
// --benchmark_min_time=<NUM>s
namespace {

class TestReporter : public benchmark::ConsoleReporter {
public:
virtual bool ReportContext(const Context& context) BENCHMARK_OVERRIDE {
return ConsoleReporter::ReportContext(context);
};

virtual void ReportRuns(const std::vector<Run>& report) BENCHMARK_OVERRIDE {
assert(report.size() == 1);
min_times_.push_back(report[0].run_name.min_time);
ConsoleReporter::ReportRuns(report);
};

TestReporter() {}

virtual ~TestReporter() {}

const std::vector<std::string>& GetMinTimes() const { return min_times_; }

private:
std::vector<std::string> min_times_;
};

void DoTestHelper(int* argc, const char** argv, const std::string& expected) {
benchmark::Initialize(argc, const_cast<char**>(argv));

TestReporter test_reporter;
const size_t returned_count =
benchmark::RunSpecifiedBenchmarks(&test_reporter, "BM_MyBench");
assert(returned_count == 1);

// Check the min_time
const std::vector<std::string>& min_times = test_reporter.GetMinTimes();
if (min_times.empty()) std::cout << "**** min_times empty\n";
else {
std::cout << " *** EXPECTED = " << expected << "\n";
for (const std::string& v : min_times)
std::cout << " ** v = " << v << "\n";
}
assert(!min_times.empty() && min_times[0] == expected);
}

} // end namespace

static void BM_MyBench(benchmark::State& state) {
for (auto s : state) {
}
}
BENCHMARK(BM_MyBench);

int main(int argc, char** argv) {
// Make a fake argv and append the new --benchmark_min_time=<foo> to it.
int fake_argc = argc + 1;
const char** fake_argv = new const char*[fake_argc];

for (int i = 0; i < argc; ++i) fake_argv[i] = argv[i];

const char* no_suffix = "--benchmark_min_time=4.0";
const char* with_suffix = "--benchmark_min_time=4.0s";
std::string expected = "min_time:4.0s";

fake_argv[argc] = no_suffix;
DoTestHelper(&fake_argc, fake_argv, expected);

fake_argv[argc] = with_suffix;
DoTestHelper(&fake_argc, fake_argv, expected);

return 0;
}
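Taken together, the runner changes in this PR imply a precedence order among the four ways to control how long a benchmark runs. A hypothetical summary (the `Resolve` name and signature are invented for illustration; the real logic is spread across `GetMinTime()`, `GetIters()`, and the `BenchmarkRunner` constructor):

```cpp
#include <cstdlib>
#include <string>

struct RunBudget {
  bool use_iters;  // true: run exactly `iters`; false: run for >= `seconds`
  long iters;
  double seconds;
};

// Precedence, highest first:
//   1. per-benchmark Iterations(n)
//   2. --benchmark_min_time=<N>x
//   3. per-benchmark MinTime(t)
//   4. --benchmark_min_time=<T>s (or bare <T>)
RunBudget Resolve(long per_bench_iters, double per_bench_min_time,
                  const std::string& flag) {
  if (per_bench_iters != 0) return {true, per_bench_iters, 0.0};
  if (!flag.empty() && flag.back() == 'x')
    return {true, std::atol(flag.substr(0, flag.size() - 1).c_str()), 0.0};
  if (per_bench_min_time > 0.0) return {false, 0, per_bench_min_time};
  if (flag.empty()) return {false, 0, 0.0};
  const std::string num =
      flag.back() == 's' ? flag.substr(0, flag.size() - 1) : flag;
  return {false, 0, std::atof(num.c_str())};
}
```

For example, a benchmark registered with `Iterations(10)` ignores the flag entirely, while a `4x` flag value overrides a benchmark-level `MinTime(2.0)`.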