generation of real benchmark functions for benchmarking v2 (paritytech#13224)

* function generation with _name working, need to modify signature

* WIP

* support custom BenchmarkResult<T> type

* full support for BenchmarkResult<T> on benchmark function defs

* support () return type for benchmark function defs that don't use ?

* uncomment

* fix where clause handling

* fix benchmark function call bodies

* proper parsing of return type

* add UI tests for bad return type

* fix detection of missing last_stmt with defined return type

* UI tests covering missing last_stmt

* properly detect and complain about empty benchmark function defs

* fix missing Comma in Result<T, BenchmarkError> parsing + test

* add additional UI test

* allow complex path for BenchmarkResult and BenchmarkError in fn defs

* add UI tests covering complex path for BenchmarkResult, BenchmarkError

* retain doc comments and attributes

* also add attributes to struct

* add docs for benchmark function definition support

* fix imports on benchmark example

* fix issue with unused variables in extrinsic call fn def

* fix up docs

* remove support for v2::BenchmarkResult because it was confusing

* fix typo

* remove ability to use custom T for Result<T, BenchmarkError> in v2

* use missing call error instead of empty_fn()

* remove unneeded match statement

* Add a proper QED

Co-authored-by: Keith Yeung <kungfukeith11@gmail.com>

* fix other QED

Co-authored-by: Keith Yeung <kungfukeith11@gmail.com>

* cargo fmt

* add an explicit error for non TypePath as return type

* tweak error warning and add a UI test for non TypePath return

* remove comment

* add docs about T and I generic params

* improve docs referring to section "below"

* pull out return type checking logic into its own function

* pull out params parsing into its own function

* pull out call_def parsing into its own function

* add doc comment for missing_call()

* replace spaces with tabs

* add a result-based example to the benchmarking examples

---------

Co-authored-by: Keith Yeung <kungfukeith11@gmail.com>
2 people authored and ukint-vs committed Apr 10, 2023
1 parent 2104709 commit fa4be67
Showing 26 changed files with 601 additions and 95 deletions.
frame/balances/src/benchmarking.rs (4 changes: 3 additions & 1 deletion)
@@ -225,7 +225,7 @@ mod benchmarks {
}

#[benchmark]
fn force_unreserve() {
fn force_unreserve() -> Result<(), BenchmarkError> {
let user: T::AccountId = account("user", 0, SEED);
let user_lookup = T::Lookup::unlookup(user.clone());

@@ -244,6 +244,8 @@

assert!(Balances::<T, I>::reserved_balance(&user).is_zero());
assert_eq!(Balances::<T, I>::free_balance(&user), balance);

Ok(())
}

impl_benchmark_test_suite! {
frame/benchmarking/src/lib.rs (108 changes: 91 additions & 17 deletions)
@@ -114,21 +114,22 @@ pub use v1::*;
/// Within a `#[benchmarks]` or `#[instance_benchmarks]` module, you can define individual
/// benchmarks using the `#[benchmark]` attribute, as shown in the example above.
///
/// The `#[benchmark]` attribute expects a function definition with a blank return type and
/// zero or more arguments whose names are valid
/// [BenchmarkParameter](`crate::BenchmarkParameter`) parameters, such as `x`, `y`, `a`, `b`,
/// etc., and whose param types must implement [ParamRange](`v2::ParamRange`). At the moment
/// the only valid type that implements [ParamRange](`v2::ParamRange`) is
/// [Linear](`v2::Linear`).
///
/// The valid syntax for defining a [Linear](`v2::Linear`)is `Linear<A, B>` where `A`, and `B`
/// The `#[benchmark]` attribute expects a function definition with a blank return type (or a
/// return type compatible with `Result<(), BenchmarkError>`, as discussed below) and zero or
/// more arguments whose names are valid [BenchmarkParameter](`crate::BenchmarkParameter`)
/// parameters, such as `x`, `y`, `a`, `b`, etc., and whose param types must implement
/// [ParamRange](`v2::ParamRange`). At the moment the only valid type that implements
/// [ParamRange](`v2::ParamRange`) is [Linear](`v2::Linear`).
///
/// The valid syntax for defining a [Linear](`v2::Linear`) is `Linear<A, B>` where `A`, and `B`
/// are valid integer literals (that fit in a `u32`), such that `B` >= `A`.
///
/// Note that the benchmark function definition does not actually expand as a function
/// definition, but rather is used to automatically create a number of impls and structs
/// required by the benchmarking engine. For this reason, the visibility of the function
/// definition as well as the return type are not used for any purpose and are discarded by the
/// expansion code.
/// Anywhere within a benchmark function you may use the generic `T: Config` parameter as well
/// as `I` in the case of an `#[instance_benchmarks]` module. You should not add these to the
/// function signature as this will be handled automatically for you based on whether this is a
/// `#[benchmarks]` or `#[instance_benchmarks]` module and whatever [where clause](#where-clause)
/// you have defined for the module. You should not manually add any generics to the
/// signature of your benchmark function.
///
/// Also note that the `// setup code` and `// verification code` comments shown above are not
/// required and are included simply for demonstration purposes.
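///
/// For illustration, a minimal benchmark taking a single `Linear` parameter might look roughly
/// like the sketch below. The call `do_something` and the storage item `Something` are
/// hypothetical names used only for this example:
///
/// ```ignore
/// #[benchmark]
/// fn do_something_bench(x: Linear<1, 100>) {
/// 	// setup code
/// 	let caller: T::AccountId = whitelisted_caller();
///
/// 	#[extrinsic_call]
/// 	do_something(RawOrigin::Signed(caller), x);
///
/// 	// verification code
/// 	assert_eq!(Something::<T>::get(), Some(x));
/// }
/// ```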
@@ -189,10 +190,10 @@ pub use v1::*;
///
/// #### `skip_meta`
///
/// Specifies that the benchmarking framework should not analyze the storage keys that
/// Specifies that the benchmarking framework should not analyze the storage keys that the
/// benchmarked code read or wrote. This is useful to suppress the prints in the form of unknown
/// 0x… in case a storage key that does not have metadata. Note that this skips the analysis
/// of all accesses, not just ones without metadata.
/// 0x… in case a storage key that does not have metadata. Note that this skips the analysis of
/// all accesses, not just ones without metadata.
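///
/// For example, a benchmark whose setup writes to storage keys that have no metadata could be
/// annotated roughly as follows (a sketch; `set_opaque_data` is a hypothetical call):
///
/// ```ignore
/// #[benchmark(skip_meta)]
/// fn set_opaque_data() {
/// 	// setup code that touches storage keys without metadata ...
///
/// 	#[extrinsic_call]
/// 	set_opaque_data(RawOrigin::Root, vec![0u8; 32]);
/// }
/// ```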
///
/// ## Where Clause
///
@@ -231,6 +232,79 @@ pub use v1::*;
/// );
/// }
/// ```
///
/// ## Benchmark Function Generation
///
/// The benchmark function definition that you provide is used to automatically create a number
/// of impls and structs required by the benchmarking engine. Additionally, a benchmark
/// function is also generated that resembles the function definition you provide, with a few
/// modifications:
/// 1. The function name is transformed, e.g. from `original_name` to `_original_name`, so as not
/// to collide with the struct `original_name` that is created for some of the benchmarking
/// engine impls.
/// 2. Appropriate `T: Config` and `I` (if this is an instance benchmark) generics are added to
/// the function automatically during expansion, so you should not add these manually on
/// your function definition (but you may make use of `T` and `I` anywhere within your
/// benchmark function, in any of the three sections: setup, call, and verification).
/// 3. Arguments such as `u: Linear<10, 100>` are converted to `u: u32` to make the function
/// directly callable.
/// 4. A `verify: bool` param is added as the last argument. Specifying `true` will result in
/// the verification section of your function executing, while a value of `false` will skip
/// verification.
/// 5. If you specify a return type on the function definition, it must conform to the [rules
/// below](#support-for-result-benchmarkerror-and-the--operator), and the last statement of
/// the function definition must resolve to something compatible with `Result<(),
/// BenchmarkError>`.
///
/// The reason we generate an actual function as part of the expansion is to allow the compiler
/// to enforce several constraints that would otherwise be difficult to enforce and to reduce
/// developer confusion (especially regarding the use of the `?` operator, as covered below).
///
/// Note that any attributes, comments, and doc comments attached to your benchmark function
/// definition are also carried over onto the resulting benchmark function and the struct for
/// that benchmark. As a result you should be careful about what attributes you attach here as
/// they will be replicated in multiple places.
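///
/// As a rough illustration of the points above (a sketch only, not the exact macro expansion),
/// a definition such as `fn foo(x: Linear<1, 100>)` inside a `#[benchmarks]` module results in
/// a generated function shaped approximately like this:
///
/// ```ignore
/// fn _foo<T: Config>(x: u32, verify: bool) {
/// 	// setup code ...
/// 	// call ...
/// 	if verify {
/// 		// verification code ...
/// 	}
/// }
/// ```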
///
/// ### Support for `Result<(), BenchmarkError>` and the `?` operator
///
/// You may optionally specify `Result<(), BenchmarkError>` as the return type of your
/// benchmark function definition. If you do so, you must return a compatible `Result<(),
/// BenchmarkError>` as the *last statement* of your benchmark function definition. You may
/// also use the `?` operator throughout your benchmark function definition if you choose to
/// follow this route. See the example below:
///
/// ```ignore
/// #![cfg(feature = "runtime-benchmarks")]
///
/// use super::{mock_helpers::*, Pallet as MyPallet};
/// use frame_benchmarking::v2::*;
///
/// #[benchmarks]
/// mod benchmarks {
/// use super::*;
///
/// #[benchmark]
/// fn bench_name(x: Linear<5, 25>) -> Result<(), BenchmarkError> {
/// // setup code
/// let z = x + 4;
/// let caller = whitelisted_caller();
///
/// // note we can make use of the ? operator here because of the return type
/// something(z)?;
///
/// #[extrinsic_call]
/// extrinsic_name(SystemOrigin::Signed(caller), other, arguments);
///
/// // verification code
/// assert_eq!(MyPallet::<T>::my_var(), z);
///
/// // we must return a valid `Result<(), BenchmarkError>` as the last line of our benchmark
/// // function definition. This line is not included as part of the verification code that
/// // appears above it.
/// Ok(())
/// }
/// }
/// ```
pub mod v2 {
pub use super::*;
pub use frame_support_procedural::{
@@ -240,7 +314,7 @@ pub mod v2 {
// Used in #[benchmark] implementation to ensure that benchmark function arguments
// implement [`ParamRange`].
#[doc(hidden)]
pub use static_assertions::assert_impl_all;
pub use static_assertions::{assert_impl_all, assert_type_eq_all};

/// Used by the new benchmarking code to specify that a benchmarking variable is linear
/// over some specified range, i.e. `Linear<0, 1_000>` means that the corresponding variable
frame/examples/basic/src/benchmarking.rs (20 changes: 14 additions & 6 deletions)
@@ -21,11 +21,7 @@
#![cfg(feature = "runtime-benchmarks")]

use crate::*;
use frame_benchmarking::v1::{
impl_benchmark_test_suite,
v2::{benchmarks, Linear},
whitelisted_caller,
};
use frame_benchmarking::v2::*;
use frame_system::RawOrigin;

// To actually run this benchmark on pallet-example-basic, we need to put this pallet into the
@@ -55,19 +51,31 @@ mod benchmarks {
assert_eq!(Pallet::<T>::dummy(), Some(value))
}

// An example method that returns a Result that can be called within a benchmark
fn example_result_method() -> Result<(), BenchmarkError> {
Ok(())
}

// This will measure the execution time of `accumulate_dummy`.
// The benchmark execution phase is shorthanded: when the name of the benchmark case is the same
// as the extrinsic call, `_(...)` is used to represent the extrinsic name.
// The benchmark verification phase is omitted.
#[benchmark]
fn accumulate_dummy() {
fn accumulate_dummy() -> Result<(), BenchmarkError> {
let value = 1000u32.into();
// The caller account is whitelisted for DB reads/write by the benchmarking macro.
let caller: T::AccountId = whitelisted_caller();

// an example of calling something result-based within a benchmark using the ? operator
// this necessitates specifying the `Result<(), BenchmarkError>` return type
example_result_method()?;

// You can use `_` if the name of the Call matches the benchmark name.
#[extrinsic_call]
_(RawOrigin::Signed(caller), value);

// need this to be compatible with the return type
Ok(())
}

/// You can write helper functions in here since it's a normal Rust module.
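
For comparison with the `_(...)` shorthand used in `accumulate_dummy` above, the extrinsic can also be named explicitly in the `#[extrinsic_call]` line. Below is a sketch of the same benchmark written that way, assuming the surrounding module shown in this diff (not taken from the diff itself):

```rust
#[benchmark]
fn accumulate_dummy() -> Result<(), BenchmarkError> {
	let value = 1000u32.into();
	let caller: T::AccountId = whitelisted_caller();

	example_result_method()?;

	// Name the call explicitly instead of using `_`; both forms refer to the
	// `accumulate_dummy` extrinsic.
	#[extrinsic_call]
	accumulate_dummy(RawOrigin::Signed(caller), value);

	Ok(())
}
```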