Benchmarking Decorators
Thanks to Mikhail Vorontsov. Source: http://java-performance.info/jmh/
You can use the following test modes, specified with the @BenchmarkMode annotation on the test methods:
| Name | Description |
| --- | --- |
| Mode.Throughput | Calculates the number of operations per unit of time. |
| Mode.AverageTime | Calculates the average running time. |
| Mode.SampleTime | Samples how long it takes for a method to run (including percentiles). |
| Mode.SingleShotTime | Runs a method just once (useful for cold-testing mode), or more than once if you have specified a batch size for your iterations (see the @Measurement annotation below) – in that case JMH calculates the batch running time (total time for all invocations in a batch). |
| Any set of these modes | You can specify any set of these modes – the test will be run several times (depending on the number of requested modes). |
| Mode.All | All these modes, one after another. |
You can specify the time unit to use via @OutputTimeUnit, which takes an argument of the standard Java type java.util.concurrent.TimeUnit. Unfortunately, if you have specified several test modes for one test, the given time unit will be used for all of them (for example, it may be convenient to measure SampleTime in nanoseconds, while throughput is better measured in longer time units).
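As a minimal sketch of how these two annotations fit together (the class name and the measured expression are invented for the example), a benchmark measuring both throughput and average time, reported in microseconds, could look like this:

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

public class StringConcatBenchmark {

    // Measure both throughput and average time; both results are reported in microseconds.
    @Benchmark
    @BenchmarkMode({Mode.Throughput, Mode.AverageTime})
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public String concat() {
        return "left" + System.nanoTime() + "right";
    }
}
```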
Your test methods can accept arguments. You can provide a single argument of a class that complies with the following four rules:
- There should be a no-arg constructor (default constructor).
- It should be a public class.
- Inner classes should be static.
- The class must be annotated with @State. The @State annotation defines the scope in which an instance of the given class will be available. JMH allows you to run tests in multiple threads simultaneously, so choose the right state (a sketch of such a state class follows the table below):
| Name | Description |
| --- | --- |
| Scope.Thread | The default state. An instance will be allocated for each thread running the given test. |
| Scope.Benchmark | An instance will be shared across all threads running the same test. Can be used to test the multithreaded performance of a state object (or just to mark your benchmark with this scope). |
| Scope.Group | An instance will be allocated per thread group (see the Groups section further down). |
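A minimal sketch of a state class satisfying the four rules above, injected as a benchmark argument (the class and field names are invented for the example):

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

public class StateBenchmark {

    // Complies with the rules above: public, static (since it is a nested class),
    // implicit no-arg constructor, annotated with @State.
    @State(Scope.Thread)
    public static class ThreadState {
        int counter = 0;
    }

    // The state instance is passed as a method argument; with Scope.Thread
    // each benchmark thread gets its own instance.
    @Benchmark
    public int increment(ThreadState state) {
        return ++state.counter;
    }
}
```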
Like JUnit tests, you can annotate your state class methods with the @Setup and @TearDown annotations (these methods are called fixtures in the JMH documentation). You can have any number of setup/teardown methods. These methods do not contribute anything to test times (but Level.Invocation may affect the precision of measurements).
You can specify when to call fixtures by providing a Level argument for the @Setup/@TearDown annotations (a sketch follows the table below):
| Name | Description |
| --- | --- |
| Level.Trial | The default level. Before/after the entire benchmark run (a group of iterations). |
| Level.Iteration | Before/after an iteration (a group of invocations). |
| Level.Invocation | Before/after every method call (this level is not recommended unless you know what you are doing). |
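A small sketch of a state class using fixtures at two different levels (the class name, list size and contents are invented for the example):

```java
import java.util.ArrayList;
import java.util.List;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.TearDown;

@State(Scope.Benchmark)
public class ListState {
    List<Integer> data;

    // Rebuilt before every iteration (group of invocations).
    @Setup(Level.Iteration)
    public void fill() {
        data = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) {
            data.add(i);
        }
    }

    // Released once, after the entire run (Level.Trial is the default level).
    @TearDown(Level.Trial)
    public void cleanUp() {
        data = null;
    }
}
```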
Do not use loops in your tests. The JIT is too smart and often does magic tricks with loops. Test the actual calculation and let JMH take care of the rest.
In the case of operations with non-uniform cost (for example, you test the time to process a list which grows after each test) you may want to use @BenchmarkMode(Mode.SingleShotTime) with @Measurement(batchSize = N). But you must not implement test loops yourself!
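A hedged sketch of how such a non-uniform-cost test might be set up with a batch size (the list type, batch size and iteration count are arbitrary choices for the example):

```java
import java.util.LinkedList;
import java.util.List;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class ListInsertBenchmark {

    private List<Integer> list;

    // Start each iteration with an empty list; it grows during the batch.
    @Setup(Level.Iteration)
    public void setUp() {
        list = new LinkedList<>();
    }

    // One iteration = 5000 invocations; JMH reports the total batch time,
    // so the growing cost of inserting into the middle is captured without a hand-written loop.
    @Benchmark
    @BenchmarkMode(Mode.SingleShotTime)
    @Measurement(iterations = 20, batchSize = 5000)
    public List<Integer> insertInMiddle() {
        list.add(list.size() / 2, 42);
        return list;
    }
}
```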
By default JMH forks a new Java process for each trial (set of iterations). This is required to protect the test from previously collected "profiles" – information about other loaded classes and their execution. For example, if you have two classes implementing the same interface and you test the performance of both of them, then the first implementation (in order of testing) is likely to be faster than the second one (in the same JVM), because the JIT replaces direct method calls to the first implementation with interface method calls after discovering the second implementation.
So, do not set forks to zero unless you know what you are doing.
In the rare cases when you need to change the number of forked JVMs, use the @Fork annotation on the test method, which allows you to set the number of forks, the number of warmup iterations, and the (extra) arguments for the forked JVM(s).
It may also be useful to specify the forked JVM arguments via JMH API calls – this lets you pass the JVM some -XX: arguments, which are not accessible via the JMH annotations, and allows you to automatically choose the best JVM settings for your critical code (remember that new Runner(opt).run() returns all test results in a convenient form).
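A minimal sketch of launching a benchmark via the JMH API with an extra JVM argument (the "MyBenchmark" regexp and the -XX: flag are placeholders for the example):

```java
import java.util.Collection;

import org.openjdk.jmh.results.RunResult;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class BenchmarkLauncher {
    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include("MyBenchmark")                     // placeholder regexp for your benchmark class
                .forks(1)
                .jvmArgsAppend("-XX:MaxInlineSize=100")     // extra -XX: flag passed to the forked JVM
                .build();

        // run() returns all results, so different JVM settings can be compared programmatically.
        Collection<RunResult> results = new Runner(opt).run();
    }
}
```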
You can give the JIT a hint about how to handle any method in your test program. By "any method" I mean any method – not just those annotated with @Benchmark. You can use the following @CompilerControl modes (there are more, but I am not sure about their usefulness):
| Name | Description |
| --- | --- |
| CompilerControl.Mode.DONT_INLINE | This method should not be inlined. Useful for measuring the method call cost and for evaluating whether it is worth increasing the inline threshold for the JVM. |
| CompilerControl.Mode.INLINE | Asks the compiler to inline this method. Usually used in conjunction with Mode.DONT_INLINE to check the pros and cons of inlining. |
| CompilerControl.Mode.EXCLUDE | Do not compile this method – interpret it instead. Useful in holy wars as an argument for how good the JIT is 🙂 |
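A small sketch of comparing an inlined and a non-inlined call with @CompilerControl (the helper computation and names are invented for the example):

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.CompilerControl;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class InliningBenchmark {

    private int value = 42;

    // Ask the JIT never to inline this helper, so the call cost stays visible.
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    private int notInlined(int x) {
        return x * 31 + 7;
    }

    // Ask the JIT to inline the same computation for comparison.
    @CompilerControl(CompilerControl.Mode.INLINE)
    private int inlined(int x) {
        return x * 31 + 7;
    }

    @Benchmark
    public int callNotInlined() {
        return notInlined(value);
    }

    @Benchmark
    public int callInlined() {
        return inlined(value);
    }
}
```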
You can specify JMH parameters via annotations. These annotations can be applied either to classes or to methods; method annotations always win. A combined sketch follows the table below.
| Name | Description |
| --- | --- |
| @Fork | Number of trials (sets of iterations) to run. Each trial is started in a separate JVM. It also lets you specify the (extra) JVM arguments. |
| @Measurement | Allows you to provide the actual test phase parameters: the number of iterations, how long to run each iteration, and the number of test invocations in an iteration (usually used with @BenchmarkMode(Mode.SingleShotTime) to measure the cost of a group of operations instead of using loops). |
| @Warmup | Same as @Measurement, but for the warmup phase. |
| @Threads | Number of threads to use for the test. The default is Runtime.getRuntime().availableProcessors(). |
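A combined sketch showing class-level defaults overridden by a method-level annotation (all the numbers are arbitrary choices for the example):

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Threads;
import org.openjdk.jmh.annotations.Warmup;

// Class-level defaults: 2 forked JVMs, 3 one-second warmup iterations, 5 one-second measurement iterations.
@Fork(2)
@Warmup(iterations = 3, time = 1, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
public class AnnotatedBenchmark {

    // Method-level annotation wins over any class-level default: run this benchmark with 4 threads.
    @Benchmark
    @Threads(4)
    public double compute() {
        return Math.log(System.nanoTime());
    }
}
```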