Introduction
In this post, we take a look at how to benchmark Java code using JMH. JMH is a tool, or "harness" as its authors call it, for benchmarking code that runs on the JVM. The tool takes care of warm-ups, can prevent unwanted code optimizations, and can run multiple variations of a benchmark.
Dependencies
To use JMH in your project, include the following dependencies in your pom.xml file.
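Something like the following; the version number shown here is a placeholder, so substitute the latest release:

```xml
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>1.36</version> <!-- placeholder; check for the latest version -->
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>1.36</version> <!-- placeholder; check for the latest version -->
    <scope>provided</scope>
</dependency>
```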
You can find the latest version of jmh-core here. The latest version of jmh-generator-annprocess can be found here.
Creating a benchmark
To run a benchmark, you need a public method that is marked with the @Benchmark annotation, like this:
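A minimal sketch (the class and method names are examples of mine):

```java
import org.openjdk.jmh.annotations.Benchmark;

public class MyBenchmark {

    @Benchmark
    public void init() {
        // the code you want to measure goes here
    }
}
```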
Having a @Benchmark annotated method is not enough; you also need a main method that starts the whole process, like this:
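A minimal sketch that simply delegates to JMH's own entry point:

```java
public class BenchmarkRunner {

    public static void main(String[] args) throws Exception {
        // scans for @Benchmark methods and runs them with the annotated settings
        org.openjdk.jmh.Main.main(args);
    }
}
```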
This starts the benchmarking and takes all the options and settings from the annotations, which we will talk about later. If you don't want to use annotations to pass the options, you can also use the following:
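A sketch using JMH's Runner API; the included benchmark class and the exact options are assumptions matching the description below:

```java
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class BenchmarkRunner {

    public static void main(String[] args) throws RunnerException {
        Options options = new OptionsBuilder()
                .include(MyBenchmark.class.getSimpleName()) // which benchmarks to run
                .forks(1)                                   // a single fork
                .shouldDoGC(false)                          // no forced GC between iterations
                .build();
        new Runner(options).run();
    }
}
```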
This will also start the benchmarking with the options that are passed along. In this case, the number of forks is 1, and forced garbage collection between iterations is turned off.
When running this code, you will see output similar to this:
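The numbers below are made up for illustration; the shape of the output is what matters:

```
# JMH version: 1.36
# VM version: JDK 17.0.2, OpenJDK 64-Bit Server VM
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 10 s each
# Benchmark mode: Throughput, ops/time
# Benchmark: com.example.MyBenchmark.init

# Run progress: 0.00% complete, ETA 00:01:40
# Fork: 1 of 1
# Warmup Iteration   1: 1023476.123 ops/s
...

Benchmark          Mode  Cnt        Score       Error  Units
MyBenchmark.init  thrpt    5  1024756.951 ± 4312.446  ops/s
```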
During the execution, JMH prints progress information to the console, such as the state of the benchmark and the estimated time remaining.
Benchmark modes
JMH supports multiple benchmark modes like:
- Throughput: The number of times the benchmark method ran within a given time.
- AverageTime: The average time a single operation takes.
- SampleTime: Runs the benchmark for a given time and takes random samples of the benchmark execution.
- SingleShotTime: Runs the benchmark only once. This is good for measuring cold times.
- All: Useful when you work on the JMH tool or want everything.
You can set the mode by using the @BenchmarkMode annotation, like so:
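For example (the output time unit is my own addition, to make the result readable):

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

public class MyBenchmark {

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public void init() {
        // code under measurement
    }
}
```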
This will set the benchmark mode to average time and report the average time per operation. The output looks like the following:
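Again, illustrative numbers only:

```
Benchmark          Mode  Cnt  Score   Error  Units
MyBenchmark.init   avgt    5  0.527 ± 0.004  us/op
```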
You can also pass multiple modes using the annotation like this:
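A sketch with two modes (the combination is an assumption matching the description below):

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;

public class MyBenchmark {

    @Benchmark
    @BenchmarkMode({Mode.Throughput, Mode.AverageTime})
    public void init() {
        // code under measurement
    }
}
```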
This will run the benchmark twice: once to measure the throughput, and once to measure the average time.
Forking
With @Fork you can create new forks of your benchmark. The JVM optimizes an application by creating a profile of the code as it runs. To reset these optimizations, you can create forks. You can configure the fork like this:
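The exact values here are an assumption that matches the description below:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;

public class MyBenchmark {

    @Benchmark
    @Fork(value = 2, warmups = 1) // 1 warm-up fork + 2 measured forks = 3 forks in total
    public void init() {
        // code under measurement
    }
}
```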
This will create 3 forks in total, but the first one is only used to warm up the JVM, and the results from that warm-up fork are ignored.
Setting the number of warm-ups
With the @Warmup annotation you can control the warm-up behavior within a fork. Setting the iterations to 5 will run the benchmark 5 times before the real measurements start. These warm-up rounds are ignored in the real measurements; they give the JVM an idea of how your code is used so it can create a profile. You use the @Warmup annotation like this:
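A minimal sketch:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Warmup;

public class MyBenchmark {

    @Benchmark
    @Warmup(iterations = 5) // 5 warm-up iterations before the real measurements
    public void init() {
        // code under measurement
    }
}
```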
The benchmark will now run 5 times to warm up the JVM.
Setting the number of executions
With @Measurement you can set the number of times a benchmark method has to run. This looks as follows:
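A minimal sketch:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Measurement;

public class MyBenchmark {

    @Benchmark
    @Measurement(iterations = 5) // 5 measured iterations per fork
    public void init() {
        // code under measurement
    }
}
```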
The benchmark annotated method from the example will run 5 times, and all of those runs count toward the benchmark results.
Example benchmark method
You can combine all these annotations to have complete control over how a benchmark is executed. When using all the annotations it looks like the following:
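A sketch combining the annotations from the previous sections; the mode and time unit are assumptions of mine:

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

public class MyBenchmark {

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    @Fork(value = 2)
    @Warmup(iterations = 5)
    @Measurement(iterations = 5)
    public void init() {
        // code under measurement
    }
}
```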
This will create 2 forks, each of which will run 5 warm-up iterations and 5 measured iterations.
Using state to create variants
If you want to keep state and try lots of different parameters, you can use a @State annotated class to keep track of things. For example, you can use a state object to test different inputs or to activate different behavior. In the following example, I use it to test different inputs.
In the following code, I have a @State annotated class with a single field, number. JMH will run a separate benchmark for each value in the @Param array.
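A sketch of such a class; I call it ExecutionPlan because the text below refers to a "plan", and the six values are examples:

```java
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class ExecutionPlan {

    // six values, so JMH runs six variants of each benchmark that uses this state
    @Param({"1", "10", "100", "1000", "10000", "100000"})
    public int number;
}
```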
The example will make JMH run 6 different benchmarks, one for each value. If I add another parameter like @Param({"true", "false"}), JMH will create 2 * 6 = 12 benchmarks, one for each combination.
You can use the plan in your benchmark like this:
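A sketch where the state object is injected as a method parameter; the workload inside the loop is a made-up example:

```java
import org.openjdk.jmh.annotations.Benchmark;

public class MyBenchmark {

    @Benchmark
    public String init(ExecutionPlan plan) {
        // hypothetical workload that scales with the parameter
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < plan.number; i++) {
            sb.append('x');
        }
        // returning the result keeps the JVM from treating it as dead code
        return sb.toString();
    }
}
```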
JMH will run a benchmark for every value in the @Param annotation.
Prevent dead code optimizations
To prevent optimizations of unused objects you can use a black hole. The JVM is very good at optimizing code; if you create objects but don't use them, the JVM can optimize that work away. In your production code you use all the objects you create, so that is also what you want to do in your benchmark. One way to achieve this is to use a black hole, which fools the JVM into thinking that the object is actually used.
To use a black hole, all you have to do is add it as a parameter to your benchmark method.
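A minimal sketch; the object being consumed is a stand-in for whatever your benchmark creates:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.infra.Blackhole;

public class MyBenchmark {

    @Benchmark
    public void init(Blackhole blackhole) {
        Object result = new Object(); // hypothetical object that would otherwise go unused
        blackhole.consume(result);    // makes the JVM believe the object is used
    }
}
```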
After adding it, you can use it to consume objects in your benchmark code.
Constant folding
As I mentioned in the previous point, the JVM is very good at optimizing. If a value can be treated as a constant, there is a chance the JVM will make that optimization. This is not always favorable behavior in a benchmark.
Take this code for example:
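A sketch of code that is vulnerable to constant folding; the values are arbitrary:

```java
import org.openjdk.jmh.annotations.Benchmark;

public class MyBenchmark {

    @Benchmark
    public double constantFolding() {
        double a = 1.21;
        double b = 7.42;
        // both operands are local constants, so the JVM can compute this once and reuse it
        return a * b;
    }
}
```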
The JVM can optimize this because the result is always the same. To prevent this from happening, you can read the values from a state object, like so:
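A sketch of the same benchmark with the inputs moved into a state class:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

public class MyBenchmark {

    @State(Scope.Thread)
    public static class MyState {
        public double a = 1.21;
        public double b = 7.42;
    }

    @Benchmark
    public double noConstantFolding(MyState state) {
        // the operands now come from mutable state, so the JVM cannot fold them
        return state.a * state.b;
    }
}
```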
This will prevent the JVM from optimizing constants.
Order of benchmarks
Every benchmark runs in isolation, so if you're benchmarking code that does not interact with the OS or the "outside" you don't need to worry about the order of the benchmarks. If you are benchmarking code that does interact with the OS, the filesystem, or the "outside", you will hit caches and other things that need to be warmed up and will be faster the second time, giving the second benchmark or run a more favourable environment.
To order the benchmarks, you need to prefix their names with _0_. The benchmark prefixed with zero will run first, followed by _1_, _2_, ... _N_.
That looks like this:
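A sketch with hypothetical method names:

```java
import org.openjdk.jmh.annotations.Benchmark;

public class MyBenchmark {

    @Benchmark
    public void _0_firstBenchmark() {
        // runs first
    }

    @Benchmark
    public void _1_secondBenchmark() {
        // runs second
    }
}
```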
Doing this, your benchmarks will run in a specific order.
Tip for benchmarking network code
When benchmarking applications that use a network connection in any way, I like to run the software it is connecting to on a different machine. This gives a more realistic view of the performance. In general, it is better to have an environment for benchmarking that looks as much as possible like the real environment.
Conclusion
In this post, you learned how to create benchmarks using JMH and how to configure it to run as you like it. Remember that warm-ups are important when running benchmarks and can be configured at the benchmark and fork level.
Sources
To create this post, I used these resources in addition to my personal experience with JMH.