Benchmark Java Applications using JMH

In this article, we will introduce JMH as a tool to benchmark your Java applications and discuss how to leverage it to gain performance insights into your code.

Before we talk about benchmarks, let us distinguish between the functional requirements and the performance requirements of an application. We use JUnit / Mockito to test the functionality of Java applications: we write test cases to check whether all the functional requirements are satisfied, and automated testing tools such as Selenium make this even simpler. Unfortunately, that is not the only challenge in maintaining applications in production. One of the major concerns is application performance, and that is where benchmarking comes into the picture. Customer-facing applications demand low latency and high throughput. Troubleshooting application performance is one of the hardest jobs for a developer, and the hardest part is figuring out which piece of logic in the application actually causes the bottleneck.

We can measure the performance of the application as a whole using multiple benchmark experiments. However, it is difficult to benchmark a specific block of code rather than the entire application, and that is the level at which we usually need to investigate. We therefore need a tool that measures the performance of individual pieces of application logic, and JMH (Java Microbenchmark Harness) serves exactly that purpose.

Prerequisites

This article assumes that you have basic programming knowledge in Java. You need Java and Maven installed on your system in order to follow along. We used JDK 8, although there is no strict requirement on the JDK version. An IDE such as IntelliJ IDEA makes running the examples easier, but if you don't use an IDE you can simply run the code samples with the java command.

What is JMH?

JMH is an OpenJDK tool that helps you implement benchmarks correctly. Remember, JMH specializes in micro-benchmarks, where low-level performance metrics are measured; think of measuring the performance of a block of code or a single function rather than the whole application. In that sense, micro-benchmarks sit at the opposite end of the spectrum from end-to-end performance testing, and they are useful for in-depth analysis of performance issues. Writing JMH benchmarks is similar to writing JUnit test cases; the main difference is the set of annotations used by JMH.

Writing your first JMH benchmark

Start by creating a Maven project whose pom.xml contains the following dependencies:

<dependency>
       <groupId>org.openjdk.jmh</groupId>
       <artifactId>jmh-core</artifactId>
       <version>1.22</version>
</dependency>

<dependency>
       <groupId>org.openjdk.jmh</groupId>
       <artifactId>jmh-generator-annprocess</artifactId>
       <version>1.22</version>
</dependency>

The ideal approach is to create a separate project/class for your JMH benchmarks instead of writing them inside the application itself. So, let's start by creating a class called BenchMark to hold the benchmarks.

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Fork(value = 2)
public class BenchMark {

  // benchmark methods to be added.

}

Don’t worry about the unfamiliar annotations in the above code; we will explain each of them in detail as we go along. The @BenchmarkMode annotation specifies the mode in which you want to measure performance. The 5 available benchmark modes are listed below, followed by a short example of selecting one.

  1. Throughput -> Use Throughput mode to measure the number of operations per second, where an operation is a single call to the benchmark method.
  2. AverageTime -> Use AverageTime mode to measure the average time taken to execute the benchmark method once.
  3. SampleTime -> Use SampleTime mode to sample the execution time of the benchmark method, which lets JMH report the distribution of times, including the minimum and maximum.
  4. SingleShotTime -> Use SingleShotTime mode to measure the time of a single, cold invocation of the benchmark method, without continuous warm-up; this is useful for start-up style measurements.
  5. All -> Use All mode to measure everything covered by the previous 4 modes.
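
As a minimal sketch (the class name AverageTimeBenchMark is purely illustrative, and the imports are the same as in the BenchMark class above), switching the mode and time unit is just a matter of changing the class-level annotations:

// Illustrative sketch: report the average time per operation in
// microseconds rather than operations per millisecond.
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Fork(value = 2)
public class AverageTimeBenchMark {

  // benchmark methods to be added.

}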

The @OutputTimeUnit annotation specifies the time unit in which the benchmark results should be displayed. The @Fork annotation sets the default forking parameters and is used to control how the benchmark is executed: we can pass parameters such as value, warmups and jvmArgs. The value parameter specifies the number of separate JVM forks in which the benchmark is run; the results of all forks are aggregated. The warmups parameter specifies the number of additional warm-up forks, whose results are discarded and not counted towards the final score. The jvmArgs parameter specifies the JVM arguments used to start the forked JVMs.
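
For example, a fuller @Fork configuration might look like the following sketch (the specific values are only illustrative):

// Illustrative values only: run the benchmark in 2 measured JVM forks,
// preceded by 1 warm-up fork whose results are discarded, and start each
// forked JVM with a fixed 2 GB heap.
@Fork(value = 2, warmups = 1, jvmArgs = {"-Xms2G", "-Xmx2G"})
public class BenchMark {

  // benchmark methods to be added.

}

JMH also provides @Warmup and @Measurement annotations to control the number and duration of the iterations that run inside each fork.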

As we discussed earlier, micro-benchmarks target low-level code, which means a method or a particular block of code. Each piece of code that we want to measure is placed inside its own method, which we then annotate with the @Benchmark annotation. For instance, refer to the benchmark method below:

@Benchmark
public void testMethod1(Blackhole bh){
   int[] array = new int[2000];
   bh.consume(array);
}

In the above method, we are simply allocating array space for 2000 primitive int elements; the intent is to benchmark the memory allocation of a primitive array. You might be wondering what a Blackhole is and why it is required in the first place. Suppose the above method were written like this instead:

@Benchmark
public void testMethod1(){
   int[] array = new int[2000];
}

Now when you look at the benchmark results, you might come across inconsistent numbers. In fact, there is nothing to benchmark here at all: since the array variable is never referenced or used anywhere, the JVM's JIT compiler is free to treat the allocation as dead code and eliminate it entirely, just as it would in a normal application.

So, how do we benchmark the above method without the JVM optimizing the work away? That's where Blackhole comes into the picture: a Blackhole consumes values, thereby convincing the JVM that the value is actually used and preventing dead-code elimination. Now for the demonstration, let's write another benchmark method that allocates a String array.

@Benchmark
public void testMethod2(Blackhole bh){
  String[] array = new String[2000];
  bh.consume(array);
}

Now let’s say we have a benchmark method as shown below:

@Benchmark
public int testMethod2(){
  int a = 100;
  int b = 200;
  int sum = a + b;
  return sum;
}

In this case, our benchmark method returns a value, and since JMH consumes returned values for us, there is no need to use a Blackhole explicitly. At first glance this looks perfectly fine, but there is still a chance of JVM interference. As you can see in the above code, we use constants to compute the sum variable, so the JIT compiler can simply constant-fold the expression and return 300 instead of performing the actual addition, and we would not be measuring anything meaningful. So, how do we solve this problem? By moving the constants into a benchmark state. A benchmark state can be defined as a separate class or as an inner class, and the variables we move into it are called state variables. Let's say we have a state class as shown below:

@State(Scope.Thread)
public static class BenchmarkState {
  public int a = 100;
  public int b = 200;
}

Then our benchmark method can be re-written as shown below:

@Benchmark
public int testMethod2(BenchmarkState state){
  return state.a + state.b;
}

The @State annotation accepts three different scopes for the benchmark state.

  • Thread -> A separate state instance is created for every thread that runs the benchmark.
  • Group -> A state instance is created for every thread group that runs the benchmark.
  • Benchmark -> All threads share the same state instance for the whole benchmark run.

So, the preferred approach is to keep such constants out of the benchmark method to eliminate this kind of JVM optimization, and the @State annotation plays a big role here. A small sketch of a shared state with setup and tear-down hooks is shown below.
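
As a hedged sketch (the class name, field and size are purely illustrative), a state shared by all benchmark threads, with its initialization kept out of the measured code path, could look like this; @State, @Setup, @TearDown and Level all come from the org.openjdk.jmh.annotations package:

// Illustrative sketch: Scope.Benchmark means all benchmark threads share
// this single state instance.
@State(Scope.Benchmark)
public static class SharedState {

  public int[] data;

  // Runs once per trial, before measurement starts; its cost is not
  // included in the benchmark score.
  @Setup(Level.Trial)
  public void prepare() {
    data = new int[2000];
  }

  // Runs once per trial, after measurement ends.
  @TearDown(Level.Trial)
  public void cleanup() {
    data = null;
  }
}
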
Let’s go back to our earlier benchmark implementations, which are repeated below for reference.

@Benchmark
public void testMethod1(Blackhole bh){
  int[] array = new int[2000];
  bh.consume(array);
}

@Benchmark
public void testMethod2(Blackhole bh){
  String[] array = new String[2000];
  bh.consume(array);
}

Let us now compare the benchmark methods testMethod1 and testMethod2 for their performance. Since we set the benchmark mode to Mode.Throughput earlier, throughput will be measured for both methods. Let us run them from the main method itself. We use Options / OptionsBuilder to configure the JMH run: we only have to mention the benchmark class, and the benchmark methods are picked up automatically because we tagged them with the @Benchmark annotation. Refer to the code below:

public static void main(String[] args) throws RunnerException {
        // Options, OptionsBuilder, Runner and RunnerException come from the
        // org.openjdk.jmh.runner and org.openjdk.jmh.runner.options packages.
        Options opt = new OptionsBuilder()
                .include(BenchMark.class.getSimpleName()) // which benchmarks to run
                .forks(1)                                 // run a single JVM fork for this demo
                .build();

        new Runner(opt).run();
}
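
The builder can also override other defaults programmatically. As a hedged sketch (the values here are only illustrative), the same options could set the iteration counts, mode and time unit explicitly; the run below, however, uses the simpler configuration shown above:

Options opt = new OptionsBuilder()
        .include(BenchMark.class.getSimpleName())
        .forks(1)
        .warmupIterations(3)             // warm-up iterations per fork, not measured
        .measurementIterations(5)        // measured iterations per fork
        .mode(Mode.Throughput)           // benchmark mode for this run
        .timeUnit(TimeUnit.MILLISECONDS) // time unit for the reported scores
        .build();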

Once you execute the main method, you get output like the following (trimmed to the JMH-relevant part):

# JMH version: 1.21
# VM version: JDK 1.8.0_191, Java HotSpot(TM) 64-Bit Server VM, 25.191-b12
# VM invoker: C:Program FilesJavajdk1.8.0_191jrebinjava.exe
# VM options: -Xms2G -Xmx2G
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: com.benchmark.BenchMark.testMethod1

# Run progress: 0.00% complete, ETA 00:03:20
# Fork: 1 of 1
# Warmup Iteration   1: 1398.239 ops/ms
# Warmup Iteration   2: 1288.721 ops/ms
# Warmup Iteration   3: 1367.695 ops/ms
# Warmup Iteration   4: 1502.809 ops/ms
# Warmup Iteration   5: 1656.859 ops/ms
Iteration   1: 1672.573 ops/ms
Iteration   2: 1618.713 ops/ms
Iteration   3: 1589.281 ops/ms
Iteration   4: 1645.972 ops/ms
Iteration   5: 1636.542 ops/ms


Result "com.benchmark.BenchMark.testMethod1":
  1632.616 ±(99.9%) 119.643 ops/ms [Average]
  (min, avg, max) = (1589.281, 1632.616, 1672.573), stdev = 31.071
  CI (99.9%): [1512.973, 1752.260] (assumes normal distribution)


# JMH version: 1.21
# VM version: JDK 1.8.0_191, Java HotSpot(TM) 64-Bit Server VM, 25.191-b12
# VM invoker: C:Program FilesJavajdk1.8.0_191jrebinjava.exe
# VM options: -Xms2G -Xmx2G
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: com.benchmark.BenchMark.testMethod2

# Run progress: 50.00% complete, ETA 00:01:40
# Fork: 1 of 1
# Warmup Iteration   1: 1637.763 ops/ms
# Warmup Iteration   2: 1636.405 ops/ms
# Warmup Iteration   3: 1670.132 ops/ms
# Warmup Iteration   4: 1419.568 ops/ms
# Warmup Iteration   5: 1624.282 ops/ms
Iteration   1: 1265.384 ops/ms
Iteration   2: 1203.591 ops/ms
Iteration   3: 1271.569 ops/ms
Iteration   4: 1277.881 ops/ms
Iteration   5: 1208.284 ops/ms


Result "com.benchmark.BenchMark.testMethod2":
  1245.342 ±(99.9%) 139.698 ops/ms [Average]
  (min, avg, max) = (1203.591, 1245.342, 1277.881), stdev = 36.279
  CI (99.9%): [1105.643, 1385.040] (assumes normal distribution)


# Run complete. Total time: 00:03:21

REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.

Benchmark               Mode  Cnt     Score     Error   Units
BenchMark.testMethod1  thrpt    5  1632.616 ± 119.643  ops/ms
BenchMark.testMethod2  thrpt    5  1245.342 ± 139.698  ops/ms

The results are as expected, since allocating an array of primitive ints is faster than allocating an array of String references. However, note that these results can vary depending on your hardware. No code is magic without good hardware!

JMH does a good job of helping you troubleshoot performance bottlenecks, since it lets you narrow the problem down to a specific block of code. If you have ever worked on customer-facing production apps, you have probably been in a similar situation yourself.

The possibilities of JMH don't end here. Refer to the JMH documentation for more advanced use cases; it will also take your troubleshooting skills to another level 🙂

So, that's it for now, and we hope you find this article useful!
