This week we’re looking at JEP 197: Segmented Code Cache as part of the Java 9 series, which covers some of the JDK Enhancement Proposals (JEPs) hoping to make their way into Java 9. Last week we looked at multi-release JAR files (MRJARs). This week, we look at the proposal to split the code cache into a separate cache per top-level type of compiled code, because “the organization and maintenance of compiled code has a significant impact on performance.”

The code cache

The first time a method is called at runtime, it is interpreted. The JVM keeps an invocation counter for each method, which is incremented on every call. When the counter reaches a certain threshold, the method is compiled. If the method is called infrequently, its counters decay.

Once the compiled version of the method is being called, its counters continue to be incremented. When they reach the next threshold, the method is recompiled with further optimisations. This repeats until there are no more available optimisations.

This increases performance without requiring all code to be compiled and optimised at start-up. It also ensures that the most frequently called methods (the “hottest”) are the most highly optimised. To see this in action, enable -XX:+PrintCompilation, which prints a line of information each time a method or loop is compiled.
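To watch a compilation threshold being crossed, a minimal sketch might look like the following (the class and method names here are my own, purely illustrative):

```java
public class HotMethodDemo {
    // A tiny method that becomes "hot" once it has been called many times.
    static long square(int x) {
        return (long) x * x;
    }

    public static void main(String[] args) {
        long sum = 0;
        // The default invocation thresholds are on the order of thousands
        // of calls, so 100,000 iterations comfortably triggers compilation.
        for (int i = 0; i < 100_000; i++) {
            sum += square(i);
        }
        System.out.println(sum); // prints 333328333350000
    }
}
```

Run it with java -XX:+PrintCompilation HotMethodDemo and watch for HotMethodDemo::square appearing in the compilation log.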

When methods are compiled, the generated native instructions are stored in the code cache. This has a fixed size (which can be tuned), and when it is full, the JVM can’t compile any more code.
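As a sketch, the overall size can be tuned with the standard HotSpot flag -XX:ReservedCodeCacheSize (the application name below is just a placeholder):

```shell
# Reserve 256 MB for the code cache and print its layout on JVM exit.
java -XX:ReservedCodeCacheSize=256m -XX:+PrintCodeCache MyApp
```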

Using tiered compilation, there are five levels at which a method can execute:

  • Interpreted Code
  • Simple C1 Compiled Code
  • Limited C1 Compiled Code
  • Full C1 Compiled Code
  • C2 Compiled Code
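With tiered compilation enabled, the -XX:+PrintCompilation output includes the tier in its own column. As an illustration (this fragment is mocked up, not captured from a real run), level 3 (full C1 with profiling) is later replaced by level 4 (C2), and the old version is marked “made not entrant”:

```
    215   41       3       java.lang.String::hashCode (55 bytes)
    298   58       4       java.lang.String::hashCode (55 bytes)
    299   41       3       java.lang.String::hashCode (55 bytes)   made not entrant
```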

With tiered compilation, there is more compiled code and a new compiled code type: profiled code. This has a predefined, limited lifetime.

The proposal to segment the code cache is to separate:

  • Non-method code (JVM internal non-method code, e.g. compiler buffers, bytecode interpreter). This stays in the cache forever.
  • Profiled code (lightly optimised with a short lifetime)
  • Non-profiled code (fully optimised, which can potentially stay in the code cache forever)

At present, the code cache does not differentiate between these code types.

Flushing the cache

Unloading classes removes methods from the code cache, which frees up more space. There is also a sweeper that is invoked if:

  • The code cache is getting full
  • There are sufficient state changes since the last sweep
  • There has not been a sweep for a while

This progresses methods through stages in successive sweeps so they can be safely flushed.

The problem with having a homogeneous soup of code in the code cache is that the sweeper has to scan all of the code, even though some entries (like the non-method code) are never flushed.


JEP 197 looks to create three corresponding code heaps, to reflect the three different top-level types of compiled code. There will also be command-line switches to tune the size in bytes of the respective heaps:

  • -XX:NonProfiledCodeHeapSize: heap containing non-profiled methods.
  • -XX:ProfiledCodeHeapSize: heap containing profiled methods.
  • -XX:NonMethodCodeHeapSize: heap containing non-method code.

The code cache sweeper then only has to iterate over the method-code heaps. The one drawback is that fixed heap sizes could lead to wasted memory, so an option to turn off segmentation (and revert to the existing form of the code cache) will be added.
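Putting the switches together, a sketch of what tuning a segmented code cache might look like (the sizes are illustrative, the application name is a placeholder, and the toggle is assumed to be exposed as -XX:+SegmentedCodeCache, the flag name used in the JEP’s implementation):

```shell
java -XX:+SegmentedCodeCache \
     -XX:NonMethodCodeHeapSize=8m \
     -XX:ProfiledCodeHeapSize=64m \
     -XX:NonProfiledCodeHeapSize=64m \
     MyApp
```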

Further reading

Java Performance: The Definitive Guide

JIT compiler overview

Java JIT code cache introduction and tuning for Power Systems