Caffeine is a modern in-memory local cache for Java 8. This means high performance, a near optimal hit rate, and a rich set of features all packaged into a clean API. Caffeine is an evolution, drawing on experience from building Google Guava’s Cache and ConcurrentLinkedHashMap.

Taking Caffeine for a Spin

Those familiar with Guava should feel right at home, since the features and API are similar. In fact, migration is made easier by an optional adapter that bridges Caffeine’s caches to Guava’s interfaces. For those using Spring or CDI, integration via standard annotations is provided through the JSR-107 JCache extension.
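As a sketch of that migration path (assuming the optional caffeine-guava adapter module is on the classpath; the cache contents here are purely illustrative), the adapter exposes a Caffeine cache through Guava's Cache interface, so existing call sites keep compiling against com.google.common.cache:

```java
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.guava.CaffeinatedGuava;
import com.google.common.cache.Cache;

import java.util.concurrent.TimeUnit;

public class Migration {
  public static void main(String[] args) {
    // Build with Caffeine, consume through Guava's interface
    Cache<String, String> cache = CaffeinatedGuava.build(
        Caffeine.newBuilder()
            .maximumSize(1_000)
            .expireAfterWrite(10, TimeUnit.MINUTES));

    cache.put("key", "value");
    System.out.println(cache.getIfPresent("key"));
  }
}
```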

The preferred usage is through the standard API, so let’s see what that looks like.

LoadingCache<String, Book> booksByTitle = Caffeine.newBuilder()
    .maximumSize(10_000)
    .expireAfterAccess(5, TimeUnit.MINUTES) 
    .refreshAfterWrite(1, TimeUnit.MINUTES)
    .build(title -> { // Using a jOOQ repository
      return database.selectFrom(BOOK)
          .where(BOOK.TITLE.eq(title))
          .fetchOneInto(Book.class);
    });

// A classic about a prison revolt (TANSTAAFL!)
Book favorite = booksByTitle.get("The Moon Is a Harsh Mistress");

// A good way to spend a long, rainy weekend
Map<String, Book> series = booksByTitle.getAll(Arrays.asList(
  "A Game of Thrones", "A Clash of Kings", ...));

Caffeine provides a lot of features, allowing you to tailor the cache to your needs. For brevity, I’ll list them out and skip showing examples. For those coming from Guava, two new additions are an asynchronous cache and a cache writer (for write-through and layered caches):

  • Automatic loading of entries into the cache, optionally asynchronously
  • Size-based eviction when a maximum is exceeded
  • Time-based expiration of entries, measured since last access or last write
  • Keys automatically wrapped in weak references
  • Values automatically wrapped in weak or soft references
  • Notification of evicted (or otherwise removed) entries
  • Writes propagated to an external resource
  • Accumulation of cache access statistics
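To give a flavor of how several of these compose, here is a hedged sketch using Caffeine's builder (the loader and listener bodies are my own illustrations, not from the article):

```java
import com.github.benmanes.caffeine.cache.AsyncLoadingCache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalCause;

import java.util.concurrent.TimeUnit;

public class Features {
  public static void main(String[] args) throws Exception {
    // Asynchronous loading: get() returns a CompletableFuture, computed
    // on the default ForkJoinPool rather than blocking the caller
    AsyncLoadingCache<String, String> cache = Caffeine.newBuilder()
        .maximumSize(10_000)                 // size-based eviction
        .expireAfterWrite(10, TimeUnit.MINUTES)
        .recordStats()                       // accumulate access statistics
        .removalListener((String key, String value, RemovalCause cause) ->
            System.out.printf("Removed %s (%s)%n", key, cause))
        .buildAsync(key -> slowLookup(key));

    System.out.println(cache.get("caffeine").get());
  }

  static String slowLookup(String key) {
    return key.toUpperCase();                // stand-in for an expensive load
  }
}
```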

Going Beyond LRU

Many caches use the classic Least-Recently-Used eviction policy due to its simplicity and decent hit rate in a variety of situations. However, it is far from optimal, and decades of research have led to numerous alternatives. Caffeine uses a novel eviction policy that provides a nearly perfect hit rate by retaining usage history in a compact sketch data structure. I’ll show a few comparisons, but for the full details see this paper.
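To make the "compact sketch" idea concrete, here is a toy Count-Min sketch of my own devising (Caffeine's actual policy and sketch differ; see the paper): each key hashes to one counter per row, and the popularity estimate is the minimum counter across rows, so hash collisions can only over-estimate a key's frequency, never under-estimate it.

```java
import java.util.Random;

// Toy Count-Min sketch: a few rows of counters indexed by independent hashes.
// Increment bumps one counter per row; estimate takes the minimum across rows.
public class CountMinSketch {
  private final int[][] table;
  private final int[] seeds;

  public CountMinSketch(int rows, int width) {
    table = new int[rows][width];
    seeds = new int[rows];
    Random random = new Random(42);
    for (int i = 0; i < rows; i++) {
      seeds[i] = random.nextInt();
    }
  }

  private int index(int row, Object key) {
    int h = key.hashCode() * seeds[row];
    h ^= (h >>> 16);                          // spread the bits a little
    return Math.abs(h % table[row].length);
  }

  public void increment(Object key) {
    for (int row = 0; row < table.length; row++) {
      table[row][index(row, key)]++;
    }
  }

  public int estimate(Object key) {
    int min = Integer.MAX_VALUE;
    for (int row = 0; row < table.length; row++) {
      min = Math.min(min, table[row][index(row, key)]);
    }
    return min;                               // collisions only inflate counts
  }

  public static void main(String[] args) {
    CountMinSketch sketch = new CountMinSketch(4, 1024);
    for (int i = 0; i < 5; i++) sketch.increment("popular");
    sketch.increment("rare");
    System.out.println(sketch.estimate("popular")); // at least 5
    System.out.println(sketch.estimate("rare"));    // at least 1
  }
}
```

The appeal for eviction is that popularity history for millions of keys fits in a few arrays of counters, instead of per-entry metadata.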

For comparison I’ll use Guava and Ehcache, which are both popular libraries. Guava uses an LRU split into multiple hash table segments to provide write concurrency. Ehcache uses sampled LRU where the least recently used entry is chosen from a random distribution. I’ll show both versions 2.10 and 3.0m4 because the quality of the distribution has a significant effect on both the hit rate and runtime cost.

[Figure: hit rate comparison of Caffeine, Guava, Ehcache 2.10, and Ehcache 3.0m4]

[Figure: additional hit rate comparison chart]

Ehcache 3.0m4 shows very erratic performance due to how it constructs the distribution to sample from. For small caches the victim chosen is often quite poor, and for large ones the execution time is excessive: while Caffeine and Guava complete in tens of seconds and Ehcache 2.10 in roughly double that, Ehcache 3.0m4 can take over 4 minutes. Their development team has been dismissive of performance issues, but hopefully they’ll be fixed prior to a production-certified release.

Scaling with You

Servers keep adding more cores and as developers we need to learn how to better use them in our applications. A common technique is to avoid shared state, but caching is all about intelligently reusing data. To make smart predictions about which entries should be kept, a cache read is often a write to some global state. So how can we get good performance with a great hit rate?

Caffeine refines a technique used by ConcurrentLinkedHashMap (CLHM) and Guava’s cache. It borrows from database theory, where writes are buffered to a commit log and replayed asynchronously in batches. Since our caches are not persistent, we can use ring buffers to cheaply record access events. As we will see, Guava’s pragmatic use of a ConcurrentLinkedQueue instead hurts performance. As a baseline I’ve included a synchronized LinkedHashMap (LHM) in LRU mode, since that is still a popular choice.
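Here is a minimal sketch of the read-buffering idea, in my own simplified form rather than Caffeine's actual code: reads append events to a small, lossy ring buffer, and a drain later replays the batch against the eviction policy while holding its lock.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReferenceArray;
import java.util.function.Consumer;

// Lossy ring buffer for access events: a read records its event cheaply and
// simply drops it when the buffer is full, since the policy only needs a
// sample of the access history. A drain replays the pending batch in order.
public class ReadBuffer<E> {
  private static final int SIZE = 16;        // power of two
  private static final int MASK = SIZE - 1;

  private final AtomicReferenceArray<E> buffer = new AtomicReferenceArray<>(SIZE);
  private final AtomicLong writeCounter = new AtomicLong();
  private long readCounter;                  // only touched under the drain lock

  /** Records an access; returns false if the event was dropped. */
  public boolean record(E event) {
    long head = readCounter;                 // racy read is fine: worst case we drop
    long tail = writeCounter.get();
    if (tail - head >= SIZE) {
      return false;                          // full: lose the sample
    }
    if (writeCounter.compareAndSet(tail, tail + 1)) {
      buffer.lazySet((int) (tail & MASK), event);
      return true;
    }
    return false;                            // lost the race: caller may retry
  }

  /** Replays buffered events in batch; call while holding the policy lock. */
  public void drainTo(Consumer<E> policy) {
    long tail = writeCounter.get();
    while (readCounter < tail) {
      int index = (int) (readCounter & MASK);
      E event = buffer.get(index);
      if (event == null) {
        break;                               // slot claimed but not yet published
      }
      buffer.lazySet(index, null);
      policy.accept(event);
      readCounter++;
    }
  }

  public static void main(String[] args) {
    ReadBuffer<String> reads = new ReadBuffer<>();
    reads.record("alpha");
    reads.record("beta");
    reads.drainTo(key -> System.out.println("replaying " + key));
  }
}
```

Dropping events on overflow is what makes the fast path so cheap: correctness of the cache never depends on seeing every read, only the quality of its predictions does, and only marginally.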

[Figure: read throughput as threads increase, comparing Caffeine, Guava, CLHM, and LHM]

As the chart demonstrates, the approach taken by Caffeine works very well. It continues to scale as the number of cores increases by gradually adding ring buffers when contention is detected.

Summing up

Caffeine is an open-source, high performance, modern cache for the JVM. It shows that when you have a long commute, an interesting problem can make the train ride a little more enjoyable.

Add a Boost of Caffeine to Your Java
