Going nuts for scale
Earlier this month, Java in-memory data grid specialists Hazelcast announced a partnership with SQL people Speedment. In a nutshell, this means that, thanks to Speedment’s SQL Reflector, users can now integrate existing relational data with continuous updates of Hazelcast data-maps in glorious real-time. In this interview, Hazelcast VP of Marketing and Developer Relations Miko Matsumura gives Voxxed the full story, and explains to us why the world seems to be increasingly turning to the cache.
Voxxed: What were the key reasons for pairing up with Speedment?
Miko: In many cases, if you think about Hazelcast, one of the primary use cases is this notion of caching. So what happens is that a lot of people just put Hazelcast in, oftentimes with some database underneath, and they’re getting a lot of distributed caching performance from that.
So that’s a happy scenario. But, another scenario that often occurs with more legacy types of situations is that people are making changes to the database that Hazelcast is unaware of. And if that’s happening, you get this phenomenon of cache invalidation.
In this particular case, it doesn’t really matter whether you have a dedicated Hazelcast deployment; what does matter is whether you’ve got some other database that’s changing on the side. One of the biggest things that comes out of the partnership with Speedment is the ability to get really strong database synchronisation.
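The stale-cache problem Miko describes can be sketched in a few lines of plain Java. This is an illustrative cache-aside pattern using in-memory maps as stand-ins, not the Hazelcast or Speedment APIs; all names here are invented for the example:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheAsideDemo {
    // Stand-ins for a distributed cache and a backing relational database.
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final Map<String, String> database = new ConcurrentHashMap<>();

    // Cache-aside read: answer from the cache, load from the database on a miss.
    static String read(String key) {
        return cache.computeIfAbsent(key, database::get);
    }

    public static void main(String[] args) {
        database.put("user:1", "Alice");
        System.out.println(read("user:1")); // miss: loads "Alice" from the database and caches it

        // A legacy application updates the database directly, bypassing the cache...
        database.put("user:1", "Alicia");
        System.out.println(read("user:1")); // ...so the cache still serves the stale "Alice"

        // Without change-data-capture keeping the cache in sync with the database,
        // the application has to invalidate the entry by hand.
        cache.remove("user:1");
        System.out.println(read("user:1")); // fresh "Alicia" after manual invalidation
    }
}
```

The middle read is exactly the invalidation problem: the cache has no way of knowing the row changed underneath it, which is the gap database-synchronisation tooling is meant to close.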
You say adoption is picking up for Hazelcast – is this mostly in finance?
That’s actually where the game is changing for us. In finance, the pattern is really simple, and often doesn’t need something like this kind of capability. In finance, what you have is lots of dedicated grids. But, what we’re now seeing is everything from large-scale government, e-commerce, gaming, transportation and logistics, to ticketing and reservation using Hazelcast.
So we’re definitely starting to see a whole bunch of horizontal uses – and it’s starting to penetrate into a lot of larger enterprises with these applications – things outside the “traditional” in-memory data grid. These are more like large-scale caching-type systems. So I think that’s exciting.
This is the first time Hazelcast has been able to add SQL reflection to your solution – does this open the door for you to expand in other directions too?
Yes – the solutions available on the market right now come in fairly expensive and proprietary packages…for example Oracle Coherence paired with a product called GoldenGate HotCache. But with us, in this scenario, Speedment is acting as a HotCache-style product which provides this sort of functionality. In terms of the market that we can address with this, what it will allow us to do is penetrate more deeply into traditional enterprise data.
In traditional enterprise data, there are really big central databases which are accessed by many applications. What this partnership will allow is a staged rollout and migration. New apps can benefit from Hazelcast, and the older apps can continue to do whatever they want.
Initially we were seeing the traditional customer in finance – people doing low-latency transactions in banks and things like that – which is all well and good, but the thing that we’re seeing now is really broad mainstream adoption using the JCache interface.
JCache is the Java standard for caching, and what it’s doing is mainstreaming in-memory computing. That mainstreaming is happening across all these sectors. It’s this perfect storm of the JCache standard, which is a pure Java standard, plus, on top of it, open source implementations, including Hazelcast. So the thing I think is really interesting is that people are not thinking about this the way they used to think about in-memory data grids. Now it’s more like enterprise data services, or enterprise caching as a service.
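The point of a standard like JCache is that application code targets one interface while providers compete underneath it. The sketch below gestures at that idea with a simplified, invented interface – it is not the real javax.cache API, just a stdlib illustration of why a standard plus open-source implementations is a “perfect storm”:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A simplified stand-in for the idea behind JCache: applications program
// against a standard caching interface, so providers (Hazelcast or otherwise)
// can be swapped without touching application code. All names here are
// invented for illustration.
interface SimpleCache<K, V> {
    void put(K key, V value);
    V get(K key);
}

// One pluggable implementation backed by a local concurrent map; a real
// provider would put a distributed grid behind the same interface.
class LocalMapCache<K, V> implements SimpleCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    public void put(K key, V value) { store.put(key, value); }
    public V get(K key) { return store.get(key); }
}

public class StandardCacheDemo {
    public static void main(String[] args) {
        // The application only ever sees the standard interface.
        SimpleCache<String, Integer> cache = new LocalMapCache<>();
        cache.put("answer", 42);
        System.out.println(cache.get("answer")); // 42
    }
}
```

Swapping `LocalMapCache` for a distributed implementation changes one constructor call and nothing else – which is the portability argument being made for JCache here.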
Which fits perfectly for Hazelcast – after all, you did hire JCache guy Greg Luck this year.
Yeah – cache-first is a really big thing for us. What I mean when I say that is, if you remember a few years ago, there used to be this big concept called 2.0. People used to run around saying 2.0…that’s a goofy concept that people don’t care about any more, and the reason why is that there are a bunch of forces in play. One of these is continuous integration and continuous testing. And the reason that’s so meaningful is because this notion of ‘stop the world, I need to re-architect my system’ doesn’t work anymore.
To me, there’s a complete intolerance that’s been driven by things like latency. The second thing that’s happening is massive-scale viral spiking – or even behavioural spiking – for example on things like Black Friday, or new product introductions.
As a consequence, the whole API management movement is predicated on this notion that you’re going to have something which is performant and ready to go. And that’s super vital. Ultra high performance, as well as ultra scalable. And the cache-first mentality is happening in this realm. And it’s so easy to do this now. It’s comically easy for Java developers. In the case of Hazelcast, developers just drop an open-source JAR file into their classpath, and then they’re done!
Image by Rick Craig