The microservice hype is everywhere, and although the industry can’t seem to agree on an exact definition, we are repeatedly told that moving away from a monolithic application to a Service-Oriented Architecture (SOA) consisting of small services is the correct way to build and evolve software systems. Many industry stalwarts are suspicious of the benefits being touted, from Benjamin Wootton arguing that there is no free lunch for what could effectively be a reinvention of classic SOA, to Martin Fowler suggesting we are simply pushing complexity around, from application to orchestration. Simon Brown also recently asked a very interesting question on his blog: “if you can’t build a [well-structured] monolith, what makes you think microservices are the answer?” This is a highly pertinent question for any team looking to utilise microservices to escape their current architectural woes.

Despite these concerns, many companies are vocal about their successful adoption of microservices. Netflix is the poster child of the microservice architectural style, with the likes of Amazon, Twitter, Groupon and Gilt closely flanking them and regularly sharing their stories at tech conferences. Accordingly, organisations and enterprises of all sizes are beginning to evaluate this latest incarnation of SOA, and the goal of this article is to build on the introductory article by Martin Fowler and James Lewis and provide an overview of the key concepts and technologies of microservices. Ultimately, we hope this article will help developers to shift their mindset temporarily away from the typical enterprise monolithic applications, and we aim to provide several pointers to allow successful exploration of this brave new service-oriented world.

Interfaces – Good contracts make for good neighbours

Whether you are starting a greenfield microservice project or are tasked with deconstructing an existing monolith into services, the first task is to define the boundaries and corresponding Application Programming Interfaces (APIs) of your new components.

The suggested granularity of a service in a microservice architecture is finer in comparison with what is typically implemented when using a classical SOA approach, but arguably the original intention of SOA was to create cohesive units of reusable business functionality, even if the implementation history tells a different story. A greenfield microservice project often has more flexibility, and the initial design stage can define Domain Driven Design (DDD) inspired bounded contexts with explicit responsibilities and contracts between service provider and consumer (for example, using Consumer Driven Contracts). However, a typical brownfield project must look to create “seams” within the existing applications and implement new (or extracted) services that integrate with the seam interface. The goal is for each service to have high cohesion and loose coupling; the design of the service interface is where the seeds for these principles are sown.

A well-designed interface should promote easy comprehension for developers integrating the component or service into their application. All of the traditional programming interface design advice from the likes of Robert Martin and Sandro Mancuso applies to the design of microservice interfaces (such as naming), and documentation tools such as Reverb’s Swagger and Mashery’s I/O Docs can provide additional information to consumers of an API with relatively little effort (or intrusion into the codebase). The Swagger toolset also offers a very useful editor, which can be used to undertake ‘API-driven development’, where an API can be defined using a simple YAML syntax and shared with a consumer for agreement or refinement before resources are committed to implementing the backend platform that will support the functionality provided by the API.
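For example, an ‘API-driven development’ contract for a hypothetical account service might be sketched in Swagger’s YAML syntax as follows (the resource and its fields are invented purely for illustration; consult the Swagger specification for the full syntax):

```yaml
# Hypothetical API contract, agreed with consumers before any
# backend implementation work begins
swagger: "2.0"
info:
  title: Account Service
  version: "1.0.0"
paths:
  /accounts/{accountId}:
    get:
      summary: Fetch a single account by its identifier
      parameters:
        - name: accountId
          in: path
          required: true
          type: string
      responses:
        200:
          description: The requested account
```

A definition like this can be iterated upon with consumers in the Swagger editor, and only once agreed does it become the specification the provider builds against.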

Communication – Synchronous vs asynchronous

In addition to the definition of service interfaces, another primary design decision is whether to implement synchronous or asynchronous communication. Asynchronous communication is often favoured, as this can lead to more loosely coupled, and hence less brittle, services. However, my experience has shown that each use case should be examined on its own merits, and requirements around consistency, availability, and responsiveness (e.g. latency) should be considered carefully.

In practice, we find that many companies will need to offer both synchronous and asynchronous communication in their services. It is also worth noting that there is a considerable drive within the industry to move away from the perceived ‘heavyweight’ WS-* communication standards (e.g. WSDL, SOAP, UDDI), even though many of the challenges addressed by these frameworks still exist, such as service discovery, service description and contract negotiation (as articulated very succinctly by Greg Young in a recent presentation at the muCon microservices conference).

The current de facto synchronous communication mechanism in a microservice architecture is a REST-like API that exposes resources on a service, typically using JSON over HTTP. Interface Definition Language (IDL) implementations utilising binary protocols, such as Thrift or Avro, can also be used if RPC-like communication or increased performance is required. However, caution should be taken against utilising too many different protocols within a single system, as this can lead to integration problems and maintenance issues.
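As a minimal illustration of the style (not a recommendation for a production stack), a resource returning JSON over HTTP can be exposed using nothing more than the JDK’s built-in HTTP server; the account resource and its fields below are entirely hypothetical:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

/** A minimal 'JSON over HTTP' resource endpoint using only the JDK's
    built-in server: a sketch of the style, not a production stack. */
public class AccountEndpoint {

    /** Starts a server exposing a single hard-coded account resource.
        Pass port 0 to bind to any free port. */
    public static HttpServer start(int port) {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/accounts/42", exchange -> {
                byte[] body = "{\"id\":\"42\",\"status\":\"active\"}"
                        .getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
            return server;
        } catch (java.io.IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

In practice a framework such as Spring Boot, Dropwizard or JAX-RS (all discussed later) would supply routing, content negotiation and serialisation, but the shape of the interaction is the same: a resource URI, an HTTP verb, and a JSON representation.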

An IDL implementation typically provides an implicit (and often versioned) contract, which can make integration and testing against endpoints easier. REST-like interfaces, by contrast, require explicit tooling to support contract-based development. The PACT framework is a good example, providing both service virtualisation for consumers and contract validation against an API for providers.
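PACT’s DSL is considerably richer, but the essence of a consumer-driven contract, where the consumer records the response fields it relies upon and the provider verifies its candidate responses against that record, can be sketched in plain Java (class and method names here are illustrative, not the PACT API):

```java
import java.util.Map;
import java.util.Set;

/** The essence of a consumer-driven contract: the consumer declares the
    fields it depends on, and the provider checks each candidate response
    against that declaration before release. */
public class ConsumerContract {
    private final Set<String> requiredFields;

    public ConsumerContract(Set<String> requiredFields) {
        this.requiredFields = requiredFields;
    }

    /** Returns true if the provider's response satisfies this consumer. */
    public boolean isSatisfiedBy(Map<String, Object> providerResponse) {
        // Extra fields are fine; missing required fields break the consumer
        return providerResponse.keySet().containsAll(requiredFields);
    }
}
```

The important property is that the contract is owned by the consumer and run in the provider’s build pipeline, so a provider learns it has broken a consumer before a release, not after.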

Once services and API endpoints are created, they can be deployed and managed in-house. However, many enterprises are instead choosing to offload API management by using API Gateway services, such as Apigee, 3Scale, or Mashery. Regardless of deployment methodology, the problems of fault tolerance and resilience must be handled with synchronous communication. Design patterns such as retries, timeouts, circuit-breakers, and bulkheads must be implemented in a system of any size, especially if load will be high or the deployment fabric is volatile (e.g. a cloud environment). One excellent framework that provides an implementation of these patterns is Netflix’s Hystrix, which has also been combined with Dropwizard in the “Tenacity” project and Spring Boot in the “Spring Cloud” project.
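Hystrix provides production-grade implementations of these patterns, but the core idea of a circuit breaker can be sketched in a few lines of plain Java (a deliberately simplified illustration, not the Hystrix API; real implementations add a half-open state, time-based reset, thread-pool bulkheads and metrics):

```java
import java.util.function.Supplier;

/** A minimal circuit breaker: after `threshold` consecutive failures the
    circuit opens and subsequent calls fail fast with the fallback value,
    without invoking the struggling remote service. */
public class CircuitBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;

    public CircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    public boolean isOpen() {
        return consecutiveFailures >= threshold;
    }

    public <T> T call(Supplier<T> remoteCall, T fallback) {
        if (isOpen()) {
            return fallback; // fail fast: do not hit the remote service
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0; // a success resets the failure count
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback;
        }
    }
}
```

The benefit is containment: a slow or failing downstream dependency degrades gracefully to a fallback rather than exhausting threads and cascading the failure upstream.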

Lightweight message queue (MQ) technology is often the favoured method for implementing asynchronous communication. RabbitMQ and ActiveMQ are typically the two most popular choices. Frequently these MQs are combined with reactive frameworks, which can lead to the implementation of event-driven architectures, another emerging hot topic within our industry. Enterprises seeking high performance messaging can look at additional MQ technologies such as ZeroMQ, Kafka, or Redis.

If their requirements include high throughput/low latency processing of messages or events, then one of the current stream-based Big Data technologies may be more appropriate, such as Spring XD or Apache Storm. Interface contracts tend to be loose with asynchronous communication since messages or events are typically sent to a broker for relay onto any number of interested services. Postel’s law of “be conservative in what you send, be liberal in what you accept” is often touted as a key principle for both provider and consumer services. If your development team is exploring asynchronous communication for the first time, then some care must be taken because the programming model can be significantly different in comparison with synchronous blocking communication.
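The consumer side of Postel’s law is often implemented as a ‘tolerant reader’: extract only the fields your service actually needs and ignore everything else, so producers can evolve their payloads without breaking you. A minimal sketch in plain Java follows (the event fields are hypothetical; with JSON payloads, libraries such as Jackson can be configured to ignore unknown properties for the same effect):

```java
import java.util.Map;

/** A 'tolerant reader': extract only the fields this service needs from an
    incoming event, ignoring unknown fields and supplying defaults for
    missing optional ones, so producers can evolve their payloads freely. */
public class OrderEventReader {

    public static String customerId(Map<String, Object> event) {
        Object id = event.get("customerId");
        if (id == null) {
            throw new IllegalArgumentException("customerId is required");
        }
        return id.toString();
    }

    public static String currency(Map<String, Object> event) {
        // Optional field: default rather than reject the whole event
        Object currency = event.get("currency");
        return currency == null ? "GBP" : currency.toString();
    }
}
```

Note the asymmetry: the reader is strict only about the one field it genuinely cannot function without, and liberal about everything else.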

Finally, although many MQ services provide fault tolerance, it is highly recommended that you confirm your requirements against the deployment configuration of your MQ broker.

Middleware – What about the traditional enterprise stalwarts?

With the mention of REST-like endpoints and lightweight messaging, many enterprise middleware vendors are understandably becoming nervous. When my consulting firm discusses the microservice architecture with enterprise clients, many look at implementation diagrams and immediately ask whether the message broker sitting in between all of the services is a commercial Enterprise Service Bus (ESB), such has been the success of the ESB marketing campaign.

We typically answer that it could be, but we usually find that a lightweight MQ platform is more suitable because we believe the current trend in SOA communication is towards “dumb pipes and smart endpoints”. In addition to removing potential vendor fees and lock-in, other benefits of using lightweight MQ technologies include easier deployment and management, and simplified testing.

Although many heavyweight ESBs can perform some very clever routing, they are frequently deployed as a black box. Jim Webber once joked that ESB should stand for “Egregious Spaghetti Box,” because the operations performed within proprietary ESBs are not transparent, and are often complex. If requirements dictate the use of an ESB (for example, message splitting or policy-based routing), then open source lightweight ESB implementations such as Mule ESB, WSO2 ESB, and Fuse ESB should be among the first options you consider.

On a related topic, we have found that although many companies would like to split monolithic systems, the Enterprise Integration Patterns (EIPs) contained within the corresponding applications are often still valid. Accordingly, we believe EIP frameworks such as Spring Integration and Apache Camel still have their place within a microservice architecture. These frameworks typically provide a large amount of “bang for your buck” in relation to the amount of code written (which may be an important factor when creating a microservice), and they nicely abstract over archetypal EIP solutions. These frameworks can also be introduced as an intermediate step when migrating to a microservice architecture. Refactoring existing code to utilise EIPs, such as “pipes and filters,” may allow components to be extracted to external services more easily at a later date.
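Camel and Spring Integration provide rich DSLs for this, but the essence of the “pipes and filters” pattern can be illustrated with plain Java function composition (the filters below are invented for illustration):

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

/** A minimal 'pipes and filters' pipeline: each filter is an independent
    stage, and the pipe is plain function composition. Refactoring monolith
    code into isolated stages like these makes it easier to extract a stage
    into an external service later. */
public class MessagePipeline {

    // Filters: trim whitespace, normalise case, drop empty messages
    static final Function<List<String>, List<String>> trim =
            msgs -> msgs.stream().map(String::trim).collect(Collectors.toList());
    static final Function<List<String>, List<String>> lowercase =
            msgs -> msgs.stream().map(String::toLowerCase).collect(Collectors.toList());
    static final Function<List<String>, List<String>> dropEmpty =
            msgs -> msgs.stream().filter(m -> !m.isEmpty()).collect(Collectors.toList());

    // The pipe: compose the filters into a single processing chain
    public static List<String> process(List<String> messages) {
        return trim.andThen(lowercase).andThen(dropEmpty).apply(messages);
    }
}
```

Because each filter has no knowledge of its neighbours, replacing the in-process pipe with a message queue between two filters (i.e. splitting the pipeline across services) changes the plumbing, not the filters themselves.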

Building microservices on the JVM – What are my options?

Building microservices can be a challenge for new adopters because existing frameworks, platforms, and processes for creating a monolith may not scale well with multiple services in the mix. Robert Martin recently discussed the “component scalability scale” on his blog, and he suggested that component/service build and deployment configurations can range from multiple microservices deployed across a cluster to a service created by dynamically linking components (e.g. via JARs) and deploying them into a single VM. This is a useful model, and below is a list of related technologies that can be explored depending on your use case:

  • Microservices deployed onto a cluster: Spring Boot, Dropwizard, Ratpack services, or container-based services (e.g. Docker, Rocket) deployed on Mesos with Marathon, Kubernetes, CoreOS Fleet, fabric8 or a vendor-specific PaaS. Service discovery via Consul, Smartstack, or Curator ensemble.
  • Small number of servers, each running more than one microservice: Spring Boot or Dropwizard services deployed as a WAR into containers or running as fat JARs containing an embedded container (e.g. Jetty, Tomcat). Service discovery via Consul, Smartstack, or Curator ensemble.
  • Single server, multiple microservices: Spring Boot or Dropwizard services running on configurable ports deployed within multiple embedded containers (e.g. Jetty or Tomcat). Service discovery via Curator, local HAProxy, or properties file.
  • Services running as threads in a single VM: Akka actors or Vert.x Verticles deployed as a JAR running on a single JVM. Service (actor) discovery implicit within Akka and Vert.x frameworks.
  • Dynamically linked components within a single service: OSGi bundle deployed via Apache Felix or Apache Karaf. No need for service discovery, but correct bundling of components is vital.

The list above contains just a few examples, and we have recently seen some great ‘outside the box’ thinking by the likes of Adam Bien and David Blevins, who have deployed Java EE-based microservices with a simple implementation of JAX-RS running as a standalone process and a single EJB running in an embedded container, respectively (check out their Devoxx Belgium 2014 videos on Parleys when they are released!).

Deploying microservices – How hard can it be?

However you choose to build microservices, it is essential that a continuous integration-style build pipeline is used that includes rigorous automated testing for functional requirements, fault tolerance, security and performance. The classical SOA approach of manual QA and staged evaluation is arguably no longer appropriate in an economy where ‘speed wins’ and the ability to rapidly innovate and experiment is a competitive advantage (as captured within the Lean Startup movement).

The original Continuous Delivery book written by Jez Humble and Dave Farley contains an amazing wealth of information, practically all of which is relevant to building and deploying components utilising the microservice architectural style. Indeed, it could be argued that the use of the continuous integration and ‘DevOps’ practices detailed in this book are even more important when delivering microservices, as the level of complexity of orchestration and composition can potentially increase exponentially with each microservice added to your application stack.

The behaviour of your application can become emergent in a microservice-based platform, and although nothing can replace thorough and pervasive monitoring in your production stack, a build pipeline that exercises (or tortures) your components before they are exposed to your customers would appear to be highly beneficial. As I’ve argued in several conference presentations, a good build pipeline should exercise services in the target deployment environment as early in the pipeline as possible.

For example, if you plan to deploy your services in Docker containers to a public cloud environment (e.g. AWS EC2 Container Service or IBM Bluemix), then this is exactly where you should run your acceptance, performance and security tests. It cannot be emphasised enough that the underlying deployment environment fabric will massively influence the results of these tests, and I have witnessed several project teams struggle when they have created their build pipeline utilising a different fabric than that found within the production environment.

Summary – APIs, lightweight comms, and correct deployment

Regardless of whether you subscribe to the microservice hype, it would appear that this style of architecture is gaining traction within practically all software development domains, including enterprise development. This article has attempted to provide a primer for understanding key concepts within this growing space, and hopefully reminds readers that many of these problems and solutions have been seen before with classical SOA, and we should take care not to reinvent the proverbial service-oriented wheel.

Ultimately, microservices could simply be a deployment approach, albeit an approach supported by a rigorous continuous integration/delivery build pipeline that acknowledges the complexity of orchestrating deployment and run-time interactions that are implicit when composing multiple small services. The key to this goal is to practice good high-cohesion and loose-coupling strategies throughout the architecture and design process, and to choose technologies appropriate for your platform requirements.

Microservices may provide a very useful architectural and deployment style, but they certainly are not a panacea to all of your current software development problems. However, if you do decide that the microservice approach is appropriate for your application, then I hope that this article is a useful springboard for your research and evaluation.

Further Reading/Watching

This article was expanded from a piece I originally published in DZone’s 2014 Enterprise Integration and Microservice Guide, which is available here:

The slides for my muCon “Developing Microservices for the Cloud” talk can be found here:

and the video here:

Microservices, Sam Newman

Software Architecture for Developers, Simon Brown

Just Enough Software Architecture, George Fairbanks

Enterprise Integration Patterns, Gregor Hohpe and Bobby Woolf

Camel In Action, Claus Ibsen and Jonathan Anstey

Service Design Patterns, Robert Daigneau

Implementing Domain-Driven Design, Vaughn Vernon

Continuous Delivery, Jez Humble and Dave Farley

The Art of Scalability, Martin Abbott and Michael Fisher

Scalability Rules, Martin Abbott and Michael Fisher

Exploring Microservices in the Enterprise

About The Author
- Daniel Bryant is a Principal Consultant for OpenCredo, and specialises in enabling agility within organisations. His current work includes introducing better requirement gathering and planning techniques, focusing on the relevance of architecture within agile development, and facilitating continuous integration/delivery. Daniel’s current technical expertise focuses on 'DevOps’ tooling, cloud platforms and microservice implementations. He is also a leader within the London Java Community (LJC), contributes to several open source projects, writes for well known technical websites, and regularly presents at international conferences such as JavaOne, Devoxx and FOSDEM.
