The IT world is moving forward fast. A year ago I wrote about microservices and whether they spell the death of the Enterprise Service Bus and other middleware. This article is a follow-up to discuss how relevant microservices, containers and a cloud-native architecture are for middleware. It is unbelievable how fast enterprises of all sizes are moving forward with these topics!
Today, in June 2016, many enterprises have already adopted containers and cloud-native architectures, or are in the process of adopting them. The topic is also becoming more and more relevant for middleware vendors. Therefore, let's review the status quo of microservices, containers and cloud-native architectures in the middleware world.
Key takeaways of this article:
- A cloud-native architecture enables flexible and agile development, deployment and operations of all kinds of software
- Modern middleware leverages containers, microservices and a cloud-native architecture
- Packaging and isolation in containers is not enough, there are many more concepts to understand and leverage
The Momentum of Microservices and Docker
The main goal of microservices and containers is a shorter time to results and increased flexibility in the development, deployment and operations of software. Why have they gained so much momentum in the last few months? Because almost every enterprise beyond tech giants such as Amazon, Google, Facebook or Netflix struggles significantly here.
Microservices are like a Service-oriented Architecture (SOA): an architectural concept, independent of any vendor or technology. Therefore no clear standard definition or specification is available, and you always need to define what you mean by the term before discussing it with others; everybody has a different definition. For this article, microservices are services that are developed, deployed and scaled independently. They are not specific to any technology and can offer business or integration logic. Several vendors offer specific support for building microservices (as we will see later in the article), but fundamentally the concept is not tied to any technology.
While the discussion about microservices architectures started with a famous article by Martin Fowler back in 2014, the actual widespread implementation was initially driven by Netflix, which open sourced plenty of frameworks for implementing microservices. We will come back to many of these later, and a lot of the content in this article is inspired by Netflix's awesome and detailed tech blog posts.
A Container is dependent on the operating system it runs on. Containers use the resource isolation features of the Linux kernel such as kernel namespaces (isolates an application’s view of the operating environment including process trees, network, user IDs and mounted file systems) and cgroups (provides resource limiting, including the CPU, memory, block I/O and network), and a union-capable file system such as aufs and others. This allows independent containers to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines.
Key differentiators of containers compared to VMs are packaging, portability, fit-for-purpose creation and therefore lower footprint and startup times, repeatability, better resource utilization of servers, and better integration into the whole development ecosystem (such as the Continuous Integration / Delivery lifecycle). Containers with your applications can be built, shipped, and run anywhere: on your laptop, on test systems, in pre-production, and in production systems, without any changes to the content of the container or the application inside.
In contrast to microservices, there are several specific implementations of container software. Most of the momentum these days is behind Docker, and its ecosystem is growing daily. This will definitely consolidate again in the coming years, but it will also become much more mature than it is today. Other examples of container technologies are CoreOS' rkt (Rocket) or Cloud Foundry's Garden / Warden. Notice that these container concepts are nothing new but have been leveraged in UNIX systems for years; for example, take a look at Solaris Zones.
Other commercial examples are VMware Photon Platform / vSphere Integrated Containers or Microsoft’s Windows Server containers / Hyper-V containers or VMware Thinapp.
A great introduction to Docker – and containers in general – can be found here: Docker, the Future of DevOps. The "Open Container Initiative (OCI)", an open standard for containers, was created in mid-2015 to establish a global, vendor-agnostic standard. Many software vendors are part of the committee, including Amazon, Intel, Docker, Facebook, IBM, Microsoft, Oracle, Pivotal and VMware, to name a few of the many official supporters.
A Cloud-Native Architecture
Microservices and containers with their independent services and flexible deployment are just the foundation. The following sections discuss additional requirements for a cloud-native architecture. Please be aware that a lot of examples for available frameworks are listed in every section but they are not intended to be complete lists.
A cloud native architecture enables:
- Scalable services
- High uptime
- Automatic load balancing and failover
- Usage of public cloud platforms but also private or hybrid
- Vendor-agnostic deployment
- Faster upgrades
- Higher utilization and lower infrastructure cost
- Shorter time to results and increased flexibility
With all this you can focus on innovation and solving your business problems instead of spending your time on plenty of technical issues in static and inflexible legacy architectures. Be aware that cloud-native does not mean that you can deploy software only in the public cloud. Private and hybrid cloud deployments are also covered by the definition of cloud-native!
Continuous Integration and Continuous Delivery
Continuous Integration (CI) and Continuous Delivery (CD) require a lot of different things to automatically build, deploy and run microservices. This includes scripting for automatic test and deployment, internal and external service discovery and distributed configuration of microservices and containers.
Scripting / Automatic Test and Deployment
This is what CI / CD began with several years ago. You build, test and deploy services automatically. This improved productivity, efficiency and product quality. The following frameworks and tools are used to create scripts for enabling CI / CD:
- Build Management: Apache Ant, Apache Maven, Gradle, …
- Continuous Integration: Jenkins, Bamboo, …
- Continuous Delivery: Chef, Puppet, SaltStack, Ansible, …
Internal and External Service Discovery

We have to work with plenty of different independent services and a huge number of distributed instances of each service. An internal service discovery framework is used to locate services for the purposes of load balancing and failover. A service provider registers with the registry when it becomes available; consumers look the service up in the registry to connect to and consume it.
A lot of options are available for the service registry, such as Netflix's Eureka, Apache ZooKeeper, Consul or etcd. Many of the frameworks discussed later also include a service registry implicitly. It is not always easy to classify each of the frameworks in this article under just one component; often the features overlap.
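The register / heartbeat / lookup cycle described above can be sketched in a few lines. This is a hypothetical in-memory registry for illustration only; real deployments use Eureka, ZooKeeper, Consul or etcd, which add replication, health checks and watch notifications:

```python
import time

class ServiceRegistry:
    """Minimal in-memory service registry sketch (illustrative, not a real product).
    Instances must send heartbeats; stale instances drop out after the TTL."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self.instances = {}  # service name -> {instance_url: last_heartbeat_time}

    def register(self, service, instance_url, now=None):
        now = now if now is not None else time.time()
        self.instances.setdefault(service, {})[instance_url] = now

    def heartbeat(self, service, instance_url, now=None):
        # A heartbeat simply refreshes the registration timestamp.
        self.register(service, instance_url, now)

    def discover(self, service, now=None):
        """Return only instances whose last heartbeat is within the TTL."""
        now = now if now is not None else time.time()
        alive = {url: ts for url, ts in self.instances.get(service, {}).items()
                 if now - ts <= self.ttl}
        self.instances[service] = alive
        return sorted(alive)

registry = ServiceRegistry(ttl_seconds=30)
registry.register("orders", "http://10.0.0.1:8080", now=100)
registry.register("orders", "http://10.0.0.2:8080", now=100)
registry.heartbeat("orders", "http://10.0.0.2:8080", now=120)
print(registry.discover("orders", now=110))  # both instances alive
print(registry.discover("orders", now=140))  # only the instance that kept sending heartbeats
```

The TTL-based expiry is exactly why "design for failure" works here: a crashed instance simply stops sending heartbeats and disappears from lookups without any manual deregistration.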
In addition to internal service discovery, an external service discovery framework is used to expose internal microservices to the outside world (the public internet, just partners, or other internal departments). This is often called an "Open API initiative" or "API Management" and offers features such as a portal for easy packaging and self-provisioning of APIs (i.e. microservices in this case), monetization, and a gateway for security enforcement (e.g. authentication, authorization, throttling). Some relevant options for API Management are:
- JBoss apiman: Open source, low-level coding framework, can leverage other Red Hat JBoss projects
- Apigee: Pure player in the API Management market
- Akana (former SOA Software): Pure player in the API Management market
- CA’s Layer7: Strong security gateway, can leverage other CA products
- TIBCO’s Mashery: Strong portal and community, can leverage other TIBCO products, including TIBCO API Exchange Gateway for advanced security and routing requirements
See the following article for more details about use cases and product categorization for “Open API”: API Management as a Game Changer for Cloud, Big Data and IoT.
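One gateway feature mentioned above, throttling, is commonly implemented as a token bucket per API key. The following is a simplified sketch (names and parameters are illustrative, not taken from any of the products listed):

```python
class TokenBucket:
    """Hypothetical token-bucket throttle of the kind an API gateway
    enforces per API key. Capacity bounds bursts; refill bounds the rate."""

    def __init__(self, capacity, refill_per_second):
        self.capacity = capacity
        self.refill = refill_per_second
        self.tokens = float(capacity)
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request passes through to the backend service
        return False      # request is rejected (e.g. HTTP 429)

bucket = TokenBucket(capacity=2, refill_per_second=1)
# A burst of three requests, then one after the bucket has refilled:
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.2)])  # [True, True, False, True]
```

Real gateways combine such rate limits with authentication and quota plans, but the core mechanic is this simple.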
Dynamic Distributed Configuration Management
The numerous agile and dynamic changes in a cloud-native architecture mean that you can no longer manage configuration manually when adopting distributed microservices and containers. Services are designed to fail, respawn and get updated frequently. Therefore you need automated configuration to set up new containers on distributed nodes quickly and automatically. Some required features:
- Make changes dynamically at runtime (e.g. change service behavior, database connection or log level of a specific instance)
- Change multi-dimensional properties based on a complex request or deployment context
- Enable / disable features based on the request context (e.g. display of a specific user interface for a specific region or device)
- Change behavior of cloud design patterns (see the later section "Resilience Design Patterns")
Two relevant frameworks for dynamic distributed configuration management are Netflix's Archaius and Spring Cloud Config. These frameworks use polling and callback mechanisms for dynamic configuration, as the traditional push concept (to specific IP addresses and hosts) does not work in elastic and ever-changing cloud-native environments.
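The poll-and-callback model can be sketched as follows. This is a simplified illustration of the idea, not the actual API of Archaius or Spring Cloud Config; the config store is simulated as a plain function:

```python
class DynamicConfig:
    """Sketch of the polling-plus-callback model: the client periodically
    fetches the latest configuration and notifies listeners of changes."""

    def __init__(self, fetch):
        self.fetch = fetch        # callable returning the latest key/value map
        self.values = {}
        self.callbacks = []

    def on_change(self, callback):
        self.callbacks.append(callback)

    def poll(self):
        latest = self.fetch()
        changed = {k: v for k, v in latest.items() if self.values.get(k) != v}
        self.values = dict(latest)
        for key, value in changed.items():
            for cb in self.callbacks:
                cb(key, value)    # e.g. reconfigure the logger at runtime
        return changed

# Simulated config store: the second poll flips the log level at runtime.
versions = iter([{"log.level": "INFO"}, {"log.level": "DEBUG"}])
config = DynamicConfig(fetch=lambda: next(versions))
events = []
config.on_change(lambda key, value: events.append((key, value)))
config.poll()
config.poll()
print(events)  # [('log.level', 'INFO'), ('log.level', 'DEBUG')]
```

Because each instance pulls its configuration, new containers on fresh IP addresses pick up the current settings automatically, without anyone pushing to a known host list.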
Scalability and Failover
A key feature of a cloud-native architecture is the ability to scale elastically depending on load and SLAs. This requires advanced cluster management, server-side and client-side load balancing, and resilience design patterns.
Cluster Management (Scheduling and Orchestration)
Flexible development and deployment is a key advantage of microservices and containers. New features are added and old ones pruned. Zero-downtime and failover are required but you also need efficient usage of your resources.
A cluster manager is designed for failover and high scalability. It is used to automatically orchestrate container scheduling and to manage hosts, including the application of rules and constraints to each host.
Various cluster management frameworks are already available especially for Docker. The following examples are some of the most relevant (and discussed in more detail here):
- Docker Swarm: A Docker-native framework, uses the Docker API, can easily leverage other Docker frameworks such as Docker Compose, it has to be combined with other frameworks such as etcd, Consul or ZooKeeper
- CoreOS Fleet: Low-level framework built directly on systemd, often used as “foundation layer” for higher-level solutions
- Kubernetes: Open sourced by Google and adopted by many other companies including IBM, Red Hat and Microsoft. Kubernetes is a great mix of sophisticated features and relatively simple installation / configuration. In contrast to some other sophisticated cluster managers, you can even set it up on your local machine for development with just a single "docker run" command. If you install it on a cloud platform, it leverages the platform's specific features, for example Amazon's ELB on AWS and Google's load balancer on Google Cloud Platform.
- Mesos’ Marathon: An orchestration framework on top of the powerful (but complex) Apache Mesos, a “distributed systems kernel”. Mesos is intended for large scale and multi-use of different frameworks on top of it (e.g. Apache Hadoop, containers via Marathon, batch processing via Chronos)
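The core job of all these cluster managers, placing containers on hosts subject to rules and constraints, can be illustrated with a deliberately naive greedy scheduler. This is a toy sketch; real schedulers in Kubernetes, Marathon or Swarm handle many resource dimensions, affinity rules, rescheduling and failures:

```python
def schedule(containers, hosts):
    """Naive greedy placement sketch: put each container on the host with
    the most free memory that can fit it (single-resource bin packing)."""
    placement = {}
    free = dict(hosts)  # host name -> free memory (GB)
    for name, required_mem in containers.items():
        candidates = [h for h, mem in free.items() if mem >= required_mem]
        if not candidates:
            raise RuntimeError(f"no host can fit container {name}")
        best = max(candidates, key=lambda h: free[h])  # spread load, avoid hot spots
        placement[name] = best
        free[best] -= required_mem
    return placement

hosts = {"node-a": 8, "node-b": 8}               # free memory per host in GB
containers = {"web": 3, "db": 6, "cache": 2}     # memory requests per container
print(schedule(containers, hosts))
# {'web': 'node-a', 'db': 'node-b', 'cache': 'node-a'}
```

Even this toy version shows why a cluster manager improves utilization: placement decisions are made globally against the current free capacity instead of statically assigning applications to machines.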
Load Balancing (Server-side and Client-side)
Servers come and go in a cloud-native architecture, so load balancing needs to become much more sophisticated (and therefore complex) with microservices and containers. Just distributing load across well-known IP addresses and hosts is not sufficient anymore. Concepts such as weighted load balancing based on several factors like traffic, resource usage or error conditions provide superior resiliency.
Traditional server-side load balancing has been used for years to distribute network or application traffic across a number of servers and to increase the capacity and reliability of applications. Well-known examples are F5's BIG-IP products or the Amazon AWS Elastic Load Balancing (ELB) service. They are used for so-called edge services, i.e. external service consumers or end-user web traffic.
In addition, many microservices architectures include client-side load balancing to avoid unnecessary inter-service communication. Frameworks such as Netflix Ribbon "embed" the client-side load balancer into each microservice. This reduces communication between internal microservices (so-called mid-tier or core services) to one hop instead of two.
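Weighted client-side selection can be sketched as follows. The weighting rule here (inverse of the recent error count) is an illustrative assumption, not Ribbon's actual algorithm, but it shows how error conditions shift traffic toward healthy instances:

```python
import random

def pick_instance(instances, rng=random.random):
    """Weighted random choice sketch: instances with more recent errors
    get proportionally less traffic (hypothetical weighting rule)."""
    weights = {url: 1.0 / (1.0 + errors) for url, errors in instances.items()}
    total = sum(weights.values())
    threshold = rng() * total
    for url, weight in sorted(weights.items()):
        threshold -= weight
        if threshold <= 0:
            return url
    return url  # guard against floating-point rounding at the upper edge

# One healthy instance and one with 9 recent errors:
instances = {"http://10.0.0.1": 0, "http://10.0.0.2": 9}
counts = {url: 0 for url in instances}
rng_state = random.Random(42)
for _ in range(1000):
    counts[pick_instance(instances, rng=rng_state.random)] += 1
print(counts)  # the healthy instance receives roughly 10x the traffic
```

Because every client makes this decision locally using registry data, no extra load-balancer hop is needed for mid-tier calls.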
Resilience Design Patterns
All the new concepts of a cloud-native architecture require new design patterns that offer general, repeatable solutions to commonly occurring problems. Resilience design patterns prevent cascading failures and allow services to fail fast and recover rapidly by implementing logic for latency tolerance, fault tolerance and failback.
One of the most well-known patterns is the Circuit Breaker, which is used to detect failures and encapsulate logic for preventing a failure from recurring constantly (during maintenance, temporary external system failure or unexpected system difficulties). The Akka framework has a nice explanation and implementation of this pattern. Netflix Hystrix also offers sophisticated implementations to enable latency and fault tolerance in distributed systems. "Application Resiliency Using Netflix Hystrix" is a great post on the eBay Tech Blog explaining how they leveraged it to realize cloud patterns.
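A minimal circuit breaker can be sketched in a few lines. This is far simpler than Hystrix (no thread pools, metrics windows or percentile-based tripping) and is only meant to illustrate the closed / open / half-open state machine:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: open after `max_failures` consecutive
    failures, fail fast while open, allow one trial call after `reset_timeout`."""

    def __init__(self, max_failures=3, reset_timeout=30, clock=time.time):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.clock = clock
        self.failures = 0
        self.opened_at = None   # None means the circuit is closed

    def call(self, func, fallback):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_timeout:
                return fallback()      # open: fail fast, do not hit the backend
            self.opened_at = None      # half-open: allow one trial call
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            return fallback()
        self.failures = 0              # success closes the circuit again
        return result

def flaky():
    raise RuntimeError("backend down")

now = [0]  # fake clock so the example is deterministic
breaker = CircuitBreaker(max_failures=2, reset_timeout=30, clock=lambda: now[0])
print(breaker.call(flaky, lambda: "fallback"))          # failure 1
print(breaker.call(flaky, lambda: "fallback"))          # failure 2: circuit opens
print(breaker.call(lambda: "ok", lambda: "fallback"))   # open: fails fast
now[0] = 31
print(breaker.call(lambda: "ok", lambda: "fallback"))   # half-open trial succeeds: "ok"
```

The fallback (a cached value, a default, or a degraded response) is what turns a backend outage into graceful degradation instead of a cascading failure.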
There are plenty of cloud patterns emerging (and more will come in the future). For example, the Kubernetes Tech Blog explains “Patterns for Composite Containers” such as “Sidecar Containers”, “Ambassador Containers” or “Adapter Containers”.
Container Solution Stacks
As you have seen in the sections above, there are plenty of frameworks and tool chains available, and the number is growing every month. This might remind many readers of Apache Hadoop and its unbelievably fast-growing ecosystem of mature and less mature frameworks. The same is true for containers today. Therefore some "solution stacks" are emerging to help you get started and manage all the different challenges with one single (and commercially supported) container stack, well known as a "distribution" in the Hadoop environment. Examples of container solution stacks are Tectonic (a Kubernetes + CoreOS platform), Docker Datacenter, Mantl or HashiCorp's Nomad. More will probably arise in the coming months.
We have now discussed several concepts, frameworks and patterns to realize a cloud-native architecture leveraging containers and microservices. However, you also need some kind of cloud platform to deploy and run all this on.
Private, Public or Hybrid Cloud-Native Platform
A cloud-native platform is a private, public or hybrid cloud which offers a self-service and agile cloud infrastructure (Infrastructure-as-a-Service, IaaS). On top of a cloud infrastructure you need a platform (Platform-as-a-Service, PaaS) where you can deploy and run your containers. The following picture shows the key characteristics of both:
Most enterprises select available mature offerings such as Amazon Web Services, Microsoft Azure or open source OpenStack for IaaS, and PaaS platforms such as Red Hat's OpenShift (which is based on Docker and Kubernetes) or Cloud Foundry (available open source and enhanced by several vendors such as IBM with Bluemix or Pivotal).
The key advantage of using an existing PaaS platform is the out-of-the-box support for most requirements of a cloud-native architecture such as elastic scalability, container orchestration, dynamic service discovery, load balancing or dynamic distributed configuration management. Thus you should evaluate different PaaS platforms before deciding to build your own based on all the different frameworks discussed above. Most platforms leverage one or more of these frameworks implicitly.
After discussing all the requirements and available frameworks for a cloud-native architecture in much detail let’s now take a look at how all this is related to middleware.
Relation to Middleware (Integration, API Management, Event Processing)
Before going on, I have to clarify: Microservices, containers and cloud-native architectures are not suitable for all scenarios. Remember: These introduce a lot of new concepts and complexity. “Microservices are not a free lunch”!
I will focus especially on integration platforms in the following paragraphs because integration is key for success in most middleware projects. Due to trends such as cloud, mobile, big data and Internet of Things you cannot survive without good integration in IT architectures.
An Enterprise Service Bus (ESB) is used in many enterprises as a strategic integration platform between custom applications, commercial-off-the-shelf software, legacy applications, databases and cloud services. However, not every ESB deployment needs to be cloud-native. In mission-critical deployments at banks, retailers, airlines, telcos and others, a central ESB with high performance, high availability and fault tolerance might still be the best choice for the next few decades.
On the other hand, an ESB is not the complex, central and heavyweight beast you might think of. This might have been true 5 to 10 years ago (and was one of the reasons several SOA projects failed at that time), and it might still be true for some vendors today. But in general (and valid for many vendors), an Enterprise Service Bus in 2016 is a mature, stable and easy-to-use component, which should offer:
- Orchestration and Choreography
- APIs and Business Services
- Independent Deployments
- Scalable and Lightweight Platform
Based on your requirements, you should be able to decide how cloud-native you need to be and whether you should leverage microservices and containers (with all their pros and cons) or not. Select only the concepts, tools and features you really need.
Having said that, let's take a look at a few different middleware examples and how you might leverage microservices, containers and a cloud-native architecture for them:
- Integration: Build (micro)services and APIs using the integration capabilities of the ESB; integrate and orchestrate different (micro)services (build composite services)
- API Management: Expose, publish and monetize microservices internally or to partners and the public via APIs.
- Event Processing: Correlate distributed microservice events in real time to add business value (e.g. fraud detection, cross-selling or predictive maintenance)
All the above middleware components
- Require agility and flexibility
- Control and leverage other microservices
- Have to support microservice characteristics themselves (containers, CI / CD, elastic scalability, etc.) to fit into a cloud-native architecture and to allow quick changes
Let's come back to the example of integration platforms and the ESB. If you need a more flexible, cloud-native integration solution instead of a classical, more central ESB deployment, then you have three options (do not worry about the branding or abbreviations in the product names):
Integration Middleware on Top of a PaaS
This is very similar to an on-premise ESB and is used for implementing "core services", i.e. central, often complex and mission-critical services. Development is done in the traditional IDE. However, the key difference is that the solution is cloud-native, i.e. it supports containers and microservices. You use this kind of integration middleware to develop integration applications that are deployed natively onto a PaaS platform such as Cloud Foundry or OpenShift. Some vendors offer a vendor-agnostic solution where you can deploy your integration applications anywhere without relying on a specific cloud platform or vendor.
You can develop different “cloud-native services” to be more agile, change quicker and provide web scale:
- Integration Apps and Services: Build consumable Web APIs out of backend web services like ERP, CRM, order management using enterprise technologies like SOAP, SAP, Oracle, IBM MQ, etc.
- Functional Microservices: Build apps focusing on business functionality without getting into code complexity
- API Choreography Services: Visually choreograph APIs leveraging the PaaS integration tooling (e.g. process orchestration, data mapper or connectors)
There are not many alternatives on the market for building integration applications that are deployed natively onto a PaaS platform. TIBCO BusinessWorks Container Edition is a vendor-agnostic example supporting Cloud Foundry, Docker, Kubernetes, AWS ECS, etc. JBoss Middleware Services allows the deployment of its middleware applications (including JBoss Fuse and A-MQ) onto OpenShift.
Cloud Integration Middleware (iPaaS)
An iPaaS (Cloud Integration) middleware is cloud-based and uses a web browser instead of a desktop IDE. It supports the execution of integration flows, the development and life cycle management of integrations, the management and monitoring of application flows, governance, and essential cloud features such as multi-tenancy, elasticity and self-provisioning. iPaaS can work closely together with an on-premise ESB or with integration middleware on top of a PaaS platform.
iPaaS tooling offers intuitive web-based integration and is intended for people with some technical understanding, e.g. of how to create and deploy REST services or how to configure connections and policies of Open APIs. It is usually used to build "edge services", sometimes also called "microflows", which might change more frequently and are often not that mission-critical.
A more detailed overview including the pros and cons of iPaaS can be found here: “iPaaS: What this cloud technology is and why it’s important”.
SaaS Cloud Integration Middleware (iSaaS)
This kind of SaaS solution offers an intuitive web-based user interface for the business user, i.e. the "Citizen Integrator", to realize personal integration without technical knowledge according to the do-it-yourself (DIY) principle. Citizen Integrators build new integration flows by configuring them rather than developing them from scratch. For instance, a business user creates an automatic flow to synchronize data via self-service between SaaS offerings such as Salesforce or Marketo and his Microsoft Excel sheets.
iSaaS integrations are clearly complementary to on-premise, PaaS and iPaaS integrations. They should also be viewed as "edge services" which are not strategic and mission-critical for the enterprise but very relevant for the specific business user. Examples of iSaaS solutions are SnapLogic, TIBCO Simplr or IFTTT.
Hybrid Integration Platform (HIP)
A key to success is being able to transfer content across the different platforms. Gartner calls this a Hybrid Integration Platform (HIP). The different components share metadata, a single IDE and consolidated operations management. Out-of-the-box integration with API Management components (API gateway and portal) is also very important for agile development, deployment and operations.
For example, you might want to develop an orchestration service with a PaaS-based integration solution and later port it to an on-premise integration platform. Or you might want to define a REST service (via the "contract first" principle) with an iPaaS middleware, using a mock for early testing, and later implement it on an on-premise ESB. The same service also needs to be exposed via an API to partners or for public access.
Some more Middleware Frameworks and Vendors
Finally, I want to highlight some other frameworks and vendors which might be relevant for realizing your cloud-native microservices but have not been mentioned in this article yet:
- The WSO2 Microservices Framework for Java is a good example of a low-level coding framework built on top of the vendor's open source middleware.
- Amazon EC2 Container Service (ECS) and Google Container Engine are two examples of "Containers as a Service (CaaS)" offerings which allow self-service usage of containers as a SaaS solution.
- Cloud vendors such as Amazon, Microsoft or Google have also become middleware vendors in the meantime. For example, Amazon AWS offers services for cloud messaging (SQS and others), streaming and analytics (Kinesis), containers (ECS), microservices (Lambda) and more.
- Plenty of other middleware vendors also work on cloud-native offerings. For more details see e.g. Software AG Cloud, Talend Integration Cloud or Oracle Cloud Platform.
- Middleware for the Internet of Things (IoT) is another sector which is growing significantly these days. For example, take a look at open source integration solutions such as Node-RED (based on Node.js, open sourced by IBM) or Flogo (based on Google's Go programming language, to be released and open sourced by TIBCO very soon). Both offer a zero-code environment with a web IDE for building and deploying integration and data processing directly onto connected devices using IoT standards such as MQTT, WebSockets or CoAP.
Finally, I would like to mention the Cloud Native Computing Foundation (CNCF), which might become much more relevant in the future for plenty of the frameworks discussed in this article. The CNCF was founded to help facilitate collaboration among developers and operators on common technologies for deploying cloud-native applications and services built on containers. Founding members included Google, Cisco, IBM, Docker and VMware. The first two projects hosted by the CNCF are Kubernetes and Prometheus.
Microservices, Containers and Cloud-Native Architectures Do NOT Fit into Every Project…
… but they have a huge influence on our thinking about IT architectures. In many new projects these concepts absolutely make sense and create a lot of benefits such as flexible development, deployment and operations. Think about the trade-offs and leverage the parts of a cloud-native architecture which make sense for your project. Modern middleware will leverage microservices, containers and cloud-native architectures! No matter if you take a look at Integration, API Management, Event Processing, Streaming Analytics, Business Process Management or any other kind of on-premise or cloud middleware.
Thanks for reading this extensive article. I think it is very relevant for all of us, no matter if you implement custom applications or leverage middleware in your projects. As always, I appreciate any feedback and discussions via Comment, Email, Twitter or LinkedIn.
By the way: the content of this article is also discussed in a slide deck which I first presented in April 2016 at JPoint in Moscow, Russia.