These days, it seems like everybody is talking about microservices. You can read a lot about them in hundreds of articles and blog posts, but my recommended starting point would be this article by Martin Fowler, which initiated the huge discussion about this new architectural concept. This article is about the challenges, requirements and best practices for creating a good microservices architecture, and what role an Enterprise Service Bus (ESB) plays in this game.

Branding and Marketing: EAI vs. SOA vs. ESB vs. Microservices

Let’s begin with a little bit of history about Service-oriented Architecture (SOA) and Enterprise Service Bus to find out why microservices have become so trendy.

Many years ago, software vendors offered middleware for Enterprise Application Integration (EAI), often called an EAI broker or EAI backbone. This middleware was a central hub. Back then, SOA was just emerging, and the tool of choice was an ESB. Many vendors simply rebranded their EAI tool as an ESB; nothing else changed. Some time later, new ESBs appeared that used distributed agents instead of a central hub. So, the term “ESB” came to stand for quite different kinds of middleware. Many people dislike the term because they only know the central variant, not the distributed one.

Therefore, vendors often avoid talking about an ESB. They cannot sell a central integration middleware anymore, because everything has to be distributed and flexible. Today, you can buy a service delivery platform. In the future, it might be a microservices platform or something similar. In some cases, the code base might still be the same as that of the EAI broker from 20 years ago. What all these products have in common is that you can solve integration problems by implementing “Enterprise Integration Patterns”.

To summarise the history of branding and marketing of integration products: pay no attention to sexy, impressive-sounding names! Instead, make looking at the architecture and features the top priority. Ask yourself what business problems you need to solve, and evaluate which architecture and product might help you best. It is amazing how many people still think of a “central ESB hub” when I say “ESB”.

Death of the Enterprise Service Bus (ESB)?

Let’s get to the crux of this piece: Is the ESB concept dead? The answer clearly is no! However, an ESB is no longer recommended as a central, inflexible backbone for your whole enterprise. When you hear “ESB” today, you should think of a flexible, distributed, scalable infrastructure, where you can build, deploy and monitor any kind of (micro)services in an agile and efficient way. Development and deployment can be done on-premise, in the cloud, or a mixture of both (e.g. using the cloud just for short-lived test environments or for handling consumer peaks).

You use an ESB for what it is built for: Integration, orchestration, routing, (some kinds of) event processing / correlation / business activity monitoring. You can also build applications via (micro)services, which implement your requirements and solve your business problems. Deploy these services independently from each other with a standardised interface to a scalable runtime platform – automatically. The services are decoupled and scale linearly across commodity hardware.

That is my understanding of an ESB as of today. An ESB is the best tool for these requirements! You “just” have to use the ESB wisely, i.e. in a service-oriented way, NOT in an ESB-oriented (central) way. Call it ESB, integration platform, service delivery suite, microservices platform, or whatever you want.

In addition to this tool (which I still call ESB), you use a service gateway for security, policy enforcement and exposing (micro)services as Open API to external consumers. The service gateway manages your integration services (built with your ESB), your application services (built with your ESB or any other technology), and external cloud services (you do not care how they are built; you just care about the service contract).

One more thing: Do you really need some kind of a “bus”? Well, a bus makes sense if you want to correlate events happening in different (micro)services. Keep these events in-memory and make them visible for real-time monitoring, analytics and pro-active (predictive) actions. More details about this topic later on. In defining my understanding of a modern ESB, I’ve already discussed microservices. So, as you see, ESB and microservices are not enemies, but friends that benefit from each other!

Definition of the Term “Microservices”

Let’s define the term “microservices”. As you have seen in the last section, the design, architecture, development and operation of applications must change. Organisations need to create a portfolio of services they can reuse in multiple apps. Still sounds like SOA? Indeed, but there are also important differences:

  • No commitment to a unique technology
  • Greater flexibility of architecture
  • Services managed as products, with their own lifecycle
  • Industrialised deployment

That is the beginning of the microservices era: Services implementing a limited set of functions. Services developed, deployed and scaled independently. This way you get shorter time to results and increased flexibility.

Challenges of Microservices

With all the benefits they bring, microservices also create several challenges:

  • All of these services require integration.
  • All of these services and technologies require automation of deployment and configuration.
  • All of these services require logging and monitoring.
  • All of these services require hybrid deployment.

So, forget about the product discussion for now. Think about the architecture you need to create microservices. The upcoming sections will provide you with six key requirements to overcome those challenges and leverage the full value of microservices:

  • Services contract
  • Exposing microservices from existing applications
  • Discovery of services
  • Coordination across services
  • Managing complex deployments and their scalability
  • Visibility across services

No matter if you use an ESB, a service delivery platform or “just” custom source code, be sure you’ve ticked off the following six requirements in your future projects for creating an agile, flexible and efficient microservices architecture.

Requirement #1: Services Contract

A service contract is the number one requirement in a world of distributed, independent services. The service provider uses the contract to express the purpose of the microservice, and its requirements. Other developers can easily access this information.

In the SOA world, a contract was defined with a SOAP interface. I don’t want to start another flame war – I think SOAP is still a good standard for internal communication, as it offers a lot of standards for security and more. Besides, tools meanwhile offer good support for the most important WS-* standards such as WS-Security or WS-Policy.

In the microservices world, REST (to be correct: some kind of RESTful HTTP) is the de-facto standard. I think the reason is not so much the better performance, but a good architecture with simplicity, separation of concerns, no state, and a uniform interface. This is perfect especially for mobile devices and Internet of Things (IoT), two of the major drivers for microservices.
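
To make this tangible, here is a minimal sketch of such a RESTful contract, assuming a standard JAX-RS runtime (the resource, path and payload names are invented for illustration):

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // Hypothetical customer lookup microservice. The contract is expressed by the
    // resource path, the HTTP verb and the media type of the response.
    @Path("/customers")
    public class CustomerResource {

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public Customer getCustomer(@PathParam("id") String id) {
            // Service logic would be delegated to a backend here; hard-coded for the sketch.
            return new Customer(id, "Jane Doe");
        }

        // Simple payload type; serialisation to JSON is handled by the runtime's JSON provider.
        public static class Customer {
            private final String id;
            private final String name;
            public Customer(String id, String name) { this.id = id; this.name = name; }
            public String getId() { return id; }
            public String getName() { return name; }
        }
    }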

You can also use different data formats with REST, e.g. you can choose between JSON and XML. While the lightweight JSON format is perfect for mobile devices, I still think that XML (including XML Schema) is the better enterprise format. You can define schemas and do transformations and validations with much less effort and mature tooling. Performance was an argument against XML in the past. With today’s powerful commodity servers and in-memory computing, this is no longer a huge disadvantage in most use cases.
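
The mature XML tooling mentioned above is part of the standard JDK. As a minimal sketch (the file names are placeholders), validating a payload against an XML Schema takes only a few lines:

    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;
    import java.io.File;

    public class PayloadValidation {
        public static void main(String[] args) throws Exception {
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            // order.xsd and order.xml are hypothetical example files.
            Schema schema = factory.newSchema(new File("order.xsd"));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new File("order.xml")));
            System.out.println("Payload is valid against the schema.");
        }
    }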

Communication is often done via HTTP. However, HTTP does not fit or scale well for many “modern use cases”. Messaging standards such as JMS are a good option for an event-enabled enterprise. WebSockets, MQTT and other standards are also emerging for communication with millions of devices – an important requirement for the Internet of Things. Thus, it is important that your microservices architecture supports different data formats and transport protocols without re-building services again and again.
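
As a small, hedged sketch of the messaging option (assuming a JMS 2.0 provider; the JNDI names and the payload are invented), publishing an event to a topic looks like this:

    import javax.jms.ConnectionFactory;
    import javax.jms.JMSContext;
    import javax.jms.Topic;
    import javax.naming.InitialContext;

    public class OrderEventPublisher {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            // The JNDI names are provider-specific assumptions.
            ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
            Topic topic = (Topic) jndi.lookup("jms/orderEvents");
            try (JMSContext context = factory.createContext()) {
                // Fire-and-forget event; interested microservices subscribe independently.
                context.createProducer().send(topic, "{\"orderId\":\"4711\",\"status\":\"CREATED\"}");
            }
        }
    }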

Requirement #2: Exposing Microservices from Existing Applications

Most of the business of organisations still lives in existing applications. They need to mash up the functionalities of these applications with external services or their own microservices. Therefore, integration is the foundation for microservices. You can either build completely new services or expose parts of existing applications as a microservice. Such a part can be an API, an internal service or some legacy source code with a new interface.

Over time, microservices are likely to be reused in various contexts, and that means various communication needs. Separating the transport logic from the service logic is a best practice that proves vital in the context of microservices. When building the logic of a microservice, you should not have to think about how the service communicates with an endpoint – which could be an enterprise server (XML / SOAP), a cloud service (XML / HTTP), a mobile device (JSON / HTTP) or a dumb IoT device (low-level TCP, MQTT, or maybe even a proprietary protocol).
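
A minimal sketch of this separation (all names are illustrative, not from a specific product): the service logic is written once against a plain interface, and thin transport adapters map each protocol onto it.

    // Pure service logic: no knowledge of HTTP, JMS, MQTT or any payload format.
    public interface ShipmentService {
        String trackShipment(String shipmentId);
    }

    // In a real project each type would live in its own file.
    class DefaultShipmentService implements ShipmentService {
        @Override
        public String trackShipment(String shipmentId) {
            // Business logic only; data access is omitted in this sketch.
            return "IN_TRANSIT";
        }
    }

    // One of several possible transport adapters; an MQTT or SOAP adapter would wrap
    // the same ShipmentService without touching its logic.
    class HttpShipmentAdapter {
        private final ShipmentService service;

        HttpShipmentAdapter(ShipmentService service) {
            this.service = service;
        }

        String handleRequest(String shipmentId) {
            // Map the transport-level input (e.g. a path parameter) to the service call
            // and the result to a JSON response.
            return "{\"shipmentId\":\"" + shipmentId + "\",\"status\":\""
                    + service.trackShipment(shipmentId) + "\"}";
        }
    }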

Requirement #3: Discovery of Services

A service contract is important. However, you also have to be able to discover and use other services. Services have to be published via a service gateway. The gateway enforces consumption contracts, ensures Y-scaling and reliability of microservices, and allows the reuse of microservices in multiple contexts without change.

The service gateway makes microservices available. It uses open standards such as SAML, Kerberos, OAuth, WS-* or XACML – depending on your requirements. Besides, developers need an easy way to discover microservices and their contracts. Usually a self-service portal is used to offer a service catalog and information about contracts.

While SOAP has been supported by several frameworks and tools for years, Swagger seems to be becoming the default standard for defining, implementing, discovering and testing REST services. Let’s take a look at the definition from its website: “Swagger is a simple yet powerful representation of your RESTful API. With the largest ecosystem of API tooling on the planet, thousands of developers are supporting Swagger in almost every modern programming language and deployment environment. With a Swagger-enabled API, you get interactive documentation, client SDK generation and discoverability.” Swagger is also already integrated into many middleware products.
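
As a sketch of what discovery can look like at runtime (the gateway URL and path are assumptions; where the Swagger document lives depends on your portal or gateway), a consumer can simply fetch the machine-readable contract:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ContractDiscovery {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Hypothetical gateway endpoint exposing the Swagger description of a service.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://gateway.example.com/customer-service/swagger.json"))
                    .GET()
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // the machine-readable contract of the microservice
        }
    }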

Open API and API Management

While this article has talked about microservices so far, you should be aware that most vendors do not talk about them in the context of discovery, but about APIs, Open API and API Management. Like “ESB”, these are just terms. It does not matter whether you call it “Microservice Registry”, “API Management” or anything else. What really matters are the business problems to solve, their requirements and a good architecture.

The article “A New Front for SOA – Open API and API Management” explains the terms “Open API” and “API Management” in more detail and gives a technical overview of the components of an API Management solution: Gateway, Portal and Analytics. It also explains how API Management is related to an ESB in an enterprise architecture.

[Image: API Management components – Gateway, Portal and Analytics]

If you want to learn more about available open source and proprietary API Management products on the market and how to choose the right one, then you should read this article: API Management as a Game Changer for Cloud, Big Data and IoT: Product Comparison and Evaluation.

Requirement #4: Coordination Across Services

Microservices and their granularity are ideal for the development and maintenance of each service. But that pushes more complexity towards the app itself – complexity that those apps often cannot manage, as they are frequently executed on platforms with constrained resources (battery, network, CPU). Combining services into higher-level logic that serves the purpose of apps or business processes proves to be faster to develop and easier to maintain.
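
In code, such a composite service is essentially a thin orchestration that calls several microservices and returns one aggregated result – a minimal sketch (the endpoints and the naive JSON handling are invented for illustration):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Hypothetical composite service: aggregates a customer profile and its orders
    // so that a mobile app needs only one round trip.
    public class CustomerOverviewService {
        private final HttpClient client = HttpClient.newHttpClient();

        public String getOverview(String customerId) throws Exception {
            String profile = call("https://customer-service.example.com/customers/" + customerId);
            String orders = call("https://order-service.example.com/orders?customerId=" + customerId);
            // Combine both results into a single response for the consuming app.
            return "{\"profile\":" + profile + ",\"orders\":" + orders + "}";
        }

        private String call(String url) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        }
    }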

A graphical tool can be used to build microservices, but also to create composite services easily and efficiently:

[Image: graphical tool for building microservices and composite services]

The coordination of microservices can be done in different ways: Stateful or stateless. Service or event-driven. Even though stateless is best practice for a single service in most cases, a specific coordination / composite service might work better as stateful process.

Pros of stateful processes:

  • Easier to develop when state is shared across invocations
  • Does not require an external persistence store
  • In general, optimised for low latency

Cons of stateful processes:

  • Consume more memory if process is not well designed
  • Does not force the developer to design the process state
  • The process state cannot be queried without involving the process

In-Memory Data Grid

In many use cases, the change of context / state should be shared as an event in an in-memory data grid to drastically improve performance and deliver ultra-low, predictable latency. It is important to understand that an in-memory data grid offers much more than just caching and storing data in memory. Further in-memory features include event processing, publish / subscribe, ACID transactions, continuous queries and fault tolerance. See the slide deck in the following blog post for some Real World Use Cases and Success Stories for In-Memory Data Grids.
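
A minimal sketch of the publish / subscribe part (using the open source data grid Hazelcast purely for illustration – this article does not prescribe a product, and the map and topic names are invented):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import java.util.Map;

    public class StateChangeGrid {
        public static void main(String[] args) throws InterruptedException {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();

            // Shared in-memory state, partitioned and replicated across the cluster.
            Map<String, String> orderState = hz.getMap("order-state");

            // Other services subscribe to state-change events...
            hz.<String>getTopic("state-changes").addMessageListener(message ->
                    System.out.println("State change: " + message.getMessageObject()));

            // ...and a microservice publishes its context change instead of keeping it local.
            orderState.put("order-4711", "SHIPPED");
            hz.<String>getTopic("state-changes").publish("order-4711:SHIPPED");

            Thread.sleep(1000); // give the asynchronous listener a moment before shutting down
            hz.shutdown();
        }
    }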

Requirement #5: Managing Complex Deployments and their Scalability

The context of utilisation of services will vary a lot. Services need to scale very rapidly. Automation is key for agile, flexible and productive microservices development. Without continuous integration / continuous delivery (DevOps), you cannot realise the microservices concept efficiently.

This way, you continuously deploy, configure and manage your applications and middleware, on-premise or in the cloud. Tools should offer end-to-end scripting, automation and visibility via dashboards, as well as monitoring of the quality of deployed applications, port management and elastic load balancing.

Continuous delivery / DevOps can be implemented with automation tools such as Chef, Puppet and Docker. You can deploy microservices everywhere, including private data centers, virtual machines and cloud environments – supporting infrastructures such as Amazon Web Services, VMWare or OpenStack. It’s important to understand that every microservice is built and deployed independently. Instead of a self-coded / scripted DevOps environment, you can also use a product for continuous delivery. A product has the advantage that it supports a lot of the required functionality out-of-the-box and therefore reduces effort significantly. In most cases, this product should come from the same vendor you use for your microservices – to leverage plenty of out-of-the-box features. However, if you choose a product, be sure that it a) is extensible for technologies from other vendors and b) supports the integration of other automation tools and cloud infrastructure services.

Be aware that DevOps is an ideology, not a development methodology or technology! Thus, you need an organisational change, not just a product which supports you. Read the 10 Myths of DevOps to learn more about this topic.

Unified Administration

Unified administration is another key success factor for a good microservices architecture. Even though you develop microservices with different technologies (e.g. Java, Scala, Python, or a proprietary graphical tool), make sure that you can administer and monitor all microservices with a single user interface. Full visibility is essential; otherwise a “microservice chaos” will ensue.

To make this possible, you cannot deploy every microservice to a different runtime, of course. I think even with microservices, you should choose a specific scalable, fault-tolerant, performant runtime for your project. Even though it might be a basic idea behind microservices, I do not like the idea that every developer can use any programming language, framework and runtime. In the long term, such a project or product is tough to maintain, and it is hard to ensure service level agreements (SLAs). If you use a cloud service where someone else has to ensure the SLA, then that’s fine. You do not have to care about the technology and runtime behind its service contract. However, within your own project, you have to care about SLAs and maintainability.

Requirement #6: Visibility Across Services

Finally, after deploying and running your microservices in production, you can combine events, context and insights from different services for instant awareness and reaction. Correlation of events is the real power, as the folks from Google, Amazon and Facebook can no doubt attest.

Event correlation is a technique for making sense of a large number of events and pinpointing the few events that are really important in that mass of information. Even though it is a little bit off-topic, this is the future for any kind of data coming from microservices, Big Data, the Internet of Things, and so on. Therefore, I think this topic is important to mention here. Let me direct you to an article which explains complex event processing and streaming analytics in more detail, as well as several real-world use cases: Real-Time Stream Processing as Game Changer in a Big Data World with Hadoop and Data Warehouse. Use cases for event correlation can be found in every vertical. Some examples are network monitoring, intelligence and surveillance, risk management, e-commerce, fraud detection, smart order routing, pricing analytics and algorithmic trading.

The need for a Bus?

This does not just close the list of requirements for microservices. Remember the discussion about the need for a bus? Event correlation is the requirement where you really need one. However, this bus is not an ESB, but an (in-memory) event server:

[Image: an in-memory event server acting as the bus for event processing]

You get events from many different sources (e.g. microservices, standard applications, legacy code) and correlate them in real time within the bus to react proactively.
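
To make the idea concrete, here is a deliberately simplified sketch of event correlation (a real deployment would use a CEP or streaming engine; the event types, window and “fraud pattern” are invented): events from two different microservices are correlated within a time window, and a matching pair triggers a proactive reaction.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Toy correlator: flags a customer when a "payment failed" event from the payment
    // service and an "address changed" event from the account service arrive within 5 minutes.
    public class EventCorrelator {

        static final class Event {
            final String type;
            final String customerId;
            final Instant timestamp;
            Event(String type, String customerId, Instant timestamp) {
                this.type = type; this.customerId = customerId; this.timestamp = timestamp;
            }
        }

        private static final Duration WINDOW = Duration.ofMinutes(5);
        private final Deque<Event> recent = new ArrayDeque<>();

        public void onEvent(Event incoming) {
            // Drop events that fell out of the correlation window.
            recent.removeIf(e -> Duration.between(e.timestamp, incoming.timestamp).compareTo(WINDOW) > 0);

            for (Event e : recent) {
                if (e.customerId.equals(incoming.customerId) && isSuspiciousPair(e.type, incoming.type)) {
                    System.out.println("Possible fraud pattern for customer " + incoming.customerId);
                }
            }
            recent.add(incoming);
        }

        private static boolean isSuspiciousPair(String a, String b) {
            return ("PAYMENT_FAILED".equals(a) && "ADDRESS_CHANGED".equals(b))
                || ("ADDRESS_CHANGED".equals(a) && "PAYMENT_FAILED".equals(b));
        }

        public static void main(String[] args) {
            EventCorrelator correlator = new EventCorrelator();
            Instant now = Instant.now();
            correlator.onEvent(new Event("PAYMENT_FAILED", "42", now));
            correlator.onEvent(new Event("ADDRESS_CHANGED", "42", now.plusSeconds(60)));
        }
    }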

Microservices with TIBCO, IBM, Oracle, Software AG, Microsoft, SAP, WSO2, MuleSoft, Talend, JBoss and others

This article describes different requirements and best practices for a good microservices architecture; independent of any specific software vendor or product.

To get a feeling for how the discussed requirements relate to software products from a middleware vendor, I want to refer to a blog post I wrote: Microservices and DevOps with TIBCO Products. TIBCO offers the complete middleware stack to solve all described requirements. The blog post explains how you can realize microservices with TIBCO products such as ActiveMatrix BusinessWorks, API Exchange and Silver Fabric. Complex Event Processing and Streaming Analytics can be implemented with BusinessEvents and StreamBase.

Other software vendors also have products to build microservices, of course. The completeness of vision – to speak in Gartner terms – differs a lot between different solutions. You can realize microservices with each vendor’s software or even without a product at all. Open source vendors such as MuleSoft, JBoss Fuse, Talend or WSO2 have a smaller product portfolio and fewer features than the well-known proprietary vendors such as TIBCO, Oracle, IBM, Software AG, SAP or Microsoft. While JBoss Fuse is a very developer-oriented tool, the others are more “UI-driven” with graphical development and debugging. MuleSoft focuses especially on ESB and API Management. WSO2 probably has the broadest product portfolio of the open source vendors. Talend comes from the ETL / Data Integration world and might be the best open source option if you also have to address master data management and data quality problems.

Proprietary vendors have more powerful (but sometimes also more complex and more expensive) products far beyond the open source vendors. IBM has at least one product for every requirement; often you can (or have to) choose between several different powerful products. For example, IBM offers several completely different ESB products. TIBCO has a more consolidated portfolio with loosely coupled, but highly integrated products. Frankly, I do not know the other proprietary vendors in all details, so I will not comment on them. Good starting points for more research are Oracle Service Bus and Software AG’s webMethods. From my on-site experience at many different customers, I think SAP and Microsoft have a very special role. Companies usually only consider using one of these two vendors for middleware if they already use a lot of their software. Therefore, it only makes sense to evaluate SAP or Microsoft if your company has made a strategic decision in this direction.

Regarding product comparison, evaluation and selection, think about the requirements for a good microservices architecture discussed above and evaluate your short list of vendors and products. It is also important to consider total cost of ownership (TCO), return on investment (ROI) and long-term risk while creating your short list.

Microservices are Independent, Scalable Services!

Microservices are independent, scalable services. A modern architecture uses a scalable platform, which allows automating deployment of different technologies / services / applications independently. Use the tool(s) of your choice to define service contracts, implement (micro)services and service discovery, automate independent and scalable deployment. Coordinate different (micro)services and react proactively in real time to events by doing event correlation in-memory. That is how you create a good microservices architecture.


About The Author
- Kai Wähner works as Technology Evangelist at TIBCO. Kai’s main area of expertise lies within the fields of Big Data, Analytics, Machine Learning, Integration, SOA, Microservices, BPM, Cloud, Java EE and Enterprise Architecture Management. He is a regular speaker at international IT conferences such as JavaOne, ApacheCon or OOP, writes articles for professional journals, and shares his experiences with new technologies on his blog (www.kai-waehner.de/blog). Contact: kontakt@kai-waehner.de or Twitter: @KaiWaehner. Find more details and references (presentations, articles, blog posts) on his website: www.kai-waehner.de

15 Comments

  • Matthew Faulkner

    Kai, thank you for clarifying between ESB v ESI. A clearly written and informative brief without bias. We need objective writing like this to lower the hype and lift effective tool use.

  • AS

    Agree that the ESB as it exists right now should evolve into a set of services, one of which will be a naming service like in CORBA.

    RE “Coordinate different (micro)services” yes, and preferably via explicit coordination like processes.

    RE “react proactively in real time to events by doing event correlation in-memory” – a dispatch service?

    RE “That is how you create a good microservices architecture” I think this will help to the application architecture as well – see http://improving-bpm-systems.blogspot.ch/2014/12/architecting-application-architecture.html

    Thanks,
    AS

  • Nathaniel Auvil

    “You use an ESB for what it is built for: Integration, orchestration, routing, (some kinds of) event processing / correlation / business activity monitoring. You can also build applications via (micro)services, which implement your requirements and solve your business problems. Deploy these services independently from each other with a standardised interface to a scalable runtime platform – automatically. The services are decoupled and scale linearly across commodity hardware.”

    An ESB is totally unneeded complexity for all of the things you list.

  • susidman

    IMO ESB still goes with microservices. If you take WSO2 ESB (http://wso2.com/products/enterprise-service-bus/ ) as an example, it addresses the ‘Challenges of Microservices’ listed in this article. WSO2 ESB is designed in a way that users can customize and extend the given extension points. It also supports both on-premise and cloud deployments. Further, you can use Puppet to automate the deployments. As I see it, microservices are a subset of the functionalities that come with an ESB.

  • Danielle

    Interesting article. Your readers might also find real user reviews for Oracle SOA Suite on IT Central Station to be helpful. As an example, this user writes, “The bus virtualizes services with OSB (Oracle Service Bus) as it guarantees a secure and holding performance with throttling downstream. It allows us to hide systems like SAP, AS400 and more modern systems.” You can read the rest of his review here: https://goo.gl/meyHZV

  • Paul Mehmet

    Oracle OSB is not an Enterprise Service Bus per se but a component within an ESB: SOA Suite is the ESB. With Fusion 12c, Oracle seems to have extended OSB from the external gateway to the ESB by adding it as the client gateway as well. But it is not the ESB itself, just part of the Oracle Fusion family of products.

    P.S. OSB proxies are now called pipelines. As an old-fogey I am not happy! 🙁

  • DevCat

    This article speaks Warewolf! Everything about it describes the power behind this incredible open source software application. We have been trying to categorise Warewolf for a while now, and this is exactly it.
    Thanks for the interesting read, and for nailing it!

  • Nicolas PAOLI

    Thank you Kai for this article.

    I however have a comment about the microservice definition.
    If the integration has to contact different data sources to create new information, it is by definition dependent on those data sources. The integration can’t be independent by definition.
    If services must be independent, why did you say “All of these services require integration.” in the microservices definition?

    • Kai Waehner

      Hi Nicolas,

      I think every microservice is independent from others in the sense that you develop, deploy and scale it independently. That is the huge benefit compared to monoliths.

      However, microservices also have to communicate with each other. This can be done directly (dumb pipes and smart endpoints via REST) or via an integration (micro)service. Of course, if you combine or integrate different microservices with each other, then they depend on each other, too. Depending on scenario, use case and architecture, you have to decide how independent your microservices shall be.

      In “more complex integration microservices” you integrate different services, data structures and technologies. Such a microservice only works if all other microservices are available, too. Otherwise, you cannot execute it. Sometimes this makes sense, sometimes you can avoid it.

  • Habib Qureshi

    One of the core principles of the microservices architecture is being technology agnostic between the various services and doing away with the pitfalls of central governance, providing independence and responsibility to developers.
    When you bring it all down to one vendor solution, you are again moving away from the main idea and just using the protocols and technology standards to the enterprise’s liking.

    • Kai Waehner

      Yes, the developer should be able to choose the technology for each microservice. But there should be guidance and some limitations.

      It is not about using one vendor solution! You can use whatever you want. However, if every developer can choose his or her own language or tool, you will have trouble managing and maintaining all that. Therefore, a key requirement for a successful microservice architecture is to use just a few technologies in bigger projects. You might use Java, Go, JavaScript and one or another commercial tool to build microservices. But you should not begin with 20+ technologies.

      • TheNoSayer

        I would rather use a framework/platform like https://nidulus.io with one standard for services than sitting in a team where everyone can pick and chose any language they like for their feature, any cloud they themselves prefer, and any self-written framework they prefer. It will be a failed platform where nobody knows how to get the full-picture of what is going on or how/where on the interwebs it is deployed or where to find the logs for a specific reported problem/bug. What a horror-project…. I do love to change language every now and then and develop in many different languages, but NO. Just NO 🙂

        Extremely few changes are needed if at some point someone decides the project should migrate to another platform – only the input function name and its parameters. The code that the service executes when called would remain; just the extras for how it is called need to be changed. That could even be scripted then…

        • TheNoSayer

          Oh, and judging by the text on the site I linked, no BUS is needed to solve that problem. BUSes are slow anyway. It sounds more like a MESH-like architecture is used, mixed with direct TCP and not a broker/bus.
