Over the past few months there has been increasing discussion around the use of ‘sidecar’ or ‘ambassador’ applications with microservices and container-based services to provide a homogeneous interface to platform infrastructure, for example service discovery, dynamic configuration and resilient inter-service communication. This article provides an overview of the concept, and aims to help you decide whether this approach is appropriate for your microservice-based application.
What is a Sidecar?
A sidecar application is deployed alongside each microservice that you have developed and deployed to a server/hosting instance. It is conceptually attached to the “parent” service in the same manner a motorcycle sidecar is attached to the motorcycle – hence the name. A sidecar runs alongside your service as a second process and provides ‘platform infrastructure features’ exposed via a homogeneous interface such as a REST-like API over HTTP.
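To make this concrete, here is a minimal sketch of a service talking to its co-located sidecar over a REST-like HTTP API. The port, the `/config` endpoint and the JSON shape are purely illustrative assumptions, not any particular sidecar's real API; the `fetch` parameter is injectable so the code can be exercised without a running sidecar.

```python
import json
import urllib.request

# Hypothetical local port for the sidecar; the value is purely
# illustrative.
SIDECAR_BASE = "http://localhost:8078"

def sidecar_url(feature: str, *segments: str) -> str:
    """Build a URL for a sidecar REST endpoint, e.g. /config/timeout."""
    return "/".join([SIDECAR_BASE, feature, *segments])

def get_config(key: str, fetch=urllib.request.urlopen) -> str:
    """Fetch a dynamic configuration value from the sidecar.

    `fetch` defaults to a real HTTP call but is injectable so the
    function can be tested without a live sidecar process.
    """
    with fetch(sidecar_url("config", key)) as resp:
        return json.loads(resp.read())["value"]
```

Because the parent only ever speaks plain HTTP to localhost, the same sidecar can serve a Java, Python or Node.js parent unchanged.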
‘Platform infrastructure features’ is a catch-all term for the underlying infrastructure facilities that are provided by the service execution platform, such as service discovery, dynamic service configuration and resilient inter-service communication. It can be argued that these features provide the glue for assembling microservice components, or if you follow Sam Newman’s analogy of microservice development as ‘town planning’, then these features provide the transport links, commoditised utilities and policy enforcement mechanisms.
The diagram below is taken from the Netflix blog post which introduces their sidecar application, Prana, and shows how the pieces of a sidecar-based microservice platform can be assembled:
Figure 1. Netflix’s sidecar Prana architectural overview (taken from the Netflix blog)
The Netflix blog states that the use of a sidecar application to provide “non-intrusive platform capabilities” has gained popularity inside and outside of Netflix, particularly when operating and managing a heavily microservice-based ecosystem. The primary benefit of using a sidecar application is that it can offer a homogeneous, non-language specific interface to platform infrastructure for services that are increasingly heterogeneous in terms of implementation and deployment technologies.
Microservices: Multiple Development Languages, But One Platform
An often touted trait of the popular microservice architectural pattern is the ability to implement services in a variety of languages, the so-called ‘polyglot’ application/platform. This polyglot approach can itself present many challenges, such as increased requirements on the knowledge and skills of the development team, and the need for multiple deployment mechanisms. On the other hand, it also provides many advantages, such as the ability to use the ‘right tool for the job’, and allows externally-sourced disparate components to be easily integrated into the platform.
The primary challenge with the polyglot microservice approach is that for each language utilised, a set of platform infrastructure support libraries must be created, effectively reinventing the wheel for each language integration with the platform service discovery, configuration etc. This overhead might be acceptable for an application with two languages, but what about a platform with four, six or eight?
The problems presented by language heterogeneity were initially most visible during the build and deployment phase of developing microservices, and accordingly organisations such as Netflix and Amazon invested heavily in virtualisation technologies, with the primary goal of consolidating the deployable unit of code/binary into a homogeneous artifact.
The use of virtualisation allowed the abstraction of the application build process and associated deployable artifact away from a proprietary build tool and dependency management framework, and the resulting proprietary binary, to an encapsulated build factory and resulting Virtual Machine (VM) images, such as AWS AMIs or Packer images. These VM images could then be deployed onto a hypervisor-managed platform, for example, AWS's EC2 or a locally managed VMware cluster.
The resulting VM image was effectively language neutral, and nicely homogenised the deployable artifact of a service. However, applying the software development maxim of 'encapsulate what varies' to the build and deployment process has recently been taken to another level with the introduction of container technologies, such as Docker, which attempt to address the 'heavyweight' nature of traditional VM technology.
Containers, Containers Everywhere (But Still One Platform)
Docker exploded onto the software development scene in 2014, and properly introduced Linux LXC container technology to the masses. Many large organisations are now moving their application deployment from VMs to containers, or if they are arriving late to the microservice party, are moving directly from proprietary binary artifacts to containers.
The primary advantage of container-based platforms in comparison with traditional VM platforms is the reduced resource requirements, both in terms of hardware resource needed to run the underlying platform infrastructure on the host, and also the time taken to initialise the guest image or container. There are currently trade-offs with containerisation technology, such as process (security) isolation and host/guest kernel compatibility, but this is not stopping the tidal wave of container adoption.
The containerising approach nicely homogenises the application/service deployable unit, but does nothing to address the problem of the heterogeneous code contained within the application accessing the underlying platform infrastructure features, and still means that multiple language-specific libraries must be developed and included within the containerised application.
Sidecars to the Rescue!
The current absence of a homogenised approach to accessing the platform infrastructure features when using containers is exactly where the sidecar pattern can assist developers.
By pulling out the code that accesses the infrastructure features from a service into a new application deployed within a separate container (or image), and providing a standardised language-neutral interface to the resulting artifact, the need for language-specific libraries can be eliminated (or at least minimised). In its simplest form this is effectively an ‘Extract Sidecar’ refactoring, which is very similar to the classical ‘Extract Class’ refactoring pattern.
A common approach to implementing an interface to the underlying platform services is to expose features via a dedicated sidecar container that provides a REST-like API over HTTP. However, there is no reason a container should not expose endpoints using a binary Interface Definition Language (IDL) protocol, such as Thrift or Google Protocol Buffers, or utilise a proprietary protocol, such as Docker container linking.
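As a sketch of what the sidecar side of such an HTTP interface might look like, the following toy example exposes a service discovery lookup using only the Python standard library. The registry contents, port and URL scheme are invented for illustration; a real sidecar would consult Eureka, ZooKeeper or similar rather than an in-memory dict.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy in-memory registry standing in for a real discovery backend.
REGISTRY = {"recommendations": {"host": "10.0.0.5", "port": 8080}}

def route(path: str) -> tuple[int, bytes]:
    """Map a request path to (status, body); kept pure for testability."""
    if path.startswith("/discovery/"):
        instance = REGISTRY.get(path.rsplit("/", 1)[-1])
        if instance is not None:
            return 200, json.dumps(instance).encode()
    return 404, b""

class SidecarHandler(BaseHTTPRequestHandler):
    """Serve the REST-like discovery API over local HTTP."""
    def do_GET(self):
        status, body = route(self.path)
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8078), SidecarHandler).serve_forever()
```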
The primary requirement for choosing the interface protocol is that it must not limit integrations at the language level. Providing that a protocol has client libraries or bindings available in a multitude of languages or, as in the case of Docker container linking, that the protocol is transparent to the parent application, this should be an acceptable option.
Current Sidecar Implementations
Netflix recently released Prana, an open-source “sidecar” application the company developed to allow non-JVM-based services to use the NetflixOSS JVM-based platform support client libraries via an HTTP API.
Communicating cross-process over HTTP to Prana enables applications written in other languages such as Python and Node.js or services like Memcached, Spark and Hadoop to utilise features provided by a NetflixOSS library without the library being re-written for the target language or platform.
Prana provides the following features to the parent service to which it is attached: service registration and discovery, via the Eureka service; dynamic configuration, provided by Archaius; resilient inter-service communication, using Hystrix/Ribbon; and runtime insight and diagnostics of the service instance and corresponding environment, via an embedded Admin Console.
The sidecar approach has also been used with NetflixOSS components outside of Netflix, for example Andrew Spyker has done great work within his Acme Air NetflixOSS-based demonstrator application. The Acme Air application uses a sidecar for service discovery, health checks and dynamic configuration management.
Docker Ambassador Cross-container Linking
The Docker website recommends that the sidecar (or ‘ambassador’) pattern be utilised for cross-container linking. The use of a sidecar here removes the need to hardcode network links between a service consumer and provider, which increases service portability.
This is a very focused use of the sidecar approach, and only provides a single platform infrastructure feature (service location transparency), but nicely demonstrates that a sidecar does not necessarily need to be an ‘all-singing, all-dancing’ application.
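By way of illustration, legacy Docker container linking injects the provider's address into the consumer's environment as variables such as `REDIS_PORT_6379_TCP_ADDR`, and an ambassador container can resolve its upstream from these rather than having the address hardcoded. A minimal sketch (the `redis` alias and port values are invented examples):

```python
import os

def linked_upstream(alias: str, port: int, env=os.environ) -> tuple[str, int]:
    """Resolve the host/port of a Docker-linked container from the
    environment variables that legacy `--link` injects, e.g.
    REDIS_PORT_6379_TCP_ADDR and REDIS_PORT_6379_TCP_PORT.
    `env` is injectable for testing outside a container."""
    prefix = f"{alias.upper()}_PORT_{port}_TCP"
    return env[f"{prefix}_ADDR"], int(env[f"{prefix}_PORT"])
```

An ambassador using this lookup can be re-pointed at a different provider simply by re-linking the container, with no change to the consumer.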
AirBnB’s SmartStack
In 2013 AirBnB released its ‘SmartStack’ application for service discovery. SmartStack provides two sidecar services: Nerve, for service registration, and Synapse, for service discovery.
Nerve registers services in a distributed ZooKeeper configuration store, by updating ephemeral nodes with address/port combinations for the associated ‘parent’ backend application. Nerve also ensures the availability of a service by performing regular health checks, for example by probing an application ‘/health’ HTTP endpoint, or attempting a simple read and write to a database or middleware service.
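A Nerve-style health probe against a ‘/health’ HTTP endpoint can be sketched in a few lines of Python. This is a simplified illustration, not Nerve’s actual implementation (Nerve is written in Ruby and supports several check types); `fetch` is injectable so no live service is needed.

```python
import urllib.request

def is_healthy(base_url: str, fetch=urllib.request.urlopen) -> bool:
    """Probe the parent service's '/health' endpoint.

    Any connection error or non-200 response marks the service
    unhealthy, so its registration can be withdrawn from the store.
    """
    try:
        with fetch(f"{base_url}/health") as resp:
            return resp.status == 200
    except Exception:
        return False
```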
Synapse runs beside a ‘parent’ application and transparently handles the inter-service communication. Synapse periodically reads the service location information stored in ZooKeeper, and then configures an HAProxy instance running locally to the parent service with the updated routing information. When the parent application wants to talk to an external service, it simply talks to the local HAProxy, which redirects the communication appropriately without the service knowing any of the routing details.
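The essence of what Synapse’s watcher does can be sketched as a function that renders an HAProxy backend section from the instance list read out of ZooKeeper. The backend name and instance addresses below are invented, and real Synapse generates considerably richer configuration than this.

```python
def haproxy_backend(name: str, instances: list[tuple[str, int]]) -> str:
    """Render a minimal HAProxy backend section from discovered
    service instances; 'check' enables HAProxy's own health checking."""
    lines = [f"backend {name}"]
    for i, (host, port) in enumerate(instances):
        lines.append(f"    server {name}-{i} {host}:{port} check")
    return "\n".join(lines)
```

Each time the instance list in ZooKeeper changes, the configuration is regenerated and HAProxy reloaded, so the parent’s view of ‘the service’ at localhost never changes.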
So, are Sidecars a Silver Bullet?
In a word, no. Anyone who has worked within the IT industry for more than a few months will realise that there are no silver bullets. However, sidecars are being used effectively by many organisations, and learning about the various approaches implemented by fellow developers is always useful.
An obvious disadvantage of the sidecar approach is the inefficiency introduced by the parent communicating cross-process with the sidecar application. Even when using binary protocols, this communication is clearly less efficient than in-process calls. There has also been discussion in the community as to whether the use of sidecars introduces a redundant network hop, for example with Docker container linking, which can make debugging and logging more difficult.
It can also be difficult for sidecar applications to access all of the parent service information required. For example, the Netflix blog states that for JVM-based applications the use of the native NetflixOSS client libraries is preferred, as these libraries are able to access internal application (and associated JVM) state for monitoring and reporting purposes.
It could also be argued that sidecars are an intermediate implementation step for platform infrastructure features, which will eventually be rendered redundant when microservice platform frameworks mature and provide all of these features ‘natively’. For example, Google’s Kubernetes, RedHat’s fabric8 and Mesosphere’s Mesos/Marathon cluster management frameworks are developing rapidly, and already perform varying degrees of platform resource allocation and service discovery.
Although sidecar applications may not be a silver bullet, they do provide an interesting and very useful pattern for encapsulating access to platform infrastructure features using a language-neutral approach, and as such this pattern can be a useful addition to your microservice development toolbox.
I’m keen to know what you think, and so please leave comments and questions below!
If you are interested in learning more about the use of microservices within the Enterprise, then be sure to check out my earlier Voxxed article, ‘Exploring Microservices in the Enterprise’.