German startup Giant Swarm aims to make life easier for developers by providing a simple microservice infrastructure, freeing up their hands to do the coding they love. Or, as the Swarm folk put it, giving them “the power and freedom to build the software that runs the world.” In this interview, we speak to team member Puja Abbassi about the company’s mission to confront the complexity dogging modern software development, what they’ve learnt from the great SOA experiment, and what’s on the roadmap ahead. We also discuss the comparisons to Kubernetes, and what their work has taught them about the limitations of Docker.
Voxxed: What are the key problems you are looking to solve with Giant Swarm?
Abbassi: We want to enable developers and their companies to focus on their code and product, without having to worry about the underlying infrastructure anymore. Infrastructure, and the differences between development, staging, testing, and production environments, have always been a problem, and these problems are only amplified by the recent trend towards microservice-style applications.
There’s always a lot of operations overhead to infrastructure, even on IaaS. We make it simple to run, manage, and scale microservices in containers, while giving users the assurance of running distributed services on a resilient infrastructure.
Can you give us a technical dive into your software? What languages is it compatible with, and will you be extending this?
Basically, Giant Swarm is a native microservice infrastructure for containers, which is quite unique. Native in that we don’t use any VMs or any other kind of virtualization in between. We are based on CoreOS, which we run directly on bare metal. Containers (currently Docker containers) run directly on the CoreOS cluster. Even our own tools and the microservice tooling we offer to our users run in containers.
As for compatibility, almost anything that runs in a Docker container also runs on Giant Swarm. There are a few things, like “privileged mode” (which gives full access to the host), that we can’t allow, but other than that users are free to choose any language, framework, and datastore they like. Further, we are thinking of extending to other container formats, such as rkt.
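To illustrate the compatibility claim, here is a minimal sketch of a hypothetical service image (the base image, file names, and port below are illustrative assumptions, not from the interview). An ordinary image like this, which needs no special host access, would run on Giant Swarm the same way it runs on a local Docker host:

```dockerfile
# Hypothetical example: a minimal image for an ordinary web service.
# Nothing here requires privileged mode, so a container built from
# this image is the kind of workload the platform supports.

# Any base image, language, and framework may be chosen.
FROM python:3-slim

# The service code itself (illustrative file name).
COPY app.py /app/app.py

# An ordinary, unprivileged port.
EXPOSE 8000

CMD ["python", "/app/app.py"]
```

By contrast, a container started with `docker run --privileged`, which grants full access to the host, is exactly the kind of workload that cannot be allowed on a shared container infrastructure.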
What’s your take on the movement towards microservices? Are you concerned about the rise in complexity in software development?
We used microservices in our former company and we are building our own product as microservices. As we call ourselves a microservice infrastructure, we might be a bit biased. We believe that a lot of software will move towards microservice architectures because, done right, they enable you to develop and iterate much faster; just look at the big tech companies that are already using them.
However, you’re right: microservices are hard and add complexity. The complexity on the development side exists mostly because people are not yet used to this pattern, and a lot of things around microservices are still not very clear. The harder part is handling the complexity on the operations side, which is exactly what we are working on eliminating for our customers.
How would you respond to those who predict that microservices will ultimately fail in the enterprise or become “the new SOA”?
It is hard to establish new development patterns, or, even worse, new architecture patterns, in a company, and the bigger the company, the harder it gets. SOA promised us a lot, but my feeling was that it was mainly driven by theory and lots of thinking about how an architecture should be, without taking into account many of the factors that influence this in practice. This resulted in perfectly publishable academic work for researchers and lots of work for consultants, but in practice companies mostly ended up with a bunch of components that were nearly as entangled as the monoliths before them, with the added complexity of an enterprise service bus full of business logic.
Microservices architectures (MSA), by contrast, have been driven less by research and theory, and emerged mainly out of necessity in practice, e.g. at big tech companies like Netflix and co. This is not to say that the concept is perfect yet, or that Netflix, Spotify, and the like have “solved” architecture and never go down anymore (see all the services that suddenly stopped working when AWS went down recently).
Microservices are still a young pattern, and there is still a lot of work ahead before we can make them work nicely. But I have a feeling that we have learned from our failures with SOA. You can partly see that in the uptake and early success that MSA have had, even in bigger enterprise companies. We see it a lot in retail/e-commerce, for example, in companies that are actually feeling pain with their monoliths or with SOA. And enterprises are more careful this time around: you rarely see the huge overhaul project, but much more often smaller teams breaking away parts of legacy systems and building microservices out of them.
Do you have any issues with Docker? How would you like to see it evolving in the future?
Any project as young as Docker still has some issues, and working on use cases like ours, which test the limits of the tools we use, we certainly run into some here and there. One issue we have is that, as a container infrastructure, we sometimes don’t need the full package that comes with the current single docker binary. We’d like to see the project split up into multiple pieces that work independently of each other. This is actually already on the Docker roadmap as the second item after security, called the “plumbing project”, and first steps are being taken with the Open Container Initiative and runC.
How does your technology compare to Kubernetes? What sets it apart?
Some part of our feature set, e.g. auto-restarting, re-scheduling, and replicating containers, is quite similar to what Kubernetes offers in terms of orchestration. However, as most people who have actually tried setting up a Kubernetes cluster will tell you, it is no small feat and involves quite some work to get it running, especially if you want to customize it with a decent storage solution and networking, for example.
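For readers unfamiliar with the orchestration features mentioned above, the sketch below shows how they are declared in Kubernetes terms: a Deployment manifest asking for a fixed number of replicas, with a health probe that triggers automatic restarts. The service name, image, replica count, and probe path are illustrative assumptions, not details from the interview:

```yaml
# Hypothetical sketch: a Kubernetes Deployment declaring the kind of
# behavior described above - keep three replicas of a service running,
# restarting or re-scheduling containers when they fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service            # illustrative name
spec:
  replicas: 3                      # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: example-service
          image: example/service:1.0   # illustrative image
          livenessProbe:               # failed probes trigger a restart
            httpGet:
              path: /healthz
              port: 8000
```

Declaring this, of course, presumes you already have a running cluster, which is the setup work the interview refers to.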
As I mentioned earlier, we believe that developers and companies should not need to worry about the infrastructure themselves anymore. With us, they can rely on being handed a ready-to-run infrastructure that has all the features, without having to install or manage a thing, be that hosted by us or on-premise in the customer’s datacenter.
Furthermore, we think that users should have the utmost freedom in choosing the tools they work with, so our platform is flexible enough to let them swap out parts of it, for example using Consul instead of our service discovery, or a different storage solution underneath. On top of that, we are building an infrastructure that is specifically geared towards microservices, so a lot of the tools you need for running them in production are already included in Giant Swarm.
What’s on the roadmap for Giant Swarm?
Giant Swarm is still young, so we have lots of things to do. We try to work closely with our customers to find out what the right next features for a microservice infrastructure should be and which use cases we might want to support out-of-the-box. Some examples are features that make high-availability setups easier, like guaranteed distribution of containers across different machines and waiting for containers to be actually ready to handle requests; others are more general and enable use cases like auto-scaling and updating services without downtime.
There are also efforts to enable multi-tenancy without giving up any functionality, thus also enabling smaller customers to deploy their projects on our platform and get the same resiliency and power that previously only bigger companies could afford.