The forces of history can act a bit like a pendulum, and this is certainly apparent when it comes to the relationship between development and operations.
The DevOps Disconnect
It isn’t difficult to see where the initial disconnect between developers and operations began. As I explained in a recent webinar, the very nature of the earliest computers mandated it.
The first computers were scarce, physically large, very expensive, and thus invariably shared resources. They were also finicky and fickle enough to require a dedicated IT collective to maintain the machines and protect them from the uninitiated.
Operations was tasked with keeping the machines running properly and with regularly executing the system-level jobs necessary to keep them that way. Developers, in contrast, had very limited rights and access: they had to submit their code to run as user-level jobs, usually only with permission and assistance from operations. This Dev-versus-Ops division of labor was necessary simply because of the nature of the machine, its requirements, and its operational parameters.
It’s not surprising that a corresponding “us versus them” mentality grew out of that division of labor.
Insiders versus outsiders. Dev versus Ops.
DevOps and the PC
Personal computers made it possible for individual developers to have their own machines; to develop, debug, and test with neither permission nor assistance from operations. In fact, operators often weren’t required at all.
This was especially the case with small businesses. It wasn’t uncommon for a lone developer or a small team to be fully responsible for building, testing, and deploying applications on all desktop machines. The same personnel were often responsible for configuring newly purchased machines, patching and updating existing ones, and so forth. Dev and Ops were effectively one.
PCs didn’t go away, but it wasn’t long before Dev and Ops were adversaries once again. This happened largely because PC software grew exponentially more complicated. New challenges included supporting various client/server development and deployment models, a number of different networking topologies and platforms, and multiple operating systems.
The need to manage so many individual desktops and coordinate increasingly complex applications across multiple servers again made the IT collective a practical necessity. The rise of far more complicated operating systems for personal computers, such as Windows, OS/2, Linux, and so forth, made it harder to find individuals capable of handling both Dev and Ops. Once again the roles had been separated by the nature of the beast.
The increasing number of clients, servers, and the difficulty of managing all of it pointed to the need for simplified and more standardized mechanisms. So the drive to deprecate clients and move back toward a greater role for central servers was already well under way when the World Wide Web changed everything yet again.
DevOps and the Internet
The Web brought plenty of new complexity of its own, but testing and deploying became significantly easier. Tools like Hudson, later forked to become the now-nearly-ubiquitous Jenkins, made it possible to set up continuous integration pipelines: each commit could now be automatically built, tested, validated, and packaged for deployment. That, in turn, created a need for more automation of infrastructure management.
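In modern Jenkins, such a pipeline is typically described in a Jenkinsfile kept in the repository itself. The sketch below is a minimal declarative example, assuming a Maven-based Java project; the stage names and shell commands are illustrative, not prescriptive.

```groovy
// Minimal declarative Jenkinsfile sketch.
// Assumption: a Maven-based project; swap in your own build commands.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B compile'   // compile on every commit
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B test'      // run the automated test suite
            }
        }
        stage('Package') {
            steps {
                sh 'mvn -B package'   // produce a deployable artifact
            }
        }
    }
}
```

Hooked up to a webhook from the source repository, Jenkins runs this pipeline on every commit, so a broken build or failing test surfaces within minutes instead of at release time.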
It’s fair to say that the sheer complexity of modern product development, along with increasing user expectations, has served to drive Dev and Ops back together.
The more you can iterate before launch, the better your product will typically be, so tighter release cycles are essential to fit more iterations into an often-fixed release calendar. Automating repeatable processes like integration, build, test, and deployment radically cuts the time it takes to move from programming to finished product. As DevOps guru Gene Kim explained in his recent talk at our MERGE 2016 conference, this provides huge competitive advantages.
As you can see, individual portions of DevOps as a concept aren’t new, but many of its tools and techniques are. The notion of automating parts of builds, testing, and deployment has been with us for much longer, but tying those parts together and controlling them via external tools to produce a unified, highly automated pipeline is a clear step forward. Companies that can deliver incremental progress faster and more frequently lure users away; those that can’t court failure, even destruction.