Over the past 10 years, Jenkins has evolved into the de facto standard tool for automation in software development. Last year, the first major Jenkins version in years was released: Jenkins 2. In this article you’ll read what’s new.

Introduction

Jenkins has over 120,000 active installations. For 90% of its users, Jenkins is mission critical: it’s not your average hobby project. An important factor in the success of Jenkins is the huge collection of available plugins. An entire ecosystem has grown around Jenkins, which makes it suitable for almost every environment.

[Image: Jenkins overview. Source: http://www.slideshare.net/asotobu/jenkins-20-65705621]

New in Jenkins 2

Jenkins 2 is a drop-in upgrade, fully backwards compatible with version 1.6. It features three large changes:

Better out-of-the-box experience

Functionality in the default Jenkins installation used to be fairly limited. Typically, you needed to install a bunch of plugins first to adapt Jenkins to a specific situation. This has improved quite a lot. During installation, you can now choose to have a default set of plugins installed. This allows you to get started right away.

Security is enabled by default now. On first startup, an initial admin password is set. You need to dig it up from the log file. It’s a bit more work to get started, but a lot safer than the previous situation, where Jenkins was unsecured by default. Knowing that bots scan the entire internet for unsecured Jenkins instances, I’d say the default has improved 😉

Revamped UI

The Jenkins UI has improved slightly over the 1.x UI. During installation and upgrade, a wizard is presented that helps you configure Jenkins. Once the installation is done, the UI improvements are marginal. Creating a job looks a bit different and the job configuration screen now includes tabs that make navigating to a specific configuration section a bit faster. The real UI changes are in project Blue Ocean. You’ll read more about that in a bit.

Pipeline as code

By far the biggest change in Jenkins 2 is the pipeline as code concept. A pipeline is an automated sequence of steps that brings software from version control into the hands of your end users. Pipeline as code allows you to describe pipelines in textual form and keep them in version control. These pipelines are written using a flexible Groovy DSL.

In Jenkins 1, creating pipelines was a bit of a pain, especially in a microservices environment with lots of separate jobs for building, testing and deploying. Builds for a project with 20 services quickly piled up to a stack of around 100 jobs. With Jenkins 2, pipelines are a lot more practical, for two reasons:

  1. You can create jobs that behave exactly how you want. A single job can perform a complete pipeline from commit to production deployment.
  2. Reuse of parts of jobs is a lot easier. You can read more about this further on in this article.

These changes position Jenkins for continuous delivery use cases and other more complex automation scenarios.

Pipelines

There are two ways to define a Jenkins pipeline. You can either type the pipeline script in the Jenkins UI, or you can place a file with the pipeline definition in version control. For the latter, the convention is to put a ‘Jenkinsfile’ in the root of the project that the job applies to. This is comparable to a Dockerfile or Maven pom.xml: a standard configuration file in a standard place.

When working on Jenkinsfiles in your IDE, there’s a GDSL file available that makes your IDE aware of the Jenkinsfile syntax. This enables highlighting and code completion in your IDE.

My first pipeline

The following pipeline definition builds a simple Maven project.

node('java8') {

  stage('Configure') {
    // Prepend the configured 'maven-3.3.9' tool installation to the PATH
    env.PATH = "${tool 'maven-3.3.9'}/bin:${env.PATH}"
  }

  stage('Checkout') {
    git 'https://github.com/bertjan/spring-boot-sample'
  }

  stage('Build') {
    sh 'mvn -B -V -U -e clean package'
  }

  stage('Archive') {
    // Collect the JUnit test reports, even if there are none
    junit allowEmptyResults: true, testResults: '**/target/**/TEST*.xml'
  }

}

The first step is selecting the type of Jenkins node where the build will run. In the simplest scenario, there’s only a single Jenkins ‘master’ node. More complex environments usually have a master node and a few to tens or hundreds of slave nodes. This allows for running lots of builds in parallel. In this example, we instruct Jenkins to run the job on a node with label ‘java8’. Jenkins will choose the first available node that is marked with this label. The entire pipeline will now run on this node. You can also choose to run different parts of the pipeline on different nodes, and even in parallel.
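As a sketch of that last point, different parts of a pipeline can each claim their own node. The labels ‘linux’ and ‘windows’ below are assumptions for illustration, not labels from the example above:

```groovy
// Hypothetical labels; each node block allocates an executor on the
// first available agent carrying that label.
node('linux') {
  stage('Build') {
    sh 'mvn -B clean package'
  }
}

node('windows') {
  stage('Windows smoke test') {
    // 'bat' is the Windows counterpart of the 'sh' step
    bat 'mvn -B test'
  }
}
```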

The example pipeline has four stages. A stage is a step in a pipeline, recognizable in the Jenkins UI. In the first stage, the installed tool ‘maven-3.3.9’ is added to the PATH variable of the OS. The second stage is a Git checkout, the third a Maven build and the fourth archives the unit test results.

When the pipeline is executed, the results are shown in the Jenkins UI:

[Image: pipeline stage view in the Jenkins UI]

Pipelines grow with you from simple to complex. With pipeline scripts, it’s possible to define multiple jobs without repeating yourself. This is a big advantage over traditional, point-and-click pipelines (which were basically sequences of jobs).

Pipelines survive Jenkins restarts. This is useful when you want to upgrade your Jenkins instance now and then, which usually requires a restart. Jenkins also offers the possibility to ‘replay’ a pipeline. This allows you to view a previously executed pipeline script in the UI, perform changes inline and run the pipeline again. This is very useful when developing pipeline scripts.

Pipeline syntax

If you haven’t worked with the Jenkins Groovy DSL before, the pipeline syntax takes some getting used to. To smooth the transition a bit, the Jenkins UI contains a pipeline snippet generator. This allows you to configure the most-often used build actions in a traditional point-and-click way, and generate the accompanying pipeline DSL code from there.
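For example, selecting the ‘Archive the artifacts’ build action in the generator and filling in a file pattern yields a one-liner like the following (the pattern is just an illustration):

```groovy
// Generated snippet: archive all jars from the target folder and
// fingerprint them for traceability across jobs
archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
```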

[Image: the pipeline snippet generator]

The pipeline reference documentation is also very useful and gives a fairly complete overview of all possible pipeline steps. It is noticeable that the pipeline DSL is relatively new: not all plugins are configurable through the DSL yet. In case the plugin documentation doesn’t help, you can always dive into the plugin source code. Jenkins components and plugins are written in Java or Groovy, and are therefore usually directly usable in a pipeline. This does require you to be familiar with Jenkins’ internals to some extent.

Workflow libs repository

Jenkins offers an internal Git repository for reusable pipeline scripts. These scripts are available to all your pipelines. One of the examples below shows how to use this repository. I personally don’t use the internal repository much. It doesn’t feel natural to use the internal Jenkins repository when your code and Jenkinsfile are already in a different repository.

Examples

The best way to get up to speed with the pipeline syntax is to simply get going with it. This section shows a number of examples of constructions that are regularly used in pipeline scripts.

Sending an e-mail when a build fails

try {
  // build steps here
} catch (e) {
  currentBuild.result = 'FAILED'

  // 'emailext' comes from the Email Extension plugin; unlike the basic
  // 'mail' step, it supports attaching the build log
  emailext to: '<to>', subject: '<subject>', body: '<body>', attachLog: true
  throw e
}

Using classes, constants and includes

// In file common/Constants.groovy:
class Constants {
  static final SONAR_URL = 'https://sonar.company.com'
}
// Return the class so the loading script can reference it
return Constants

// In Jenkinsfile:
def Constants = load 'common/Constants.groovy'
sh "mvn -B -V -U -e sonar:sonar -Dsonar.host.url='${Constants.SONAR_URL}'"

Defining a reusable workflow step in Jenkins’ internal workflowLibs repository

// In repo ssh://<user>@<jenkins>:2222/workflowLibs.git,
// file src/my/company/MyWorkflowSteps.groovy:
package my.company

def someBuildStep() {
  // implementation of a build step
}

// In Jenkinsfile:
def mySteps = new my.company.MyWorkflowSteps()
mySteps.someBuildStep()

Parallel execution of one or more parts of a job

parallel 'unit': {
  sh 'mvn test' 
}, 'integration': {
  sh 'mvn integration-test' 
}

For more examples, take a look at the slide deck of the Jenkins 2 talk I did at JavaOne last year.

Blue Ocean

The Blue Ocean project is a complete make-over of the Jenkins user experience. Initially, the project focuses on the UI around pipelines for developers. The end goal is to gradually replace the entire Jenkins UI.

Blue Ocean beta is available as a plugin. You can install it through the Jenkins plugin manager. After a restart of Jenkins, a button ‘Open Blue Ocean’ appears at the top of the screen. A click brings you to the overview of your builds:

[Image: Blue Ocean build overview]

For the seasoned Jenkins user, this might cause a light shock. Jenkins suddenly looks somewhat sexy 😉

The pipeline details show a timeline of the build, with steps for each pipeline stage. You can select a stage and view the logs for this stage:

[Image: Blue Ocean pipeline log view]

I think it’s a good improvement. Functionally, it doesn’t really bring that much new stuff, but the UI is a lot cleaner and looks a lot better. Blue Ocean is scheduled to go out of beta in March 2017.

Multibranch pipeline

A final noteworthy feature is the multibranch pipeline plugin. This allows you to automatically configure pipelines for all branches in a Git project. Just link the job to a project, and Jenkins starts scanning the repository and creates a pipeline for each branch it discovers. This is a life saver when your team workflow includes lots of pull requests and feature branches: it saves you loads of maintenance on your builds.
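In a multibranch pipeline, the Jenkinsfile typically doesn’t hard-code a repository URL. Instead it uses the generic ‘checkout scm’ step, which checks out the exact branch the pipeline was created for. A minimal sketch:

```groovy
node {
  stage('Checkout') {
    // 'scm' is provided by the multibranch job configuration; it points
    // at the branch (or pull request) this pipeline instance belongs to
    checkout scm
  }
  stage('Build') {
    sh 'mvn -B clean package'
  }
}
```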

[Image: Blue Ocean multibranch view]

The GitHub Organization Folder Plugin kicks it up a notch. This plugin will automatically create pipelines for all branches of all repositories in a GitHub organisation. There is one condition: each repository must contain a Jenkinsfile in order to be included in the list of jobs.

Jenkins 2 in practice

At the Dutch National Police, my team is building several web applications with an Angular 2.x frontend and a microservices backend with Spring Boot. We’ve been using Jenkins 2 for a while. We have upgraded a Jenkins 1.6 installation with about 50 builds to Jenkins 2.0 without any trouble.

We’ve replaced all existing traditional jobs with pipelines. That took us quite some time. We spent time on a generic job setup with reusable steps, and on rewriting the existing jobs. Even though all jobs were doing about the same work, there were quite some differences between them. For each job, we needed to assess whether those differences were significant or not. It turned out most differences weren’t significant and most jobs did pretty much the same thing.

We chose to use one single Git repository for all Jenkinsfiles for all jobs (with a subfolder for each job). The disadvantage of this approach is that the Jenkinsfile for a project does not live in the same place as the code for the project. The advantage is that you have all of your build definitions in one place. This allows you to easily refactor your build setup and perform bulk changes on all your builds.

We hardly make any use of the internal workflow repository in Jenkins. We only use it to bootstrap the generic components of our builds. These components are loaded from separate Groovy DSL files in the repository that contains all our build definitions. Two types of components exist: low level steps that perform a small part of a build, and high level steps that define a complete pipeline. The advantage of this approach is that we have a high level reuse for jobs that are similar, while still achieving reuse on a lower level for jobs that are less similar to each other.
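A minimal sketch of this two-layer setup, with all names hypothetical rather than taken from our actual code:

```groovy
// Low-level step: a small, reusable part of a build
def mvn(String goals) {
  sh "mvn -B -V -U -e ${goals}"
}

// High-level step: a complete pipeline for one type of job,
// composed from the low-level steps
def javaServicePipeline(String repoUrl) {
  node('java8') {
    stage('Checkout') { git repoUrl }
    stage('Build')    { mvn 'clean package' }
    stage('Archive')  { junit allowEmptyResults: true, testResults: '**/target/**/TEST*.xml' }
  }
}

return this
```

A Jenkinsfile for a typical service then reduces to loading this file and calling `javaServicePipeline` with its own repository URL, while a less typical job can compose the low-level steps directly.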

The result: our builds are far more consistent than before. It’s hardly any work to add new jobs, and we can manage the definition of all our jobs in a single place.

Future improvements

At JavaOne, Kohsuke Kawaguchi (creator of Jenkins) gave a sneak peek at the future of Jenkins. The upcoming changes are focused on ease of use. The pipeline model will be simplified to look less like programming and more declarative. Jenkins wants to cater for both point-and-click users and text editor users. A change I’m looking forward to: pipeline DSL processing will fail when a pipeline is read, not when it is executed. Currently, you won’t notice typos in your Jenkinsfile until you run the build.
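A rough sketch of what that declarative model looks like; the label and placeholders are illustrative:

```groovy
// Declarative pipeline: a fixed, validatable structure
// instead of free-form Groovy
pipeline {
  agent { label 'java8' }
  stages {
    stage('Build') {
      steps {
        sh 'mvn -B clean package'
      }
    }
  }
  post {
    failure {
      mail to: '<to>', subject: '<subject>', body: '<body>'
    }
  }
}
```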

Conclusion

Jenkins 2 is a powerful continuous delivery platform. The new version is a drop-in upgrade for 1.6 installations, contains UI improvements and offers a more curated user experience. The core new feature is pipeline as code, which allows you to describe your jobs in a DSL and put them in version control. Generally speaking: fewer clicks, more code. If you ask me, there’s no reason to stay at 1.6. Let’s upgrade 😉

Pipeline as code: Continuous Delivery pipelines with Jenkins 2

About The Author
- Bert Jan is a software craftsman at JPoint in the Netherlands and CTO at OpenValue. His focus is on Java, Continuous Delivery and DevOps. He is User Group leader for NLJUG, the Dutch Java User Group and a JavaOne Rock Star speaker. He loves to share his experience by speaking at conferences, writing for the Dutch Java magazine and helping out Devoxx4Kids with teaching kids how to code.
