No one likes a broken nightly build. Not the testers, not DevOps, and especially not the developers who then need to scramble to find, triage, and fix the issue, then commit the fix. All before they even consume their morning dose of caffeine after a long night of coding.

Because of this universal (and justified) dislike of broken builds, the concepts of continuous integration and continuous delivery (CI/CD) came to be and were quickly adopted by R&D teams. One of the oldest and most popular solutions for implementing end-to-end CI/CD in application development lifecycles is Jenkins Pipeline.

Before we dive into the specifics of building and adding Jenkins Pipeline to your projects, it’s worth understanding what continuous delivery and continuous integration actually are. And also why you need them in your life.

What is CI/CD (Continuous Integration and Continuous Delivery)?

Continuous Integration (CI) and Continuous Delivery (CD) are both popular DevOps practices that are part of a software development pipeline/orchestration approach. They are often abbreviated and merged as “CI/CD”, though each refers to a different set of processes and events.

Continuous Integration (CI) is a development methodology that assumes frequent (and thus “continuous”) integration of code into a shared repository. It includes development, code analysis, unit testing, code coverage calculations, and build activities. Many of the tasks are executed using various automation tools.

Continuous Delivery (CD) is the process of deploying changes into production in a way that prevents broken code from finding its way onto software in live environments. The goal of CD is to ensure that the code is always in a deployment-ready state for all supported environments, regardless of the number of developers that may be making changes to it at any time.

What is Jenkins Pipeline?

Jenkins Pipeline (or simply “Pipeline” with a capital “P”) is a series of events or tasks performed in a specific sequence to transform code from version control into a stable software product, using a suite of plugins in Jenkins. It thus enables the implementation and integration of continuous delivery processes within Jenkins.

A Jenkins pipeline contains several stages, such as build, test, deploy, and release. Interlinked and sequential, each stage contains events that together make up a continuous delivery pipeline.

What is a JenkinsFile?

Jenkins Pipeline provides a customizable and scalable automation system that lets you write delivery pipelines as scripts, an approach also dubbed “Pipeline as Code”. Written as plain text in a Groovy-based Domain Specific Language (DSL), these scripts live in a file called a Jenkinsfile and are surprisingly easy to write and comprehend.

A Jenkinsfile stores the entire CI/CD process as code. As such, the file can be reviewed and checked into a Source Code Management (SCM) platform (such as Git or SVN) along with your code. Hence, the term “Pipeline as Code”.

There are two different types of syntax you can use to construct the Jenkins pipeline, each drastically different in approach from the other: declarative and scripted.

Scripted pipeline syntax

With Scripted pipeline syntax, the pipeline is traditionally written directly in the Jenkins web UI. It then runs on the Jenkins master with the help of an executor (agent). The code is defined inside node blocks, and it takes relatively few resources to translate the scripted pipeline into individual commands.
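To make the node-block structure concrete, here is a minimal Scripted pipeline sketch. The repository URL and the mvn command are placeholders; substitute your own project details:

```groovy
// Scripted pipeline: imperative Groovy code inside a node block
node {
    stage('Checkout') {
        // Fetch the source from version control (URL is a placeholder)
        git url: 'https://github.com/example/app.git'
    }
    stage('Build') {
        // Run the build; adjust the command to your build tool
        sh 'mvn clean install'
    }
}
```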

Declarative pipeline syntax

The Declarative pipeline is a newer approach that makes pipeline code easier to read and write. Because it imposes a predefined hierarchy on pipeline design, it makes it easier to organize and control all aspects of pipeline execution.

With Declarative pipeline syntax, code is written in a Jenkinsfile within pipeline blocks that can then be checked into the SCM of your choice.
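For illustration, a minimal Declarative Jenkinsfile sketch; the stage names and mvn commands are placeholders for your own build:

```groovy
// Declarative pipeline: everything lives inside a top-level pipeline block
pipeline {
    agent any                        // run on any available agent
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean install'   // replace with your build command
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'            // replace with your test command
            }
        }
    }
}
```

Note the fixed hierarchy (pipeline → agent/stages → stage → steps), which is what makes Declarative pipelines easier to read and validate than free-form Scripted code.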

Blue Ocean for Jenkins Pipelines

Generally speaking, most teams opt for Declarative pipelines over Scripted ones, and not only because of the richer syntax. The Declarative approach is generally more intuitive and easier to learn, especially with the introduction of Blue Ocean for Jenkins Pipelines.

Blue Ocean has been compared to a shiny layer of paint on top of the fully functional vehicle that is Jenkins Pipeline. Blue Ocean for Jenkins Pipeline, essentially the Jenkins Pipeline GUI, is a suite of plugins that introduces improved UX, as well as a set of tools to visualize the jobs being executed in a personalized view for every member of the team.

Moreover, you don’t need knowledge of any scripting language to create Jenkins Pipeline projects. But it sure does help.

How to build a Jenkins Pipeline project in Blue Ocean

Before you can get started with your very first Jenkins pipeline, you will need to install Jenkins, the Jenkins Pipeline plugin and the Blue Ocean plugin package. Once you have all the ingredients, you can start setting up your Jenkins Pipeline project.
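If you run Jenkins from the official Docker image, one way (among several) to install the prerequisites is the bundled jenkins-plugin-cli tool. The plugin IDs below are the published ones for the Pipeline suite and Blue Ocean:

```shell
# From inside the official Jenkins Docker image:
# installs the Pipeline suite (workflow-aggregator) and Blue Ocean,
# along with their dependencies
jenkins-plugin-cli --plugins workflow-aggregator blueocean
```

Alternatively, you can install both through Manage Jenkins → Plugins in the web UI.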

The Blue Ocean UI makes Pipeline project creation an ocean breeze. Not only does it help you set up the project, but it also automatically creates and writes your JenkinsFile for you, while you manage the project with little to no manual scripting.

Start by accessing Blue Ocean in your Jenkins menu.

If you have no Pipeline projects in this Jenkins instance, you’ll see a welcome window inviting you to create your first Pipeline.

If it’s not your first visit, you will instead see the dashboard, where you can choose to add a New Pipeline under the Pipelines tab.

Clicking the link will start a user-friendly wizard, beginning with selecting the location of your code.

Unsurprisingly, configuring most of the above options is fairly straightforward and similar across providers. The only differences lie in the authentication and token generation process for each.

For example, we can choose Git and proceed to connect to a local or remote (via SSH) git repository by entering its URL.

Enter your repository URL, and Jenkins will provide you with a public key. Since you want to give Jenkins permission to make commits, you should create a new service user and add the public key to it. Then simply click Create Pipeline.

This will prompt Blue Ocean to scan your repository’s branches for a Jenkinsfile, commencing a Pipeline run for each branch containing one. If Blue Ocean cannot find any Jenkinsfile, it will create one for you. You will, of course, need to help it out using the Pipeline Editor.

By default, the right pane of the editor UI shows the Pipeline Settings. Here, you can define the agent used by the pipeline, as well as environment variables.
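The settings you choose here end up as directives in the generated Jenkinsfile. A hypothetical sketch of what an agent plus environment variables look like in Declarative syntax (the variable names and values are illustrative only):

```groovy
pipeline {
    agent any                          // or a specific label / Docker image
    environment {
        // Hypothetical variables; define whatever your build needs
        APP_ENV  = 'staging'
        MVN_OPTS = '-Dlicense.skip=true'
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean install $MVN_OPTS'
            }
        }
    }
}
```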

In the left and main pane of the editor UI you will see your Pipeline Editor, where you can start adding pipeline stages, and populating them with steps, each including different tasks.

To add your first stage, click on the + icon and enter a meaningful name for the stage, like Build.

Then, click + Add Step to add the first step to this stage. For example, select Shell Script and enter mvn clean install -Dlicense.skip=true. You can then add Print Message steps along the stage to indicate progress to the user and to ease logging and troubleshooting.
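Behind the scenes, the editor writes these choices into the Jenkinsfile. The resulting stage would look roughly like this (the exact formatting Blue Ocean generates may differ slightly):

```groovy
stage('Build') {
    steps {
        echo 'Starting the build...'                 // Print Message step
        sh 'mvn clean install -Dlicense.skip=true'   // Shell Script step
        echo 'Build finished.'                       // Print Message step
    }
}
```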

Finally, before you can run your pipeline, you need to save your project to source control by clicking Save. This will bring up the Save Pipeline prompt, where you can leave a comment describing the changes made to the Jenkinsfile.

Clicking on Save & run will save any changes to the pipeline as a new commit. Then it will start a new Jenkins Pipeline Run based on those changes. Finally, Blue Ocean will navigate to the Activity View for this pipeline.

Summary

Pipeline as Code is becoming an increasingly popular DevOps concept, one that has been adopted by many leading tools like Azure DevOps.

However, Jenkins Pipeline is a more well-rounded and mature solution for end-to-end CI/CD integration and implementation, especially if you take into account the ease of use and cross-team collaboration features Blue Ocean brings to the table.

About the author

Ilana is a content writer for the Codota.com blog