In today’s fast-paced world, it’s crucial to reduce the time between development and deployment. Continuous integration and continuous delivery (CI/CD) software enable organizations to do so by setting up automated workflows for testing, packaging, and delivery.
Jenkins is arguably the go-to CI/CD software. It sits at the heart of many IT infrastructures, allowing code to move from a developer’s machine to the client’s server as quickly as possible. In this article, we discuss Jenkins, explore pipeline syntax and uses, and examine the differences between the two types of pipelines in Jenkins.
Jenkins is an open-source automation server that is widely used to set up CI/CD workflows. CI/CD is a modern software delivery approach that decreases time to market. In a typical CI/CD setup, the following steps ensure that well-tested code is automatically deployed to a production machine, without manual effort:
Developers commit code to a shared repository.
The code is automatically built and unit tested on every commit.
Successful builds are packaged into deployable artifacts.
Artifacts are deployed to a staging environment for further testing, and then to production.
Jenkins has built-in functionality and native plug-ins that support all the above and more. Whether you want to write a 100-step deployment script, compile code against every Git commit, calculate unit-test coverage on the fly, or identify bugs and vulnerabilities before merging code, Jenkins is the way to go.
A job in Jenkins is a user-specified description of work, typically divided into sequential steps. For example, a job may fetch source code from a Git repository, compile it using the configured compiler, run it inside a staging environment, examine the output for any errors, and send an email notification to the user.
Jenkins offers different types of jobs to handle most business use cases. The most commonly used job types are:
Freestyle project: a flexible, general-purpose job configured through the web UI.
Pipeline: a job whose workflow is defined as code, usually in a Jenkinsfile.
Multibranch pipeline: a pipeline that is automatically created for each branch of a repository.
Multi-configuration project: a job that runs the same steps across multiple environments or configurations.
Jenkins Pipeline is a suite of plug-ins for creating automated, recurring workflows that constitute CI/CD pipelines. A pipeline includes all the tools you need to orchestrate testing, merging, packaging, shipping, and deploying code.
A pipeline is typically divided into multiple stages and steps, with each step representing a single task and each stage grouping together similar steps. For example, you may have “Build”, “Test”, and “Deploy” stages in your pipeline. You can also run existing jobs within a pipeline.
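For instance, a pipeline stage along the following lines can trigger an existing job with Jenkins’ built-in build step (the job name legacy-packaging-job is just a placeholder for a job that already exists on your Jenkins instance):
stage('Package') {
    steps {
        // Trigger an existing job from within the pipeline.
        // 'legacy-packaging-job' is a placeholder; substitute the name of a
        // job that is already configured on your Jenkins instance.
        build job: 'legacy-packaging-job', wait: true
    }
}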
Pipelines offer several benefits. You can:
Define your entire CI/CD workflow as code and version it alongside your project.
Resume or restart pipeline runs after a Jenkins restart, because runs are durable.
Pause a pipeline and wait for human input or approval before continuing.
Model complex requirements, such as running stages in parallel or in loops.
The code that defines a pipeline is written inside a text file, known as the Jenkinsfile. The pipeline-as-code model recommends committing the Jenkinsfile to the project source code repository. This way, a pipeline is modified, reviewed, and versioned like the rest of the source code.
It’s also possible to create a pipeline and provide its specification using Jenkins’ web UI. The syntax remains the same whether you define a pipeline via the web UI or via a Jenkinsfile. As an example, consider this simple Jenkinsfile, which creates a three-step continuous delivery pipeline:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // compile the code
                // if successful, move build artifacts to the desired location
                echo 'Building...'
            }
        }
        stage('Test') {
            steps {
                // check if artifacts are present in the correct location
                // load test data into the database
                // execute the test cases
                // continue only if tests passed
                echo 'Testing...'
            }
        }
        stage('Deploy') {
            steps {
                // fetch tested code and deploy it to production
                echo 'Deploying...'
            }
        }
    }
}
Example 1: A simple three-step Jenkins pipeline definition
As you can see, we have a pipeline with three stages: “Build”, “Test”, and “Deploy”. Inside each stage definition, we would write the relevant steps. For example, inside the “Build” stage, we would define the steps to compile our code. It’s worth mentioning that Jenkins isn’t a replacement for build tools like Make or Maven. However, it does allow you to invoke build commands, such as make and mvn package, during pipeline execution.
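For instance, a Build stage along these lines (a minimal sketch, assuming a Maven project and a Maven installation available on the agent) simply shells out to the build tool:
stage('Build') {
    steps {
        // Jenkins doesn't compile the code itself; it invokes the build tool
        sh 'mvn -B clean package'
    }
}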
In the “Test” stage, we would perform all the steps needed to test code, including loading test data, executing the test cases, and verifying the output. In the “Deploy” stage, we would fetch the tested code and ship it to the production server.
Jenkins offers two types of syntax for creating pipelines: declarative and scripted. Declarative syntax is the newer of the two, added to make pipeline code richer and more readable. Even though the structures of scripted and declarative pipelines differ fundamentally, both share the same building blocks of stages and steps.
In many ways, declarative syntax represents the modern way of defining pipelines. It is robust, clean, and easy to read and write. The declarative coding approach dictates that the user specifies only what they want to do, not how they want to do it. This makes it easier to create declarative pipelines, as compared to scripted pipelines, which follow the imperative coding approach.
However, this simplicity comes at the cost of expressiveness and a more limited feature set. For example, you can’t inject arbitrary code into a declarative pipeline: if you add a Groovy script or a Java API reference directly to a declarative pipeline (outside the script step discussed later), you will get a syntax error. For some engineers, this can be a deal-breaker, as it means they can’t introduce complicated logic directly into the definition of a Jenkins pipeline.
The Jenkins team added this limitation on purpose: it’s considered best practice not to put complicated code directly into the definition of a Jenkins pipeline. Instead, you can bundle complex code into a Jenkins plug-in and load it into the pipeline, or create a shared library that is referenced within the Jenkinsfile. Both of these approaches also offer the additional benefits of reusability and maintainability.
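As a rough sketch, assuming a shared library named deploy-utils has been configured in Jenkins and exposes a custom deployToProduction step (both names are hypothetical), the Jenkinsfile itself stays short:
@Library('deploy-utils') _

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // The complex deployment logic lives in the shared library
                // (a hypothetical custom step), not in the Jenkinsfile itself.
                deployToProduction target: 'prod-cluster'
            }
        }
    }
}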
In a declarative pipeline definition, the entire code is encapsulated within the pipeline block. Consider the following example:
pipeline {
    agent any
    options {
        retry(3)
        skipStagesAfterUnstable()
    }
    stages {
        stage('Build') {
            steps {
                sh 'cmake'
                sh 'make sampleapp'
            }
        }
        stage('Test') {
            steps {
                sh './testcasescript'
                sh './verifytestoutputscript'
            }
        }
        stage('Deploy') {
            steps {
                sh './deployscript'
            }
        }
    }
}
Example 2: A declarative pipeline
Inside the root pipeline block, we first have the agent keyword, which tells the Jenkins engine to allocate an executor and a workspace for the pipeline. Next, we have the options block, where we can specify configuration for the pipeline. In this example, retry is set to 3, which means that if execution fails, Jenkins will retry the pipeline up to three times. The skipStagesAfterUnstable() option skips the remaining stages once the build status becomes unstable.
The stages block contains most of the work that the pipeline performs. There can only be one stages block in a pipeline definition. Inside the stages block, we have defined our three stages: Build, Test, and Deploy. Each stage includes individual actions defined inside the steps section.
In the Build stage, we are executing the cmake and make commands to compile our code. In the Test stage, we are running two scripts: the first to execute our test cases and the second to verify the output of the test run. In the Deploy stage, we are running the script that will deploy our code to production.
Before the Pipeline plug-in v2.5 introduced declarative pipelines, the scripted syntax was the only way to define pipeline code. Even today, many developers prefer it over the declarative syntax because it offers more flexibility and extensibility.
The scripted syntax offers a fully-featured programming environment, allowing developers to implement complicated business logic inside pipeline code. Scripted pipelines follow the imperative coding approach, in which the developer has complete control over what they want to achieve and how they want to achieve it.
However, its Groovy-based syntax poses a learning curve for Jenkins beginners. This was a primary reason the Jenkins team introduced declarative pipelines as a more readable alternative. Scripted pipelines also lack several features that are available out of the box in declarative pipelines, including the environment and options blocks.
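For reference, here is a minimal sketch of the environment block in a declarative pipeline; the variable name and value are illustrative only:
pipeline {
    agent any
    environment {
        // APP_ENV is an arbitrary example variable, visible to every stage
        APP_ENV = 'staging'
    }
    stages {
        stage('Build') {
            steps {
                sh 'echo "Building for $APP_ENV"'
            }
        }
    }
}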
A scripted pipeline definition begins with the node keyword. Consider the following example:
node {
    stage('Build') {
        sh 'cmake'
        sh 'make sampleapp'
    }
    stage('Test') {
        sh './testcasescript'
        sh './verifytestoutputscript'
    }
    if (currentBuild.currentResult == 'SUCCESS') {
        stage('Deploy') {
            sh './deployscript'
        }
    }
}
Example 3: A scripted pipeline
If we compare Example 3 with Example 2, we’ll notice that the structures of the two pipelines are fundamentally different. The scripted pipeline doesn’t have the options directive because it’s not a part of the scripted syntax. We also don’t see a stages block in the scripted pipeline, and instead, individual stages are nested inside the node block.
Since we don’t have the skipStagesAfterUnstable() function available in the scripted pipeline, we must manually check the current result of the build before executing the Deploy stage.
The two syntaxes can also be combined: a declarative pipeline can embed scripted code inside the script step, as the following example shows:
pipeline {
    agent any
    stages {
        stage('Sample') {
            steps {
                echo 'This is a sample'
                script {
                    def apps = ['web', 'cli']
                    for (int i = 0; i < apps.size(); ++i) {
                        echo "Testing the ${apps[i]} app"
                    }
                }
            }
        }
    }
}
Example 4: Scripted pipeline block inside the script step of a declarative pipeline
Even though declarative and scripted pipelines differ syntactically and programmatically, they share the same pipeline subsystem. Both are viable implementations of the pipeline-as-code paradigm. Both allow you to codify your CI/CD ecosystems from scratch using Jenkins’ robust plug-ins and shared libraries.
There are no differences in the performance, scalability, availability, or resilience of declarative and scripted pipelines. The pipeline execution engine of Jenkins is syntax-agnostic; i.e., scripted and declarative pipelines are executed in the same manner.
There is no single right or wrong answer as to which type of pipeline you should use. Depending on your needs, your developers’ expectations, and your time constraints, one may be a better fit than the other. In general, declarative pipelines are the better choice if:
You want to set up your CI/CD pipeline in the minimum amount of time.
You want to write straightforward, readable pipeline code, loading plug-ins or shared libraries for complicated business logic. (This will also help you adhere to the core principles of CI/CD and pipeline-as-code.)
You don’t expect to reference Java APIs or Groovy scripts in your pipeline code.
You want access to state-of-the-art Jenkins features, such as the options block, the when directive, the environment directive, and intuitive code editors. (A short example of the when directive follows this list.)
You want to future-proof your CI/CD implementation.
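As promised above, here is a brief, hypothetical example of the when directive: the Deploy stage runs only when the pipeline executes on the main branch.
stage('Deploy') {
    when {
        // Run this stage only for the 'main' branch
        branch 'main'
    }
    steps {
        sh './deployscript'
    }
}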
Jenkins is a staple of numerous automation-driven, CI/CD-enabled IT infrastructures. It empowers organizations to increase productivity, build resilient software, decrease time to market, and reduce maintenance costs. In this article, we compared the two types of Jenkins pipelines in detail. We hope it helps you choose the right one for your business.