Use of pipelines in Jenkins

Reading time: 6 minutes

Author: Enzo

In this article we will look at Pipelines, a key feature of one of the most widely used tools in the industry.

Jenkins is an open-source continuous integration server, a tool used to build and test software projects continuously. This makes it easier for developers to integrate changes into a project and deliver new versions.

Currently, it is one of the most widely used components in the industry for this purpose.

Advantages

Among the most outstanding advantages we can name:

SIMPLE CONFIGURATION

Easy to install, requiring no additional components to get started.

AUTOMATION

Provides the benefit of time savings and error reduction in software deployment and testing processes.

MULTIPLATFORM

Jenkins is available for all platforms and different operating systems, whether OS X, Windows or Linux.

TROUBLESHOOTING

It has a system capable of identifying and fixing faults very quickly.

COMPLEMENTARY ARCHITECTURE

Easily configurable; Jenkins can be modified and extended through its plugin architecture.

OPEN SOURCE

Jenkins has a large user community behind it.

What is a Pipeline?

A Pipeline is a suite of plugins that implements continuous integration and delivery flows in Jenkins. According to the Jenkins documentation, a Pipeline is an automated expression of the process for getting software from version control through to users and customers.

Every software change goes through a complex process before it is delivered. This process involves building the software in a reliable and repeatable way, and then promoting the built result through the different stages of testing and deployment.

The definition of a Jenkins Pipeline is written in a text file (called a Jenkinsfile) which, in turn, lives in the version control repository of the project in question. This is the basis of Pipeline-as-code: it treats the CD (continuous delivery) flow as part of the application, versioned and reviewed like any other piece of code.

Creating a Jenkinsfile and committing it to the version control server provides a number of immediate benefits, among them: a Pipeline is created automatically for all branches and pull requests; the Pipeline itself can be code-reviewed and iterated on; there is an audit trail of changes to the Pipeline; and the Jenkinsfile becomes a single source of truth that several team members can view and edit.

Basic structure of a Pipeline

The basic syntax of a Pipeline is as follows:

				
<código>
pipeline {
    agent any                 // (1)
    stages {
        stage('Build') {      // (2)
            steps {
                // (3)
            }
        }

        stage('Test') {       // (4)
            steps {
                // (5)
            }
        }

        stage('Deploy') {     // (6)
            steps {
                // (7)
            }
        }
    }
}
</código>
				
			

(1) Run the Pipeline, and any of its stages, on any available agent.

(2) Define the 'Build' stage.

(3) Perform the steps related to the 'Build' stage.

(4) Define the 'Test' stage.

(5) Perform the steps related to the 'Test' stage.

(6) Define the 'Deploy' stage.

(7) Perform the steps related to the 'Deploy' stage.

Shared libraries and their use

As Pipelines are adopted for more projects in an organization, common patterns are likely to emerge. It is often useful to share parts of Pipelines across multiple projects to reduce redundancy and keep the code maintainable.

A Pipeline supports “Shared Libraries” that can be defined in external version control repositories.

According to the Jenkins documentation, the directory structure is as follows:
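In outline, per the Jenkins Shared Libraries documentation:

```
(root)
+- src                     # Groovy source files
|   +- org
|       +- foo
|           +- Bar.groovy  # for the org.foo.Bar class
+- vars
|   +- foo.groovy          # for the global 'foo' variable
|   +- foo.txt             # help text for the 'foo' variable
+- resources
|   +- org
|       +- foo
|           +- bar.json    # static helper data
```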

How are Shared libraries used?

Shared Libraries configured as global in Jenkins are loaded implicitly, which allows Pipelines to use them automatically. Jenkins is flexible, so it also allows a Pipeline to load a specific library on demand. To do so, the @Library annotation must be used, specifying the library name:
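For example, with a library configured under the (illustrative) name my-shared-library:

```groovy
// Load the default version of the library; the trailing underscore is
// required when the annotation is not attached to another statement.
@Library('my-shared-library') _

// A specific version (branch, tag or commit) can also be requested:
// @Library('my-shared-library@1.0') _
```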

The annotation can appear anywhere an annotation is legal in Groovy. When referencing class libraries (those with src/ directories), by convention the annotation is attached to an import statement:
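For instance, assuming a hypothetical class org.example.pipeline.DockerHelper lives under the library's src/ directory:

```groovy
// The package path mirrors the src/ directory layout of the library.
@Library('my-shared-library') import org.example.pipeline.DockerHelper
```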

Simplifying Pipelines in Jenkins

In the previous point we saw the basic structure of a Pipeline. Now let's make it a little more realistic. It would look like this:

				
<código>
pipeline {
    agent any

    stages {
        stage('step_one') {
            steps {
                dir('Pipelines/PrimerPipeline/') {
                    script {
                        sh "docker build -t oscarenzo/mywebsite:0.0.1 ."
                        sh "echo \"A Docker image is going to be created\""
                        sh "echo \"Name: oscarenzo/mywebsite\""
                        sh "echo \"Version: 0.0.1\""
                    }
                }
            }
        }

        stage('step_two') {
            steps {
                script {
                    sh "docker run -dit -p 80:80 --name nginx-server oscarenzo/mywebsite:0.0.1"
                }
            }
        }

        stage('step_three') {
            steps {
                script {
                    sh "nc -vz 127.0.0.1 80"
                    sh "echo \"You can now access the website at: http://127.0.0.1:80\""
                }
            }
        }
    }
}
</código>
				
			

Using Jenkins shared libraries, the Pipeline could look like this:

				
<código>
pipeline {
    agent any

    stages {
        stage('step_one') {
            steps {
                script {
                    dir('Pipelines/PrimerPipeline/') {
                        dockerActions.build(
                            image: "oscarenzo/mywebsite",
                            version: "0.0.1"
                        )
                    }
                }
            }
        }

        stage('step_two') {
            steps {
                script {
                    dockerActions.runContainer(
                        image: "oscarenzo/mywebsite",
                        version: "0.0.1",
                        port: "80"
                    )
                }
            }
        }

        stage('step_three') {
            steps {
                script {
                    dockerActions.digest(
                        port: "80"
                    )
                }
            }
        }
    }
}
</código>
				
			

As you can see in the examples, we have gone from having this:

				
					<código>
stage('step_one') {
    steps {
        dir('Pipelines/PrimerPipeline/') {
            script {
                sh "docker build -t oscarenzo/mywebsite:0.0.1 ."
                sh "echo \"A Docker image is going to be created\""
                sh "echo \"Name: oscarenzo/mywebsite\""
                sh "echo \"Version: 0.0.1\""
            }
        }
    }
}
</código>

To this:

<código>
stage('step_one') {
    steps {
        script {
            dir('Pipelines/PrimerPipeline/') {
                dockerActions.build(
                    image: "oscarenzo/mywebsite",
                    version: "0.0.1"
                )
            }
        }
    }
}
</código>
				
			

With this, we have standardized part of the Pipeline: the task of building a Docker image, and any tasks that follow from it, are now managed in a single place. We have also made them more parameterizable, using variables and turning the task, or set of tasks, into a function.
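The article does not show the dockerActions library itself. A minimal sketch of what a vars/dockerActions.groovy global variable could look like, with the function names and parameters assumed from the calls above:

```groovy
// vars/dockerActions.groovy -- hypothetical implementation of the
// shared-library steps used in the example Pipeline.

// Build a Docker image from the current directory.
def build(Map args) {
    sh "docker build -t ${args.image}:${args.version} ."
    sh "echo \"A Docker image is going to be created\""
    sh "echo \"Name: ${args.image}\""
    sh "echo \"Version: ${args.version}\""
}

// Run a detached container exposing the given port.
def runContainer(Map args) {
    sh "docker run -dit -p ${args.port}:${args.port} --name nginx-server ${args.image}:${args.version}"
}

// Check that the service answers on the given port.
def digest(Map args) {
    sh "nc -vz 127.0.0.1 ${args.port}"
    sh "echo \"You can now access the website at: http://127.0.0.1:${args.port}\""
}
```

Each file under vars/ becomes a global variable named after the file, which is why the Pipeline can call dockerActions.build(...) directly.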

Let’s go a little further and simplify the Pipeline even more by converting this:

				
<código>
pipeline {
    agent any

    stages {
        stage('step_one') {
            steps {
                dir('Pipelines/PrimerPipeline/') {
                    script {
                        sh "docker build -t oscarenzo/mywebsite:0.0.1 ."
                        sh "echo \"A Docker image is going to be created\""
                        sh "echo \"Name: oscarenzo/mywebsite\""
                        sh "echo \"Version: 0.0.1\""
                    }
                }
            }
        }

        stage('step_two') {
            steps {
                script {
                    sh "docker run -dit -p 80:80 --name nginx-server oscarenzo/mywebsite:0.0.1"
                }
            }
        }

        stage('step_three') {
            steps {
                script {
                    sh "nc -vz 127.0.0.1 80"
                    sh "echo \"You can now access the website at: http://127.0.0.1:80\""
                }
            }
        }
    }
}
</código>
				
			

To this:

				
					<código>

dockerPipeline(
    image: "oscarenzo/mywebsite",
    version: "0.0.1",
    port: "80"
)
</código>
				
			

In this way we have standardized the entire Pipeline, managing its tasks, stages and forms of execution in a single place. Continuing with the idea of making Pipelines more parameterizable through variables, we have reduced the whole Pipeline to a single function call.
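The dockerPipeline function would live in the shared library as well. A minimal sketch, assuming a vars/dockerPipeline.groovy global variable that wraps the declarative Pipeline shown earlier:

```groovy
// vars/dockerPipeline.groovy -- hypothetical whole-Pipeline wrapper.
// A vars/ script that defines call() can be invoked like a function.
def call(Map args) {
    pipeline {
        agent any
        stages {
            stage('step_one') {
                steps {
                    dir('Pipelines/PrimerPipeline/') {
                        script {
                            sh "docker build -t ${args.image}:${args.version} ."
                        }
                    }
                }
            }
            stage('step_two') {
                steps {
                    script {
                        sh "docker run -dit -p ${args.port}:${args.port} --name nginx-server ${args.image}:${args.version}"
                    }
                }
            }
            stage('step_three') {
                steps {
                    script {
                        sh "nc -vz 127.0.0.1 ${args.port}"
                    }
                }
            }
        }
    }
}
```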

Accessing secrets and environment variables

Credentials function

The credentials function is provided by the Credentials plugin (credentials-plugin). With it we can access the secrets available in our Jenkins instance simply by passing their id as the value:

				
					<código>
AWS_ACCESS_KEY_ID = credentials('jenkins-aws-secret-key-id')

// For username/password credentials, Jenkins also exposes
// variables with the _USR and _PSW suffixes:
echo "${AWS_ACCESS_KEY_ID_USR}"
echo "${AWS_ACCESS_KEY_ID_PSW}"
</código>
				
			

This is useful in cases where the credential is required for the entire Pipeline cycle. This avoids having to perform multiple queries each time a secret is required.

This plugin supports, among others, the following types of secrets: secret text, username and password, secret file, SSH username with private key, and certificate.

The withCredentials function

The withCredentials function is provided by the Credentials Binding plugin (credentials-binding-plugin). With it we can access the secrets available in our Jenkins instance by indicating their id and dynamically binding variables to the fields they contain. For example:

				
					<código>

withCredentials([usernamePassword(credentialsId: 'amazon', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
    sh 'echo $PASSWORD'
    // Or we can also print it as a Groovy variable
    echo USERNAME
}
</código>
				
			

It is important to bear in mind that these credentials are only available for the duration of the withCredentials block: they will not be available in the next task, so the function must be invoked as many times as necessary throughout the life of the Pipeline.

Accessing variables in Jenkins

Environment variables in Jenkins are shared by the entire Pipeline. For example:

				
					<código>
pipeline {
    agent any

    environment {
        image = "oscarenzo/mywebsite"
        version = "0.0.1"
    }


    stages {
        stage('step_one') {
            steps {
                dir('Pipelines/PrimerPipeline/') {
                    script {
                        sh "docker build -t ${image}:${version} ."
                    }
                }
            }
        }

        stage('step_two') {
            steps {
                script {
                    sh "docker run -dit -p 80:80 --name nginx-server ${image}:${version}"
                }
            }
        }

        stage('step_three') {
            steps {
                script {
                    sh "nc -vz 127.0.0.1 80"
                    sh "echo \"You can now access the website at: http://127.0.0.1:80\""
                }
            }
        }
    }
}
</código>

				
			