There are many DevOps lifecycle tools out there, but GitLab is a complete package designed for coordinating CI/CD pipelines.

GitLab is a web-based DevOps lifecycle tool. It offers functionality to automate the entire DevOps life cycle, from planning through creation, build, verification, security testing, deployment, and monitoring, and it provides high availability and replication. It is highly scalable and can be used on-premises or in the cloud. GitLab also includes a wiki, issue tracking, and CI/CD pipeline features.

When DevOps projects are spread across large, geographically dispersed teams, a complete DevOps tool is highly useful for maintaining collaboration, incorporating feedback, avoiding mistakes, and speeding up the development process.

GitLab goes beyond being just a repository manager; it has built-in CI/CD, which saves enormous amounts of time and keeps the workflow smooth. Along with its own CI/CD, GitLab also allows a range of third-party integrations with external CI tools, so you always have the option of working with the tools that suit your workflow.

Here is a quick run-through on how to start with GitLab.

Project:

In GitLab, we can create projects to host a codebase, use them as issue trackers, collaborate on code, and continuously build, test, and deploy apps with built-in GitLab CI/CD. Projects can be made public, internal, or private, at our choice. GitLab does not limit the number of private projects we can create.

Create a project in GitLab

In the dashboard, click the green “New project” button or use the plus icon in the navigation bar. This opens the New Project page.

On the New Project page:

  • Choose Create blank project
  • Fill in the name of your project in the Project name field
  • The Project URL field is the URL path for the project that the GitLab instance will use
  • The Project slug field will be auto-populated
  • The Project description (optional) field enables you to enter a description for the project’s dashboard
  • Changing the Visibility Level modifies the project’s viewing and access rights for users
  • Selecting the Initialize repository with a README option creates a README file so that the Git repository is initialized, has a default branch, and can be cloned
  • Click Create project

Repository

A repository is part of a project; a project also includes many other features, such as issue tracking, CI/CD, and a wiki.

Host your codebase in GitLab repositories by pushing files to GitLab. You can either use the user interface (UI) or connect your local computer with GitLab through the command line.

GitLab Basic Commands: https://docs.gitlab.com/ee/gitlab-basics/command-line-commands.html

Branch

When you create a new project, GitLab sets master as the default branch for your project. You can choose another branch to be your project’s default under your project’s Settings > Repository.

Commits

When you commit your changes, you are introducing those changes to your branch. From the command line, you can commit multiple times before pushing.

A commit message is important to identify what is being changed and, more importantly, why. In GitLab, you can add keywords to the commit message that will perform one of the actions below:

  • Trigger a GitLab CI/CD pipeline: If you have your project configured with GitLab CI/CD, you will trigger a pipeline per push, not per commit.
  • Skip pipelines: You can add the keyword [ci skip] to your commit message, and GitLab CI will skip that pipeline (see the example after this list).
  • Cross-link issues and merge requests: Cross-linking is great for keeping track of related work in your workflow. If you mention an issue or a merge request in a commit message, it will be shown on the respective thread.
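For example, a quick command-line sequence that skips the pipeline might look like this (the file name is only an illustration):

git add README.md
git commit -m "Update setup notes [ci skip]"
git push origin master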

CI/CD Pipeline

Continuous Integration works by pushing small code chunks to your application’s code base hosted in a Git repository and, on every push, running a pipeline of scripts to build, test, and validate the code changes before merging them into the main branch.

Continuous Delivery and Continuous Deployment go a step further than CI, deploying your application to production on every push to the default branch of the repository.

These methodologies allow you to catch bugs and errors early in the development cycle, ensuring that all the code deployed to production complies with the code standards you established for your app.

Two top-level components are:

  1. .gitlab-ci.yml
  2. GitLab Runner

.gitlab-ci.yml

The .gitlab-ci.yml file is where we configure what CI does with the project. It lives in the root of the repository. On any push to the repository, GitLab will look for the .gitlab-ci.yml file and start jobs on Runners according to the contents of the file, for that commit.

Pipeline configuration begins with jobs. Jobs are the most fundamental element of a .gitlab-ci.yml file.

Jobs are:

  • Defined with constraints stating under what conditions they should be executed
  • Top-level elements with an arbitrary name that must contain at least the script clause
  • Not limited in how many can be defined

For example:

job1:
  script: "execute-script-for-job1"

job2:
  script: "execute-script-for-job2"

GitLab Runner

GitLab Runner is a build instance that is used to run jobs and send the results back to GitLab. It is designed to run on the GNU/Linux, macOS, and Windows operating systems.

Runners run the code defined in .gitlab-ci.yml. They are isolated (virtual) machines that pick up jobs through the coordinator API of GitLab CI. If we want to use Docker, install the latest version; GitLab Runner requires a minimum of Docker v1.13.0.

Types of Runner:
  1. Specific Runner – useful for jobs that have special requirements or for projects with a specific demand. A job selects such a runner through tags (see the sketch after this list).
  2. Shared Runner – useful for jobs that have similar requirements across multiple projects. Rather than having multiple Runners idling for many projects, you can have a single Runner, or a small number of Runners, that handles multiple projects.
  3. Group Runner – useful when you have multiple projects under one group and would like all projects to have access to a set of Runners. Group Runners process jobs using a FIFO (First In, First Out) queue.
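As a rough sketch of how a job reaches a runner: the runner is registered with one or more tags, and a job in .gitlab-ci.yml lists matching tags. Assuming a runner registered with the tag docker, a job would reference it like this:

build-job:
  script:
    - docker --version
  tags:
    - docker   # only runners registered with the "docker" tag will pick up this job
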
How to use Runner:
  1. Install Runner  –  https://docs.gitlab.com/runner/#install-gitlab-runner
  2. Register Runner – https://docs.gitlab.com/runner/register/index.html

Sample Docker Project:

  1. Create/upload two files: .gitlab-ci.yml and Dockerfile

File Contents:

.gitlab-ci.yml:

# Official docker image.
image: docker:19.03.0

services:
  - docker:19.03.0-dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  tags:
    - docker

Dockerfile

FROM python:2.7
RUN pip install howdoi
CMD ["howdoi"]

Once we create the .gitlab-ci.yml file, each push will trigger the pipeline. We haven’t created the Runner yet, so while committing these files, add “[ci skip]” to the commit message. This will skip the CI/CD pipeline.

2. Install Runner

For this example, we are using a specific runner and are going to install it on Windows. Refer to this link to install the Runner on Windows: https://docs.gitlab.com/runner/install/windows.html
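Assuming the binary is downloaded as gitlab-runner.exe into a folder such as C:\GitLab-Runner, the install and start steps (run from an elevated command prompt, per the GitLab docs) are roughly:

cd C:\GitLab-Runner
.\gitlab-runner.exe install
.\gitlab-runner.exe start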

3. Register Runner:

In order to register a runner, we need a registration token, which can be found under Settings > CI/CD, in the Runners section.

Scroll down to the Specific Runners section; you can copy the token from there.

To register a Runner on Windows, run the following command from the directory where GitLab Runner was installed:
./gitlab-runner.exe register

Enter your GitLab instance URL:
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com)
https://gitlab.com

Enter the token you obtained to register the Runner:
Please enter the gitlab-ci token that we copied earlier

Enter a description for the Runner, you can change this later in GitLab’s UI:
Please enter the gitlab-ci description for this Runner
[hostname] my-runner

Enter the tags associated with the Runner, you can change this later in GitLab’s UI:
Please enter the gitlab-ci tags for this Runner (comma separated):
docker

Enter the Runner executor:
Please enter the executor: ssh, docker+machine, docker-ssh+machine, kubernetes, docker, parallels, virtualbox, docker-ssh, shell:
docker

If you chose Docker as your executor, you’ll be asked for the default image to be used for projects that do not define one in .gitlab-ci.yml:
Please enter the Docker image (eg. ruby:2.6):
python:2.7

Once the Runner is created successfully, it will be displayed under Settings > CI/CD > Runner > Specific Runner section.

4. Now, go to CI/CD > Pipelines and click the Run Pipeline button

It will open a new window. Click the Run Pipeline button again.

5. Once the job completes successfully, it will be shown as passed in the pipeline view.

A Kubernetes pod – incidentally, some say it is named after a whale pod because the Docker logo is a whale – is the foundational unit of execution in a K8s ecosystem. While Docker is the most common container runtime, pods are container-runtime agnostic and support other runtimes as well.

Simply put, a K8s pod is a layer of abstraction wrapped around containers to group them together to allocate resources and to manage them efficiently.
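As an illustration of that grouping, a minimal pod spec with two containers might look like the following (names and images are placeholders, not the applications used later in this post):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: web
      image: nginx:1.21        # placeholder image
      ports:
        - containerPort: 80
    - name: cache
      image: redis:6.2         # placeholder image; both containers share the pod's network namespace and volumes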

Continuous integration and delivery, or CI/CD, is a critical part of DevOps ecosystems, especially for cloud-native applications. DevOps teams frequently use Jenkins CI/CD pipelines to increase the speed and quality of collaborative software development by adding automation. Thanks to Helm, deploying a Jenkins server to K8s is quick and easy. The difficult bit is building the pipeline.

Here is a post that describes how to deploy a pod containing three applications using a Jenkins CI/CD pipeline and update them selectively.

Task on Hand:

Use a Jenkins pipeline to build a Spring Boot application to generate a jar file, dockerize the application to create a Docker image, push the image to a Docker repository, and pull the image into the pod to create containers. The pod should contain three containers, one for each of the three applications. Upon a git commit, only the container for which there is a change must be updated (rolling update).

Steps
  1. Create a pipeline using a Groovy script to clone the respective Git repo, build the project using Maven, build the Docker images, push them to Docker Hub, and pull these images to run containers in the pod.
  2. Repeat the steps for all three applications in separate stages. Make sure to create a separate directory in each stage to prevent conflicts when using similar files. This also clones the different Git repos into different folders to avoid confusion.
  3. Here is the Jenkinsfile/Pipeline script to perform the above task:
pipeline {
    agent any
    stages {
        stage('Build1') {
            steps {
                dir('app1') {
                    script {
                        git 'https://github.com/cloud/simple-spring.git'
                        sh 'mvn clean install'
                        app = docker.build("cloud007/simple-spring")
                        docker.withRegistry("https://registry.hub.docker.com", "dockerhub") {
                            // dockerImage.push()
                            app.push("latest")
                        }
                    }
                }
            }
        }
        stage('Build2') {
            steps {
                dir('app2') {
                    script {
                        git 'https://github.com/cloud/simple-spring-2.git'
                        sh 'mvn clean install'
                        app = docker.build("cloud007/simple-spring-2")
                        docker.withRegistry("https://registry.hub.docker.com", "dockerhub") {
                            // dockerImage.push()
                            app.push("latest")
                        }
                    }
                }
            }
        }
        stage('Build3') {
            steps {
                dir('app3') {
                    script {
                        git 'https://github.com/cloud/simple-spring-3.git'
                        sh 'mvn clean install'
                        app = docker.build("cloud007/simple-spring-3")
                        docker.withRegistry("https://registry.hub.docker.com", "dockerhub") {
                            // dockerImage.push()
                            app.push("latest")
                        }
                    }
                }
            }
        }
        stage('Orchestrate') {
            steps {
                script {
                    sh 'kubectl apply -f demo.yaml'
                }
            }
        }
    }
}

4. Make sure to properly configure Docker, expose dockerd on port 4243, and then change the permissions on /var/run/docker.sock so Jenkins can use docker commands (a sketch of these host-side steps follows).
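A hedged sketch of those host-side steps (exact paths and daemon configuration vary by distribution):

# let the jenkins user run docker commands; note this permission resets after a restart (see below)
sudo chown jenkins /var/run/docker.sock

# expose the Docker daemon on TCP port 4243 in addition to the local socket, for example by
# adjusting the dockerd options, then restart the daemon:
#   dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:4243
sudo systemctl restart docker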

5. Coming to integrating Kubernetes with Jenkins, it can be done using two plugins:

  • Kubernetes plugin: When using this plugin, we configure the credentials to use our local cluster/Azure cluster and specify the container templates for the containers to be created in the pipeline. But since all the tasks must run in containers, it is a somewhat confusing approach. A better approach would be to use the kubernetes-cli plugin.

Refer: https://github.com/jenkinsci/kubernetes-plugin

  • Kubernetes-cli plugin: It provides a withKubeConfig() step for pipeline support, which uses the configured credentials to connect to our cluster and perform kubectl commands. However, when running the pipeline, the kubeconfig wasn’t recognized for some reason and kept giving the error ‘file not found’.

Refer: https://github.com/jenkinsci/kubernetes-cli-plugin/blob/master/README.md

Hence, we installed kubectl on the Jenkins host, configured the cluster manually, and ran shell commands from the Jenkins pipeline, where Jenkins was recognized as an anonymous user and was only granted get access, so it couldn’t create pods/deployments.
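For reference, the Kubernetes-cli step we attempted takes roughly this form (the credentials ID is a placeholder):

stage('Deploy') {
    steps {
        withKubeConfig([credentialsId: 'my-kubeconfig']) {   // placeholder credentials ID
            sh 'kubectl apply -f demo.yaml'
        }
    }
}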

Here are some common problems faced during this process and the troubleshooting procedure.

  • Configuring Jenkins to use the local minikube cluster:
    We had trouble using both plugins to properly configure Jenkins to create deployments as required. Using shell commands to run kubectl was also not successful, since Jenkins was recognized as an anonymous user and authorization prevented anonymous users from creating deployments.
  • Permission for /var/run/docker.sock is reset to root after every restart, so make sure to change it again so Jenkins can continue to use docker commands: chown jenkins /var/run/docker.sock
  • Installing Minikube:
    i) Started minikube cluster using hyperv as the driver and created a virtual switch:
    minikube start --vm-driver=hyperv --hyperv-virtual-switch="Primary Virtual Switch"

    ii) Installation takes a lot of time, so we have to wait patiently; eventually, the cluster will be configured and started. If there is a problem with the apiserver, then stop the machine after SSHing into the minikube VM:
    minikube ssh
    sudo poweroff

    iii) Then start minikube the same way.

Here are some suggested best practices:

Maintaining the Git repo:
  • Branching must be used while updating the source code or adding a particular file to the repository. Suppose you want to add a readme: create a new branch from master, create the readme, commit it, and then merge the branch back into master.
  • Similarly, for adding test files or changing source code, create a new branch for the testing/modification, update and commit the code, and merge it with master when finished. The purpose of this is to allow an easy rollback to the original master if you run into errors when working with the new files, and to prevent any conflicts in code with master.
  • Commit only after you have tested the code properly; never commit incomplete code.
  • Write good commit messages to keep track of the changes you have made.
Versioning:
  • Follow the versioning convention X.Y.Z, where X is incremented for a new major update/feature, Y is incremented for minor updates/minor features, and Z for minor patches/bug fixes.
  • Avoid version lock – a situation where a package can only be updated after releasing new versions of every dependent package because dependencies are pinned too rigidly.
Docker repo:
  • Use unique tags for deploying/pushing images to the repository (see the sketch after this list).
  • Use stable tags for building images, but never deploy/pull images using stable tags. A stable tag stays bound to a given version (it does not roll over to future versions), yet the content behind it can still be updated, which is why deployments should use unique tags instead.
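For example, tagging each build with the commit SHA gives every deployment a unique, immutable reference (the repository name is reused from the sample project above):

GIT_SHA=$(git rev-parse --short HEAD)
docker build -t cloud007/simple-spring:latest -t cloud007/simple-spring:$GIT_SHA .
docker push cloud007/simple-spring:$GIT_SHA    # deploy/pull this unique tag, not :latest
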
What is Jenkins?

Jenkins is an open-source automation tool written in Java, with plugins built for Continuous Integration purposes. Jenkins is used to build and test your software projects continuously, making it easier for developers to integrate changes to the project and for users to obtain a fresh build. It also allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies.

With Jenkins, organizations can accelerate the software development process through automation. Jenkins integrates development life-cycle processes of all kinds, including build, document, test, package, stage, deploy, static analysis and much more.

Jenkins achieves Continuous Integration with the help of plugins. Plugins allow the integration of various DevOps stages. If you want to integrate a particular tool, you need to install the plugins for that tool: for example, Git, Maven 2 project, Amazon EC2, HTML publisher, etc.

Advantages of Jenkins include:

  • It is an open source tool with great community support.
  • It is easy to install.
  • It has 1000+ plugins to ease your work. If a plugin does not exist, you can code it and share with the community.
  • It is free of cost.
  • It is built with Java and hence is portable to all major platforms.
What is Continuous Integration?

Continuous Integration is a development practice in which the developers are required to commit changes to the source code in a shared repository several times a day or more frequently. Every commit made in the repository is then built. This allows the teams to detect the problems early. Apart from this, depending on the Continuous Integration tool, there are several other functions like deploying the build application on the test server, providing the concerned teams with the build and test results etc.

Continuous Integration with Jenkins
  • First, a developer commits the code to the source code repository. Meanwhile, the Jenkins server checks the repository at regular intervals for changes.
  • Soon after a commit occurs, the Jenkins server detects the changes that have occurred in the source code repository. Jenkins will pull those changes and will start preparing a new build.
  • If the build fails, then the concerned team will be notified.
  • If the build is successful, then Jenkins deploys the build to the test server.
  • After testing, Jenkins generates feedback and then notifies the developers about the build and test results.
  • It will continue to check the source code repository for changes made in the source code and the whole process keeps on repeating.
Jenkins Distributed Architecture

Jenkins uses a Master-Slave architecture to manage distributed builds. In this architecture, Master and Slave communicate through TCP/IP protocol.

Jenkins Master

Your main Jenkins server is the Master. The Master’s job is to handle:

  • Scheduling build jobs.
  • Dispatching builds to the slaves for the actual execution.
  • Monitoring the slaves (possibly taking them online and offline as required).
  • Recording and presenting the build results.
  • A Master instance of Jenkins can also execute build jobs directly.
Jenkins Slave

A Slave is a Java executable that runs on a remote machine. Following are the characteristics of Jenkins Slaves:

  • It listens for requests from the Jenkins Master instance.
  • Slaves can run on a variety of operating systems.
  • The job of a Slave is to do as it is told, which involves executing build jobs dispatched by the Master.
  • You can configure a project to always run on a particular Slave machine, or a particular type of Slave machine, or simply let Jenkins pick the next available Slave.
What is a Jenkins pipeline?

A pipeline is a collection of jobs that brings the software from version control into the hands of the end users by using automation tools. It is a feature used to incorporate continuous delivery in our software development workflow.

Over the years, there have been multiple Jenkins pipeline releases including, Jenkins Build flow, Jenkins Build Pipeline plugin, Jenkins Workflow, etc. What are the key features of these plugins?

  • They represent multiple Jenkins jobs as one whole workflow in the form of a pipeline.
  • What do these pipelines do? These pipelines are a collection of Jenkins jobs which trigger each other in a specified sequence.

Let’s look at an example. Suppose I’m developing a small application on Jenkins and I want to build, test, and deploy it. To do this, I will allot three jobs, one for each process: job1 for the build, job2 to run tests, and job3 for deployment. I can use the Jenkins Build Pipeline plugin to perform this task. After creating three jobs and chaining them in a sequence, the plugin will run these jobs as a pipeline.

This approach is effective for deploying small applications. But what happens when there are complex pipelines with several processes (build, test, unit test, integration test, pre-deploy, deploy, monitor) running hundreds of jobs?

The maintenance cost for such a complex pipeline is huge and increases with the number of processes. It also becomes tedious to build and manage such a vast number of jobs. To overcome this issue, a new feature called Jenkins Pipeline Project was introduced.

The key feature of this pipeline is to define the entire deployment flow through code. What does this mean? It means that all the standard jobs defined by Jenkins are manually written as one whole script and they can be stored in a version control system. It basically follows the ‘pipeline as code’ discipline. Instead of building several jobs for each phase, you can now code the entire workflow and put it in a Jenkinsfile. Below is a list of reasons why you should use the Jenkins Pipeline.

Jenkins Pipeline Advantages
  • It models simple to complex pipelines as code by using Groovy DSL (Domain Specific Language)
  • The code is stored in a text file called the Jenkinsfile which can be checked into a SCM (Source Code Management)
  • Improves user interface by incorporating user input within the pipeline
  • It is durable in terms of unplanned restart of the Jenkins master
  • It can restart from saved checkpoints
  • It supports complex pipelines by incorporating conditional loops, fork or join operations and allowing tasks to be performed in parallel
  • It can integrate with several other plugins
What is a Jenkinsfile?

A Jenkinsfile is a text file that stores the entire workflow as code and it can be checked into a SCM on your local system. How is this advantageous? This enables the developers to access, edit and check the code at all times.

The Jenkinsfile is written using the Groovy DSL and it can be created through a text/groovy editor or through the configuration page on the Jenkins instance. It is written based on two syntaxes, namely:

  • Declarative pipeline syntax
  • Scripted pipeline syntax

Declarative pipeline is a relatively new feature that supports the pipeline as code concept. It makes the pipeline code easier to read and write. This code is written in a Jenkinsfile which can be checked into a source control management system such as Git.

The scripted pipeline, on the other hand, is the traditional way of writing the code. In this pipeline, the Jenkinsfile is written on the Jenkins UI instance. Though both pipelines are based on the Groovy DSL, the scripted pipeline follows a more stringent, fully Groovy-based syntax because it was the first pipeline built on the Groovy foundation. Since this Groovy scripting was not desirable to all users, the declarative pipeline was introduced to offer a simpler and more structured Groovy syntax.

The declarative pipeline is defined within a block labelled ‘pipeline’, whereas the scripted pipeline is defined within a ‘node’ block.
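A declarative example follows below; for contrast, a minimal scripted pipeline skeleton (the stage contents are placeholders) looks like this:

node {
    stage('Build') {
        echo 'building...'    // plain Groovy code runs directly inside the node block
    }
    stage('Test') {
        echo 'testing...'
    }
}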

An example Jenkinsfile looks like this:

pipeline {
    environment {
        BUILD_SCRIPTS_GIT="http://10.100.100.10:7990/scm/~myname/mypipeline.git"
        BUILD_SCRIPTS='mypipeline'
        BUILD_HOME='/var/lib/jenkins/workspace'
    }
    agent any
    stages {
        stage('Checkout: Code') {
            steps {
                sh "mkdir -p $WORKSPACE/repo;\
                    git config --global user.email '[email protected]';\
                    git config --global user.name 'myname';\
                    git config --global push.default simple;\
                    git clone $BUILD_SCRIPTS_GIT repo/$BUILD_SCRIPTS"
                sh "chmod -R +x $WORKSPACE/repo/$BUILD_SCRIPTS"
            }
        }
        stage('Yum: Updates') {
            steps {
                sh "sudo chmod +x $WORKSPACE/repo/$BUILD_SCRIPTS/scripts/update.sh"
                sh "sudo $WORKSPACE/repo/$BUILD_SCRIPTS/scripts/update.sh"
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}

The above Jenkinsfile does the following:

  • sets up environment variables
  • pulls data down from a git repo
  • sets it up in a Jenkins workspace
  • runs a script under scripts/
  • once complete, cleans up the workspace (whether successful or not)
Pipeline concepts
  • Pipeline

This is a user defined block which contains all the processes such as build, test, deploy, etc. It is a collection of all the stages in a Jenkinsfile. All the stages and steps are defined within this block. It is the key block for a declarative pipeline syntax.

  • Node

A node is a machine that executes an entire workflow. It is a key part of the scripted pipeline syntax.

There are various mandatory sections which are common to both the declarative and scripted pipelines, such as stages, agent and steps that must be defined within the pipeline. These are explained below:

  • Agent

An agent is a directive that can run multiple builds with only one instance of Jenkins. This feature helps to distribute the workload to different agents and execute several projects within a single Jenkins instance. It instructs Jenkins to allocate an executor for the builds.

A single agent can be specified for the entire pipeline, or specific agents can be allotted to execute each stage within a pipeline. A few of the parameters used with agents are:

  • Any

Runs the pipeline/ stage on any available agent.

  • None

This parameter is applied at the root of the pipeline and it indicates that there is no global agent for the entire pipeline and each stage must specify its own agent.

  • Label

Executes the pipeline/stage on the labelled agent.

  • Docker

This parameter uses a Docker container as the execution environment for the pipeline or a specific stage. In the example below, I’m using Docker to pull an ubuntu image, which can then be used as an execution environment to run multiple commands.
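A minimal sketch of that usage (the image tag is an assumption):

pipeline {
    agent {
        docker { image 'ubuntu:latest' }   // pulls the ubuntu image and runs all stages inside it
    }
    stages {
        stage('Check environment') {
            steps {
                sh 'cat /etc/os-release'   // executes inside the ubuntu container
            }
        }
    }
}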

  • Stages

This block contains all the work that needs to be carried out. The work is specified in the form of stages. There can be more than one stage within this directive. Each stage performs a specific task. In the following example, I’ve created multiple stages, each performing a specific task.

  • Steps

A series of steps can be defined within a stage block. These steps are carried out in sequence to execute a stage. There must be at least one step within a steps directive. In the following example I’ve implemented an echo command within the build stage. This command is executed as a part of the ‘Build’ stage.
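A minimal sketch combining both directives, with echo steps inside multiple stages:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building the application...'   // executed as part of the 'Build' stage
            }
        }
        stage('Test') {
            steps {
                echo 'Running tests...'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying...'
            }
        }
    }
}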
