Continuous Deployment using Jenkins and Docker

We are building a Java-based high-availability service for a financial application. I am on the team that manages continuous integration with Jenkins.
Recently we also introduced continuous deployment, and we opted for Docker containers.
Here is the infrastructure:
The production cluster will have 3 RHEL machines running the following docker containers on each of them:
3 instances of Wildfly
Cassandra
Nginx
The application IDE is NetBeans and the source code is in Git.
Currently we are doing manual deployments on this infrastructure.
Please suggest some tools I can use with Jenkins to complete the continuous deployment process.

You might want Jenkins to trigger a build on each push to your Git repository. There are plugins that help you do that with a webhook. The GitLab plugin is one solution; similar solutions exist for GitHub and other Git hosts.
Instead of relying heavily on bash and Jenkins job configuration, you might want to set up a Jenkins pipeline with the Jenkins Pipeline plugin or even the Pipeline: Multibranch plugin. With those you can automate your build in Groovy code (a Jenkinsfile) kept in the repository, with the possibility to add functionality through other plugins that build on them.
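For example, a minimal declarative Jenkinsfile could look like this (the stage names and the Maven goal are only illustrative, adapt them to your build):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // compile and unit-test the Java service
                sh 'mvn -B clean verify'
            }
        }
        stage('Deploy') {
            steps {
                // deployment steps go here (see the Docker examples below)
                echo 'deploying...'
            }
        }
    }
}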
You can then use the Docker Pipeline plugin to easily build Docker images, push them to a registry, and run code inside Docker containers.
I would suggest building your services inside Docker so that your Jenkins machine does not need all the different dependencies installed (and therefore possibly conflicting versions). Use Docker containers that contain all the dependencies and run your build code in there with the Docker Pipeline plugin from Groovy.
Install a registry solution to push and pull your Docker images to.
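A sketch of how those last points fit together in a scripted pipeline (the image names, registry URL and credentials ID are placeholders, not something from your setup):

node {
    checkout scm

    // run the Maven build inside a throwaway container that has the JDK and Maven
    docker.image('maven:3-jdk-8').inside {
        sh 'mvn -B clean package'
    }

    // build the application image and push it to your private registry
    docker.withRegistry('https://registry.example.com', 'registry-credentials-id') {
        def image = docker.build("myteam/my-service:${env.BUILD_NUMBER}")
        image.push()
    }
}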
Use the Pipeline: Shared Groovy Libraries plugin to extract common code from your Jenkinsfiles so it can be reused. Those library files should live in their own repository, which your Jenkins knows about and keeps up to date. You can even share an entire pipeline between multiple projects, which then only pass parameters in their Jenkinsfiles.
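As an illustration (the library and step names below are made up), a shared step in vars/javaServicePipeline.groovy could wrap the whole process:

// vars/javaServicePipeline.groovy in the shared library repository
def call(Map config) {
    node {
        checkout scm
        docker.image(config.buildImage).inside {
            sh 'mvn -B clean verify'
        }
        // further build/push/deploy steps shared by all projects
    }
}

A project's Jenkinsfile then shrinks to:

@Library('my-shared-library') _
javaServicePipeline(buildImage: 'maven:3-jdk-8')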
That is a lot of text; if you think something is interesting and you want to see more code, just ask. I am currently setting all this up myself.

Related

Chef deployment using Declarative Pipeline

I would like to create a declarative Jenkins pipeline setup for continuous integration and deployment. My only confusion is how Jenkins and Chef are going to communicate in this process: after continuous integration I want Chef to take over, install the JAR or ZIP packages from the JFrog repo and deploy them on the several nodes. Maven is my build tool. In the Jenkins pipeline I can set things up until the build is done; is there anything I can do in the post section of the pipeline for the Chef communication for deployment, or does it have to be done elsewhere? Please share some suggestions.
You can achieve this by putting the Chef workstation on one of the Jenkins nodes.
Check out the diagram in the link below; use Chef instead of Ansible. I suppose this might work, but I haven't used Chef recently.
Link: https://thenucleargeeks.com/2020/06/07/jenkins-openshift-pipeline/
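As a rough sketch of the idea (assuming the knife tooling of the Chef workstation is available on that Jenkins node; the label, role and user names are placeholders), the post section could trigger the converge after a successful build:

pipeline {
    agent { label 'chef-workstation' }   // the Jenkins node hosting the Chef workstation
    stages {
        stage('Build') {
            steps {
                // the Maven build publishes the JAR/ZIP to the JFrog repo
                sh 'mvn -B clean deploy'
            }
        }
    }
    post {
        success {
            // ask Chef to converge the application nodes, which then pull and install the new package
            sh "knife ssh 'role:app-server' 'sudo chef-client' -x deploy"
        }
    }
}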

What are the benefits of Docker with Jenkins Pipelines?

I'm new to Jenkins/Docker. So far I've found a lot of official Jenkins documentation recommending the use of Docker, but the necessity and advantages of running Jenkins as a Docker container remain vague to me. In my case it's a Node/React app and the required environment is not complicated.
Disadvantages I've found running Jenkins as a Docker container:
High disk usage
Directory paths inside the Docker container are more complicated to deal with, especially when working with SSH in pipeline scripts
Without Docker I can easily achieve the same, and the Blue Ocean plugin is also available.
So, what are the main benefits of Docker with Jenkins/Jenkins Pipeline? Are there pitfalls for my Node application using Jenkins without Docker? Articles to help me dive in are also appreciated.
Jenkins as Code
The main advantage of Jenkins in Docker is that it helps you to get: Jenkins as Code
Advantages of Jenkins as code are:
SCM: code can be put under version control
History is transparent; backups and roll-backs become easy.
The code is the documentation of your Jenkins setup.
Jenkins becomes portable, so you can run Jenkins locally to try new plugins etc.
Jenkins pipelines work really well with Docker. As @Ivthillo mentioned: there is no need to install additional tools, you just use images of those tools. Jenkins will download them from the internet for you (Docker Hub).
For each stage in the pipeline you can use a different image (i.e. tool). Essentially you get "micro Jenkins agents" which only exist temporarily. This makes your Jenkins setup much cleaner.
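A minimal sketch of that idea (the images and commands are only an example):

pipeline {
    agent none
    stages {
        stage('Build backend') {
            // temporary "micro agent" with Maven preinstalled
            agent { docker 'maven:3-jdk-11' }
            steps { sh 'mvn -B clean verify' }
        }
        stage('Build frontend') {
            // another stage, another tool image
            agent { docker 'node:18' }
            steps { sh 'npm ci && npm run build' }
        }
    }
}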
A disadvantage is:
The initial Jenkins (Groovy) configuration is poorly documented on the web.
Simple Node setup
Most of these arguments also hold for a simple Node setup.
Changing the Node version, or running multiple jobs each with a different Node version, becomes easy.
Add your Jenkinsfile inside the Node repo, so everyone with a Jenkins+Docker setup can run your CI/CD.
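Something like this could live at the root of the Node repo (the image tag and npm scripts are just an example; bump the tag to switch Node versions):

pipeline {
    // the whole pipeline runs inside a Node container; change the tag to change the Node version
    agent { docker 'node:18' }
    stages {
        stage('Install') { steps { sh 'npm ci' } }
        stage('Test')    { steps { sh 'npm test' } }
        stage('Build')   { steps { sh 'npm run build' } }
    }
}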
And finally: the knowledge you gather from running your app inside a container will enable you to run your production app in Docker in the future.
Getting started
A while ago I wrote a small blog post on how to get started with Jenkins and Docker, i.e. create a Jenkins image for development which you can launch and destroy in seconds.

Jenkins pipeline using docker on existing slaves

We have the following jenkins setup:
Jenkins master
Jenkins Slave1
Jenkins Slave2
Jenkins Slave3
Those are all virtual machines and the slaves always exist; they don't spawn up and down automatically.
Now we have builds which need a lot of tools (Maven, Python, AWS CLI, ...). We can install every tool on every slave and everything will work fine.
But we want a Docker-based approach.
Nearly all the tutorials I've seen use slaves in Docker: they use some orchestration tool like Kubernetes, create slaves in Docker, do their stuff and delete the pod again.
We don't have the option to do this:
Question: Is it a decent approach to use an 'old' Jenkins setup with real VM slaves on which we use Docker?
What I'm thinking about is writing a pipeline where each stage uses a Docker container:
start the build (it will choose a slave, e.g. Slave1)
the pipeline will start
stage 1: spin up e.g. a Python container: git clone and execute Python commands. Mount a volume to the workspace??
stage 2: spin up e.g. an AWS container, mount the content of the workspace and execute new commands etc.
Can someone evaluate this approach?
This is a very good approach. In fact, this is documented in the Jenkins docs under the "Using multiple containers" section.
In each stage you basically spin up a container with the necessary tools available, and you can use the workspace to persist output from one stage so that the following stages can use it.
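A minimal sketch along those lines (the images and commands are just an example; reuseNode keeps every stage on the same slave and in the same workspace, so sources and generated artifacts are shared between stages):

pipeline {
    agent any   // picks one of the existing VM slaves, e.g. Slave1
    stages {
        stage('Python build') {
            agent {
                docker {
                    image 'python:3.11'
                    reuseNode true   // run the container on the same slave, in the same workspace
                }
            }
            steps {
                sh 'python -m pytest'
            }
        }
        stage('Deploy with AWS CLI') {
            agent {
                docker {
                    image 'my-aws-tools:latest'   // placeholder for any image with the AWS CLI installed
                    reuseNode true
                }
            }
            steps {
                // the workspace from the previous stage is still available here
                sh 'aws s3 cp build/ s3://my-bucket/ --recursive'
            }
        }
    }
}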

Run Jenkins master and slave with Docker

I want to set up a Jenkins master on server A and a slave on server B using Docker.
Both servers are virtual machines dedicated to Jenkins.
Currently I have started a Docker container on server A for the master, based on the official Jenkins Docker image. But what Docker image should I use for the Jenkins slave?
That actually depends on the environment and tools you need in your build environment. For example, if you build a C project, you would need an image containing a C compiler and possibly make if you use Makefiles. If you build a Java project, you would need a JDK with a Java compiler and possibly Ant / Maven / Gradle if you use them as part of your build.
You can use evarga/jenkins-slave as a good starting point for your build slave.
This image already contains a JDK. If you simply need a JDK and Maven on your build slave, you can build your Docker image with the following Dockerfile:
FROM evarga/jenkins-slave
# install Maven on top of the base slave image (-y so the build runs non-interactively)
RUN apt-get update && apt-get install -y maven
Using Docker images for build slaves is actually a good idea. Some of the reasons appear in Templating Jenkins Build Environments with Docker Containers:
Docker has established itself as a popular and convenient way to bootstrap isolated and reproducible environments, which enables Docker containers to be the most maintainable slave environments. Docker containers’ tooling and other configurations can be version controlled in an environment definition called a Dockerfile, and Dockerfiles allow multiple identical containers to be created quickly using this definition, or more customized off-shoots to be created by using that Dockerfile’s image as a base.
I suggest trying to use dynamic/ephemeral Docker nodes instead of manually creating nodes and connecting to them via SSH. Take a look at https://engineering.riotgames.com/news/putting-jenkins-docker-container, it's very powerful and I think it's one of the killer use cases for Docker.

Dockerizing Jenkins builds - slaves as containers or builds as containers?

I'm trying to figure out the best strategy for containerizing builds in a Jenkins CI/CD infrastructure using Docker. From what I can see I have 2 options:
(1) Use ephemeral slaves that get provisioned on-demand on Docker hosts using the Docker Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin
Once the build completes the slave is disposed. As a consequence, only one build ever gets run on a single slave.
(2) Use static slaves (e.g. VMs) that run builds inside Docker containers using the CloudBees Docker Custom Build Environment Plugin (https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Custom+Build+Environment+Plugin). As a consequence, multiple (isolated) builds can run on a single slave.
What are the main advantages/disadvantages of one approach over the other? When and why should I choose one over the other? This does not appear at all obvious to me.
I suspect builds are lighter weight than slaves, so for a CI/CD infrastructure orchestrating a large end-to-end pipeline with many jobs running, (2) would be more scalable - each Jenkins slave incurs at least 2 threads on the master node.
Edit
My preference is option 1 (ephemeral slaves) with the Docker plugin.
With this plugin, you declare your build images in the global Jenkins settings and you can assign labels to your Docker images.
In your job, you just have to use the relevant label, and the Docker plugin will create the corresponding slave in a new container.
With the Docker plugin, Jenkins will spin up a new slave in a few seconds. So even if you're using a pipeline with a lot of stages, it will work fine.
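A minimal sketch of how a job then consumes such a label (assuming a Docker agent template was configured with the label 'maven-jdk8' and a Maven build image; the label is a placeholder):

// scripted pipeline: 'maven-jdk8' is the label of a Docker agent template,
// so this node block runs inside a freshly provisioned, ephemeral container
node('maven-jdk8') {
    checkout scm
    sh 'mvn -B clean verify'
}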
This is what I'm going to implement at ForgeRock (my company):
2 powerful bare-metal machines (with SSDs, 32 cores and 1 TB of RAM)
The Jenkins Docker plugin
Maven artifact caching using Artifactory (so we don't download the internet)
The Docker containers will use a local Maven cache (so I'm sure not to use an old/odd Maven artifact)
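A hedged sketch of the caching part (the cache path and image are placeholders; a settings.xml mirror entry would additionally point Maven at Artifactory):

node {
    checkout scm
    // mount a Maven repository cache from the host so each ephemeral container
    // reuses previously downloaded artifacts instead of fetching them again
    docker.image('maven:3-jdk-8').inside('-v /var/cache/maven-repo:/tmp/m2repo') {
        sh 'mvn -B -Dmaven.repo.local=/tmp/m2repo clean install'
    }
}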
I did a POC on a small bare metal machine and it works well :)
If you are using ephemeral slaves without Maven caching, it can become a performance problem.
Regarding the Jenkins plugins, there is a new one developed by Nicolas De Loof: Docker Slaves plugin.
I have to try this new plugin.
