I want to set up a Jenkins master on server A and a slave on server B using Docker.
Both servers are virtual machines dedicated to Jenkins.
Currently I have started a Docker container on server A for the master, based on the official Jenkins Docker image. But which Docker image should I use for the Jenkins slave?
That depends on the tools you need in your build environment. For example, if you build a C project, you need an image containing a C compiler and possibly make if you use Makefiles. If you build a Java project, you need a JDK with a Java compiler and possibly Ant / Maven / Gradle if you use them as part of your build.
You can use evarga/jenkins-slave as a good starting point for your build slave.
This image already contains a JDK. If you simply need a JDK and Maven on your build slave, you can build your own image with the following Dockerfile:
FROM evarga/jenkins-slave
RUN apt-get update && apt-get install -y maven
Using Docker images for build slaves is actually a good idea. Some of the reasons are given in Templating Jenkins Build Environments with Docker Containers:
Docker has established itself as a popular and convenient way to bootstrap isolated and reproducible environments, which enables Docker containers to be the most maintainable slave environments. Docker containers' tooling and other configurations can be version controlled in an environment definition called a Dockerfile, and Dockerfiles allow multiple identical containers to be created quickly using this definition, or more customized off-shoots to be created by using that Dockerfile's image as a base.
I suggest you try using dynamic (ephemeral) Docker nodes instead of manually creating nodes and connecting to them via SSH. Take a look at https://engineering.riotgames.com/news/putting-jenkins-docker-container; it's very powerful and I think it's one of the killer use cases for Docker.
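For reference, once the Docker Pipeline plugin is installed and the node can reach a Docker daemon, a minimal Jenkinsfile that runs the whole build in a throwaway container could look roughly like this (the image name and build command are just examples):

pipeline {
    agent {
        // the entire build runs inside this container, which is discarded afterwards
        docker { image 'maven:3.8-openjdk-11' }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean verify'
            }
        }
    }
}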
Related
I have a workflow where
1) Team A & Team B would push changes in an app to a private gitlab (running in docker container) on Server A.
2) Their app should contain Dockerfile or docker-compose.yml
3) Gitlab should trigger jenkins build (jenkins runs in docker container which is also on Server A) and executes the usual build things like test
4) Jenkins should build a new docker image and deploy it.
Question:
If Team A needs packages like Maven and npm for web applications but Team B needs other packages like C++ etc., how do I solve this issue?
Because I don't think it is correct for my Jenkins container to have all these packages (mvn, npm, yarn, C++ etc.) installed just to execute the Jenkins build.
I was thinking that Team A should get a container with the packages it needs installed, and similarly for Team B.
I want to make use of Docker, Kubernetes, Jenkins and GitLab (as much container technology as possible).
Please advise me on the workflow. Thank you.
I would like to share a developer's perspective, which is different from the "operation-centric" state of mind presented in the question.
For developers, Jenkins is also a tool that triggers the build of the application. Yes, of course, the built artifact can be a Docker image as well, but this is not what developers are really concerned about. You've referred to this as "the usual build things like test", but developers have entire ecosystems around this "usual build".
For example, mvn, which you've mentioned, has great dependency resolution capabilities: it can resolve the dependencies on its own. Roughly the same holds for other build tools; I'll stick with Maven for this answer.
So you don't need to maintain dependencies yourself, but as a Jenkins maintainer you should provide the technical ability to actually build the product: running Maven, which in turn resolves/downloads all the dependencies, runs the tests, produces test results, and can even create a Docker image or deploy that image to some image repository if you wish ;).
So developers, who maintain their own build scripts (declarative ones in the case of Maven, or something like Makefiles in the case of C++), should be able to run their own tools.
Now, this picture doesn't contradict containerization:
The Jenkins image can contain maven/make/npm, really just a small number of tools needed to run the build. The actual build scripts can be part of the application's source code base (maintained in Git).
So when Jenkins gets the notification about the build, it should check out the source code, run some script (like mvn package), show the test results and then, as a separate step or from Maven itself, create an image of your application and upload it to some repository or supply it to the Kubernetes cluster, depending on your actual DevOps needs.
Note that during mvn package, Maven downloads all the dependencies (third-party packages) into the Jenkins workspace and compiles everything with the Java compiler, which you obviously also need to make available on the Jenkins machine.
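To make that concrete, a sketch of such a Jenkinsfile could look like the following (the label, registry and image name are assumptions for illustration):

pipeline {
    agent { label 'maven-build' }                 // hypothetical label of a node with a JDK and Maven
    stages {
        stage('Build & Test') {
            steps {
                checkout scm                      // fetch the sources, including the project's own build scripts
                sh 'mvn -B package'               // Maven resolves dependencies, compiles and runs the tests
            }
            post {
                always {
                    junit 'target/surefire-reports/*.xml'   // publish the test results
                }
            }
        }
        stage('Docker image') {
            steps {
                script {
                    docker.build('registry.example.com/my-app:latest')   // placeholder image name
                }
            }
        }
    }
}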
Whoa, that's a big one and there are many ways to solve this challenge.
Here are two approaches you can apply:
Solve it by using different types of Jenkins Slaves
In the long run you should consider running Jenkins workloads on slaves.
This is not only desirable for your use case but also scales much better under higher workloads,
because in the worst case dense workloads can kill your Jenkins master.
See https://wiki.jenkins.io/display/JENKINS/Distributed+builds for reference.
When defining slaves in Jenkins (e.g. with the ec2 plugin for AWS integration) you can use different slaves for different workloads. That means you prepare a slave (or slave image, in AWS this would be an AMI) for each specific purpose you've got. One for Java, one for node, you name it.
These defined slaves can then be used in your jobs by checking Restrict where this project can be run and entering the label of your slave.
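In a pipeline job, the equivalent of that checkbox is the agent label, roughly like this (the label and build command are only examples and must match your own slaves and scripts):

pipeline {
    agent { label 'java' }          // runs only on slaves carrying the 'java' label
    stages {
        stage('Build') {
            steps {
                sh './build.sh'     // placeholder build command
            }
        }
    }
}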
Solve it by doing everything within the Docker Environment
The simplest solution in your case would be to just use the Docker infrastructure to build Docker images of any kind, push them to your private Docker registry (all big cloud providers offer such container registry services) and download them at deploy time. This saves you the pain of installing new technology stacks every time you change them.
I don't really know how familiar you are with Docker or other containerization technologies, so I'll roughly outline the solution for you.
Create a Dockerfile with the desired base image as a starting point. Just google them or look for them on Docker Hub. Here's a sample for Node.js:
# base image for a Node.js build; node:lts is just an example, pick the version you need
FROM node:lts
MAINTAINER devops#yourcompany.biz
ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --silent
# I recommend using a .dockerignore file so unnecessary files are not copied into your image
COPY . .
CMD [ "npm", "start"]
Adapt the image and create a Jenkinsfile for your CI/CD pipeline. There you should build and push your docker image like this:
stage('build docker image') {
    ...
    steps {
        script {
            docker.build('your/registry/your-image', '--build-arg NODE_ENV=production .')
            docker.withRegistry('https://yourregistryurl.com', 'credentials token to access registry') {
                docker.image('your/registry/your-image').push(env.BRANCH_NAME + '-latest')
            }
        }
    }
}
Do your deployment (in my case it's done via Ansible) by downloading and running the previously pushed image in your deployment scripts.
If you define your questions in a little more detail, you'll get better answers.
I'm new to Jenkins/Docker. So far I've found that lots of official Jenkins documents recommend using it with Docker, but the necessity and advantages of running Jenkins as a Docker container remain vague to me. In my case, it's a Node/React app and the required environment is not complicated.
Disadvantages I've found running Jenkins as a Docker container:
High hard-drive usage
Directory paths in the Docker container are more complicated to deal with, especially when working with SSH in pipeline scripts
Without Docker, I can easily achieve the same thing, and the Blue Ocean plugin is also available.
So, what are the main benefits of Docker with Jenkins/Jenkins Pipeline? Are there pitfalls for my Node application if I use Jenkins without Docker? Articles to help me dive in are also appreciated.
Jenkins as Code
The main advantage of Jenkins in Docker is that it helps you get Jenkins as code.
Advantages of Jenkins as code are:
SCM: the code can be put under version control
History is transparent; backup and roll-back become easy.
The code is the documentation of your Jenkins setup.
Jenkins becomes portable, so you can run Jenkins locally to try new plugins etc.
Jenkins pipelines work really well with Docker. As @Ivthillo mentioned: there is no need to install additional tools; you just use images of those tools. Jenkins will download them from the internet (Docker Hub) for you.
For each stage in the pipeline you can use a different image (i.e. tool). Essentially you get "micro Jenkins agents" which only exist temporarily. This makes your Jenkins setup much cleaner.
The disadvantage is:
The initial (Groovy) configuration of Jenkins is poorly documented on the web.
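As a small illustration of that Groovy configuration: the official Jenkins image picks up init scripts placed under /usr/share/jenkins/ref/init.groovy.d/ at start-up. A minimal example (the setting shown is just an example) could be:

// executors.groovy, baked into the image under /usr/share/jenkins/ref/init.groovy.d/
import jenkins.model.Jenkins

// example setting: don't run builds on the master itself
Jenkins.instance.setNumExecutors(0)
Jenkins.instance.save()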
Simple Node setup
Most arguments also hold for a simple Node setup.
Changing the Node version, or running multiple jobs each with a different Node version, becomes easy.
Add your Jenkinsfile inside the Node repo, so everyone with a Jenkins + Docker setup can run your CI/CD.
And finally: gathering knowledge on running your app inside a container will enable you to run your production app in Docker in the future.
Getting started
A while ago I wrote a small blog on how to get started with Jenkins and Docker, i.e. creating a Jenkins image for development which you can launch and destroy in seconds.
We have the following Jenkins setup:
Jenkins master
Jenkins Slave1
Jenkins Slave2
Jenkins Slave3
Those are all virtual machines and the slaves always exist; they are not spun up and torn down automatically.
Now we have builds which need a lot of tools (Maven, Python, AWS CLI, ...). We can install every tool on every slave and everything will work fine.
But we want to move to a Docker approach.
Nearly all the tutorials I've seen use slaves in Docker: they use some orchestration tool like Kubernetes, create slaves in Docker, do their stuff and delete the pod again.
We don't have the possibility to do this.
Question: Is it a decent approach to use an 'old' Jenkins setup with real VM slaves on which we use Docker?
What I'm thinking about is writing a pipeline where each stage uses a Docker container:
start the build (it will choose a slave, e.g. Slave1)
the pipeline starts
stage 1: spin up e.g. a Python container, git clone and execute Python commands. Mount a volume to the workspace??
stage 2: spin up e.g. an AWS CLI container, mount the content of the workspace and execute new commands etc.
Can someone evaluate this approach?
This is a very good approach. In fact, the way to do this is documented in the Jenkins docs under the Using multiple containers section.
In each stage you basically spin up a container with the necessary tools available, and you can use a volume to persist output from the stage into the workspace so that other stages can use it.
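A rough sketch of that approach for the pipeline described in the question (image names and shell commands are placeholders) could look like this:

pipeline {
    agent none
    stages {
        stage('Build') {
            agent { docker { image 'python:3.11' } }                      // example image with the Python toolchain
            steps {
                sh 'python --version > build-info.txt'                    // placeholder for your real build/test commands
                stash name: 'build-output', includes: 'build-info.txt'    // hand results over to later stages
            }
        }
        stage('Deploy') {
            agent { docker { image 'my-registry/aws-tools' } }            // hypothetical image containing the AWS CLI
            steps {
                unstash 'build-output'
                sh 'cat build-info.txt'                                   // placeholder for your real deploy commands
            }
        }
    }
}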
We are building a Java-based high-availability service for a financial application. I am part of the team managing continuous integration using Jenkins.
Lately we also introduced continuous deployment, and we opted for Docker containers.
Here is the infrastructure:
The production cluster will have 3 RHEL machines running the following docker containers on each of them:
3 instances of Wildfly
Cassandra
Nginx
The application IDE is NetBeans and the source code is in Git.
Currently we are doing manual deployment on this infrastructure.
Please suggest some tools which I can use with Jenkins to complete the continuous deployment process.
You might want Jenkins to trigger on each push to your Git repository. There are plugins that help you do that with a webhook: the GitLab plugin is one solution, and similar solutions exist for GitHub and other Git hosts.
Instead of relying heavily on Bash and Jenkins job configuration, you might want to set up a Jenkins pipeline with the Pipeline plugin or even the Pipeline: Multibranch plugin. With those you can automate your build in Groovy code (a Jenkinsfile) kept in a repository, with the possibility to add functionality through other plugins building on them.
You can then use the Docker Pipeline plugin to easily build Docker images, push them and run code inside Docker containers.
I would suggest building your services inside Docker so that your Jenkins machine does not need all the different dependencies installed (and therefore possibly conflicting versions). Use Docker containers with all the dependencies and run your build code in there with the Docker Pipeline plugin from Groovy.
Install a registry solution to push your Docker images to and pull them from.
Use the Pipeline: Shared Groovy Libraries plugin to extract common code from your Jenkinsfiles so that it can be reused. Those library files should live in their own repository which your Jenkins knows about and keeps up to date. You can even share an entire pipeline process between multiple projects, which then simply set parameters in their Jenkinsfile.
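As a rough sketch of such a shared library (the repository layout, names and credentials ID are all illustrative), a global step could live in the library's vars/ directory and be called from a project's Jenkinsfile:

// vars/buildAndPushImage.groovy in the shared-library repository
def call(String imageName, String tag = 'latest') {
    def image = docker.build(imageName)        // builds from the Dockerfile in the current workspace
    docker.withRegistry('https://registry.example.com', 'registry-credentials-id') {
        image.push(tag)
    }
}

// Jenkinsfile of a project using the library
@Library('my-shared-library') _
node('docker') {
    checkout scm
    buildAndPushImage('registry.example.com/team-a/web-app', env.BRANCH_NAME)
}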
That's a lot of text and only a rough sketch; if you find something interesting and want to see more code, just ask. I am currently setting all this up.
I'm trying to figure out the best strategy for containerizing builds in a Jenkins CI/CD infrastructure using Docker. From what I see, I have two options:
(1) Use ephemeral slaves that get provisioned on-demand on Docker hosts using the Docker Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin
Once the build completes, the slave is disposed of. As a consequence, only one build ever gets run on a single slave.
(2) Use static slaves (e.g. VMs) that run builds inside Docker containers using the CloudBees Docker Custom Build Environment Plugin: https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Custom+Build+Environment+Plugin As a consequence, multiple (isolated) builds can run on a single slave.
What are the main advantages/disadvantages of one approach over the other? When and why should I choose one over the other? This does not appear at all obvious to me.
I suspect builds are lighter weight than slaves, so for a CI/CD infrastructure orchestrating a large end-to-end pipeline with many jobs running, (2) would be more scalable: each Jenkins slave incurs at least 2 threads on the master node.
Edit
My preference is option 1 (ephemeral slaves) with the Docker plugin.
With this plugin, you declare your build images in the global Jenkins settings and assign labels to your Docker images.
In your job, you just have to use the relevant label, and the Docker plugin will create the relevant slave in a new container.
With the Docker plugin, Jenkins will spin up a new slave in a few seconds. So even if you're using a pipeline with a lot of stages, it will work fine.
This is what I'm going to implement at Forgerock (my company):
2 powerful bare metal machines (with SSD, 32 cores and 1 TB of RAM)
The Jenkins Docker plugin
Maven artifact caching using Artifactory (so we don't download the internet)
The Docker container will use a local Maven cache (so I'm sure not to use an old/odd Maven artifact)
I did a POC on a small bare metal machine and it works well :)
If you use ephemeral slaves without Maven caching, performance can become a problem.
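For illustration, in a scripted pipeline the cache can be mounted into the ephemeral build container roughly like this (the label, image and paths are examples):

node('docker') {                 // a node/slave that has Docker available
    checkout scm
    // mount a persistent Maven repository from the host so dependencies are not re-downloaded on every build
    docker.image('maven:3.8-openjdk-11').inside('-v /opt/jenkins/.m2:/tmp/m2-cache') {
        sh 'mvn -B -Dmaven.repo.local=/tmp/m2-cache clean verify'
    }
}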
Regarding the Jenkins plugins, there is a new one developed by Nicolas De Loof: Docker Slaves plugin.
I have to try this new plugin.