What is the difference between Docker plugin and Docker with Jenkins pipeline - docker

I am new to Jenkins and want to integrate it with Docker. What is the difference between the Docker Jenkins plugin and using Docker from a Jenkins pipeline?
I have read both this
https://wiki.jenkins.io/plugins/servlet/mobile?contentId=71434989#content/view/71434989
And this
https://jenkins.io/doc/book/pipeline/docker/
I feel like both approaches do the same thing, running Jenkins slaves/nodes in a Docker container, but I am not sure.
Thanks

Update
I got this answer from a Reddit post:
The first link is about using Docker commands in your Jenkins job to build your software. For example, your tools are inside Docker containers and you run something like docker run -it maven:latest build against your code. It is normally a single step in the build job.
The second link is about running a Jenkins agent as a Docker container and running tools inside that container against your code. Here you run a Jenkins agent that gets the job definition from the Jenkins master and then executes the job's steps, i.e. more than one step, all while being containerized.
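The distinction can be sketched with two alternative Jenkinsfiles (the image tag and build command here are only examples, not taken from the question):

```groovy
// Alternative A -- matches the first link: the job runs on a normal node
// and calls Docker for a single build step. The node needs a Docker client.
node {
    checkout scm
    // One step of the job happens inside a container:
    sh 'docker run --rm -v "$PWD":/src -w /src maven:latest mvn package'
}
```

```groovy
// Alternative B -- matches the second link: the whole stage runs inside
// a container acting as the agent, so every step is containerized.
pipeline {
    agent { docker { image 'maven:latest' } }
    stages {
        stage('Build') {
            steps {
                sh 'mvn package'
            }
        }
    }
}
```

In A, only the wrapped command is containerized; in B, Jenkins starts the container, mounts the workspace into it, and runs every step of the stage inside it.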

Related

Jenkins in Docker container?

I would like to know if Jenkins in Docker is suitable for the CI/CD pipeline I'm involved in developing.
I would like to run JavaScript commands for Selenium. (The Jenkins Docker image is not able to execute Node.js.)
I would like to run Selenium in a Docker container from the pipeline (that is, running Docker inside Docker, or executing the command from inside the container while it actually runs on the host).
Build the application and deploy the new code to the JFrog Artifactory.
I don't know if the effort is worth it here. Keeping Jenkins local, I can just SSH to a node and execute any command I want, as from the local machine.
Thanks a lot.
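For the "Docker inside Docker" part of this question, a common pattern is to mount the host's Docker socket into the Jenkins container, so that docker run from a job starts sibling containers on the host rather than nested ones. A minimal sketch, assuming the standard images (note that the official Jenkins image does not ship a Docker client, so you would need to add one to the image):

```shell
# Start Jenkins with the host's Docker socket mounted in, so jobs can
# start sibling containers on the host (the Docker CLI must be added
# to the Jenkins image separately).
docker run -d --name jenkins \
  -p 8080:8080 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts

# A job can then start Selenium as a sibling container on the host:
docker run -d -p 4444:4444 selenium/standalone-chrome
```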

Access job's workspace in a docker container step in dockerized Jenkins

I have a dockerized Jenkins server. In a freestyle job I have a bash step that I'd like to run in a Docker container. While I'm able to start a container from inside the Jenkins container, it's not trivial to give the new container access to the current job's workspace. This is possible in the pipeline syntax by setting the reuseNode boolean to true. What is the equivalent of this in a freestyle Jenkins job? I can pass something like -v jenkins-data:/var/jenkins_home --workdir $WORKSPACE to the new container and it almost works, but I usually get all sorts of permission issues.
Have you tried using Docker agents? As far as I understand your use case, they do exactly what you want and take care of user and volume management. Check this article for directions.
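If you stay with the manual docker run approach in a freestyle shell step, the permission issues usually come from UID mismatch: the container runs as root and writes root-owned files into the workspace. A hedged sketch of what reuseNode roughly does, assuming the workspace volume is the jenkins-data volume already mentioned (the image and script name are placeholders):

```shell
# In a freestyle "Execute shell" step: run the container as the same
# user as the Jenkins agent and mount the workspace at the same path,
# which is approximately what reuseNode arranges in pipeline jobs.
docker run --rm \
  -u "$(id -u):$(id -g)" \
  -v jenkins-data:/var/jenkins_home \
  -w "$WORKSPACE" \
  ubuntu:22.04 \
  bash -c './your-script.sh'
```

Running with -u keeps files created in the workspace owned by the Jenkins user, which avoids the permission errors on subsequent builds.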

Not able to see the Docker logo in Build History for a pipeline job

I converted a Jenkins job running in a Docker container to a pipeline job. Although the job runs smoothly, I am not able to see the Docker logo in the Build History of the pipeline job, which was the case for a normal freestyle job running on Docker.
All the stages of the pipeline are inside the Docker container, so essentially the entire job runs inside the container. So why isn't the Docker logo visible against each build in the Build History? There are Docker fingerprints available in each build.
Could someone suggest what I am missing here?

Easiest way to do docker build command within Jenkinsfile running on Jenkins slave node?

Basic example of what I want my Jenkinsfile to do:
node {
    sh 'docker build -t foo/bar .'
}
It seems like I need to install docker onto the Jenkins slave image that's executing my Jenkinsfile. Is there an easy way of doing this? (That Jenkins slave image is itself a docker container)
Are my assumptions correct?
When running with Jenkins master/slaves, the Jenkinsfile is executed by a Jenkins slave
Jenkins plugins installed via the Manage Plugins section (e.g. the Docker plugin, or the Gcloud SDK plugin) are only installed on the Jenkins master, so I would need to manually build my Jenkins slave Docker image and install Docker on it?
Since I also need access to the 'gcloud' command (I'm running Jenkins via Kubernetes Helm/Charts), I've been using the gcr.io/cloud-solutions-images/jenkins-k8s-slave image for my Jenkins slave.
Currently it errors out saying "docker: not found"
My assumption is that you want to run docker build inside the Jenkins slave (which is a Kubernetes pod, I assume, created by the Kubernetes Jenkins plugin).
To set the stage: when Kubernetes creates a pod that will act as a Jenkins slave, all commands that you execute inside the node block are executed inside that pod, in one of its containers (by default there is only one container, but more on this later).
So you are actually trying to run a Docker command inside a container based on gcr.io/cloud-solutions-images/jenkins-k8s-slave, which is most likely based on the official Jenkins JNLP slave, which does not contain Docker!
From this point forward, there are two approaches that you can take:
Use a slightly modified image, based on the JNLP slave, that also contains the Docker client, and mount the Docker socket (/var/run/docker.sock) inside the container.
(You can find details on this approach here).
Here is an image that contains the Docker client and kubectl.
Here is a complete view of how to configure the Jenkins Plugin:
Note that you use a different image (you can create your own and add any binary you want there) and that you mount the Docker socket inside the container.
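Outside of the Kubernetes plugin configuration, the first approach can be sketched with plain Docker; the image name, URL, and agent secret below are placeholders, not values from the answer:

```shell
# Approach 1, sketched manually: run a JNLP agent image that also
# contains the Docker client, with the host's socket mounted so that
# `docker build` inside the agent talks to the host's Docker daemon.
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-registry/jnlp-slave-with-docker:latest \
  -url http://my-jenkins:8080 <agent-secret> <agent-name>
```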
The problem with the first approach is that you create a new image forked from the official JNLP slave and manually add the Docker client. This means that whenever Jenkins or Docker has updates, you need to manually update your image and the entire configuration, which is not desirable.
Using the second approach you always use official images, and you use the JNLP slave to start other containers in the same pod.
Here is the full file from the image below
Here is the Jenkins Plugin documentation for doing this
As I said, the JNLP image will start the container that you specify in the same pod. Note that in order to use Docker from a container you still need to mount the Docker socket.
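The second approach can be sketched with the Kubernetes plugin's scripted-pipeline syntax; the label, image tag, and build target below are examples, not taken from the answer:

```groovy
// Approach 2: keep the official JNLP container and add a second
// container (with the Docker client) to the same pod. The host's
// Docker socket is mounted so `docker build` reaches the node's daemon.
podTemplate(
    label: 'docker-build',
    containers: [
        containerTemplate(name: 'docker', image: 'docker:latest',
                          command: 'sleep', args: '99d')
    ],
    volumes: [
        hostPathVolume(hostPath: '/var/run/docker.sock',
                       mountPath: '/var/run/docker.sock')
    ]
) {
    node('docker-build') {
        stage('Build image') {
            // Steps in this block run in the extra "docker" container:
            container('docker') {
                checkout scm
                sh 'docker build -t foo/bar .'
            }
        }
    }
}
```

Because both containers share the pod's workspace volume, the code checked out in the JNLP container is visible to the docker container without any extra mounts.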
These are the two ways I found to achieve building images inside a Jenkins JNLP slave running inside a container.
The example also shows how to push the image using credential bindings from Jenkins, and how to update a Kubernetes deployment as part of the build process.
Some more resources:
deploy Jenkins to Kubernetes as Helm chart, configure plugins to install
Thanks,
Radu M

Test Docker cluster in Jenkins

I have some difficulties to configure Jenkins to run test on a dockerized application.
First, here is my setup: the project is on Bitbucket and I have a docker-compose file that runs my application, which is composed of three containers for now (one for Mongo, one for Redis, one for my Node app).
The webhook of bitbucket works well and Jenkins is triggered when I push.
However, what I would like a build to do is:
get the repo where my docker-compose file is, run docker-compose so my cluster is up, then run "npm test" inside the repo (my tests use Mocha), and finally have Jenkins notified whether the tests passed or not.
If someone could help me get this chain of operations applied by Jenkins, it would be awesome.
The simplest way is to use the Jenkins pipeline plugin or a shell script.
To build the Docker image and bring up the cluster you can use the docker-compose command. An important detail is that you need to rebuild the image at the compose level (if you only run docker-compose run, Jenkins can keep using the previously built image), so run docker-compose build first.
Your Dockerfile should copy all the files of your application.
Then, when your services are ready, you can run the tests inside a container using: docker exec {CONTAINER_ID} {COMMAND_TO_RUN_TESTS}.
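Put together as a scripted pipeline, the chain could look like this; the service name "app" and the use of docker-compose exec (instead of docker exec with a container ID) are assumptions about the compose file:

```groovy
// Sketch of the build/test chain described above, assuming the compose
// file defines mongo, redis, and an "app" service for the Node app.
node {
    checkout scm
    try {
        // Rebuild the app image so tests run against the pushed code,
        // then bring up the whole cluster in the background.
        sh 'docker-compose build'
        sh 'docker-compose up -d'
        // Run the Mocha tests inside the running app container; a
        // non-zero exit code fails the step, which marks the Jenkins
        // build as failed -- that is how Jenkins gets notified.
        sh 'docker-compose exec -T app npm test'
    } finally {
        // Always tear the cluster down, even when tests fail.
        sh 'docker-compose down'
    }
}
```

The -T flag disables TTY allocation, which docker-compose exec needs when running non-interactively under Jenkins.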
