Jenkins in a Docker container?

I would like to know if Jenkins in Docker is suitable for the CI/CD pipeline I'm involved in developing.
I would like to run JavaScript commands for Selenium. (The Jenkins Docker image is not able to execute Node.js.)
I would like to run Selenium in a Docker container from the pipeline (that is, running Docker inside Docker, or issuing the command from inside the container but having it run on the host).
Build the application and deploy the new code to JFrog Artifactory.
I don't know if the effort is worth taking here. Keeping Jenkins local, I can just use ssh and node and execute any command I want, as if from the local machine.
Thanks a lot.
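For context, the "command from the container but running on the host" approach usually means mounting the host's Docker socket into the Jenkins container, so that docker commands issued from a pipeline start sibling containers on the host. A minimal sketch (image names are examples, and the Jenkins image still needs a Docker client inside it, as discussed further down):

docker run -d --name jenkins \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts

# A pipeline step can then start Selenium as a sibling container on the host:
docker run -d --name selenium -p 4444:4444 --shm-size=2g selenium/standalone-chrome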

Related

How to create a docker container inside docker in the GitLab CI/CD pipeline?

Since I do not have lots of experience with DevOps yet, I am struggling with finding an answer for the following question:
I'm setting up the CI/CD pipeline for my project (Python, FastAPI, Redis), which will have test and build stages. It can be described as follows:
Before stages: install all dependencies (install Python, copy files for testing, etc.)
The test stage uses docker-compose to run the Redis server, which is necessary to launch the application for testing (unit tests).
The build stage builds a new Docker image and pushes it to Docker Hub if there is a new GitLab tag.
The GitLab Runner is located on an AWS EC2 instance, and the runner executor is "docker" with an "Ubuntu:20.04" image. So, the question:
How can I run "docker-compose"/"docker build" inside the docker executor, and can it be done at all without any negative consequences?
I thought about several options:
Switch from the docker executor to something else (maybe shell or docker+ssh)
Use Docker-in-Docker, but I have seen warnings that it can be dangerous, and I'm not sure exactly why that would apply in my case.
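For reference, the Docker-in-Docker option is usually wired up in .gitlab-ci.yml roughly like this (a sketch, with TLS disabled for brevity; job, image, and registry names are examples). Note that dind requires the runner to start the service in privileged mode, which is a large part of why it is flagged as risky:

build-image:
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""    # disable TLS so the client can reach dind over plain TCP
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_TAG .
    - docker push registry.example.com/myapp:$CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG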
What I've tried:
Using Redis as a "service" in the GitLab job instead of the docker-compose file, but I couldn't find a way to bind my application (host and port) to a service that runs alongside the docker executor.
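On that last point: with the docker executor, a job's services are reachable from the job container by hostname (the service alias), not via localhost, so the application needs to be pointed at that hostname. A rough sketch, assuming the application reads the Redis location from the environment (variable names are examples):

test:
  image: python:3.10
  services:
    - name: redis:6
      alias: redis
  variables:
    REDIS_HOST: redis      # reach the service via its alias, not localhost
    REDIS_PORT: "6379"
  script:
    - pip install -r requirements.txt
    - pytest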

What is the difference between Docker plugin and Docker with Jenkins pipeline

I am new to Jenkins. I want a way to integrate Jenkins and Docker. What is the difference between the Docker Jenkins plugin and a Jenkins pipeline with Docker?
I have read both this
https://wiki.jenkins.io/plugins/servlet/mobile?contentId=71434989#content/view/71434989
And this
https://jenkins.io/doc/book/pipeline/docker/
I feel like both approaches do the same thing: running Jenkins slaves/nodes in a Docker container. But I am not sure.
Thanks
Update
I got this answer from a Reddit post:
The first link is about using docker commands in your Jenkins job to build your software. For example, your tools are inside Docker containers and you want to run docker run -it maven:latest build against your code. It is normally a single step in the build job.
The second link is about running a Jenkins agent as a Docker container and running tools inside that container against your code. Here you run a Jenkins agent that gets the job definition from the Jenkins master and then executes the job's steps, i.e. more than one step, all while being containerized.
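To make the distinction concrete, here is a rough sketch of both styles in a declarative Jenkinsfile (image and command are examples, not a definitive setup):

// Style 1: docker as a build step - the job shells out to the Docker CLI.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker run --rm -v "$PWD":/src -w /src maven:latest mvn package'
            }
        }
    }
}

// Style 2: docker as the agent - every step of the stage runs inside the container.
pipeline {
    agent { docker { image 'maven:latest' } }
    stages {
        stage('Build') {
            steps {
                sh 'mvn package'
            }
        }
    }
}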

Best practice using docker inside Jenkins?

Hi, I'm learning how to use Jenkins integrated with Docker, and I don't understand what I should do to make them communicate.
I'm running Jenkins inside a Docker container and I want to build an image in a pipeline. So I need to execute some docker commands inside the Jenkins container.
So the question here is where docker comes from. I understand that we need to bind mount the Docker host daemon (socket) into the Jenkins container, but this container still needs the binaries to execute Docker.
I have seen some approaches to achieve this and I'm confused about what I should do. I have seen:
bind mounting the docker binary (/usr/local/bin/docker:/usr/bin/docker)
installing Docker in the image
using the Blue Ocean image, which, if I'm not wrong, comes with Docker pre-installed (I have not found any documentation on this)
Also I don't understand what Docker plugins for Jenkins can do for me.
Thanks!
Docker has a client-server architecture. The server is the Docker daemon, and the client is basically the command-line interface that allows you to execute docker ... from the command line.
Thus, when running Jenkins inside Docker, you will need access to connect to the daemon. This is achieved by binding /var/run/docker.sock into the container.
At this point you need something to communicate with the daemon, which is the server. You can do that by providing access to the docker binaries; this can be achieved by either mounting the docker binaries or installing the client binaries inside the Jenkins container.
Alternatively, you can communicate with the daemon using the Docker REST API without having the docker client binaries inside the Jenkins container. You can, for instance, build an image using the API.
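For instance, with /var/run/docker.sock bound into the container, the daemon can be reached without any client binaries at all (a sketch; the tag name is an example):

# Liveness check against the daemon's REST API over the unix socket:
curl --unix-socket /var/run/docker.sock http://localhost/_ping

# Building an image is a POST of a tar'ed build context to the /build endpoint:
curl --unix-socket /var/run/docker.sock -X POST \
  -H "Content-Type: application/x-tar" \
  --data-binary @context.tar \
  "http://localhost/build?t=myimage:latest"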
Also I don't understand what Docker plugins for Jenkins can do for me
The Docker plugin for Jenkins isn't useful for the use case you described. This plugin allows you to provision Jenkins slaves using Docker. You can, for instance, run a compilation inside a Docker container that gets automatically provisioned by Jenkins.
It is neither a best practice nor a bad practice to use Docker with Jenkins. The relationship between Jenkins and Docker is not such that having Docker is inherently good or bad.
Jenkins is a Continuous Integration Server, which is a fancy way of saying "a service that builds stuff at various times, according to predefined rules".
If your end result is a docker image to be distributed, you have Jenkins call your docker build command, collect the output, and report on the success / failure of the docker build command.
If your end result is not a docker image, you have Jenkins call your non-docker build command, collect the output, and report on the success / failure of the non-docker build.
How you launch the build depends on how you would build the product. Makefiles are launched with make, Apache Ant with ant, Apache Maven with mvn package, Docker with docker build, and so on. From Jenkins's perspective, it doesn't matter, as long as you provide a complete set of rules to launch the build, collect the output, and report the success or failure.
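A minimal sketch of what that looks like in a Jenkinsfile (the image name is an example; Jenkins reports success or failure from the step's exit code either way):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Docker end result: build the image.
                sh 'docker build -t myorg/myapp:${BUILD_NUMBER} .'
                // A non-docker product would instead call e.g. make, ant, or mvn package here.
            }
        }
    }
}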
Now, for the 'Docker plugin for Jenkins'. As @yamenk stated, Jenkins uses build slaves to perform the build. That plugin will launch the build slave within a Docker container. The thing built within that container may or may not be a docker image.
Finally, running Jenkins inside a docker container just means you need to bind your Docker-ized Jenkins to the external world, as @yamenk indicates, or you'll have trouble launching builds.
Bind mounting the docker binary into the Jenkins image only works if the Jenkins image is "close enough", i.e. it contains the required shared libraries!
So when using the standard jenkins/jenkins:2.150.1 image on an Ubuntu 18.04 host, this unfortunately does not work. (It looked so nice and slim ;)
So the requirement is to build or find a Docker image that contains a Docker client compatible with the host's Docker service.
Many people seem to just install Docker in their Jenkins image...
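One way to avoid the shared-library problem is to install a static Docker client into the Jenkins image instead of bind mounting the host's binary. A sketch, assuming a Debian-based Jenkins image with curl available; the client version below is an example and should match your host daemon:

FROM jenkins/jenkins:2.150.1
USER root
# Install a static Docker CLI (pick a version compatible with the host daemon).
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-18.09.1.tgz \
    | tar -xz --strip-components=1 -C /usr/local/bin docker/docker
USER jenkins

Depending on your setup, the jenkins user may also need permission on the mounted /var/run/docker.sock (e.g. membership in the group that owns it on the host).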

How to run a shell and docker executor on the same unix host?

I would like to use the same host computer to execute Docker builds using the shell executor, as described in the link below, and normal builds using the docker executor.
I would like to be able to start builds of both types on the same host.
I would like to use the Debian package provided for Ubuntu, installed via apt from the repository.
https://docs.gitlab.com/ce/ci/docker/using_docker_build.html
In other words, if I run a project to build docker containers, the shell executor should run the commands against docker. If I build a source code project, the docker executor should run my build inside a docker container.
Can someone please describe the steps required to achieve such a configuration?
Run gitlab-runner register multiple times. It will always append new configurations to the same /etc/gitlab-runner/config.toml file.
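Concretely, something like this (a sketch; URL, token, descriptions, and tags are placeholders):

# Runner 1: shell executor, for jobs that build docker images.
sudo gitlab-runner register --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token TOKEN \
  --executor shell \
  --description "shell-runner" \
  --tag-list docker-build

# Runner 2: docker executor, for normal source-code builds.
sudo gitlab-runner register --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token TOKEN \
  --executor docker \
  --docker-image ubuntu:20.04 \
  --description "docker-runner" \
  --tag-list source-build

Jobs then select the appropriate runner with tags: in .gitlab-ci.yml.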

Docker pipeline's "inside" not working in Jenkins slave running within Docker container

I'm having issues getting a Jenkins pipeline script to work that uses the Docker Pipeline plugin to run parts of the build within a Docker container. Both Jenkins server and slave run within Docker containers themselves.
Setup
Jenkins server running in a Docker container
Jenkins slave based on custom image (https://github.com/simulogics/protokube-jenkins-slave) running in a Docker container as well
Docker daemon container based on docker:1.12-dind image
Slave started like so: docker run --link=docker-daemon:docker --link=jenkins:master -d --name protokube-jenkins-slave -e EXTRA_PARAMS="-username xxx -password xxx -labels docker" simulogics/protokube-jenkins-slave
Basic Docker operations (pull, build and push images) are working just fine with this setup.
(Non-)Goals
I want the server to not have to know about Docker at all. This should be a characteristic of the slave/node.
I do not need dynamic allocation of slaves or ephemeral slaves. One slave started manually is quite enough for my purposes.
Ideally, I want to move away from my custom Docker image for the slave and instead use the inside function provided by the Docker pipeline plugin within a generic Docker slave.
Problem
This is a representative build step that's causing the issue:
image.inside {
    stage('Install Ruby Dependencies') {
        sh "bundle install"
    }
}
This would cause an error like this in the log:
sh: 1: cannot create /workspace/repo_branch-K5EM5XEVEIPSV2SZZUR337V7FG4BZXHD4VORYFYISRWIO3N6U67Q@tmp/durable-98bb4c3d/pid: Directory nonexistent
Previously, this warning would show:
71f4de289962-5790bfcc seems to be running inside container 71f4de28996233340c2aed4212248f1e73281f1cd7282a54a36ceeac8c65ec0a
but /workspace/repo_branch-K5EM5XEVEIPSV2SZZUR337V7FG4BZXHD4VORYFYISRWIO3N6U67Q could not be found among []
Interestingly enough, exactly this problem is described in the CloudBees documentation for the plugin here https://go.cloudbees.com/docs/cloudbees-documentation/cje-user-guide/index.html#docker-workflow-sect-inside:
For inside to work, the Docker server and the Jenkins agent must use the same filesystem, so that the workspace can be mounted. The easiest way to ensure this is for the Docker server to be running on localhost (the same computer as the agent). Currently neither the Jenkins plugin nor the Docker CLI will automatically detect the case that the server is running remotely; a typical symptom would be errors from nested sh commands such as
cannot create /…@tmp/durable-…/pid: Directory nonexistent
or negative exit codes.
When Jenkins can detect that the agent is itself running inside a Docker container, it will automatically pass the --volumes-from argument to the inside container, ensuring that it can share a workspace with the agent.
Unfortunately, the detection described in the last paragraph doesn't seem to work.
Question
Since both my server and slave are running in Docker containers, what kind of volume mapping do I have to use to make it work?
I've seen variations of this issue, also with the agents powered by the kubernetes-plugin.
I think that for it to work, the agent/jnlp container needs to share the workspace with the build container.
By build container I am referring to the one that will run the bundle install command.
This could possibly work via withArgs.
The question is why would you want to do that? Most of the pipeline steps are being executed on master anyway and the actual build will run in the build container. What is the purpose of also using an agent?
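For reference, the Docker Pipeline plugin's inside step accepts extra docker run arguments, so the workspace volume can be shared explicitly. A sketch, assuming the agent container is named protokube-jenkins-slave as above:

image.inside('--volumes-from protokube-jenkins-slave') {
    stage('Install Ruby Dependencies') {
        sh 'bundle install'
    }
}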
