How to set up multibranch CI with Jenkins and docker-compose

I have configured a CI pipeline with Jenkins for an app stored in Git and deployed with docker-compose.
The Jenkins agent is a Docker container that can run other containers, including the docker-compose deployment.
To summarize the Jenkins CI:
a push is made to a specific branch of the Git project
the Jenkins multibranch job is triggered automatically
the Docker images of the app are built
docker-compose is run to start the app
the app tests are run
docker-compose is stopped
How the tests work:
In the docker-compose file, there is a specific container that runs the Python tests.
It can communicate with the other containers by container_name, because all containers are in the same Docker network.
Problems:
We want to run the tests on every push and on every branch.
When two developers work on the app on two different branches, Jenkins triggers two jobs in the multibranch job, one for each branch. The problem is that both Jenkins jobs try to deploy the same docker-compose stack at the same time, which fails because of container name conflicts.
Questions:
I would like to know the best practices for CI when you want to run tests after a push to any branch of a Git project,
because with the current method, the risk of broken jobs is very high.

I would like to know the best practices for CI when you want to run tests after a push to any branch of a Git project.
Use Docker-in-Docker: run your docker-compose stack against a dockerd running inside a dedicated container spawned for that specific CI build.
https://www.jenkins.io/doc/book/pipeline/docker/
https://hub.docker.com/layers/docker/library/docker/dind/images/sha256-34c594c12507d24f66789978a90240d4addf3eb8fce22a7a9157ca9701afd46d?context=explore
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
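A minimal sketch of that idea as shell steps inside the Jenkins build, assuming the agent container can start privileged containers and shares a Docker network with them; the ci-dind name and the "tests" compose service are illustrative, not from the original setup:

# Start an isolated Docker daemon (docker:dind) just for this build.
# DOCKER_TLS_CERTDIR="" makes it listen on plain TCP port 2375.
# BUILD_TAG is Jenkins' built-in per-build identifier, so parallel branch builds get distinct names.
docker run -d --privileged --name "ci-dind-${BUILD_TAG}" -e DOCKER_TLS_CERTDIR="" docker:dind
# Point the docker/docker-compose CLIs at that inner daemon.
DIND_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' "ci-dind-${BUILD_TAG}")
export DOCKER_HOST="tcp://${DIND_IP}:2375"
until docker info >/dev/null 2>&1; do sleep 1; done   # wait for the inner daemon to be ready
# The compose stack now lives inside the per-build daemon, so two branches building
# in parallel can no longer collide on container names.
docker-compose build
docker-compose up -d
docker-compose run --rm tests        # or: docker exec <test-container> <test-command>
docker-compose down
# Tear down the per-build daemon.
unset DOCKER_HOST
docker rm -f "ci-dind-${BUILD_TAG}"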

Related

How to create a docker container inside docker in the GitLab CI/CD pipeline?

Since I do not have much experience with DevOps yet, I am struggling to find an answer to the following question:
I am setting up the CI/CD pipeline for my project (Python, FastAPI, Redis), which will have test and build stages. It can be described as follows:
Before the stages: install all dependencies (install Python, copy files for testing, etc.)
The test stage uses docker-compose to run the Redis server, which is necessary to launch the application for testing (unit tests).
The build stage creates a new Docker image and pushes it to Docker Hub if there is a new GitLab tag.
The GitLab Runner is located on an AWS EC2 instance; the runner executor is "docker" with an "ubuntu:20.04" image. So, the question:
How do I run "docker-compose"/"docker build" inside the docker executor, and can it be done at all without any negative consequences?
I have thought about several options:
Switch from the docker executor to something else (maybe shell or docker+ssh).
Use Docker-in-Docker, but I have seen cautions that it can be dangerous, and I am not sure exactly why that would apply in my case.
What I've tried:
Using Redis as a "services" entry in the GitLab job instead of the docker-compose file, but I can't find a way to bind my application (host and port) to a server that runs inside the docker executor as a service.
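For the "services" approach, a hedged sketch of how the binding is usually done, assuming a services entry with a Redis image is declared in .gitlab-ci.yml; the variable names and the pytest invocation are illustrative:

# A GitLab CI service is reachable from the job container under a hostname
# derived from its image name (here "redis"), so the app only needs its
# Redis host/port pointed there.
export REDIS_HOST=redis      # illustrative setting names, use whatever your app reads
export REDIS_PORT=6379
pip install -r requirements.txt
pytest tests/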

Cache for docker build in gitlab-ci

I want to build Docker images in a CI task,
with the same configuration as in
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html.
Separate launches of CI tasks don't share the Docker build cache, so each CI run is very slow.
How should I configure the CI workers and volumes so that the Docker build cache is shared between CI tasks from different commits?
GitLab offers a cache-sharing mechanism which you can use to share the Docker build cache (usually /var/lib/docker) between unrelated pipeline runs.
While this sounds straightforward and easy, you may need to adjust your runner configuration depending on how exactly your runners are set up.
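As a hedged alternative that avoids sharing /var/lib/docker at all, a job can pull the previously pushed image and pass it to --cache-from; the tags below are illustrative GitLab-provided variables, and with BuildKit the earlier build must have been made with inline cache metadata (BUILDKIT_INLINE_CACHE=1):

# Reuse layers from the last pushed image as a build cache.
docker pull "$CI_REGISTRY_IMAGE:latest" || true
docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" -t "$CI_REGISTRY_IMAGE:latest" .
docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
docker push "$CI_REGISTRY_IMAGE:latest"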

What is the difference between Docker plugin and Docker with Jenkins pipeline

I am new to Jenkins. I want a way to integrate Jenkins and Docker. What is the difference between the Docker Jenkins plugin and a Jenkins pipeline with Docker?
I have read both this
https://wiki.jenkins.io/plugins/servlet/mobile?contentId=71434989#content/view/71434989
and this
https://jenkins.io/doc/book/pipeline/docker/
I feel like both approaches do the same thing, running Jenkins slaves/nodes in a Docker container, but I am not sure.
Thanks
Update
I got this answer from a Reddit post:
The first link is about using docker commands in your Jenkins job to build your software. For example, your tools are inside Docker containers and you want to run docker run -it maven:latest build against your code. It is normally a single step in the build job.
The second link is about running a Jenkins agent as a Docker container and running tools inside that container against your code. Here you run a Jenkins agent that gets the job definition from the Jenkins master and then executes the job's steps, i.e. more than one step, all while being containerized.
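A hedged illustration of the first approach (a single tool-in-a-container step); the mount path and the Maven goal are illustrative:

# Run the build tool from a container against the checked-out workspace,
# as one step of the Jenkins job.
docker run --rm -v "$PWD":/workspace -w /workspace maven:latest mvn -B package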

Is git pull, docker-compose build and docker-compose up -d a good way to deploy a complete solution on an empty machine

Recently, we just finished a web application solution using Docker.
https://github.com/yccheok/celery-hello-world/tree/nginx (The actual solution is hosted in a private repository; this example just gives a quick glance at how our project structure looks.)
We plan to purchase 1 empty Linux machine and deploy on it. We might purchase more machines in the future, but with the current traffic, 1 machine will be sufficient.
My plan for deployment on the single empty machine is:
git pull <from private code repository>
docker-compose build
docker-compose up -d
Since we are going to deploy to multiple machines in the near future, I was wondering: is this a common practice for deploying a Docker application to a fresh, empty machine?
Is there anything we can utilize from https://hub.docker.com/ so that we don't have to perform git pull during the deployment stage?
You don't want to perform git pull on each machine - your intuition is correct.
Instead, you want to use a remote Docker registry (Docker Hub, for example).
So the right flow, each time your source code (git repo) changes, is:
git pull from all relevant repos.
docker-compose build to build all relevant images.
docker-compose push to push all images (diff) to the remote registry.
docker-compose pull on your production machines, to get the latest updated images.
docker-compose up to start all containers.
The first 3 steps should be done on your CI machine (for example, as a Jenkins job), steps 4-5 on your production machines.
EDIT: one thing to consider. I think building via docker-compose is bad. Consider building directly with docker build -f Dockerfile -t repo/image:tag . and in docker-compose just specifying the image name.
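A hedged sketch of that split, with illustrative image names; the compose file on the production side only references the pushed image:

# On the CI machine: build and push with plain docker.
docker build -f Dockerfile -t repo/image:tag .
docker push repo/image:tag
# docker-compose.yml (production side) then just points at the image, e.g.
#   services:
#     app:
#       image: repo/image:tag
# On the production machine: pull and (re)start.
docker-compose pull
docker-compose up -d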
My opinion is that you should not BUILD images on production machines, because the image might be different than you would expect, and you should limit what you do on production machines. With that being said, I would recommend:
updating the code on your local computer (development)
when you push code to Git, using some software to build your images from your push, for example GitLab CI (a continuous integration tool)
GitLab CI will build the image, then it can run some tests on that image, and then deploy it (this built image) to production
On your production machine, just run docker-compose pull && docker-compose up -d and that is it.
I strongly recommend building images on a machine other than the production machines, and using some CI tool to test your images before deploying. For example https://docs.gitlab.com/ce/ci/README.html
Deploying on a fresh machine or the other way around would be fine.
The best way to go about it is to make a private repo on https://hub.docker.com/ and push your images there.
Building and shipping the image
git pull
docker build
docker login
docker push repo/image
Pulling the shipped image and deploying
docker login on the server
docker pull repo/image
docker-compose up -d
Though I would recommend you look at container scheduling using Kubernetes and setting up your CI/CD stack with Jenkins to automate this process; in case something bad happens it can be a lifesaver.

Test Docker cluster in Jenkins

I am having some difficulties configuring Jenkins to run tests on a dockerized application.
First, here is my setup: the project is on Bitbucket, and I have a docker-compose file that runs my application, which is composed of three containers for now (one for Mongo, one for Redis, one for my Node app).
The Bitbucket webhook works well and Jenkins is triggered when I push.
However, what I would like a build to do is:
get the repo where my docker-compose file is, run docker-compose so that my cluster is running, then run "npm test" inside the repo (my tests use Mocha), and finally have Jenkins notified whether the tests passed or not.
If someone could help me get this chain of operations applied by Jenkins, it would be awesome.
The simplest way is to use the Jenkins pipeline plugin or a shell script.
To build the Docker images and run the compose stack you can use the docker-compose command. An important point is that you need to rebuild the Docker images at the compose level (if you only run docker-compose run/up, Jenkins may reuse a previously built image), so you need to run docker-compose build first.
Your Dockerfile should copy all the files of your application.
Then, when your services are ready, you can run a command inside a container with: docker exec {CONTAINER_ID} {COMMAND_TO_RUN_TESTS}.
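Put together, the chain could look roughly like the following shell steps in the Jenkins job; the container name and test command are illustrative:

# Rebuild images so the job does not reuse a stale one.
docker-compose build
# Start the cluster (mongo, redis, node app).
docker-compose up -d
# Run the mocha suite inside the app container; a non-zero exit code
# makes the Jenkins shell step (and therefore the build) fail.
docker exec node-app npm test
# Tear the cluster down.
docker-compose down

In a real pipeline the docker-compose down step is better placed in a cleanup/post stage so it also runs when the tests fail.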
