Cache for docker build in gitlab-ci - docker

I want to build Docker images in a CI job, using the same configuration as
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html.
Runs of the CI job don't share the Docker build cache, so every run is very slow.
How should I configure the CI runners and volumes so that the Docker build cache is shared between CI jobs from different commits?

GitLab offers a cache-sharing mechanism, which you can use to share the Docker build cache (usually /var/lib/docker) between unrelated pipeline runs.
While this sounds straightforward, you may need to configure your runners for it, depending on how exactly they are set up.
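
If your runners can't easily share /var/lib/docker, a common alternative is to seed docker build from the image pushed on the previous run with --cache-from. A rough, hypothetical .gitlab-ci.yml sketch using Docker-in-Docker and GitLab's predefined registry variables (TLS and runner details depend on your setup, see the linked GitLab page):

build:
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"   # as in the GitLab docs; depends on your runner configuration
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:latest" || true   # seed the layer cache from the last pushed image; ignore failure on the first run
    - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" --build-arg BUILDKIT_INLINE_CACHE=1 -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" -t "$CI_REGISTRY_IMAGE:latest" .   # BUILDKIT_INLINE_CACHE embeds cache metadata so --cache-from works under BuildKit
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
    - docker push "$CI_REGISTRY_IMAGE:latest"

With this, a commit that doesn't touch early Dockerfile layers reuses them from the registry instead of rebuilding everything.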

Related

How to setup multibranch CI with jenkins and docker-compose

I have configured a CI pipeline with Jenkins for an app stored in Git and deployed with docker-compose.
The Jenkins agent is a Docker container that can run other containers, including the docker-compose deployment.
To summarize the Jenkins CI flow:
push to a specific branch of the Git project
the Jenkins multibranch job is triggered automatically
build the Docker images of the app
run docker-compose to start the app
run the app's tests
stop docker-compose
How the tests work:
In the docker-compose stack there is a specific container that runs the Python tests.
It can communicate with the other containers by container_name, because all containers are in the same Docker network.
Problems:
We want to run the tests on every push, on every branch.
When two developers work on the app on two different branches, Jenkins triggers two jobs in the multibranch job, one per branch. The problem is that both Jenkins jobs try to deploy the same docker-compose stack at the same time, which fails because of container name conflicts.
Questions:
What are the best practices for CI when you want to run tests after a push on different branches of a Git project?
With the current method, the risk of broken jobs is very high.
Use Docker-in-Docker: run your docker-compose inside a dockerd that runs in a dedicated container spawned for that specific CI job, so every job gets its own isolated Docker daemon and container names cannot collide.
https://www.jenkins.io/doc/book/pipeline/docker/
https://hub.docker.com/layers/docker/library/docker/dind/images/sha256-34c594c12507d24f66789978a90240d4addf3eb8fce22a7a9157ca9701afd46d?context=explore
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
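
As a rough illustration of that idea (all names, the workspace mount and the TLS setting here are assumptions, not something from the question): a docker-compose.ci.yml that gives every Jenkins job its own throwaway Docker daemon, started under a unique compose project name such as the branch name, so identical container names in two branch jobs no longer collide:

services:
  dind:
    image: docker:dind        # throwaway Docker daemon, one per CI job
    privileged: true          # required by dind
    environment:
      DOCKER_TLS_CERTDIR: ""  # plain tcp://:2375 inside this short-lived stack
  runner:
    image: docker:latest      # docker CLI; check that your tag ships the compose plugin, or install it
    depends_on:
      - dind
    environment:
      DOCKER_HOST: tcp://dind:2375   # talk to the sibling dind daemon, not the host daemon
    volumes:
      - ./:/workspace
    working_dir: /workspace
    # "tests" is the application's test service; the name is an assumption
    command: sh -c "docker compose up -d && docker compose run --rm tests; docker compose down -v"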

How to create a docker container inside docker in the GitLab CI/CD pipeline?

Since I do not have much experience with DevOps yet, I am struggling to find an answer to the following question:
I'm setting up the CI/CD pipeline for my project (Python, FastAPI, Redis), which will have test and build stages. It can be described as follows:
Before the stages: install all dependencies (install Python, copy files for testing, etc.)
The test stage uses docker-compose to run the Redis server, which is
necessary to launch the application for testing (unit tests).
The build stage builds a new Docker image
and pushes it to Docker Hub if there is a new GitLab tag.
The GitLab Runner is located on an AWS EC2 instance; the runner executor is "docker" with an "ubuntu:20.04" image. So, the question:
How can I run docker-compose / docker build inside the Docker executor, and can this be done at all without any negative consequences?
I have thought about several options:
Switch from the Docker executor to something else (maybe shell or docker+ssh).
Use Docker-in-Docker, but I have seen warnings that it can be dangerous, and I'm not sure exactly why that would apply in my case.
What I've tried:
Using Redis as a "services" entry in the GitLab job instead of the docker-compose file, but I can't find a way to point my application (host and port) at a server that runs alongside the Docker executor as a service.
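
Regarding the "services" attempt: with the Docker executor, a service is reachable from the job container under its alias as a hostname, so the application has to take the Redis host from configuration rather than localhost. A minimal sketch, assuming the app reads REDIS_HOST / REDIS_PORT (the variable names are made up):

test:
  image: python:3.10
  services:
    - name: redis:6
      alias: redis            # reachable from the job container as hostname "redis"
  variables:
    REDIS_HOST: redis         # assumed: the FastAPI app reads REDIS_HOST / REDIS_PORT
    REDIS_PORT: "6379"
  script:
    - pip install -r requirements.txt
    - pytest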

Automate docker pull commit and update kubernetes

I have Docker images hosted externally on Docker Hub which get updated every week.
Currently I do:
docker pull
update some config files inside the container
docker commit
docker push
Then I manually change the image name in the Kubernetes deployment YAML file.
What is the best practice to automate this? Can it be initiated from Kubernetes?
K8s doesn't support such functionality (yet, at least!),
but you can use GitOps tools like Flux to automate this procedure.
You could also use scheduled Kubernetes jobs combined with bash or Python scripts to automate the task.
It's also worth checking out this post:
Auto Update Container Image When New Build Released on Kubernetes
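
As a very rough sketch of the "scheduled job plus script" option (all names are placeholders; it assumes a ServiceAccount with RBAC permission to patch Deployments, and that the rebuilt image has already been pushed under a predictable tag):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: weekly-image-update
spec:
  schedule: "0 3 * * 1"                  # every Monday at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: image-updater    # needs RBAC rights to patch the Deployment (not shown)
          restartPolicy: Never
          containers:
            - name: bump-image
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                # roll the Deployment to the freshly pushed weekly tag; the tag scheme is an assumption
                - kubectl set image deployment/my-app my-app=myrepo/my-image:$(date +%G-%V)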

How to build and test application inside a Docker in GitLab CI?

Our application has a Dockerfile that describes a custom image we'd like to use to build and test the application.
Basically, for every git push we want to:
Build an image from the Dockerfile.
Run a container based on this image.
Run the build and tests in that container.
Get the test results back to GitLab.
While it seems to be absolutely doable with GitLab CI's shell executor, I'm wondering if there is a recommended way to do such a thing.
Also, does this plan sound appropriate for the GitLab CI + Docker combination?
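
It is also doable with the Docker executor plus Docker-in-Docker, not only the shell executor. A minimal, untested .gitlab-ci.yml sketch; the image name, test command and report path are placeholders:

build_and_test:
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"       # see the GitLab docker-build docs for the runner side
  script:
    - docker build -t app-under-test:$CI_COMMIT_SHA .                    # 1. build the image from the Dockerfile
    - docker run --name tests app-under-test:$CI_COMMIT_SHA make test    # 2.+3. run the build/tests inside a container (command is a placeholder)
    - docker cp tests:/app/report.xml report.xml                         # 4. copy the test report out of the container (path is an assumption)
  artifacts:
    when: always
    reports:
      junit: report.xml                # GitLab picks the test results up from this JUnit file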

Is git pull, docker-compose build and docker-compose up -d a good way to deploy complete solution on an empty machine

We recently finished a web application solution using Docker.
https://github.com/yccheok/celery-hello-world/tree/nginx (The actual solution is hosted in a private repository; this example just gives a quick glance at how our project structure looks.)
We plan to purchase one empty Linux machine and deploy on it. We might purchase more machines in the future, but with the current traffic, one machine will be sufficient.
My plan for deployment on the single empty machine is
git pull <from private code repository>
docker-compose build
docker-compose up -d
Since we are going to deploy to multiple machines in the near future, I was wondering: is this a common practice for deploying a Docker application onto a fresh, empty machine?
Is there anything we can utilize from https://hub.docker.com/ so that we don't need to perform git pull during the deployment stage?
You don't want to perform git pull on each machine - your intuition is correct.
Instead, you want to use a remote Docker registry (such as Docker Hub).
So the right flow, each time your source code (Git repo) changes, is:
git pull from all relevant repos.
docker-compose build to build all relevant images.
docker-compose push to push all images (the diff) to the remote registry.
docker-compose pull on your production machines, to get the latest updated images.
docker-compose up to start all containers.
The first three steps should be done on your CI machine (for example, as a Jenkins job), steps 4-5 on your production machines.
EDIT: one thing to consider. I think building via docker-compose is bad. Consider building directly with docker build -f Dockerfile -t repo/image:tag . and, in docker-compose, just specifying the image name.
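
A hypothetical docker-compose.yml fragment for that flow (service and repository names are made up): declaring both build and image means docker-compose build tags the result with the registry name, docker-compose push uploads it, and docker-compose pull / docker-compose up -d on the servers use the prebuilt image without building anything:

version: "3"
services:
  web:
    build: ./web                                      # only used on the CI machine
    image: myrepo/hello-world-web:${TAG:-latest}      # what push/pull/up use everywhere
  worker:
    build: ./worker
    image: myrepo/hello-world-worker:${TAG:-latest}

On the production machines, TAG can be pinned to a specific release through an .env file or an environment variable.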
My opinion is that you should not BUILD images on production machines, because the image might turn out different from what you expect, and you should limit what you do on production machines. With that being said, I would recommend:
updating the code on your local computer (development)
when you push code to Git, using some software to build your images from that push, for example GitLab CI (a continuous integration tool)
GitLab CI will build the image, then it can run some tests on that image, and then deploy it (this built image) to production
On your production machine, just run docker-compose pull && docker-compose up -d and that is it.
I strongly recommend building images on a machine other than the production machines, and using a CI tool to test your images before deploying, for example https://docs.gitlab.com/ce/ci/README.html
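
A hypothetical outline of that pipeline as a .gitlab-ci.yml (it uses GitLab's predefined registry variables and assumes the runner can pull from your registry; the test command is a placeholder). The production machine then only ever runs docker-compose pull && docker-compose up -d:

stages:
  - build
  - test

build_image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

test_image:
  stage: test
  image: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"     # test exactly the image that was just built
  script:
    - make test                                  # placeholder for the project's test command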
Deploying it on a fresh machine or the other way around would be fine.
The best way to go about it is to create a private repo on https://hub.docker.com/ and push your images there.
Building and shipping the image
git pull
docker build
docker login
docker push repo/image
Pulling the shipped image and deploying
docker login on the server
docker pull repo/image
docker-compose up -d
Though I would recommend that you look at container scheduling using Kubernetes and set up your CI/CD stack with Jenkins to automate this process; in case something bad happens, it can be a life saver.
