How to handle Docker login to Gitlab container registry with multiple concurrent jobs?

We use Gitlab CI/CD, building and deploying with runners on locally hosted machines and using the Gitlab container registry to store our Docker images. We log in to the Gitlab registry like so:
default:
  before_script:
    - "docker login -u \"${CI_REGISTRY_USER}\" -p \"${CI_REGISTRY_PASSWORD}\" \"${CI_REGISTRY}\""
This works fine with the runners in our deployment environments, which each only run a single job concurrently. However, our build machine is supposed to run multiple build jobs concurrently. The problem is that the ${CI_REGISTRY_PASSWORD} provided by Gitlab for each job is different and, it seems, valid only for that job. Thus, when we have multiple jobs running at once, their calls to docker login overwrite each other, causing other jobs to fail with "authentication required" errors.
Currently, we're working around the problem by performing a new docker login command before every docker push or docker pull to minimize the chance that another job will perform a login command of its own in between, but there's got to be a better way.
What is the recommended solution for managing Docker registry logins with concurrent jobs?

This sort of thing is best handled with a Deploy Token; it's exactly what they were intended for. A deploy token lets you set a dedicated user, so you can tell that the CI server was the one that pushed the container, and also set an expiration date for security reasons:
https://docs.gitlab.com/ee/user/project/deploy_tokens/
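As a minimal sketch of what the login could look like with a deploy token - DEPLOY_TOKEN_USER and DEPLOY_TOKEN_PASSWORD here are assumed CI/CD variables you define yourself from the token's credentials, not predefined GitLab variables:
default:
  before_script:
    # Hypothetical variables holding the deploy token's username and secret.
    # Because the token is shared and valid until it expires, concurrent jobs on the
    # same machine write identical credentials instead of overwriting each other
    # with per-job passwords.
    - docker login -u "${DEPLOY_TOKEN_USER}" -p "${DEPLOY_TOKEN_PASSWORD}" "${CI_REGISTRY}"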

Related

Use cache docker image for gitlab-ci

I was wondering, is it possible to use cached docker images in the gitlab registry for gitlab-ci?
For example, I want to use the node:16.3.0-alpine docker image. Can I cache it in my gitlab registry and pull it from there to speed up my gitlab ci, instead of pulling it from docker hub?
Yes, GitLab's dependency proxy feature allows you to configure GitLab as a "pull-through cache". This is also beneficial for working around rate limits of upstream sources like dockerhub.
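As a rough sketch, assuming the dependency proxy is enabled for your group, the job image can be pulled through the proxy by prefixing it with the predefined CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX variable:
image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/node:16.3.0-alpine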
It should be faster in most cases to use the dependency proxy, but not necessarily so. It's possible that dockerhub can be more performant than a small self-hosted server, for example. GitLab runners are also remote with respect to the registry and not necessarily any "closer" to the GitLab registry than any other registry over the internet. So, keep that in mind.
As a side note, the absolute fastest way to retrieve cached images is to self-host your GitLab runners and hold the images directly on the host. That way, when a job starts, if the image already exists on the host, the job starts immediately because it does not need to pull the image at all (depending on your pull policy, and assuming you're using the image in the image: declaration for your job).
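For example, a sketch of the relevant runner setting, assuming the Docker executor and access to the runner's config.toml; if-not-present makes the runner reuse a locally cached image instead of pulling on every job:
[[runners]]
  [runners.docker]
    # reuse an image already present on the host instead of pulling each time
    pull_policy = "if-not-present"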
I'm using a corporate Gitlab instance where for some reason the Dependency Proxy feature has been disabled. The other option you have is to create a new Docker image on your local machine, then push it into the Container Registry of your personal Gitlab project.
# Create a one-line Dockerfile containing "FROM node:16.3.0-alpine"
echo "FROM node:16.3.0-alpine" > Dockerfile
docker pull node:16.3.0-alpine
docker build . -t registry.example.com/group/project/image
docker login registry.example.com -u <username> -p <token>
docker push registry.example.com/group/project/image
where the image tag should be constructed based on the example given on your project's private Container Registry page.
Now, in your CI job, you just change image: node:16.3.0-alpine to image: registry.example.com/group/project/image. You may have to run the docker login command (using a deploy token for credentials, see Settings -> Repository) in the before_script section. I think newer versions of Gitlab may have the runner authenticate to the private Container Registry using system credentials, but that could vary depending on how it's configured.
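A minimal sketch of the resulting job, assuming the image was pushed as above (the job name and script are placeholders):
build-job:
  image: registry.example.com/group/project/image
  script:
    - node --version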

Docker compose deployment

I have a question about docker compose. I am new to docker and I can't figure out the "right" flow for deployment.
Let's assume we have a "Dockerfile" which contains the steps to build an image from project source files.
And we have a "docker-compose.yml" which actually builds this "Dockerfile" along with 2 more services.
It is not important here, but let's say they are nginx, webapi (the actual project) and mongodb.
So, if I run "docker compose up" on my machine, it will create 3 images (webapi, nginx, mongodb) and run them. Everything is perfect here.
The question is, what do I need to do to get it deployed to production? What I have tried:
I can check out the git repo on the production server and run "docker compose up", and it will work. But I think this is not the way to go - using the production server to build projects seems silly.
I can run "docker compose build" locally, get 3 images, push them to a docker repository, go to production, download the images from the repository and start them one by one. In this case I don't see the point of "docker compose" at all; I lose the ability to easily define volumes and relationships between services, which I can do with docker compose. It would also require a lot of manual activity, or some custom scripts to automate it.
It seems like there is a way to use "docker machine" to connect to a remote server and run "docker compose up", but I was not able to make it work. For some reason it was not possible to connect from Windows to a remote docker on Linux.
Before going further with that option I need to understand/confirm: in the case of a remote docker and "docker compose up", where will the build happen? And if I have a few volumes defined in "docker-compose.yml", are they going to be created on the local machine or on the remote one?
For my project I went with an option that resembles your second proposal, but a bit more automated. The CI does the docker build of webapi, as this is the only part of my system that is actually built from sources. The CI also does docker push to my private repository. The next step is running docker-compose up on production. The compose file is not building webapi, it is only configuring it, so rather than using a build section it uses image. Docker compose also configures the other services that are required (nginx, mongo) and the networks for them to communicate. Even if you have custom image creation for the other services, you do not require a full dev environment to create them. For full automation you can use docker machine to execute it remotely. Note that docker will not update images that are already downloaded when docker-compose up runs; you need to docker pull them first.
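A minimal sketch of what such a production compose file could look like; the image names, port, and volume name here are placeholders, not taken from the question:
services:
  webapi:
    image: registry.example.com/group/webapi:latest   # prebuilt and pushed by CI, no build: section
    depends_on:
      - mongo
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - webapi
  mongo:
    image: mongo:6
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data:
On the production host, docker-compose pull && docker-compose up -d then refreshes the stack to the latest pushed images.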

CD with GitLab, docker and docker private registry

We need to automate the process of deployment. Let me point out the stack we use.
We have our own GitLab CE instance and a private docker registry. On the production server, the application runs in a container. After every master commit, GitLab CI builds the image with the code in it, sends it to the docker registry, and this is where the automation ends.
Deployment on the production server could be performed in a few steps - stopping the current application container, pulling the newer one, and running it.
What is the best way to automate this process?
I read about a couple of solutions (but I believe there is much more)
the docker private registry notifies the production server, which then performs all the above steps itself (a script on the production machine managed by e.g. supervisor or something similar)
using docker machine to remotely manage running containers
What is the preferred way? Or you can recommend something else?
No need to use tools like swarm, kubernetes, etc. It's quite a simple application. Thanks in advance.
How about installing a Gitlab CI runner on your production machine? Then add a job, called deploy, that runs after the push to the registry on master, and pin it to that machine using Gitlab CI tags.
The job simply pulls the image from the registry and restarts your service or whatever you have in place.
Something like:
deploy-job:
  stage: deploy
  tags:
    - production
  script:
    - docker login myprivateregistry.com -u $SECRET_USER -p $SECRET_PASS
    - docker pull $CI_REGISTRY_IMAGE:latest
    - docker-compose down
    - docker-compose up -d
I can think of four solutions:
use watchtower on the production server https://github.com/v2tec/watchtower
run a webhook server which is requested by your CI after pushing the image to the registry (see the sketch below) https://github.com/adnanh/webhook
as already mentioned, run the CI on production too, which finally triggers your update commands
enable the docker api and update the container by requesting it from the CI
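As a rough sketch of the webhook option (the hook id, paths, and host name are made up for illustration), a hooks.json for adnanh/webhook on the production server could look like:
[
  {
    "id": "redeploy-app",
    "execute-command": "/opt/deploy/redeploy.sh",
    "command-working-directory": "/opt/deploy"
  }
]
The redeploy script would contain the docker pull / docker-compose up -d commands, and the CI job would trigger it after pushing the image, e.g. with curl -X POST http://production-host:9000/hooks/redeploy-app (9000 being webhook's default port).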

Is git pull, docker-compose build and docker-compose up -d a good way to deploy a complete solution on an empty machine?

Recently, we finished a web application solution using Docker.
https://github.com/yccheok/celery-hello-world/tree/nginx (The actual solution is hosted in a private repository. This example is just a quick glance at how our project structure looks.)
We plan to purchase 1 empty Linux machine and deploy on it. We might purchase more machines in the future, but with the current traffic, 1 machine will be sufficient.
My plan for deployment on the single empty machine is
git pull <from private code repository>
docker-compose build
docker-compose up -d
Since we are going to deploy to multiple machines in the near future, I was wondering: is it common practice to deploy a docker application onto a fresh, empty machine this way?
Is there anything we can utilize from https://hub.docker.com/, without requiring us to perform git pull during the deployment stage?
You don't want to perform git pull on each machine - your intuition is correct.
Instead, you want to use a remote docker registry (such as docker hub).
So the right flow, each time your source code (git repo) is changed:
git pull from all relevant repos.
docker-compose build to build all relevant images.
docker-compose push to push all images (diff) to remote registry.
docker-compose pull on your production machines, to get the latest updated images.
docker-compose up to start all containers.
The first 3 steps should be done on your CI machine (for example, as a jenkins job). Steps 4-5 on your production machines.
EDIT: one thing to consider. I think building via docker-compose is bad practice. Consider building directly with docker build -f Dockerfile -t repo/image:tag . and, in docker-compose, just specify the image name.
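A small sketch of that split; repo/image and the tag are placeholders:
# build and push directly with docker; the compose file only references the result
docker build -f Dockerfile -t repo/image:1.2.3 .
docker push repo/image:1.2.3
The corresponding service in docker-compose.yml then declares image: repo/image:1.2.3 and no build: section.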
My opinion is you should not BUILD images on production machines, because the image might be different than you would expect, and you should limit what you do on production machines. With that being said, I would recommend:
updating the code on your local computer (development)
when you push code to git, use some software to build your images from your push, for example Gitlab-CI (a continuous integration tool)
gitlab-ci will build the image, then it can run some tests on that image, and then deploy it to production (this built image)
on your production machine just do docker-compose pull && docker-compose up -d and that is it.
I strongly recommend building images on a machine other than the production machines, and using some CI tool to test your images before deploying. For example https://docs.gitlab.com/ce/ci/README.html
Deploying it on a fresh machine or an already-provisioned one would be fine either way.
The best way to go about it is to make a private repo on https://hub.docker.com/ and push your images there.
Building and shipping the image
git pull
docker build -t repo/image .
docker login
docker push repo/image
Pulling the shipped image and deploying
docker login    # on the server
docker pull repo/image
docker-compose up -d
Though I would recommend you look at container scheduling using kubernetes and setting up your CI/CD stack with jenkins to automate this process; in case something bad happens, it can be a life saver.

Gitlab Continuous Integration on Docker

I have a Gitlab server running in a Docker container: gitlab docker
On Gitlab there is a project with a simple Makefile that runs pdflatex to build a pdf file.
On the Docker container I installed texlive and make, I also installed docker runner, command:
curl -sSL https://get.docker.com/ | sh
the .gitlab-ci.yml looks like follow:
.build:
script: &build_script
- make
build:
stage: test
tags:
- Documentation Build
script: *build
The job is stuck running and a message is shown:
This build is stuck, because the project doesn't have any runners online assigned to it
Any idea?
The top comment on your link is spot on:
"Gitlab is good, but this container is absolutely bonkers."
Secondly, looking at Gitlab's own advice, you should not be using this container on Windows, ever.
If you want to use Gitlab-CI from a Gitlab server, you should actually be installing a proper Gitlab server instance on a properly supported Linux VM, with Omnibus, and should not attempt to use this container for a purpose it is manifestly unfit for: a real production way to run Gitlab.
Gitlab-omnibus contains:
a persistent (not stateless!) data tier powered by postgres.
a chat server whose entire point in existing is to be a persistent log of your team chat.
not one, but a series of server processes that work together to give you gitlab server functionality and a web admin/management frontend, in a design that does not seem ideal to me to run in production inside docker.
an integrated CI build manager that is itself a Docker container manager. Your docker instance is going to contain a cache of other docker instances.
That this container was built by Gitlab itself is no indication that you should actually use it for anything other than as a test/toy, or for what Gitlab themselves actually use it for, which is probably to let people spin up Gitlab nightly builds, probably via kubernetes.
I think you're slightly confused here. Judging by this comment:
On the Docker container I installed texlive and make, I also installed
docker runner, command:
curl -sSL https://get.docker.com/ | sh
It seems you've installed docker inside docker and not actually installed any runners? This won't work if that's the case. The steps to get this running are:
Deploy a new gitlab runner. The quickest way to do this is to deploy another docker container with the gitlab-runner docker image; you can't run a runner inside the docker container you've deployed gitlab in. You'll need to make sure you select an executor (I suggest using the shell executor to get you started), and then you need to register the runner. There is more information about how to do this here. What isn't detailed there is that if you're using docker for gitlab and docker for gitlab-runner, you'll need to link the containers or set up a docker network so they can communicate with each other (see the sketch after these steps).
Once you've deployed and registered the runner with gitlab, you will see it appear in http(s)://your-gitlab-server/admin/runners - from here you'll need to assign it to a project. You can also mark it as a "Shared" runner, which will execute jobs from all projects.
Finally, add the .gitlab-ci.yml as you already have, and the build will work as expected.
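As a rough sketch of step 1, assuming both GitLab and the runner run as containers on the same host; the network name and volume path are placeholders, and the registration token comes from your GitLab runners page:
# shared network so the runner can reach the gitlab container by name
docker network create gitlab-net

# start the runner container with persistent config
docker run -d --name gitlab-runner --restart always \
  --network gitlab-net \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  gitlab/gitlab-runner:latest

# register the runner interactively (prompts for GitLab URL, registration token, executor, tags)
docker exec -it gitlab-runner gitlab-runner register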
Maybe you've set the wrong tags, like me. Make sure the tag name matches one of your available runners.
tags:
  - Documentation Build # tags is used to select specific Runners from the list of all Runners that are allowed to run this project
see: https://docs.gitlab.com/ee/ci/yaml/#tags
