Cannot pull images using an authenticated user in GitLab CI - docker

I'm trying to mitigate the Docker Hub pull limit by logging in to a Docker Hub account on my gitlab-runner. I'm not using a method like GitLab's Dependency Proxy because I would have to edit hundreds of files, so I decided to log in to Docker in the gitlab-runner itself.
.gitlab-ci.yml:
image: docker
services:
  - docker:dind
stages:
  - base
docker-build:
  stage: base
  tags:
    - experimental
  script:
    - docker build -t grex:alpine_${CI_PIPELINE_ID} ./alpine
    - docker info
The alpine folder contains a Dockerfile containing just FROM alpine.
The config.toml of the gitlab-runner has the line pre_build_script = "docker login -u grex -p <password>"
The docker info output confirms that my user is logged in.
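For reference, the relevant part of the runner configuration looks roughly like this (a sketch; the executor and privileged settings are assumptions about my setup):
[[runners]]
  executor = "docker"
  # runs in the build container before each job's script:
  pre_build_script = "docker login -u grex -p <password>"
  [runners.docker]
    privileged = true # assumed; required for the docker:dind service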
I followed all of the options from the docs, but to no avail. After each pipeline run I checked the current rate limit for my user, and it remained unchanged, leading me to infer that the pipeline made an unauthenticated docker pull. Any help is appreciated!

After some experimentation, it seems GitLab caches images, which is why the number of pulled images did not change.

Related

GitLab CI docker-in-docker login and Testcontainers

I have a project that needs Testcontainers running to execute end-to-end tests.
The container's image comes from another project, whose Docker image is pushed to GitLab's Container Registry. This means that whenever I want to docker pull this image, I need to do a docker login first.
Locally it works fine: I just log in, run my tests, and everything is OK. On the pipeline it's another story.
In GitLab's documentation, the pipeline configuration file .gitlab-ci.yml uses image: docker:19.03.12. The problem with that is that I need to run ./gradlew, and that image doesn't have Java. Conversely, if I set the image to image: gradle:jdk14, even with Docker-in-Docker set up, running docker login fails because docker is not recognized as a command.
I tried creating a custom image with Docker and Java 14, but I still get the following error:
com.github.dockerjava.api.exception.NotFoundException: {"message":"pull access denied for registry.gitlab.com/projects/projecta, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"}
As you can see in the gitlab-ci file, it runs docker login before executing the tests, and according to the pipeline's output the login is successful.
.gitlab-ci.yml
image: gradle:jdk14
variables:
  GRADLE_OPTS: "-Dorg.gradle.daemon=false"
stages:
  - build
  - test
before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle
assemble:
  stage: build
  script:
    - ./gradlew assemble
  only:
    changes:
      - "**/*.gradle.kts"
      - gradle.properties
  cache:
    key: $CI_PROJECT_NAME
    paths:
      - .gradle/wrapper
      - .gradle/caches
    policy: push
cache:
  key: $CI_PROJECT_NAME
  paths:
    - .gradle/wrapper
    - .gradle/caches
  policy: pull
test:
  stage: test
  image: registry.gitlab.com/project/docker-jdk14:latest # <-- my custom image
  dependencies:
    - checkstyle
  services:
    - docker:dind
  variables:
    DOCKER_HOST: "tcp://docker:2375"
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - ./gradlew test
I have the feeling that I'm missing something, but so far the only explanation I can come up with is that the docker login the pipeline executes doesn't pass the credentials to the inner Docker instance.
Is there any way to run the login against the inner instance instead of the outer one?
I thought about doing the login call inside the test, but that would be my last option.
If I'm reading your question correctly, you're trying to run CI for project gitlab.com/projects/projectb, which uses an image built in project gitlab.com/projects/projecta during tests.
You're attempting to pull the image registry.gitlab.com/projects/projecta using the username and password from the predefined variables $CI_DEPLOY_USER and $CI_DEPLOY_PASSWORD.
It doesn't work because that user only has permission to access gitlab.com/projects/projectb. What you need to do is create a deploy token for project gitlab.com/projects/projecta with permission to read the registry, supply it to your CI in gitlab.com/projects/projectb via custom variables, and use those to log in to $CI_REGISTRY.
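A minimal sketch of the resulting test job, assuming the deploy token is stored in custom CI/CD variables named PROJECTA_DEPLOY_USER and PROJECTA_DEPLOY_TOKEN (both names are placeholders):
test:
  stage: test
  services:
    - docker:dind
  script:
    # log in with projecta's deploy token instead of projectb's CI_DEPLOY_USER
    - docker login -u $PROJECTA_DEPLOY_USER -p $PROJECTA_DEPLOY_TOKEN $CI_REGISTRY
    - ./gradlew test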

GitLab Docker Runner to reuse installed software layers

A very typical scenario with GitLab CI is to install a few packages you need for your jobs (linters, code coverage tools, deployment-specific helpers and so on) and then run the actual stages/steps of building, testing and deploying your software.
The Docker runner is a very neat and clean solution, but it seems very wasteful to always run the steps that install the base software. Normally, Docker is able to cache such layers, but with the way the GitLab Docker runner works, that doesn't happen.
We do realize that setting up another project to produce pre-configured Docker images would be one solution, but are there any better ones? Basically, what we want to say is: "If the before section hasn't changed, you can reuse the image from last time; no need to reinstall wget or whatever."
Any solution like that out there?
You can use the registry of your GitLab project, e.g.:
images:
  stage: build
  image: docker
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY # login
    # pull the current image or, in case the image does not exist, do not stop the script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    # build with the pulled image as cache:
    - docker build --pull --cache-from $CI_REGISTRY_IMAGE:latest -t "$CI_REGISTRY_IMAGE:latest" .
    # push the final image:
    - docker push "$CI_REGISTRY_IMAGE:latest"
This way docker build can reuse the work done by the last run of the job. See the docs. You may also want to avoid unnecessary runs with some rules.
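For example, a rules block like this rebuilds the image only when its inputs change (a sketch; the listed paths are assumptions about your project layout):
images:
  stage: build
  rules:
    - changes:
        - Dockerfile
        - requirements.txt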

Keeping docker builds in GitLab CI with docker-compose

I have a repository that includes three parts: frontend, admin and server. Each contains its own Dockerfile.
After building the images I wanted to add a test for admin. My tests pass but take a lot of time, because each stage pulls the base image and builds everything from scratch (about 8 minutes per stage). This is my .gitlab-ci.yml:
image: tmaier/docker-compose
services:
  - docker:dind
stages:
  - build
  - test
build:
  stage: build
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker-compose build
    - docker-compose push
test:admin:
  stage: test
  script:
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml up admin
I am not quite sure whether I need to push/pull images between stages or whether I should do that with artifacts/cache/whatever. As I understand it, I only need to push/pull if I want to deploy my images to another server. I also added a docker-compose push, which runs through, but GitLab doesn't show me any images in my registry.
I have been researching this a lot, but most example code I found covered only a single Docker container and didn't make use of docker-compose.
Any ideas? :)
GitLab currently has no way to share Docker images between stages as artifacts. There has been an outstanding feature request for this for 3 years.
You'll need to push the docker image to the docker registry and pull it in later stages that need it. (Or do everything related to the image in one stage)
Mark, could you show the files docker-compose.yml and docker-compose.test.yml?
Maybe you are pushing and pulling different images. By the way, try placing docker login in a before_script section so that it applies to all jobs.
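A sketch of that pull-before-use approach, assuming the compose files reference images under $CI_REGISTRY_IMAGE:
before_script:
  - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
test:admin:
  stage: test
  script:
    # pull the image built and pushed by the build stage, then run it:
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml pull admin
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml up admin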

Cannot connect to the Docker daemon at unix:///var/run/docker.sock in GitLab CI

I looked at other questions but couldn't find a solution! I am setting up CI in GitLab using GitLab's shared runners. In the build stage I use the docker image as the base image, but when I run a docker command it says:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I looked at this topic but still don't understand what I should do.
.gitlab-ci.yml:
stages:
  - test
  - build
  - deploy
job_1:
  image: python:3.6
  stage: test
  script:
    - sh ./sh_script/install.sh
    - python manage.py test -k
job_2:
  image: docker:stable
  stage: build
  before_script:
    - docker info
  script:
    - docker build -t my-docker-image .
I know that the GitLab runner must be registered to use Docker and share /var/run/docker.sock! But how do I do this when using GitLab's own runners?
Ah, that's my favorite topic: using Docker for GitLab CI. The problem you are experiencing is better known as docker-in-docker.
Before configuring it, you may want to read this brilliant post: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
That will give you some understanding of what the problem is and which solution fits you best. Generally there are two major approaches: actually installing a Docker daemon inside Docker, and sharing the host's daemon with containers. Which approach to choose depends on your needs.
In GitLab you can go several ways; I will just share our experience.
Way 1 - using docker:dind as a service
It is pretty simple to set up. Just add docker:dind as a shared service to your gitlab-ci.yml file and use the docker:latest image for your jobs.
image: docker:latest # this sets the default image for jobs
services:
  - docker:dind
Pros:
- simple to set up
- simple to run: your sources are available to your job in the working directory by default, because they are pulled directly to your Docker runner
Cons: you have to configure a Docker registry for that service, otherwise your Dockerfiles will be built from scratch each time your pipeline starts. For me that is unacceptable, because it can take more than an hour depending on the number of containers you have.
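One way to do that (a sketch; it assumes you run a pull-through registry cache, and the mirror URL is a placeholder) is to point the dind service at a registry mirror:
services:
  - name: docker:dind
    # route image pulls through a local pull-through cache:
    command: ["--registry-mirror", "https://mirror.example.com"]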
Way 2 - sharing /var/run/docker.sock of host docker daemon
We set up our own Docker executor with a Docker daemon and shared the socket by adding it to the /etc/gitlab-runner/config.toml file. That made our machine's Docker daemon available to the Docker CLI inside containers. Note: you DON'T need privileged mode for the executor in this case.
After that we can use both docker and docker-compose in our custom Docker images. Moreover, we don't need a special Docker registry, because in this case the executor's image cache is shared among all containers.
Cons:
- You need to somehow pass the sources to your containers in this case, because they are mounted only into the Docker executor, not into the containers launched from it. We settled on cloning them with a command like git clone $CI_REPOSITORY_URL --branch $CI_COMMIT_REF_NAME --single-branch /project
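For reference, the socket sharing described above boils down to a volumes entry in /etc/gitlab-runner/config.toml (a sketch; the other settings are omitted):
[[runners]]
  executor = "docker"
  [runners.docker]
    # expose the host's Docker daemon to job containers:
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]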

Docker-in-Docker with GitLab shared runners for building and pushing Docker images to the registry

I've been trying to set up GitLab CI to build a Docker image, and found that DinD was initially enabled only for separate runners, while a blog post suggested it would soon be enabled for shared runners.
Running DinD requires enabling privileged mode on the runner, which is set as a flag while registering the runner, but I couldn't find an equivalent mechanism for shared runners.
The shared runners are now capable of building Docker images. Here is the job that you can use:
stages:
  - build
  - test
  - deploy
# ...
# other jobs here
# ...
docker:image:
  stage: deploy
  image: docker:1.11
  services:
    - docker:dind
  script:
    - docker version
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    # push only for tags
    - "[[ -z $CI_BUILD_TAG ]] && exit 0"
    - docker tag $CI_REGISTRY_IMAGE:latest $CI_REGISTRY_IMAGE:$CI_BUILD_TAG
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $CI_REGISTRY_IMAGE:$CI_BUILD_TAG
This job assumes that you are using the Container Registry provided by GitLab. It pushes the image only when the build commit is tagged with a version number.
Documentation for Predefined variables.
Note that you will need to cache, or generate as temporary artifacts, any dependencies for your service that are not committed in the repository. This is supposed to be done in other jobs; e.g. node_modules is generally not committed to the repository and must be cached from the build/test stage.
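A sketch of such a caching setup, assuming a Node.js service whose dependencies are installed with npm (the job name and commands are placeholders):
build:dependencies:
  stage: build
  script:
    - npm ci
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - node_modules/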
