gitlab runner using wrong docker image for build container

I set up a gitlab-ci-multi-runner on my VM.
For the build process I provide a docker container with all the dependencies.
But when I run it, the runner uses a different (wrong) docker image for its build container.
The messages look like this:

Running with gitlab-ci-multi-runner 9.2.1 (f0xxxx4) on runnerVM (f5xxxxf0)
Using Docker executor with image docker.com/xxx/xxx/docker-build:stable ...
Using docker image sha256:fe32xxx...xxxa63c for predefined container...
Pulling docker image docker.com/xxx/xxx/docker-build:stable ...
Using docker image docker.com/xxx/xxx/docker-build:stable ID=sha256:9608xxx...xxxdf09 for build container...
Can someone tell me why the runner uses a different docker image for the build container?
Why is it not taking the predefined container (because that's the right one...)?
Here you can see my .gitlab-ci.yml:
image: docker.com/xxx/xxx/docker-build:stable

before_script:
  - echo "Before script"

after_script:
  - echo "After Script"

stages:
  - build
  - test
  - deploy

build_release:
  stage: build
  script:
    - sudo make all BUILD_TYPE=Release
  only:
    - master
  tags:
    - tag1

build_debug:
  stage: build
  script:
    - sudo make all BUILD_TYPE=Debug
  only:
    - develop
    - runner-test
  tags:
    - tag1
    - tag2

In your .gitlab-ci.yml you are referencing the complete URL of your container; it should, however, be in the format group/container, e.g. library/nginx.
Optionally, you may pin a specific version, e.g. library/nginx:1.13.9.
For more information, see: https://docs.gitlab.com/ce/ci/docker/using_docker_images.html
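A minimal sketch of what the top of the .gitlab-ci.yml might then look like (the group/container path below is an assumption based on the question, not a real image):

image: xxx/docker-build:stable   # group/container:tag, pulled from the default registry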

I assume you are using the docker executor. The gitlab-ci-runner therefore starts a helper container from the gitlab/gitlab-runner-helper image, which isolates the build steps from your VM's docker environment. This is the predefined container.
The stages themselves are performed inside containers of the images you specify per job, or of the image you specify globally for all jobs. This is the build container.
The build container should be created from the image you specify with image at the top of your .gitlab-ci.yml. You can verify this by running
$ docker image ls | grep -E '(fe32|9608)'
on your VM. It shows you the image names and tags of your predefined and build containers.
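The output might look roughly like this (repository names and IDs are illustrative, not real output):

REPOSITORY                        TAG      IMAGE ID
gitlab/gitlab-runner-helper       ...      fe32xxx
docker.com/xxx/xxx/docker-build   stable   9608xxx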

Related

Get a file out of a docker container running using Kubernetes in Gitlab

Gitlab is configured to execute multiple tasks, and each task runs in a separate docker container. There are multiple tasks set up as:

stages:
  - setup
  - package
  - cleanup
  - test
All the containers are created using the command:
script:
  - docker build -t $REGISTRY_URL/$COMPONENT:$CI_COMMIT_REF_SLUG --file $CI_PROJECT_DIR/resources/docker/$COMPONENT/Dockerfile $CI_PROJECT_DIR
  - docker push $CI_REGISTRY/$COMPONENT:$CI_COMMIT_REF_SLUG
After the final stage, some of the $COMPONENT containers create reports which need to be obtained as artifacts. But since they are executed in separate docker containers, I am not aware how they can be fetched from the container.
I could use the lines below only if the path of the artifact is relative to $CI_PROJECT_DIR:
artifacts:
  paths:
    - output/
  expire_in: 1 week
But in my case all the containers are running using Kubernetes in Gitlab, and hence the final test reports of my application reside inside each container. What is the best way to get the documents out of the docker containers?
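One possible sketch (not from the question), assuming the docker CLI used to run the tests is still available in the job and the reports are written to an assumed path /app/reports inside the container: copy them back under $CI_PROJECT_DIR so the artifacts keyword can pick them up.

script:
  - docker run --name test-run $REGISTRY_URL/$COMPONENT:$CI_COMMIT_REF_SLUG   # runs the tests; the container name is hypothetical
  - docker cp test-run:/app/reports $CI_PROJECT_DIR/output                    # /app/reports is an assumed path; docker cp also works on exited containers
artifacts:
  paths:
    - output/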

Cannot connect to the Docker daemon at unix:///var/run/docker.sock in gitlab CI

I looked at other questions but can't find a solution! I am setting up CI in gitlab and use gitlab's shared runners. In the build stage I use a docker image as the base image, but when I use a docker command it says:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I looked at this topic but still don't understand what I should do.
.gitlab-ci.yml :
stages:
  - test
  - build
  - deploy

job_1:
  image: python:3.6
  stage: test
  script:
    - sh ./sh_script/install.sh
    - python manage.py test -k

job_2:
  image: docker:stable
  stage: build
  before_script:
    - docker info
  script:
    - docker build -t my-docker-image .
I know that the gitlab runner must be registered to use docker and share /var/run/docker.sock! But how can I do this when using gitlab's own shared runners?
Ahh, that's my lovely topic - using docker for gitlab ci. The problem you are experiencing is better known as docker-in-docker.
Before configuring it, you may want to read this brilliant post: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
That will give you some understanding of what the problem is and which solution fits you best. Generally there are 2 major approaches: actually installing a docker daemon inside docker, and sharing the host's daemon with the containers. Which approach to choose depends on your needs.
In gitlab you can go several ways; I will just share our experience.
Way 1 - using docker:dind as a service.
It is pretty simple to set up. Just add docker:dind as a shared service in your gitlab-ci.yml file and use the docker:latest image for your jobs.
image: docker:latest   # this sets the default image for jobs

services:
  - docker:dind
Pros:
- simple to set up
- simple to run - your source code is available by default to your job in the cwd, because it is pulled directly to your docker runner

Cons:
- you have to configure a docker registry for that service, otherwise your Dockerfiles will be built from scratch each time your pipeline starts. For me that is unacceptable, because it can take more than an hour depending on the number of containers you have.
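A minimal sketch of Way 1, assuming an older docker:dind service that listens on tcp://docker:2375 without TLS (newer dind versions default to TLS on port 2376; job and image names are illustrative):

image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375   # assumption: dind reachable without TLS

build_job:
  stage: build
  script:
    - docker info                  # confirms the job can reach the dind daemon
    - docker build -t my-image .   # image name is illustrative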
Way 2 - sharing the host docker daemon's /var/run/docker.sock
We set up our own docker executor with a docker daemon and shared the socket by adding it to the /etc/gitlab-runner/config.toml file (a sketch follows below). Thus we made our machine's docker daemon available to the docker cli inside the containers. Note - you DON'T need privileged mode for the executor in this case.
After that we can use both docker and docker-compose in our custom docker images. Moreover, we don't need a special docker registry, because in this case all containers share the executor's local images.
Cons:
- You need to somehow pass the sources to your containers in this case, because they are mounted only into the docker executor, not into the containers launched from it. We settled on cloning them with a command like git clone $CI_REPOSITORY_URL --branch $CI_COMMIT_REF_NAME --single-branch /project
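A minimal config.toml sketch for Way 2 (the runner name and default job image are illustrative, not from the answer):

[[runners]]
  name = "shared-socket-runner"    # hypothetical name
  executor = "docker"
  [runners.docker]
    image = "docker:latest"        # illustrative default job image
    privileged = false             # privileged mode is not needed when sharing the socket
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]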

Gitlab CI docker-in-docker deployment not running commands inside of the container

I am trying to set up a new build pipeline for one of our projects. In a first step I build a new docker image for subsequent testing. This step works fine. However, when the test jobs are executed, the image is pulled, but the commands run on the host instead of inside the container.
Here's the content of my gitlab-ci.yml:
stages:
  - build
  - analytics

variables:
  TEST_IMAGE_NAME: 'registry.server.de/testimage'

build_testing_container:
  stage: build
  image: docker:stable
  services:
    - dind
  script:
    - docker build --target=testing -t $TEST_IMAGE_NAME .
    - docker push $TEST_IMAGE_NAME

mess_detection:
  stage: analytics
  image: $TEST_IMAGE_NAME
  script:
    - vendor/bin/phpmd app html tests/md.xml --reportfile mess_detection.html --suffixes php
  artifacts:
    name: "${CI_JOB_NAME}_${CI_COMMIT_REF_NAME}"
    paths:
      - mess_detection.html
    expire_in: 1 week
    when: always
  except:
    - production
  allow_failure: true
What do I need to change to make gitlab runner execute the script commands inside the container it's successfully pulling?
UPDATE:
It's getting even more interesting:
I just changed the script to sleep for a while so I could attach to the container. When I run pwd from the CI script, it says /builds/namespace/project.
However, running pwd on the server with docker exec, using the exact same container, returns /app as it is supposed to.
UPDATE2:
After some more research, I learned that gitlab executes four sub-steps for each build step:
1. Prepare: create and start the services.
2. Pre-build: clone, restore cache and download artifacts from previous stages. This is run on a special Docker image.
3. Build: the user build. This is run on the user-provided docker image.
4. Post-build: create cache, upload artifacts to GitLab. This is run on a special Docker image.
It seems like in my case, step 3 isn't executed properly and the command is still running inside the gitlab runner docker image.
UPDATE3:
In the meantime I tested executing the mess_detection step on a separate machine using the command gitlab-runner exec docker mess_detection. The behaviour is exactly the same, so it's not gitlab specific; it has to be some configuration option in either the deployment script or the runner config.
This is the usual behavior. The image keyword is the name of the Docker image the Docker executor will run to perform the CI tasks.
You can use the services keyword, which defines another Docker image that is run during your job and is linked to the Docker image that the image keyword defines. This allows you to access the service image during build time.
Access can be done via a script or entry points, for example:
In the Dockerfile of the image you are going to build, add the script that you want to execute:
ADD exemple.sh /
RUN chmod +x exemple.sh
Then you can add the image as a service in gitlab-ci, and the script would change to:
docker exec <container_name> /exemple.sh
This will run the script inside the container. Alternatively, specify an entrypoint for the docker image, and then the script would be:
docker exec <container> /bin/sh -c "cmd1;cmd2;...;cmdn"
Here's a reference: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html
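A minimal sketch of the services keyword with an explicit name and alias (the alias is an assumption, not from the answer). Note that a job normally reaches a service over the network at its alias; running docker exec against it from the job script additionally requires access to a docker daemon (dind or a mounted socket):

mess_detection:
  stage: analytics
  image: docker:stable
  services:
    - name: $TEST_IMAGE_NAME   # the image built in the earlier stage
      alias: testing           # hypothetical alias; the service is reachable at this hostname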

Bitbucket Pipelines - steps - docker - can't find image

I'm building my pipeline to create a docker image, then push it to AWS. I have it broken into steps, and in Bitbucket you have to tell it which artifacts to share between them. I have a feeling this is a simple bug, but I just cannot figure it out.
It's failing at 'docker tag' in step 4 with:
docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Error response from daemon: No such image: projectname:v.11
Basically it cannot find the docker image that was created.
Here's my pipeline script (some of it simplified):
image: atlassian/default-image:latest

options:
  docker: true

pipelines:
  branches:
    dev:
      - step:
          name: 1. Install dotnet
          script:
            # Do things
      - step:
          name: 2. Install AWS CLI
          script:
            # Do some more things
      - step:
          name: 3. Build Docker Image
          script:
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
          artifacts:
            - ./**
      - step:
          name: 4. Push Docker Image to AWS
          script:
            # Tag and push my docker image to ECR
            - export DOCKER_PROJECT_NAME=projectname
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Now, I know this script works if I remove the separate steps, but for whatever reason step 4 doesn't have access to the docker image created in step 3. Any help is appreciated!
Your docker images are not stored in the folder where you start the build, so they are not saved to the artifacts and are not available in the next step.
Even if they were (you could pack/unpack them with docker save), you would probably run up against the size limits for artifacts, not to mention the time it takes to pack and unpack them.
I guess you'd be better off creating a Dockerfile in your project yourself and combining steps 1 & 2 there. Your bitbucket pipeline could then be based on a docker image that already contains the AWS CLI and uses docker as a service, and your single step would consist of building your project's Dockerfile and uploading it to AWS. This also lowers your dependency on bitbucket pipelines.
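If you did want to keep separate steps, a hedged sketch of the docker save / docker load route mentioned above (step names mirror the question; mind the artifact size limits):

- step:
    name: 3. Build Docker Image
    script:
      - export DOCKER_PROJECT_NAME=projectname
      - docker build -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
      - docker save $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER -o image.tar   # serialize the image into the build directory
    artifacts:
      - image.tar
- step:
    name: 4. Push Docker Image to AWS
    script:
      - export DOCKER_PROJECT_NAME=projectname
      - docker load -i image.tar   # restore the image into this step's docker daemon
      - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
      - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER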
The Docker image is not passed from step 3 to step 4 because it is not stored in the build directory.
The simplest solution is to combine all four of your steps into a single step, as follows:
image: atlassian/default-image:latest

options:
  docker: true

pipelines:
  branches:
    dev:
      - step:
          script:
            # Install dependencies
            - ./install-dot-net
            - ./install-aws-cli
            # Build the Docker image
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
            # Tag and push the Docker image to ECR
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER

How to rebuild docker image on push before CI script jobs

I want to generate a Dockerfile in a GitLab CI script and build it, then use this newly generated image in the build jobs. How can I do this? I tried using a global before_script, but it already starts in the default container. I need to do this outside of any container.
before_script runs before every job, so it's not what you want. But you can have a first job do the image build, taking advantage of the fact that each job can use a different Docker image. Building the image is covered in the manual.
Option A (uhm... sort of OK)
Have 2 runners, one with a shell executor (tagged shell) and one with a Docker executor (tagged docker). You would then have a first stage with a job dedicated to building the docker image. It would use the shell runner.
image_build:
  stage: image_build
  script:
    - # create dockerfile
    - # run docker build
    - # push image to a registry
  tags:
    - shell
The second job would then run on the runner with the Docker executor and use the created image:
job_1:
  stage: test
  image: [image you created]
  script:
    - # your tasks
  tags:
    - docker
The problem with this is that the runner's user needs to be a member of the docker group, which has security implications (see the note below).
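For reference, granting that membership typically looks like this (the user name gitlab-runner is an assumption; membership of the docker group is effectively root-equivalent on the host, which is the security concern):

sudo usermod -aG docker gitlab-runner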
Option B (better)
The second option does the same but uses only one runner with the Docker executor. The Docker image is built within a running container (the gitlab/dind:latest image) - the "docker in docker" solution.
stages:
  - image_build
  - test

image_build:
  stage: image_build
  image: gitlab/dind:latest
  script:
    - # create dockerfile
    - # run docker build
    - # push image to a registry

job_1:
  stage: test
  image: [image you created]
  script:
    - # your tasks
