Cannot run docker:latest on gitlab-ci runner - docker

I'm testing gitlab-ci and trying to build an image and push it to the registry from a Dockerfile.
I have this code just to test:
#gitlab-ci
image: docker:latest

stages:
  - build
  - deploy

build_application:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA . -f Dockerfile
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA-test
output:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Docker is running and the image is pulled, but I cannot execute docker commands.
In my local environment, if I run:
docker run -it docker:latest
I get a shell inside the container, but running docker info there gives the same problem. I had to fix it by running the container this way:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock docker:latest
but I don't know how to fix this in gitlab-ci. I configured my runner like this:
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
Maybe someone can point me in the right direction.
Thanks!

By default it is not possible to run Docker-in-Docker (dind); this is a security measure.
This section of the GitLab docs is your solution: you must use Docker-in-Docker.
After configuring your runner to use DIND your .gitlab-ci.yml will look like this:
#gitlab-ci
image: docker:latest

variables:
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

before_script:
  - docker info

stages:
  - build
  - deploy

build_application:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA . -f Dockerfile
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA-test
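For reference, here is a minimal sketch of registering such a runner with privileged mode enabled, which dind requires; the URL, token, and description below are placeholders, not values from the question:

docker run --rm -v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "docker:latest" \
  --docker-privileged \
  --description "docker-dind-runner"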

Related

How to make Gitlab CI/CD run container with some custom variables?

I have a job that runs Android UI tests on GitLab CI/CD. It somehow runs a container from the image android-uitests:1.0 from the registry. I don't know where and how GitLab CI/CD runs that image with "docker run ...", but I need to extend that command to pass some variables (or arguments) to it.
Here is an example of the command I want GitLab to run:
docker run -d \
  -t \
  --privileged \
  -e "SNAPSHOT_DISABLED"="true" \
  -e "QTWEBENGINE_DISABLE_SANDBOX"=1 \
  -e "WINDOW"="true" \
  --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  -p 5555:5555 -p 5554:5554 -p 5901:5901 \
  --name emulator \
  android-uitests:1.0
This is the stage and its job with the image:
ui-tests:
  image: registry.myproject:5000/android-uitests:1.0
  stage: ui-tests
  only:
    - merge_requests
    - schedules
  when: manual
  script:
    - bash /run-emulator.sh
    - adb devices
    - adb shell input keyevent 82
    - adb shell settings put global window_animation_scale 0 &
    - adb shell settings put global transition_animation_scale 0 &
    - adb shell settings put global animator_duration_scale 0 &
    - ./gradlew mobile:connectedDevDebugAndroidTest -Pandroid.testInstrumentationRunnerArguments.package=app.online.tests.uitests
  tags:
    - androidtest
In other words, I want to configure the "under the hood" docker run command that runs my image.
Please tell me how to do that.
Considering you're using a Docker container, I'll assume you're using a GitLab Runner in Docker executor mode, which means you're essentially running something similar to this when you don't specify a Docker image for the CI job:
image: docker:19.03.13

variables:
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - docker:19.03.13-dind

script:
  - (your commands)
To understand what's going on, let's break it into multiple steps:
image: docker:19.03.13
(...)
services:
  - docker:19.03.13-dind
What is docker:19.03.13-dind, and how does it differ from docker:19.03.13?
Why is it a service instead of an image?
DIND means Docker-in-Docker; this part of GitLab's documentation explains it in further technical detail. What is important to understand here is what a service is in GitLab CI's context, and why you have to specify an additional Docker image when you are already using a Docker image as the default environment. When you declare a service in GitLab CI, you can use what it provides from inside the existing job container, e.g. connecting a PostgreSQL (database) container to a backend container you're building, without having to set up docker-compose or multiple containers.
Running a docker:dind service together with a docker image means you can use docker run directly within your job without any additional setup. This previous Stack Overflow question explains it further.
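As an illustrative sketch of that PostgreSQL case (the job name and credentials are hypothetical, not from the question):

test-backend:
  image: postgres:13                   # reused here just for its psql client
  services:
    - postgres:13                      # reachable from the job via the hostname "postgres"
  variables:
    POSTGRES_PASSWORD: "ci-password"   # hypothetical value; GitLab passes it to the service
    PGPASSWORD: "ci-password"          # lets psql authenticate non-interactively
  script:
    - psql -h postgres -U postgres -c 'SELECT 1;'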
Back to your code, instead of deploying your registry image as a job directly:
ui-tests:
  image: registry.myproject:5000/android-uitests:1.0
You may want to first build your image and upload it to your registry:
image: docker:19.03.12

services:
  - docker:19.03.12-dind

variables:
  # Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  - docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .
  - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  - docker push $CI_REGISTRY_IMAGE:latest
This snippet will build the Dockerfile at the root of your repository and upload the resulting image to your GitLab image registry (private, or public if your project is set as public). Now you can specify an additional job that does specifically what you want:
Final example
image: docker:19.03.12

stages:
  - build
  - release

services:
  - docker:19.03.12-dind

variables:
  # Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

build:
  stage: build
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest

deploy:
  stage: release
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker run -d -t --privileged -e "SNAPSHOT_DISABLED"="true" -e "QTWEBENGINE_DISABLE_SANDBOX"=1 -e "WINDOW"="true" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -p 5555:5555 -p 5554:5554 -p 5901:5901 --name emulator $CI_REGISTRY_IMAGE:latest
Why are you using $VARIABLES?
In case the $VARIABLES sound confusing, here's the list of default environment variables GitLab generates for every job it creates.
The last example I cited will result in a Docker container running on the same machine where your executor is registered, with the environment variables you have specified.
If you need a practical example, you can use this gitlab-ci.yml from a project of mine as a reference.
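For instance, a throwaway job like this (a sketch; the job name is hypothetical) prints two of the predefined variables used above:

debug-variables:
  image: docker:19.03.12
  script:
    - echo "$CI_REGISTRY_IMAGE"   # e.g. registry.gitlab.com/<group>/<project>
    - echo "$CI_COMMIT_SHA"       # full SHA of the commit the job is running against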

Can't communicate between docker containers in Gitlab-CI

In the second stage of my CI pipeline (after building a fresh Docker image), I'm using other Docker containers to test the new image, but I'm not able to send any requests between these containers. My CI config is as follows:
stages:
  - build
  - test

services:
  - docker:dind

variables:
  # Enable network-per-build to allow gitlab services to see one another
  FF_NETWORK_PER_BUILD: "true"
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: '/certs'
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  # IMAGE SETTINGS
  NODE_ENV: "development"
  API_URL: "http://danielgtaylor-apisprout:8000"
  PORT: 8080

build:
  stage: build
  image: docker
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t ${CI_REGISTRY_IMAGE}:test .
    - docker push ${CI_REGISTRY_IMAGE}:test

test:
  stage: test
  image: docker
  services:
    - docker:dind
    - name: ${CI_REGISTRY_IMAGE}:test
      alias: server
  script:
    - docker run --rm --name apisprout -d -v $CI_PROJECT_DIR/v2-spec.yml:/api.yaml danielgtaylor/apisprout /api.yaml
    - docker run --rm --name newman -v $CI_PROJECT_DIR:/etc/newman postman/newman run 'Micros V2.postman_collection.json'
I receive the following error: ENOTFOUND server server:8080.
I have also tried with a new bridge network:
test:
  stage: test
  image: docker
  services:
    - docker:dind
  script:
    - docker network create -d bridge mynet
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker run -d --network=mynet --hostname=server ${CI_REGISTRY_IMAGE}:test
    - docker run --rm --network=mynet --hostname=apisprout --name apisprout -d -v $CI_PROJECT_DIR/v2-spec.yml:/api.yaml danielgtaylor/apisprout /api.yaml
    - docker run --rm --network=mynet --name newman -v $CI_PROJECT_DIR:/etc/newman postman/newman run 'Micros V2.postman_collection.json'
But I receive the same error ENOTFOUND server server:8080.
I am unable to run the docker run containers as GitLab services, as I don't believe attaching volumes to services is supported yet.
I'm also running this on Gitlab.com, not a private runner.
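A hedged observation on the second attempt: containers started with docker run inside dind join the dind daemon's networks, not the job's service network, and on a user-defined bridge Docker's embedded DNS resolves container names and --network-alias values rather than --hostname. So giving the server container a matching name or alias might resolve the ENOTFOUND, e.g.:

- docker network create -d bridge mynet
- docker run -d --network=mynet --name=server --network-alias=server ${CI_REGISTRY_IMAGE}:test
- docker run --rm --network=mynet --name newman -v $CI_PROJECT_DIR:/etc/newman postman/newman run 'Micros V2.postman_collection.json'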

Docker doesn't work when using getsentry/sentry-cli

I want to upload my frontend to Sentry, but I need to get the folder using docker commands. However, when I use image: getsentry/sentry-cli,
docker doesn't work, and e.g. in before_script I get an error that docker doesn't exist:
sentry_job:
  stage: sentry_job
  image: getsentry/sentry-cli
  services:
    - docker:18-dind
  before_script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" registry.gitlab.cz
  script:
    # script...
    # Get the dist folder from the image
    - mkdir frontend_dist
    - docker run --rm -v $PWD/frontend_dist:/mounted --entrypoint="" $IMAGE /bin/sh -c "cp /frontend/dist /mounted"
    - ls frontend_dist
  tags:
    - dind
How do I fix that?
To achieve what you want, you need to use a single job (to have the same build context) and specify docker:stable as the job image (along with docker:stable-dind as a service).
This setup is called docker-in-docker and this is the standard way to allow a GitLab CI script to run docker commands (see doc).
Thus, you could slightly adapt your .gitlab-ci.yml code like this:
sentry_job:
  stage: sentry_job
  image: docker:stable
  services:
    - docker:stable-dind
  variables:
    IMAGE: "${CI_REGISTRY_IMAGE}:latest"
  before_script:
    - docker login -u gitlab-ci-token -p "${CI_JOB_TOKEN}" registry.gitlab.cz
  script:
    - docker pull "$IMAGE"
    - mkdir -v frontend_dist
    - docker run --rm -v "$PWD/frontend_dist:/mounted" --entrypoint="" "$IMAGE" /bin/sh -c "cp -rv /frontend/dist /mounted"
    - ls frontend_dist
    - docker pull getsentry/sentry-cli
    - docker run --rm -v "$PWD/frontend_dist:/work" getsentry/sentry-cli
  tags:
    - dind
Note: the docker pull commands are optional (they ensure Docker will use the latest version of the images).
Also, you may need to change the definition of the variable IMAGE.
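For example, a hypothetical redefinition that pins IMAGE to a branch-specific tag instead of latest:

variables:
  IMAGE: "${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}"   # hypothetical; adjust to your tagging scheme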

GitLab CI docker in docker can't create volume

I'm using Docker-in-Docker to host my containers as they work through the pipeline. The container I create from my code is set up with a volume to pass a gcloud key into the container. This works perfectly on my local machine, but on the gitlab-runner it doesn't link correctly.
From reading around, this appears to be because it links the host to my container, rather than the dind host to my container.
How do I link the directory that is inside dind to my container?
(Also ignore any minor issues with tagging and such, this ci file is very early in development)
GitLab CI below:
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay2
  SPRING_PROFILES_ACTIVE: gitlab-ci
  CONTAINER_TEST_IMAGE: registry.gitlab.com/fdsa
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/asdf

stages:
  - build_test_image
  - deploy

.docker_login: &docker_login | # This is an anchor
  docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com

build test image:
  stage: build_test_image
  script:
    - *docker_login
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

test run:
  stage: deploy
  script:
    - *docker_login
    - mkdir /key
    - echo $GCP_SVC_KEY > /key/application_default_credentials.json
    # BROKEN LINE HERE
    - docker run --rm -v "/key:/.config/gcloud/" $CONTAINER_TEST_IMAGE
  tags:
    - docker
Background
Your problem lies in the fact that DIND runs ALL containers on your host (the top-level Docker engine), so when you mount a directory to your $CONTAINER_TEST_IMAGE (second-level Docker), that image in fact runs on your host through the mounted socket, and thus the container is looking for that directory on your Docker host.
I've had this same issue mounting tests in containers and solved it through linking volumes between containers.
Solution
In your case, I think the docker cp command could solve your need to copy the /key/application_default_credentials.json file into the container.
Something like:
- docker run --name="myContainer" -d $CONTAINER_TEST_IMAGE
- docker cp /key/application_default_credentials.json myContainer:/.config/gcloud/application_default_credentials.json
- docker exec -it myContainer 'run_tests_or_whatever_command'
- docker rm -f myContainer
The other solution given is perfectly valid but I wanted to share my solution:
Apparently dind mounts the /builds directory so sub-containers can "see" its contents. So by placing the key under "./" it is viewable by those containers. I use $(pwd) because docker run doesn't accept ~ or . as a host path.
test run:
  stage: deploy
  script:
    - *docker_login
    - mkdir ./key
    - echo $GCP_SVC_KEY > ./key/application_default_credentials.json
    - docker run --rm -v "$(pwd)/key:/.config/gcloud/" $CONTAINER_TEST_IMAGE
  tags:
    - docker

docker not found with docker:dind + google/cloud-sdk

I'm getting the error docker: command not found while running the following CI script in gitlab-ci. The error happens during before_script of the deploy stage.
services:
  - docker:dind

stages:
  - build
  - test
  - deploy

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

build:
  stage: build
  image: docker:latest
  script:
    - docker info
    - docker version
    - docker build --pull -t $SERVICE_NAME:$CI_COMMIT_REF_NAME .
    - docker image list
    - docker tag $SERVICE_NAME:$CI_COMMIT_REF_NAME $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME
    - docker push $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME

test:
  image: docker:latest
  stage: test
  script:
    - docker pull $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME
    - docker image list
    - docker run $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME npm test

deploy:
  image: google/cloud-sdk
  stage: deploy
  environment: Production
  script:
    - echo $DEPLOY_KEY_FILE_PRODUCTION > /tmp/GCLOUD_KEYFILE.json
    - gcloud auth activate-service-account --key-file /tmp/GCLOUD_KEYFILE.json
    - rm /tmp/GCLOUD_KEYFILE.json
    - gcloud info
    - gcloud components list
  only:
    - master
I'm a bit confused, because I'm running docker-in-docker (docker:dind) as a service, so the docker command should be available in all stages (if I understand this correctly); however, it clearly is not.
Is it due to an interaction with google/cloud-sdk?
You probably misunderstood what services mean. From the doc,
The services keyword defines just another docker image that is run during your job and is linked to the docker image that the image keyword defines.
What you need is a custom image for your Docker executor that is based on the docker image and has the gcloud SDK preinstalled. You can build such an image with this Dockerfile:
FROM docker:latest

RUN apk add --no-cache \
    bash \
    build-base \
    curl \
    git \
    libffi-dev \
    openssh \
    openssl-dev \
    python \
    py-pip \
    python-dev

RUN pip install docker-compose fabric

RUN curl https://sdk.cloud.google.com | bash -s -- --disable-prompts
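Once built and pushed (the image name below is hypothetical), a job can reference it directly. Note the install script places the SDK under /root/google-cloud-sdk, so the Dockerfile may also need ENV PATH="$PATH:/root/google-cloud-sdk/bin" for gcloud to be found:

deploy:
  image: registry.example.com/ci/docker-gcloud:latest   # hypothetical name for the custom image
  services:
    - docker:dind
  script:
    - docker info   # docker CLI from the base image
    - gcloud info   # gcloud from the preinstalled SDK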
The question was asked almost 5 years ago; I am unsure whether at that time the google/cloud-sdk image shipped without docker binaries, and I can't think of any other cause for a docker: command not found error than the binary not being available in the standard location. Anyway, today (2022) google/cloud-sdk comes with docker, and it can interact with the docker service. Since I ended up here several times after running into problems trying to use docker:dind with google/cloud-sdk, I will add the following:
It is possible to use docker from the google/cloud-sdk image; there is no need to create a custom image for your GitLab CI. The problem is that docker in google/cloud-sdk tries to connect to the socket at /var/run/docker.sock, as shown in the logs:
$ docker build -t gcr.io/$GCP_PROJECT_ID/test:$CI_COMMIT_SHORT_SHA .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
You can also check in the logs of the docker:dind service that docker listens on a socket (not reachable from the job container) and on a TCP port (reachable via the hostname docker). So, you just need to use the TCP port in your docker commands, either by setting the env variable DOCKER_HOST or by adding -H tcp://docker:2375, as in:
$ docker -H tcp://docker:2375 build -t gcr.io/$GCP_PROJECT_ID/test:$CI_COMMIT_SHORT_SHA .
Step 1/8 : FROM python:latest
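Equivalently, a sketch of the variable-based form, assuming the dind service runs without TLS on port 2375 (with TLS enabled the port is 2376 and certificates must be shared):

deploy:
  image: google/cloud-sdk
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""   # disable TLS so dind listens on 2375
  script:
    - docker build -t gcr.io/$GCP_PROJECT_ID/test:$CI_COMMIT_SHORT_SHA .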
You forgot to specify the image at the top.
image: docker:latest

services:
  - docker:dind
...
Works for me! :)
See: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html
