GitLab CI: run docker review app

I have several PHP apps with similar requirements, a dockerized GitLab runner, and one Docker image for my apps.
What is the best way to start review apps automatically?
I started the runner with docker.sock connected and additionally attached a volume with my projects (/home/devenv/) in the GitLab runner's config.toml:
[runners.docker]
  tls_verify = false
  image = "docker:latest"
  privileged = true
  disable_cache = false
  volumes = ["/cache", "/home/devenv:/home/devenv"]
Test and build stages work fine using image: myrepo.com/group/image in .gitlab-ci.yml.
My deploy section, however, fails with an error.
Deploy section:
deploy to review:
  image: docker:latest
  services:
    - docker:dind
  stage: deploy
  script:
    - rm -rf /home/devenv/$CI_PROJECT_NAMESPACE-$CI_PROJECT_NAME-$CI_BUILD_REF_NAME
    - mkdir /home/devenv/$CI_PROJECT_NAMESPACE-$CI_PROJECT_NAME-$CI_BUILD_REF_NAME
    - cp -r ./* /home/devenv/$CI_PROJECT_NAMESPACE-$CI_PROJECT_NAME-$CI_BUILD_REF_NAME/
    - docker stop $CI_PROJECT_NAMESPACE-$CI_PROJECT_NAME-$CI_BUILD_REF_NAME
    - docker rm $CI_PROJECT_NAMESPACE-$CI_PROJECT_NAME-$CI_BUILD_REF_NAME
    - docker run -d --env ENDLESS_RUN="1" --env VIRTUAL_HOST="$CI_BUILD_REF_NAME.$CI_PROJECT_NAME.$CI_PROJECT_NAMESPACE.e.mydomain.com" --name "$CI_PROJECT_NAMESPACE-$CI_PROJECT_NAME-$CI_BUILD_REF_NAME" -v /home/devenv/$CI_PROJECT_NAMESPACE-$CI_PROJECT_NAME-$CI_BUILD_REF_NAME/httpdocs:/home/web/httpdocs -v /home/devenv/$CI_PROJECT_NAMESPACE-$CI_PROJECT_NAME-$CI_BUILD_REF_NAME/logs:/var/logs myrepo.com/group/image
    - docker exec $CI_PROJECT_NAMESPACE-$CI_PROJECT_NAME-$CI_BUILD_REF_NAME cd /home/httpdocs/ && npm install && bower install && gulp build
  environment:
    name: review/$CI_BUILD_REF_NAME
    url: http://$CI_BUILD_REF_NAME.$CI_PROJECT_NAME.$CI_PROJECT_NAMESPACE.e.mydomain.com
  only:
    - branches
  except:
    - master
Error on run command:
$ docker run -d --env ENDLESS_RUN="1" --env VIRTUAL_HOST="$CI_BUILD_REF_NAME.$CI_PROJECT_NAME.$CI_PROJECT_NAMESPACE.e.mydomain.com" --name "$CI_PROJECT_NAMESPACE-$CI_PROJECT_NAME-$CI_BUILD_REF_NAME" -v /home/devenv/$CI_PROJECT_NAMESPACE-$CI_PROJECT_NAME-$CI_BUILD_REF_NAME/httpdocs:/home/bitrix/www -v /home/devenv/$CI_PROJECT_NAMESPACE-$CI_PROJECT_NAME-$CI_BUILD_REF_NAME/logs:/var/logs myrepo.com/group/image
Unable to find image 'myrepo.com/group/image:latest' locally
latest: Pulling from group/image
90577c79babf: Pulling fs layer
a74e2caa985d: Pulling fs layer
8729c6ccfcfb: Pulling fs layer
f160b3e340fb: Pulling fs layer
9c19c344e2fa: Pulling fs layer
74a07af12073: Pulling fs layer
...
...
Status: Downloaded newer image for myrepo.com/group/image:latest
docker: An error occurred trying to connect: Post http://docker:2375/v1.24/containers/create?name=olimpia-iam-master: EOF.
See 'docker run --help'.
ERROR: Build failed: exit code 125

dind doesn't allow mounting volumes from one container into another. For what you're trying to do, you'll have to share the host's Docker service with the job container instead.
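For the socket-sharing approach, the change is typically made in the runner's config.toml: bind-mount the host's /var/run/docker.sock into job containers, so docker commands in the job talk to the host daemon, which does see /home/devenv. A minimal sketch based on the config above (an assumption; adjust to your registration):

[runners.docker]
  tls_verify = false
  image = "docker:latest"
  privileged = false   # assumption: no longer needed once the host socket is shared
  disable_cache = false
  # Bind the host Docker socket into job containers: docker run/stop/exec in the
  # job then go to the host daemon, so -v /home/devenv/... paths resolve on the
  # host, where the project files actually are.
  volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache", "/home/devenv:/home/devenv"]

With this, drop services: - docker:dind from the deploy job; the review containers then run as siblings of the job container on the host, so names like $CI_PROJECT_NAMESPACE-$CI_PROJECT_NAME-$CI_BUILD_REF_NAME must stay unique across all jobs.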

Related

Gitlab dind service: Use docker in yet another container

OK, I got to the point where I need to do something like this in GitLab CI:
(Note: I oversimplified it, so it may not look like it has an actual purpose.)
.gitlab-ci.yml
workflow:
  rules:
    - when: always

image: "docker:20.10.7"

variables:
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - "docker:20.10.7-dind"

integration-tests:
  stage: test
  script:
    - bin/run
So the GitLab runner will use the docker image and the dind service.
And the bin/run script is:
#!/usr/bin/env sh
# shellcheck disable=SC1091
set -eux
apk update
apk add bind-tools
echo "${DOCKER_HOST:=}"
docker container ls
host docker || true
nc -v docker 2376 || true
docker run --rm \
-v "${DOCKER_CERT_PATH:=}:${DOCKER_CERT_PATH:=}:ro" \
-e DOCKER_HOST="${DOCKER_HOST:=}" \
-e DOCKER_CERT_PATH="${DOCKER_CERT_PATH:=}" \
-e DOCKER_TLS_VERIFY="${DOCKER_TLS_VERIFY:=}" \
--network "host" \
"docker:20.10.7" docker container ls
Here I want to be able to use docker to run a nested Docker container and call Docker commands from inside it.
Here is the result:
$ bin/run
+ apk update
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
v3.13.12-94-g0551adbecc [https://dl-cdn.alpinelinux.org/alpine/v3.13/main]
v3.13.12-94-g0551adbecc [https://dl-cdn.alpinelinux.org/alpine/v3.13/community]
OK: 13912 distinct packages available
+ apk add bind-tools
(1/17) Installing fstrm (0.6.0-r1)
...
(17/17) Installing bind-tools (9.16.33-r0)
Executing busybox-1.32.1-r6.trigger
OK: 24 MiB in 37 packages
+ echo tcp://docker:2376
tcp://docker:2376
+ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+ host docker
Host docker not found: 3(NXDOMAIN)
+ true
+ nc -v docker 2376
docker (172.17.0.3:2376) open
+ docker run --rm -e 'DOCKER_HOST=tcp://docker:2376' --network host docker:20.10.7 docker container ls
Unable to find image 'docker:20.10.7' locally
20.10.7: Pulling from library/docker
...
9d806bc20361: Pull complete
Digest: sha256:bfc499cef26daa22da31b76be1752813a6921ee1fa1dd1f56d4fdf19c701d332
Status: Downloaded newer image for docker:20.10.7
error during connect: Get http://docker:2376/v1.24/containers/json: dial tcp: lookup docker on 169.254.169.254:53: no such host
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
What I don't understand:
Why does the nc command work while the host command fails in the job container?
Why is the docker service hostname not resolved in the nested container?
OK, I figured out my problem: I was missing the FF_NETWORK_PER_BUILD=true variable. That flag attaches the job and its services to a per-build Docker network, so the docker alias is resolved by Docker's embedded DNS; without it, the service is wired up with a legacy link, i.e. a plain /etc/hosts entry in the job container, which host (a DNS-only tool) bypasses and which the nested container never inherits.
.gitlab-ci.yml
workflow:
  rules:
    - when: always

image: "docker:20.10.7"

variables:
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - "docker:20.10.7-dind"

integration-tests:
  stage: test
  variables:
    FF_NETWORK_PER_BUILD: "true"
  script:
    - bin/run
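To verify the effect, a quick check can be appended to bin/run (a sketch; host comes from the bind-tools package installed above): with FF_NETWORK_PER_BUILD=true the job and the dind service share a per-build Docker network, so the docker alias is answered by Docker's embedded DNS rather than by a link entry in /etc/hosts:

# With FF_NETWORK_PER_BUILD=true the "docker" alias is served by Docker's
# embedded DNS on the per-build network, so DNS-only tools see it too.
host docker       # now resolves instead of returning NXDOMAIN
nslookup docker   # busybox nslookup, same result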

alpine cannot access docker daemon when using gitlab-ci

I have a custom GitLab CI pipeline that should compile a Go app and build a Docker image. I decided to use the alpine Docker image for the GitLab runner, but I can't seem to get Docker started. If I start the Docker service manually I get a warning (* WARNING: docker is already starting), and if I don't, I get: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? Has anyone else experienced this?
This is not a duplicate question. The GitLab runner runs the alpine container as root (verified by running whoami). For the sake of trying, I ran usermod -aG docker $(whoami) anyway and got the same output.
.gitlab-ci.yml
image: alpine

variables:
  GO_PROJECT: linkscout

before_script:
  - apk add --update go git libc-dev docker openrc
  - mkdir -p ~/go/src/${GO_PROJECT}
  - cp -r ${CI_PROJECT_DIR}/* ~/go/src/${GO_PROJECT}/
  - cd ~/go/src/${GO_PROJECT}
  - service docker start # * WARNING: docker is already starting

stages:
  - compile
  - build

compile:
  stage: compile
  script:
    - go get
    - go build -a

build:
  stage: build
  script:
    - docker version # Without (service docker start) this fails: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
By default you cannot use Docker-in-Docker. You should configure your runner like this. Then, as stated in that explanation, also use docker:latest as the image instead of alpine.
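For reference, a minimal sketch of that setup (assuming the runner is registered in privileged mode as the docs require; the Go toolchain and compile stage are omitted here):

image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375  # the dind service hostname
  DOCKER_TLS_CERTDIR: ""          # assumption: TLS disabled for brevity

build:
  stage: build
  script:
    - docker version              # talks to the dind service; no service docker start
    - docker build -t ${GO_PROJECT} .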

GitLab CI - Cannot connect to the Docker daemon from within an image

I have a Node-based project, and the following are the first few steps that need to be executed as part of the build:
npm install
npm run build
docker build -t client .
The last command above builds the following Dockerfile:
FROM docker.artifactory.abc.net/nginx
COPY build /usr/share/nginx/html
COPY default.conf /etc/nginx/conf.d/default.conf
Content of .gitlab-ci.yml:
image: docker.artifactory.abc.net/docker/node:1.0

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - npm install
    - npm run build
    - docker build -t client .
In the above Dockerfile, I am using a custom node image (node:1.0) which contains the proxy settings for apk to work and the Artifactory configuration, so all dependencies are fetched through Artifactory. When I ran this build, I got a docker: command not found error while executing the last command (docker build -t client .), which is expected because the base image is for Node and doesn't contain Docker. So I added the Docker setup instructions to the node Dockerfile based on this link, except for the last 3 lines where it configures the ENTRYPOINT and CMD.
Now when I ran the build, I got:
$ docker build -t client .
Sending build context to Docker daemon 372.7MB
Step 1 : FROM docker.artifactory.abc.net/nginx
Get https://docker.artifactory.abc.net/v2/nginx/manifests/latest: unknown: Authentication is required
ERROR: Job failed: exit code 1
This error, in my past experience, had to do with running the docker login command. Since the Docker setup in the official image uses a tar install, I had to add a docker group to /etc/group and then add the current user (root) to that group. I also added the docker login command to the Dockerfile, as shown below:
addgroup docker; \
adduser root docker; \
docker login docker.artifactory.abc.net -u svc-art -p "ZTg6#&kq"; \
After that, when I try building this Dockerfile, I get the following error:
+ dockerd -v
Docker version 17.05.0-ce, build v17.05.0-ce
+ docker -v
Docker version 17.05.0-ce, build v17.05.0-ce
+ adduser root docker
+ tail -2 /etc/group
node:x:1000:node
docker:x:101:root
+ docker login docker.artifactory.abc.net -u svc-art -p ZTg6#&kq
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I also ran ls -ltr /var/run/docker.sock; the Docker socket file was not present inside the image. This seems to be the issue.
Any idea how I can get this working?
Well, from the example you have provided, I cannot see where you start your docker service, so I assume you are not using one; you are also not logging into the registry.
Your pipeline should look something like this:
image: docker.artifactory.abc.net/docker/node:1.0

stages:
  - build
  - deploy

build:
  image: docker:latest
  services:
    - docker:dind
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.example.com
    - docker build -t registry.example.com/group/project/image:latest .
    - docker push registry.example.com/group/project/image:latest
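If you are pushing to GitLab's own registry, the hardcoded registry.example.com values can be replaced with the predefined CI variables (the same ones that appear in other snippets on this page); a sketch:

    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest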
You can also find more info here.

GitLab CI docker in docker can't create volume

I'm using docker-in-docker to host my containers as they move through the pipeline. The container I create from my code is set up to receive a gcloud key through a volume. This works perfectly on my local machine, but on the gitlab-runner it doesn't link correctly.
From reading around, this appears to be because the volume is linked from the host to my container, rather than from the dind host to my container.
How do I link a directory that is inside dind to my container?
(Also, ignore any minor issues with tagging and such; this CI file is very early in development.)
GitLab CI below:
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay2
  SPRING_PROFILES_ACTIVE: gitlab-ci
  CONTAINER_TEST_IMAGE: registry.gitlab.com/fdsa
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/asdf

stages:
  - build_test_image
  - deploy

.docker_login: &docker_login | # This is an anchor
  docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com

build test image:
  stage: build_test_image
  script:
    - *docker_login
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

test run:
  stage: deploy
  script:
    - *docker_login
    - mkdir /key
    - echo $GCP_SVC_KEY > /key/application_default_credentials.json
    # BROKEN LINE HERE
    - docker run --rm -v "/key:/.config/gcloud/" $CONTAINER_TEST_IMAGE
  tags:
    - docker
Background
Your problem comes from the fact that dind runs ALL containers on your host (or top-level Docker engine), so when you mount a directory into your $CONTAINER_TEST_IMAGE (second-level Docker), that image in fact runs on your host through the mounted socket, and the daemon therefore looks for the mounted directory on your Docker host, not in the job container where you created /key.
I've had the same issue mounting tests into containers and solved it by linking volumes between containers.
Solution
In your case, I think the docker cp command can solve your need: copy the /key/application_default_credentials.json file into the container.
Something like:
- docker run --name="myContainer" -d $CONTAINER_TEST_IMAGE
- docker cp /key/application_default_credentials.json myContainer:/.config/gcloud/application_default_credentials.json
- docker exec -it myContainer 'run_tests_or_whatever_command'
- docker rm -f myContainer
The other solution given is perfectly valid, but I wanted to share mine:
Apparently dind mounts the /builds directory, so sub-containers can "see" its contents. By placing the key under ./ it is visible to those containers. I use $(pwd) because docker run doesn't accept ~ or .
test run:
  stage: deploy
  script:
    - *docker_login
    - mkdir ./key
    - echo $GCP_SVC_KEY > ./key/application_default_credentials.json
    - docker run --rm -v "$(pwd)/key:/.config/gcloud/" $CONTAINER_TEST_IMAGE
  tags:
    - docker
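As a side note (an assumption worth verifying on your runner): the checkout directory lives under /builds and its absolute path is exposed as the predefined variable $CI_PROJECT_DIR, so the mount can also be written without $(pwd):

    - docker run --rm -v "$CI_PROJECT_DIR/key:/.config/gcloud/" $CONTAINER_TEST_IMAGE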

docker not found with docker:dind + google/cloud-sdk

I'm getting the error docker: command not found while running the following CI script in GitLab CI. The error happens during before_script in the deploy stage.
services:
  - docker:dind

stages:
  - build
  - test
  - deploy

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

build:
  stage: build
  image: docker:latest
  script:
    - docker info
    - docker version
    - docker build --pull -t $SERVICE_NAME:$CI_COMMIT_REF_NAME .
    - docker image list
    - docker tag $SERVICE_NAME:$CI_COMMIT_REF_NAME $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME
    - docker push $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME

test:
  image: docker:latest
  stage: test
  script:
    - docker pull $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME
    - docker image list
    - docker run $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME npm test

deploy:
  image: google/cloud-sdk
  stage: deploy
  environment: Production
  script:
    - echo $DEPLOY_KEY_FILE_PRODUCTION > /tmp/GCLOUD_KEYFILE.json
    - gcloud auth activate-service-account --key-file /tmp/GCLOUD_KEYFILE.json
    - rm /tmp/GCLOUD_KEYFILE.json
    - gcloud info
    - gcloud components list
  only:
    - master
I'm a bit confused: since I'm running docker-in-docker (docker:dind) as a service, the docker command should be available in all stages (if I understand this correctly), yet it clearly isn't.
Is it due to an interaction with google/cloud-sdk?
You probably misunderstood what services mean. From the doc,
The services keyword defines just another docker image that is run during your job and is linked to the docker image that the image keyword defines.
What you need is a custom image for the Docker executor, based on the dind image with the gcloud SDK preinstalled. You can build such an image with this Dockerfile:
FROM docker:latest

RUN apk add --no-cache \
    bash \
    build-base \
    curl \
    git \
    libffi-dev \
    openssh \
    openssl-dev \
    python \
    py-pip \
    python-dev

RUN pip install docker-compose fabric

RUN curl https://sdk.cloud.google.com | bash -s -- --disable-prompts
The question was asked almost 5 years ago, and I am unsure whether the google/cloud-sdk image shipped without the docker binaries back then; I can't think of any other explanation for a docker: command not found error than the binary not being in the standard location. Anyway, today (2022) google/cloud-sdk comes with docker and can interact with the docker service, and since I ended up here several times after running into problems with docker:dind and google/cloud-sdk, I will add the following:
It is possible to use docker from the google/cloud-sdk image; there is no need to create a custom image for your GitLab CI. The problem is that docker in google/cloud-sdk tries to connect to the socket at /var/run/docker.sock, as shown in the logs:
$ docker build -t gcr.io/$GCP_PROJECT_ID/test:$CI_COMMIT_SHORT_SHA .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Anyway, you can also check in the logs of the docker:dind service that docker listens on a socket (not reachable from the job container) and on a TCP port (reachable via the hostname docker). So you just need to use the TCP port in your docker commands, either by setting the DOCKER_HOST environment variable or by adding -H tcp://docker:2375, as in:
$ docker -H tcp://docker:2375 build -t gcr.io/$GCP_PROJECT_ID/test:$CI_COMMIT_SHORT_SHA .
Step 1/8 : FROM python:latest
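The DOCKER_HOST variant of the same fix looks like this (a sketch, assuming TLS is disabled on the dind side; with DOCKER_TLS_CERTDIR set you would use port 2376 plus the TLS variables instead):

deploy:
  image: google/cloud-sdk
  stage: deploy
  variables:
    DOCKER_HOST: tcp://docker:2375  # point the client at the dind service
  script:
    - docker build -t gcr.io/$GCP_PROJECT_ID/test:$CI_COMMIT_SHORT_SHA .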
You forgot to specify the image at the top:
image: docker:latest

services:
  - docker:dind
...
Works for me! :)
See: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html
