I am struggling with CI in GitLab; specifically, I am unable to mount a volume in Docker Compose.
I am using the shell executor, and I don't want to use the docker-in-docker approach because my project needs direct access to the host system.
I am using the following gitlab-runner configuration:
concurrent = 1
check_interval = 0
shutdown_timeout = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "myRunner"
  url = "https://git.#########.com/"
  id = 6
  token = "#############"
  token_obtained_at = 2023-01-18T13:10:20Z
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "shell"
  [runners.custom_build_dir]
  [runners.cache]
    MaxUploadedArchiveSize = 0
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
I want to mount a named volume into the container. This is how I do it in docker-compose.yaml; the named volume "config" should be mounted into the container by Docker Compose:
# REST-API
rest-api:
  image: rest-api
  hostname: rest-api
  restart: always
  build:
    context: rest-api
    dockerfile: ./Dockerfile
    args:
      NODE_PARAM: "${NODE_PARAM}"
  ports:
    - 8080:8080
    - 8443:8443
  volumes:
    - ./rest-api:/usr/src/app
    - config:/usr/src/app/config
My pipeline file looks like this:
variables:
  COMPOSE_FILE: backend/docker-compose.yml

stages: # List of stages for jobs, and their order of execution
  - build
  - test

before_script:
  - sudo touch /etc/dhcpcd.conf
  - sudo touch /etc/systemd/timesyncd.conf

build:
  stage: build
  script:
    - docker compose build

test:
  stage: test
  script:
    - whoami
    - docker volume ls
    - docker compose up -d --build --pull always
    - docker exec -t backend-rest-api-1 ls -la /usr/src/app/
    - docker exec -t backend-rest-api-1 npm run test
    - docker compose down
This problem only occurs when executing through the GitLab pipeline; running the same commands manually on the same server works fine!
Executing "docker volume ls" shows that the volume config is available.
Executing "ls -la /usr/src/app" shows the volume is not mounted; the config directory does not exist.
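For reference, these are the commands I can use to compare the two runs (a diagnostic sketch; backend_config assumes Compose's default <project>_<volume> naming, consistent with the backend-rest-api-1 container name):
docker volume inspect backend_config
docker inspect backend-rest-api-1 --format '{{ json .Mounts }}'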
I am thankful for your help!
I want to spin up a localstack docker container and run a file, create_bucket.sh, with the command
aws --endpoint-url=http://localhost:4566 s3 mb s3://my-bucket
after the container starts. I tried creating this Dockerfile
FROM localstack/localstack:latest
COPY create_bucket.sh /usr/local/bin/
ENTRYPOINT []
and a docker-compose.yml file that has
version: '3.8'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      ...
    ports:
      - '4566-4583:4566-4583'
    command: sh -c "/usr/local/bin/create_bucket.sh"
but when I run
docker-compose up
the container comes up, but the command isn't run. How do I execute my command against the localstack container after container startup?
You can use a volume mount instead of "command" to execute your script at container startup:
volumes:
  - ./create_bucket.sh:/docker-entrypoint-initaws.d/create_bucket.sh
Also, as specified in their documentation, localstack must be precisely configured to work with docker-compose:
Please note that there's a few pitfalls when configuring your stack manually via docker-compose (e.g., required container name, Docker network, volume mounts, environment variables, etc.)
In your case I guess you are missing some volumes, the container name, and variables.
Here is an example of a docker-compose.yml found here, which I have more or less adapted to your case:
version: '3.8'
services:
  localstack:
    image: localstack/localstack
    container_name: localstack-example
    hostname: localstack
    ports:
      - "4566-4583:4566-4583"
    environment:
      # Declare which aws services will be used in localstack
      - SERVICES=s3
      - DEBUG=1
      # These variables are needed for localstack
      - AWS_DEFAULT_REGION=<region>
      - AWS_ACCESS_KEY_ID=<id>
      - AWS_SECRET_ACCESS_KEY=<access_key>
      - DOCKER_HOST=unix:///var/run/docker.sock
      - DATA_DIR=/tmp/localstack/data
    volumes:
      - "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
      - /var/run/docker.sock:/var/run/docker.sock
      - ./create_bucket.sh:/docker-entrypoint-initaws.d/create_bucket.sh
Other sources:
Running shell script against Localstack in docker container
https://docs.localstack.cloud/localstack/configuration/
If you exec into the container, the create_bucket.sh is not copied. I'm not sure why and I couldn't get it to work either.
However, I have a working solution if you're okay with using a startup script, since your goal is to bring up the container and create the bucket in a single command.
Assign a name to your container in docker-compose.yml:
version: '3.8'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack
    ports:
      - '4566-4583:4566-4583'
Update your create_bucket.sh to use awslocal instead; it is already available in the container. Using the aws CLI with an --endpoint-url requires aws configure as a prerequisite.
awslocal s3 mb s3://my-bucket
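For completeness, a minimal create_bucket.sh along those lines might look like this (the shebang and set -e are assumptions; the bucket name is from the question):
#!/bin/sh
set -e
# awslocal wraps the aws CLI with the local endpoint preset,
# so no --endpoint-url or credential configuration is needed.
awslocal s3 mb s3://my-bucket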
Finally, create a startup script that runs the list of commands to complete the initial setup.
docker-compose up -d
docker cp create_bucket.sh localstack:/usr/local/bin/
docker exec -it localstack sh -c "chmod +x /usr/local/bin/create_bucket.sh"
docker exec -it localstack sh -c "/usr/local/bin/create_bucket.sh"
Execute the startup script
sh startup.sh
To verify, exec into the running container and list the buckets; my-bucket should have been created:
docker exec -it localstack /bin/sh
awslocal s3 ls
Try executing the command below:
docker exec Container_ID Your_Command
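Applied to this question, with the container name and script path from the snippets above, that would be something like:
docker exec localstack sh -c "/usr/local/bin/create_bucket.sh"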
I am trying to maintain dev, test and master environments on the same host by using Gitlab CI.
.gitlab-ci.yml:
workflow:
  rules:
    - if: $CI_COMMIT_MESSAGE =~ /-no-ci$/
      when: never
    - if: $CI_COMMIT_BRANCH == "master"
    - if: $CI_COMMIT_BRANCH == "test"
    - if: $CI_COMMIT_BRANCH == "dev"

before_script:
  - cp /home/gitlab-runner/my-app/.env.$CI_COMMIT_BRANCH ./.env.$CI_COMMIT_BRANCH
  - echo '' >> ./.env
  - echo 'export CI_COMMIT_BRANCH="${CI_COMMIT_BRANCH}"' >> ./.env.$CI_COMMIT_BRANCH
  - "cat .env.$CI_COMMIT_BRANCH | envsubst > .env"
  - source .env
  - "cat Dockerfile.template | envsubst > Dockerfile"
  - rm .env.$CI_COMMIT_BRANCH Dockerfile.template

build-master:
  stage: build
  script:
    - sudo docker-compose build
    - sudo docker-compose up -d
  only:
    - master
  tags:
    - master

build-test:
  stage: build
  script:
    - sudo docker-compose build
    - sudo docker-compose up -d
  only:
    - test
  tags:
    - test

build-dev:
  stage: build
  script:
    - sudo docker-compose build
    - sudo docker-compose up -d
  only:
    - dev
  tags:
    - dev
Since I have to start three containers (webserver, db and pgadmin), I use docker-compose.yml:
version: '3.1'
services:
  webserver:
    build:
      context: ./
    restart: always
    image: my-app/${CI_COMMIT_BRANCH}
    container_name: my-app-webserver-${CI_COMMIT_BRANCH}
    volumes:
      - ${PATH}:/app/${PATH}
    ports:
      - ${WEBSERVER_PORT}:${WEBSERVER_PORT}
    depends_on:
      - db
  db:
    image: postgres:12.2
    restart: always
    container_name: my-app-db-${CI_COMMIT_BRANCH}
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - ${HOST_DB_DATA}:/var/lib/postgresql/data
  pgadmin:
    image: dpage/pgadmin4:4.18
    restart: always
    container_name: my-app-pgadmin-${CI_COMMIT_BRANCH}
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD}
      PGADMIN_LISTEN_PORT: 80
    ports:
      - ${PGADMIN_PORT}:80
    volumes:
      - ${PGADMIN_DATA}:/var/lib/pgadmin/
    links:
      - "db:pgsql-server"
    depends_on:
      - db
networks:
  default:
    name: my-app-network-${CI_COMMIT_BRANCH}
On the host I have /etc/gitlab-runner/config.toml:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "my-app-shell-prod"
  url = "https://gitlab.my-app.com/"
  token = "abcdefgabcdefg"
  executor = "shell"
  [runners.custom_build_dir]
  [runners.cache]
    privileged = true
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]

[[runners]]
  name = "my-app-shell-test"
  url = "https://gitlab.my-app.com/"
  token = "abcdefgabcdefg"
  executor = "shell"
  [runners.custom_build_dir]
  [runners.cache]
    privileged = true
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]

[[runners]]
  name = "my-app-shell-dev"
  url = "https://gitlab.my-app.com/"
  token = "abcdefgabcdefg"
  executor = "shell"
  [runners.custom_build_dir]
  [runners.cache]
    privileged = true
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
My initial thought was to use three different shell runners with tags master, test and dev to maintain independent builds, each with a corresponding docker-compose.yml.
When I push to dev, the pipeline passes successfully and I have three containers with names ...-dev running. However, if I merge dev into master, another pipeline passes, but then only three ...-master containers are running (while the job logs show that the -dev containers have been recreated):
...
$ sudo docker-compose up -d
Recreating my-app-db-dev ...
Recreating my-app-db-dev ... done
Recreating my-app-webserver-dev ...
Recreating my-app-pgadmin-dev ...
Recreating my-app-pgadmin-dev ... done
Recreating my-app-webserver-dev ... done
I found an article on overriding docker-compose.yml files. However, when I try to use -f in the script section of .gitlab-ci.yml, the job log shows a help message in which no -f option is listed.
# docker-compose -v
docker-compose version 1.29.2, build 5becea4c
The question is: how do I maintain the docker-compose logic without using -f override.yml? Just create three versions of docker-compose.yml in three folders and point .gitlab-ci.yml at them?
Update
Why do I refuse to use docker:dind? I cannot find the right way to run a job that starts docker-in-docker, since again you would need three docker-compose.yml files just to start the docker:dind containers. Am I missing something?
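For reference, in docker-compose 1.29 the -f flag is a global option that must come before the subcommand, which may be why it does not show up in the subcommand's help output. A sketch of the override invocation (the override file name is illustrative):
sudo docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
Independently of -f, Compose namespaces containers by project name, so exporting e.g. COMPOSE_PROJECT_NAME=my-app-$CI_COMMIT_BRANCH in before_script is another way to keep one branch's up from recreating another branch's containers.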
I have been breaking my head over this problem the past few hours. My setup is as follows:
Running Ubuntu 19.10 hosting docker
Running this docker-compose.yml file:
version: '3.7'
services:
  jenkins:
    command: --httpPort=80 --httpsPort=443 --httpsKeyStore=/var/jenkins_ssl/jenkins.jks --httpsKeyStorePassword=very-secret-password
    build: .
    privileged: true
    user: root
    ports:
      - 80:8080
      - 50000:50000
      - 443:443
    container_name: jenkins
    volumes:
      - ~/jenkins:/var/jenkins_home
      - ~/jenkins-ssl:/var/jenkins_ssl
      - ~/app_config/:/var/app_config/
      - ~/app_logs/:/var/app_logs/
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/local/bin/docker:/usr/local/bin/docker
The host has the group docker with id 998. My Dockerfile for the jenkins container is as follows:
USER root
ARG DOCKER_GID=998
RUN groupadd -g ${DOCKER_GID} docker
RUN usermod -aG docker jenkins
However, when I try to build something in a Jenkins job, the following error pops up:
/var/jenkins_home/workspace/cicd-test-backend#tmp/durable-79b7d7d6/script.sh: 1: /var/jenkins_home/workspace/cicd-test-backend#tmp/durable-79b7d7d6/script.sh: docker: Permission denied
script returned exit code 127
The build stage is:
steps {
    script {
        docker.withRegistry('localhost:5000') {
            def image = docker.build("url-shortener-backend:${env.BUILD_ID}")
            image.push()
        }
    }
}
Can anyone spot what is going wrong here? Thanks in advance :)
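A diagnostic sketch for narrowing this down (container and user names taken from the snippets above): compare the GID that owns the mounted socket with the groups visible inside the container.
# On the host: which GID owns the socket?
stat -c '%g' /var/run/docker.sock
# Inside the container: does the jenkins user have that group?
docker exec jenkins id jenkins
docker exec jenkins stat -c '%g' /var/run/docker.sock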
I'm trying to build a CI pipeline in GitLab, and I'd like to ask about making Docker work in GitLab CI.
From this issue: https://gitlab.com/gitlab-org/gitlab-runner/issues/4501#note_195033385
I followed the instructions both ways: with TLS and without TLS.
But it's still stuck, with the same error:
Cannot connect to the Docker daemon at tcp://localhost:2375/. Is the docker daemon running?
I've tried to troubleshoot this problem as follows.
Enable TLS
This uses .gitlab-ci.yml and config.toml to enable TLS in the runner.
This is my .gitlab-ci.yml:
image: docker:19.03

variables:
  DOCKER_HOST: tcp://localhost:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
  IMAGE_NAME: image_name

services:
  - docker:19.03-dind

stages:
  - build

publish:
  stage: build
  script:
    - docker build -t $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10) .
    - docker push $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10)
  only:
    - master
And this is my config.toml:
[[runners]]
  name = "MY_RUNNER"
  url = "MY_HOST"
  token = "MY_TOKEN_RUNNER"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:stable"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/certs/client", "/cache"]
    shm_size = 0
Disable TLS
.gitlab-ci.yml:
image: docker:18.09

variables:
  DOCKER_HOST: tcp://localhost:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  IMAGE_NAME: image_name

services:
  - docker:18.09-dind

stages:
  - build

publish:
  stage: build
  script:
    - docker build -t $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10) .
    - docker push $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10)
  only:
    - master
And this is my config.toml:
[[runners]]
  environment = ["DOCKER_TLS_CERTDIR="]
Does anyone have an idea?
Solution
See the accepted answer. Moreover, in my case and another one, it looks like the root cause was that the Linux server hosting GitLab didn't have permission to connect to Docker. Check the permissions and connectivity between GitLab and Docker on your server.
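A typical way to check and fix that on the host (a sketch, assuming the runner executes jobs as the gitlab-runner user):
# Can the runner user reach the Docker daemon?
sudo -u gitlab-runner docker info
# If not, grant it access and restart the runner
sudo usermod -aG docker gitlab-runner
sudo gitlab-runner restart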
You want to set DOCKER_HOST to tcp://docker:2375. It's a "service", i.e. it runs in a separate container that is by default named after the image name, rather than on localhost.
Here's a .gitlab-ci.yml snippet that should work:
# Build and push the Docker image off of merges to master; based off
# of Gitlab CI support in https://pythonspeed.com/products/pythoncontainer/
docker-build:
  stage: build
  image:
    # An alpine-based image with the `docker` CLI installed.
    name: docker:stable
  # This will run a Docker daemon in a container (Docker-In-Docker), which will
  # be available at thedockerhost:2375. If you make e.g. port 5000 public in Docker
  # (`docker run -p 5000:5000 yourimage`) it will be exposed at thedockerhost:5000.
  services:
    - name: docker:dind
      alias: thedockerhost
  variables:
    # Tell docker CLI how to talk to Docker daemon; see
    # https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-executor
    DOCKER_HOST: tcp://thedockerhost:2375/
    # Use the overlayfs driver for improved performance:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  script:
    # Download bash:
    - apk add --no-cache bash python3
    # GitLab has a built-in Docker image registry, whose parameters are set automatically.
    # See https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#using-the-gitlab-contai
    #
    # CHANGEME: You can use some other Docker registry though by changing the
    # login and image name.
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  # Only build off of master branch:
  only:
    - master
You can try to disable TLS:
services:
  - name: docker:dind
    entrypoint: ["dockerd-entrypoint.sh", "--tls=false"]
script:
  - export DOCKER_HOST=tcp://127.0.0.1:2375 && docker build --pull -t ${CI_REGISTRY_IMAGE} .
There is also an interesting read at https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27300:
docker:dind v20 sleeps for 16 seconds if you don't have TLS explicitly disabled, and that causes race condition where build container starts earlier than dockerd container
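Besides disabling TLS as above, a common mitigation for that race is to wait for the daemon before using it; a minimal sketch for the job's before_script:
before_script:
  # Poll until the dind daemon accepts connections
  - until docker info > /dev/null 2>&1; do sleep 1; done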
Try this .gitlab-ci.yml file. It worked for me when I specified the DOCKER_HOST:
docker-build:
  stage: build
  image:
    # An alpine-based image with the `docker` CLI installed.
    name: docker:stable
  # This will run a Docker daemon in a container (Docker-In-Docker), which will
  # be available at thedockerhost:2375. If you make e.g. port 5000 public in Docker
  # (`docker run -p 5000:5000 yourimage`) it will be exposed at thedockerhost:5000.
  services:
    - name: docker:dind
      alias: thedockerhost
  variables:
    DOCKER_HOST: tcp://thedockerhost:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  script:
    # Download bash:
    - apk add --no-cache bash python3
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master
For me the accepted answer didn't work. Instead I configured the TLS certificate volume for the runner
[[runners]]
  ...
  [runners.docker]
    ...
    volumes = ["/certs/client", "/cache"]
and I added a variable for the certificate directory in my .gitlab-ci.yaml
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
according to this article:
https://about.gitlab.com/blog/2019/07/31/docker-in-docker-with-docker-19-dot-03/#configure-tls
and this one:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#docker-in-docker-with-tls-enabled-in-the-docker-executor
You can remove DOCKER_HOST from the .gitlab-ci file. That trick will do the magic.
The problem
I have made a project with docker-compose. It works well on localhost. I want to use this base to test and analyze code with GitLab Runner. I solved a lot of problems, like installing docker-compose, building and running selected containers, and running commands in containers. The first job ran and succeeded (!!!), but the subsequent jobs failed before "before_script":
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
...
Error response from daemon: Conflict.
...
Error response from daemon: Conflict.
I don't understand why. What am I doing wrong? I repeat: the first job of the pipeline runs well with a "success" message! Every other job of the pipeline fails.
Full output:
Running with gitlab-ci-multi-runner 9.4.0 (ef0b1a6)
on XXX Runner (fdc0d656)
Using Docker executor with image docker:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image docker:dind ID=sha256:5096e5a0cba00693905879b09e24a487dc244b56e8e15349fd5b71b432c6ec9f for docker service...
ERROR: Preparation failed: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Will be retried in 3s ...
Using Docker executor with image docker:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image docker:dind ID=sha256:5096e5a0cba00693905879b09e24a487dc244b56e8e15349fd5b71b432c6ec9f for docker service...
ERROR: Preparation failed: Error response from daemon: Conflict. The container name "/runner-fdc0d656-project-35-concurrent-0-docker" is already in use by container "80918876ffe53e33ce1f069e6e545f03a15469af6596852457f11dbc7a6c5b58". You have to remove (or rename) that container to be able to reuse that name.
Will be retried in 3s ...
Using Docker executor with image docker:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image docker:dind ID=sha256:5096e5a0cba00693905879b09e24a487dc244b56e8e15349fd5b71b432c6ec9f for docker service...
ERROR: Preparation failed: Error response from daemon: Conflict. The container name "/runner-fdc0d656-project-35-concurrent-0-docker" is already in use by container "80918876ffe53e33ce1f069e6e545f03a15469af6596852457f11dbc7a6c5b58". You have to remove (or rename) that container to be able to reuse that name.
Will be retried in 3s ...
ERROR: Job failed (system failure): Error response from daemon: Conflict. The container name "/runner-fdc0d656-project-35-concurrent-0-docker" is already in use by container "80918876ffe53e33ce1f069e6e545f03a15469af6596852457f11dbc7a6c5b58". You have to remove (or rename) that container to be able to reuse that name.
Files
.gitlab-ci.yml
# Select image from https://hub.docker.com/r/_/php/
image: docker:latest

# Services
services:
  - docker:dind

stages:
  - build
  - test
  - deploy

cache:
  key: ${CI_BUILD_REF_NAME}
  untracked: true
  paths:
    - vendor
    - var

variables:
  DOCKER_CMD: docker exec --user user bin
  COMPOSE_HTTP_TIMEOUT: 300

before_script:
  - apk add --no-cache py-pip bash
  - pip install docker-compose
  - touch ~/.gitignore
  - bin/docker-init.sh
  - cp app/config/parameters.gitlab-ci.yml app/config/parameters.yml
  - cp app/config/nodejs_parameters.yml.dist app/config/nodejs_paramteres.yml
  - chmod -R 777 app/cache app/logs var
  # Load only binary and mysql
  - docker-compose up -d binary mysql

build:
  stage: build
  script:
    - ${DOCKER_CMD} composer install -n
    - ${DOCKER_CMD} php app/console doctrine:database:create --env=test --if-not-exists
    - ${DOCKER_CMD} php app/console doctrine:migrations:migrate --env=test

codeSniffer:
  stage: test
  script:
    - ${DOCKER_CMD} bin/php-cs-fixer fix --dry-run --config-file=.php_cs

database:
  stage: test
  script:
    - ${DOCKER_CMD} php app/console doctrine:mapping:info --env=test
    - ${DOCKER_CMD} php app/console doctrine:schema:validate --env=test
    - ${DOCKER_CMD} php app/console doctrine:fixtures:load --env=test

unittest:
  stage: test
  script:
    - ${DOCKER_CMD} bin/phpunit -c app --debug

deploy_demo:
  stage: deploy
  script:
    - echo "Deploy to staging server"
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - develop

deploy_prod:
  stage: deploy
  script:
    - echo "Deploy to production server"
  environment:
    name: production
    url: https://example.com
  when: manual
  only:
    - master
docker-compose.yml
version: "2"
services:
  web:
    image: nginx:latest
    ports:
      - "${HTTP_PORT}:80"
    depends_on:
      - mysql
      - elasticsearch
      - binary
    links:
      - binary:php
    volumes:
      - ".:/var/www"
      - "./app/config/docker/vhost.conf:/etc/nginx/conf.d/site.conf"
      - "${BASE_LOG_DIR}/nginx:/var/log/nginx"
  mysql:
    image: mysql:5.6
    environment:
      MYSQL_USER: test
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: test
    ports:
      - "${MYSQL_PORT}:3306"
    volumes:
      - "${BASE_LOG_DIR}/mysql:/var/log/mysql"
      - "${BASE_MYSQL_DATA_DIR}:/var/lib/mysql"
      - "./app/config/docker/mysql.cnf:/etc/mysql/conf.d/mysql.cnf"
  elasticsearch:
    image: elasticsearch:1.7.6
    ports:
      - "${ELASTICSEARCH_PORT}:9200"
    volumes:
      - "${BASE_ELASTICSEARCH_DATA_DIR}:/usr/share/elasticsearch/data"
  binary:
    image: fchris82/kunstmaan-test
    container_name: bin
    volumes:
      - ".:/var/www"
      - "${BASE_LOG_DIR}/php:/var/log/php"
      - "~/.ssh:/home/user/.ssh"
    tty: true
    environment:
      LOCAL_USER_ID: ${LOCAL_USER_ID}
config.toml
[[runners]]
  name = "XXX Runner"
  url = "https://gitlab.xxx.xx/"
  token = "xxxxxxxxxxx"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:latest"
    privileged = true
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
  [runners.cache]
OK, I found the problem: I had spoilt the configuration. If you use the dind service in .gitlab-ci.yml, then don't use the /var/run/docker.sock volume in the config.toml file; or, vice versa, if you use the "socket" method, don't use the dind service.
More information: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html
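To make the two mutually exclusive setups concrete, a sketch of the socket-binding variant (illustrative values; with dind you would instead keep privileged = true and the services: entry, and drop the socket volume):
[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    # Socket binding: jobs talk to the host daemon directly,
    # so no docker:dind service is declared in .gitlab-ci.yml.
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]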