The problem
I have built a project with docker compose. It works well on localhost. I want to use this setup to test and analyze code with GitLab Runner. I have already solved a lot of problems, such as installing docker compose, building and running selected containers, and running commands inside a container. The first job ran and succeeded (!!!), but the following jobs failed before "before_script":
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
...
Error response from daemon: Conflict.
...
Error response from daemon: Conflict.
I don't understand why. What am I doing wrong? I repeat: the first job of the pipeline runs fine and ends with a "success" message! Every other job in the pipeline fails.
Full output:
Running with gitlab-ci-multi-runner 9.4.0 (ef0b1a6)
on XXX Runner (fdc0d656)
Using Docker executor with image docker:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image docker:dind ID=sha256:5096e5a0cba00693905879b09e24a487dc244b56e8e15349fd5b71b432c6ec9f for docker service...
ERROR: Preparation failed: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Will be retried in 3s ...
Using Docker executor with image docker:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image docker:dind ID=sha256:5096e5a0cba00693905879b09e24a487dc244b56e8e15349fd5b71b432c6ec9f for docker service...
ERROR: Preparation failed: Error response from daemon: Conflict. The container name "/runner-fdc0d656-project-35-concurrent-0-docker" is already in use by container "80918876ffe53e33ce1f069e6e545f03a15469af6596852457f11dbc7a6c5b58". You have to remove (or rename) that container to be able to reuse that name.
Will be retried in 3s ...
Using Docker executor with image docker:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image docker:dind ID=sha256:5096e5a0cba00693905879b09e24a487dc244b56e8e15349fd5b71b432c6ec9f for docker service...
ERROR: Preparation failed: Error response from daemon: Conflict. The container name "/runner-fdc0d656-project-35-concurrent-0-docker" is already in use by container "80918876ffe53e33ce1f069e6e545f03a15469af6596852457f11dbc7a6c5b58". You have to remove (or rename) that container to be able to reuse that name.
Will be retried in 3s ...
ERROR: Job failed (system failure): Error response from daemon: Conflict. The container name "/runner-fdc0d656-project-35-concurrent-0-docker" is already in use by container "80918876ffe53e33ce1f069e6e545f03a15469af6596852457f11dbc7a6c5b58". You have to remove (or rename) that container to be able to reuse that name.
Files
.gitlab-ci.yml
# Select image from https://hub.docker.com/r/_/php/
image: docker:latest
# Services
services:
- docker:dind
stages:
- build
- test
- deploy
cache:
key: ${CI_BUILD_REF_NAME}
untracked: true
paths:
- vendor
- var
variables:
DOCKER_CMD: docker exec --user user bin
COMPOSE_HTTP_TIMEOUT: 300
before_script:
- apk add --no-cache py-pip bash
- pip install docker-compose
- touch ~/.gitignore
- bin/docker-init.sh
- cp app/config/parameters.gitlab-ci.yml app/config/parameters.yml
- cp app/config/nodejs_parameters.yml.dist app/config/nodejs_paramteres.yml
- chmod -R 777 app/cache app/logs var
# Load only binary and mysql
- docker-compose up -d binary mysql
build:
stage: build
script:
- ${DOCKER_CMD} composer install -n
- ${DOCKER_CMD} php app/console doctrine:database:create --env=test --if-not-exists
- ${DOCKER_CMD} php app/console doctrine:migrations:migrate --env=test
codeSniffer:
stage: test
script:
- ${DOCKER_CMD} bin/php-cs-fixer fix --dry-run --config-file=.php_cs
database:
stage: test
script:
- ${DOCKER_CMD} php app/console doctrine:mapping:info --env=test
- ${DOCKER_CMD} php app/console doctrine:schema:validate --env=test
- ${DOCKER_CMD} php app/console doctrine:fixtures:load --env=test
unittest:
stage: test
script:
- ${DOCKER_CMD} bin/phpunit -c app --debug
deploy_demo:
stage: deploy
script:
- echo "Deploy to staging server"
environment:
name: staging
url: https://staging.example.com
only:
- develop
deploy_prod:
stage: deploy
script:
- echo "Deploy to production server"
environment:
name: production
url: https://example.com
when: manual
only:
- master
docker-compose.yml
version: "2"
services:
web:
image: nginx:latest
ports:
- "${HTTP_PORT}:80"
depends_on:
- mysql
- elasticsearch
- binary
links:
- binary:php
volumes:
- ".:/var/www"
- "./app/config/docker/vhost.conf:/etc/nginx/conf.d/site.conf"
- "${BASE_LOG_DIR}/nginx:/var/log/nginx"
mysql:
image: mysql:5.6
environment:
MYSQL_USER: test
MYSQL_PASSWORD: test
MYSQL_ROOT_PASSWORD: test
ports:
- "${MYSQL_PORT}:3306"
volumes:
- "${BASE_LOG_DIR}/mysql:/var/log/mysql"
- "${BASE_MYSQL_DATA_DIR}:/var/lib/mysql"
- "./app/config/docker/mysql.cnf:/etc/mysql/conf.d/mysql.cnf"
elasticsearch:
image: elasticsearch:1.7.6
ports:
- "${ELASTICSEARCH_PORT}:9200"
volumes:
- "${BASE_ELASTICSEARCH_DATA_DIR}:/usr/share/elasticsearch/data"
binary:
image: fchris82/kunstmaan-test
container_name: bin
volumes:
- ".:/var/www"
- "${BASE_LOG_DIR}/php:/var/log/php"
- "~/.ssh:/home/user/.ssh"
tty: true
environment:
LOCAL_USER_ID: ${LOCAL_USER_ID}
config.toml
[[runners]]
name = "XXX Runner"
url = "https://gitlab.xxx.xx/"
token = "xxxxxxxxxxx"
executor = "docker"
[runners.docker]
tls_verify = false
image = "docker:latest"
privileged = true
disable_cache = false
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
shm_size = 0
[runners.cache]
OK, I found the problem: I had broken the configuration myself. If you use the dind service in .gitlab-ci.yml, then don't mount the /var/run/docker.sock volume in the config.toml file; or vice versa, if you use the "socket" method, don't use the dind service.
More information: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html
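For the dind route, the fix boils down to removing the socket mount from config.toml while keeping privileged = true for the docker:dind service. A minimal sketch of the corrected [runners.docker] section, based on the file above (only the volumes line changes):
[runners.docker]
  tls_verify = false
  image = "docker:latest"
  privileged = true
  disable_cache = false
  # no /var/run/docker.sock bind mount when the dind service is used
  volumes = ["/cache"]
  shm_size = 0
With the "socket" method it is the other way around: keep "/var/run/docker.sock:/var/run/docker.sock" in volumes and remove the docker:dind entry from services in .gitlab-ci.yml.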
Related
I am using Bitbucket as my repository. I created a Dockerfile and set up a runner to execute things on my machine.
The issue is that when I run the docker build command, I get the error below:
+ docker build -t my_app .
failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial tcp 127.0.0.1:2375: connect: connection refused
here is my pipeline file:
# definitions:
# services:
# docker:
# image: docker:dind
# options:
# docker: true
pipelines:
default:
- step:
runs-on:
- self.hosted
- linux.shell
# services:
# - docker
script:
- echo $HOSTNAME
- export DOCKER_BUILDKIT=1
- docker build -t my_app .
I tried to use:
definitions:
services:
docker:
image: docker:dind
But I was getting this error: Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?
I tried to add
services:
- docker
But again no luck...
Could you help me set up and build my Dockerfile when I have a local PC runner? Is it possible at all?
I solved my problem by changing my runner type from linux.shell to a Linux Docker runner, and my pipeline changed accordingly:
definitions:
services:
docker:
image: docker:dind
pipelines:
default:
- step:
runs-on:
- self.hosted
- linux
services:
- docker
script:
- echo $HOSTNAME
- docker version
- docker build -t my_app .
I have an issue with GitLab Runner using the docker:dind service.
I'm trying to run a docker-compose file with a simple volume in a job. Here is the job:
test_e2e:
image: tmaier/docker-compose
stage: test
services:
- docker:dind
variables:
GIT_STRATEGY: none
GIT_CHECKOUT: "false"
DOCKER_DRIVER: overlay2
before_script:
- ls
script:
- cp .env.dist .env
- docker-compose -f docker-compose.yml -f docker-compose-ci.yml up -d
The job starts normally, but a container from docker-compose-ci.yml doesn't seem to mount the volume as specified. Here is docker-compose-ci.yml:
version: '3.3'
services:
wait_app:
image: dadarek/wait-for-dependencies
networks:
- internal
depends_on:
- traefik
- webapp
command: webapp:3000
cypress:
# the Docker image to use from https://github.com/cypress-io/cypress-docker-images
image: "cypress/included:6.5.0"
networks:
- internal
depends_on:
- traefik
- webapp
- api
- mysql
- redis
environment:
# pass base url to test pointing at the web application
- CYPRESS_baseUrl=http://app.localhost:3000
working_dir: /cypress
volumes:
- ./cypress/:/cypress
If I run "docker exec app_cypress_1 sh -c "ls -al" || 1" to list the /cypress folder inside the cypress container, it shows nothing, even though the files are there on the host.
However, when I tried a different runner version, 13.7.0 instead of 13.5.0, it worked as expected.
Where could the issue be? Is it the GitLab Runner, or is there another parameter I can change to make it work?
Thank you
I want to deploy a docker stack on my own server. I've written a .gitlab-ci.yml file that currently builds the images in my stack and pushes them to my gitlab registry:
build:
stage: build
image: docker:stable
services:
- docker:dind
before_script:
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
- docker info
script:
- docker build -t $DOCKER_IMAGE1_TAG -f dir1/Dockerfile ./dir1
- docker push $DOCKER_IMAGE1_TAG
- docker build -t $DOCKER_IMAGE2_TAG -f dir2/Dockerfile ./dir2
- docker push $DOCKER_IMAGE2_TAG
I'm struggling to find a way to run the docker stack deploy command on my own server with the docker-compose.yml file I've written, which successfully pulls the images from my GitLab registry. I figure I could use sshpass to SSH into my server, copy the docker-compose.yml file across, and run docker stack deploy from there, but I'm not sure of the best way to allow my server to access the images now located in my GitLab registry:
# Need to SSH into the server, transfer over the docker-stack file and run docker stack deploy
deploy:
stage: deploy
environment:
name: production
image: trion/ng-cli-karma
before_script:
- apt-get update -qq && apt-get install -y -qq sshpass
- eval $(ssh-agent -s)
This is a section of my docker-compose file:
version: "3.2"
services:
octeditor:
image: image # how to set this to the image in my container registry?
ports:
- "3000:3000"
networks:
- front-tier
deploy:
replicas: 1
update_config:
parallelism: 1
failure_action: rollback
placement:
constraints:
- 'node.role == manager'
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
How can I pull the images from my gitlab registry? Is this the preferred way of creating a docker deployment on a remote server, via gitlab ci?
I had this same difficulty recently. I finally found out that the solution is simply to reference the image by its path in the private registry, as in my case with GitLab:
version: "3.2"
services:
octeditor:
image: registry.gitlab.com/project-or-group/project-name/image-name:tag
ports:
- "3000:3000"
networks:
- front-tier
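To cover the deployment itself, the approach described in the question (SSH in, copy the compose file, deploy) can be wired up roughly as below. This is only a sketch: DEPLOY_USER, DEPLOY_HOST and DEPLOY_PASS are hypothetical CI variables, "octeditor" is an arbitrary stack name, and the server logs in to the GitLab registry with the job token so docker stack deploy can pull the images:
deploy:
  stage: deploy
  environment:
    name: production
  image: trion/ng-cli-karma
  before_script:
    - apt-get update -qq && apt-get install -y -qq sshpass openssh-client
  script:
    # Copy the stack file to the server (hypothetical host/user/password variables)
    - sshpass -p "$DEPLOY_PASS" scp -o StrictHostKeyChecking=no docker-compose.yml "$DEPLOY_USER@$DEPLOY_HOST:docker-compose.yml"
    # Log the server in to the registry and deploy the stack
    - sshpass -p "$DEPLOY_PASS" ssh -o StrictHostKeyChecking=no "$DEPLOY_USER@$DEPLOY_HOST" "docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY && docker stack deploy --with-registry-auth -c docker-compose.yml octeditor"
Keep in mind that $CI_JOB_TOKEN is only valid while the job runs, so if the swarm needs to re-pull images later, a GitLab deploy token is the more durable way to authenticate the server against the registry.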
In GitLab, I have this .gitlab-ci.yml configuration to build a Docker image:
build:
stage: build
image: docker:stable
services:
- docker:stable-dind
script:
- docker build --tag example .
and it works. When I replace the image I'm using to build with google/cloud-sdk:latest:
build:
stage: build
image: google/cloud-sdk:latest
services:
- docker:stable-dind
script:
- docker build --tag example .
I get this error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I've seen plenty of articles talking about this but they all offer one of three solutions:
Run the dind service
Define DOCKER_HOST to tcp://localhost:2375/
Define DOCKER_HOST to tcp://docker:2375/
I'm already doing 1, so I tried 2 and 3:
build:
stage: build
image: google/cloud-sdk:latest
services:
- docker:stable-dind
variables:
DOCKER_HOST: tcp://localhost:2375/
script:
- docker build --tag example .
Both failed with this error:
Cannot connect to the Docker daemon at tcp://localhost:2375/. Is the docker daemon running?
What am I missing?
tcp://docker:2375 actually works, but while I was testing I still had - export DOCKER_HOST=tcp://localhost:2375 in the script from a previous experiment, so my changes in the variables section had no effect.
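For completeness, the job that ended up working looks like the sketch below (my understanding of the fix): keep the dind service, point DOCKER_HOST at the service hostname docker, and make sure no leftover export in the script overrides it:
build:
  stage: build
  image: google/cloud-sdk:latest
  services:
    - docker:stable-dind
  variables:
    # the dind service is reachable from the job under the hostname "docker"
    DOCKER_HOST: tcp://docker:2375/
  script:
    - docker build --tag example .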
I am trying to set up a job with gitlab CI to build a docker image from a dockerfile, but I am behind a proxy.
My .gitlab-ci.yml is as follows:
image: docker:stable
variables:
DOCKER_HOST: tcp://docker:2375
DOCKER_DRIVER: overlay2
HTTP_PROXY: $http_proxy
HTTPS_PROXY: $http_proxy
http_proxy: $http_proxy
https_proxy: $http_proxy
services:
- docker:dind
before_script:
- wget -O - www.google.com # just to test
- docker search node # just to test
- docker info # just to test
build:
stage: build
script:
- docker build -t my-docker-image .
wget works, meaning that the proxy setup is correct, in theory.
But the commands docker search, docker info and docker build do not work, apparently because of a proxy issue.
An excerpt from the job output:
$ docker search node
Warning: failed to get default registry endpoint from daemon (Error response from daemon:
[and here comes a huge raw HTML output including the following message: "504 - server did not respond to proxy"]
It appears Docker does not read the proxy settings from these environment variables.
Note: I am indeed using a runner in --privileged mode, as the documentation instructs.
How do I fix this?
If you want to use docker-in-docker (dind) in GitLab CI behind a proxy, you also need to set the NO_PROXY variable in your .gitlab-ci.yml file, so that requests to the host "docker" bypass the proxy.
This is the .gitlab-ci.yml that works with my dind setup:
image: docker:19.03.12
variables:
DOCKER_TLS_CERTDIR: "/certs"
HTTPS_PROXY: "http://my_proxy:3128"
HTTP_PROXY: "http://my_proxy:3128"
NO_PROXY: "docker"
services:
- docker:19.03.12-dind
before_script:
- docker info
build:
stage: build
script:
- docker run hello-world
Good luck!
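If the image you build also needs the proxy (for example to run apt or apk during docker build), the same values can be forwarded as build arguments, since Docker treats the proxy variables as predefined build args. A small sketch of such a job, reusing the variables above (my-docker-image is a placeholder name):
build-image:
  stage: build
  script:
    # forward the proxy settings into the image build
    - docker build --build-arg http_proxy="$HTTP_PROXY" --build-arg https_proxy="$HTTPS_PROXY" --build-arg no_proxy="$NO_PROXY" -t my-docker-image .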
Oddly, the solution for me was to use the special dind (docker-in-docker) image provided by GitLab instead, and it works without setting up services or anything else. The .gitlab-ci.yml that worked was as follows:
image: gitlab/dind:latest
before_script:
- wget -O - www.google.com
- docker search node
- docker info
build:
stage: build
script:
- docker build -t my-docker-image .
Don't forget that the gitlab-runner must be registered with the --privileged flag.
I was unable to get docker-in-docker (dind) working behind our corporate proxy.
In particular, even when following the instructions here, a docker build command would still fail when executing FROM <some_image>, as it was unable to download the image.
I had far more success using kaniko, which appears to be GitLab's current recommendation for doing Docker builds.
A simple build script for a .NET Core project then looks like:
build:
stage: build
image: $BUILD_IMAGE
script:
- dotnet build
- dotnet publish Console --output publish
artifacts:
# Upload all build artifacts to make them available for the deploy stage.
when: always
paths:
- "publish/*"
expire_in: 1 week
kaniko:
stage: dockerise
image:
name: gcr.io/kaniko-project/executor:debug
entrypoint: [""]
script:
# Construct a docker-file
- echo "FROM $RUNTIME_IMAGE" > Dockerfile
- echo "WORKDIR /app" >> Dockerfile
- echo "COPY /publish ." >> Dockerfile
- echo "CMD [\"dotnet\", \"Console.dll\"]" >> Dockerfile
# Authenticate against the Gitlab Docker repository.
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
# Run kaniko
- /kaniko/executor --context . --dockerfile Dockerfile --destination $CI_REGISTRY_IMAGE:$VersionSuffix
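Behind the same corporate proxy you may also need to hand the proxy settings to kaniko itself, since it downloads the base image and pushes the result on its own. A hedged sketch of variables added to the kaniko job above; the values are placeholders, and whether your kaniko version picks them up is worth verifying:
kaniko:
  stage: dockerise
  variables:
    # placeholder proxy values; verify that your kaniko version honours them
    HTTP_PROXY: "http://my_proxy:3128"
    HTTPS_PROXY: "http://my_proxy:3128"
    NO_PROXY: "gitlab.example.com"   # your internal GitLab/registry host, if it must bypass the proxy
  # image, entrypoint and script as in the job above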