I have the address of my Docker registry in a GitLab CI environment variable.
How can I use it in the context of a service command in my .gitlab-ci.yml?
services:
  - name: docker:dind
    command: ["--insecure-registry=$CI_REGISTRY"]  # this does not work

build:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE" .  # this works properly
    - docker push "$CI_REGISTRY_IMAGE"
I am using Bitbucket as my repository. I created a Dockerfile and set up a runner to execute things on my machine.
The issue is that when I run the docker build command, I get the error below:
+ docker build -t my_app .
failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial tcp 127.0.0.1:2375: connect: connection refused
Here is my pipeline file:

# definitions:
#   services:
#     docker:
#       image: docker:dind
#       options:
#         docker: true

pipelines:
  default:
    - step:
        runs-on:
          - self.hosted
          - linux.shell
        # services:
        #   - docker
        script:
          - echo $HOSTNAME
          - export DOCKER_BUILDKIT=1
          - docker build -t my_app .
I tried to use:

definitions:
  services:
    docker:
      image: docker:dind

But I was getting this error: Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?
I tried to add

services:
  - docker

But again, no luck...
Could you help me set up and build my Dockerfile when I have a local PC runner? Is it possible at all?
I solved my problem by changing my runner type from a shell runner (linux.shell) to a Linux Docker runner, and my pipeline changed accordingly:
definitions:
  services:
    docker:
      image: docker:dind

pipelines:
  default:
    - step:
        runs-on:
          - self.hosted
          - linux
        services:
          - docker
        script:
          - echo $HOSTNAME
          - docker version
          - docker build -t my_app .
The use case is to build an image and deploy it to Rancher 2.5.5 with .gitlab-ci.yml. Since envs couldn't be passed directly in my situation, I'm trying to build envs into the Docker image with docker-compose build (dev/stage concerns are the next thing; let's leave them for now). docker-compose run --env-file works, but docker-compose build ignores envs.
Any advice will be appreciated.
P.S. If you know a way to pass envs to the Rancher 2 container somehow from GitLab CI, that also resolves the problem.
I've tried the following:
set it in docker-compose:

version: '3'
services:
  testproject:
    build:
      context: .
    env_file: .env-dev
    image: example.registry/testimage:latest
set it in gitlab-ci:

variables:
  IMAGE: "$CI_REGISTRY_IMAGE:latest"

build-image:
  stage: build
  allow_failure: false
  tags:
    - docker
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker-compose --env-file .env-dev build
    - docker-compose push

deploy:
  stage: deploy
  image: kpolszewski/rancher2-gitlab-deploy
  tags:
    - docker
  script:
    - upgrade --cluster $CLUSTER --environment $ENV --stack $NAMESPACE --service $SERVICE --new-image $IMAGE
source it in the Dockerfile entrypoint
set it in a .env file
Nothing works.
I can see the new image in the registry and locally (when I test it locally), but there are no envs inside when I run the container.
If you want to set env values on the build stage, you can use build-args as follows:
FROM ...
ARG SOME_ARG
ENV SOME_ENV=$SOME_ARG
Then, in your docker-compose.yml:

version: '3'
services:
  testproject:
    build:
      context: .
      args:
        SOME_ARG: "SOME_VALUE"
    env_file: .env-dev
    image: example.registry/testimage:latest

But think twice: are you sure you want your ENV variables to be set dynamically at the build stage?
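To feed the ARG from a GitLab CI variable instead of hard-coding it, the build arg can also be passed on the command line. A minimal sketch, assuming a CI variable named SOME_CI_VARIABLE (a placeholder) is defined in the project settings:

```yaml
build-image:
  stage: build
  script:
    # docker-compose build forwards --build-arg values to the Dockerfile's ARG,
    # so the value baked into the image comes from the CI variable at build time.
    - docker-compose build --build-arg SOME_ARG="$SOME_CI_VARIABLE"
    - docker-compose push
```

This keeps the docker-compose.yml free of hard-coded values, at the cost of the same caveat as above: anything baked in at build time ends up in the image layers.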
I'm trying to build a CI pipeline in GitLab, and I'd like to ask about making Docker work in GitLab CI.
From this issue: https://gitlab.com/gitlab-org/gitlab-runner/issues/4501#note_195033385
I followed the instructions both ways, with TLS and without TLS,
but it's still stuck, with the same error:
Cannot connect to the Docker daemon at tcp://localhost:2375/. Is the docker daemon running
I tried to troubleshoot the problem as follows.
Enable TLS
This uses .gitlab-ci.yml and config.toml to enable TLS in the runner.
This is my .gitlab-ci.yml:

image: docker:19.03

variables:
  DOCKER_HOST: tcp://localhost:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
  IMAGE_NAME: image_name

services:
  - docker:19.03-dind

stages:
  - build

publish:
  stage: build
  script:
    - docker build -t $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10) .
    - docker push $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10)
  only:
    - master
And this is my config.toml (TOML strings must be quoted; the values here are placeholders):

[[runners]]
  name = "MY_RUNNER"
  url = "MY_HOST"
  token = "MY_TOKEN_RUNNER"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:stable"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/certs/client", "/cache"]
    shm_size = 0
Disable TLS
.gitlab-ci.yml:

image: docker:18.09

variables:
  DOCKER_HOST: tcp://localhost:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  IMAGE_NAME: image_name

services:
  - docker:18.09-dind

stages:
  - build

publish:
  stage: build
  script:
    - docker build -t $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10) .
    - docker push $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10)
  only:
    - master

And this is my config.toml:

[[runners]]
  environment = ["DOCKER_TLS_CERTDIR="]

Anyone have an idea?
Solution
See the accepted answer. Moreover, in my case and another one, the root cause turned out to be that the Linux server hosting GitLab didn't have permission to connect to Docker. Check the permissions and connectivity between GitLab and Docker on your server.
You want to set DOCKER_HOST to tcp://docker:2375. The daemon is a "service", i.e. it runs in a separate container that is by default named after the image name, rather than on localhost.
Here's a .gitlab-ci.yml snippet that should work:

# Build and push the Docker image off of merges to master; based off
# of GitLab CI support in https://pythonspeed.com/products/pythoncontainer/
docker-build:
  stage: build
  image:
    # An alpine-based image with the `docker` CLI installed.
    name: docker:stable
  # This will run a Docker daemon in a container (Docker-in-Docker), which will
  # be available at thedockerhost:2375. If you make e.g. port 5000 public in Docker
  # (`docker run -p 5000:5000 yourimage`) it will be exposed at thedockerhost:5000.
  services:
    - name: docker:dind
      alias: thedockerhost
  variables:
    # Tell the docker CLI how to talk to the Docker daemon; see
    # https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-executor
    DOCKER_HOST: tcp://thedockerhost:2375/
    # Use the overlayfs driver for improved performance:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  script:
    # Download bash:
    - apk add --no-cache bash python3
    # GitLab has a built-in Docker image registry, whose parameters are set automatically.
    # See https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#using-the-gitlab-contai
    #
    # CHANGEME: You can use some other Docker registry though by changing the
    # login and image name.
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  # Only build off of master branch:
  only:
    - master
You can try to disable TLS:

services:
  - name: docker:dind
    entrypoint: ["dockerd-entrypoint.sh", "--tls=false"]
script:
  - export DOCKER_HOST=tcp://127.0.0.1:2375 && docker build --pull -t ${CI_REGISTRY_IMAGE} .

There is an interesting read at https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27300:
docker:dind v20 sleeps for 16 seconds if you don't have TLS explicitly disabled, which causes a race condition where the build container starts earlier than the dockerd container.
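Given that startup race, another workaround is to wait for the daemon to come up before running any docker command. A minimal sketch, assuming DOCKER_HOST already points at the dind service:

```yaml
before_script:
  # Poll the daemon until it answers, instead of failing on the first docker call.
  - until docker info >/dev/null 2>&1; do sleep 1; done
```

This makes the job tolerant of the daemon's startup delay regardless of whether TLS is enabled or disabled.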
Try this .gitlab-ci.yml file. It worked for me once I specified DOCKER_HOST:
docker-build:
  stage: build
  image:
    # An alpine-based image with the `docker` CLI installed.
    name: docker:stable
  # This will run a Docker daemon in a container (Docker-in-Docker), which will
  # be available at thedockerhost:2375. If you make e.g. port 5000 public in Docker
  # (`docker run -p 5000:5000 yourimage`) it will be exposed at thedockerhost:5000.
  services:
    - name: docker:dind
      alias: thedockerhost
  variables:
    DOCKER_HOST: tcp://thedockerhost:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  script:
    # Download bash:
    - apk add --no-cache bash python3
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master
For me the accepted answer didn't work. Instead, I configured the TLS certificate volume for the runner:

[[runners]]
  ...
  [runners.docker]
    ...
    volumes = ["/certs/client", "/cache"]

and I added a variable for the certificate directory in my .gitlab-ci.yaml:

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

according to this article:
https://about.gitlab.com/blog/2019/07/31/docker-in-docker-with-docker-19-dot-03/#configure-tls
and this one:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#docker-in-docker-with-tls-enabled-in-the-docker-executor
You can remove DOCKER_HOST from the .gitlab-ci file altogether. That trick will do the magic.
In GitLab, I have this .gitlab-ci.yml configuration to build a Docker image:

build:
  stage: build
  image: docker:stable
  services:
    - docker:stable-dind
  script:
    - docker build --tag example .
and it works. When I replace the image I'm using to build with google/cloud-sdk:latest:

build:
  stage: build
  image: google/cloud-sdk:latest
  services:
    - docker:stable-dind
  script:
    - docker build --tag example .

I get this error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I've seen plenty of articles talking about this, but they all offer one of three solutions:
Run the dind service
Define DOCKER_HOST as tcp://localhost:2375/
Define DOCKER_HOST as tcp://docker:2375/
I'm already doing 1, so I tried 2 and 3:

build:
  stage: build
  image: google/cloud-sdk:latest
  services:
    - docker:stable-dind
  variables:
    DOCKER_HOST: tcp://localhost:2375/
  script:
    - docker build --tag example .

Both failed with this error:
Cannot connect to the Docker daemon at tcp://localhost:2375/. Is the docker daemon running?
What am I missing?
tcp://docker:2375 actually works. When I was trying it, though, I still had - export DOCKER_HOST=tcp://localhost:2375 in the script from a previous experiment, so my changes in the variables section had no effect.
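For reference, a minimal sketch of the working configuration, with no script-level export left over to shadow the variables section:

```yaml
build:
  stage: build
  image: google/cloud-sdk:latest
  services:
    - docker:stable-dind
  variables:
    # Points at the dind service container, not localhost.
    DOCKER_HOST: tcp://docker:2375/
  script:
    # No `export DOCKER_HOST=...` here: a script-level export would
    # silently override the variables section above.
    - docker build --tag example .
```

The gotcha is precedence: an export inside script runs after the job variables are applied, so it always wins.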
I am trying to set up a job with GitLab CI to build a Docker image from a Dockerfile, but I am behind a proxy.
My .gitlab-ci.yml is as follows:
image: docker:stable

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  HTTP_PROXY: $http_proxy
  HTTPS_PROXY: $http_proxy
  http_proxy: $http_proxy
  https_proxy: $http_proxy

services:
  - docker:dind

before_script:
  - wget -O - www.google.com # just to test
  - docker search node # just to test
  - docker info # just to test

build:
  stage: build
  script:
    - docker build -t my-docker-image .
wget works, meaning that the proxy setup is correct, in theory.
But the commands docker search, docker info, and docker build do not work, apparently because of a proxy issue.
An excerpt from the job output:

$ docker search node
Warning: failed to get default registry endpoint from daemon (Error response from daemon:
[and here comes a huge raw HTML output including the following message: "504 - server did not respond to proxy"]

It appears Docker does not read the environment variables to set up the proxy.
Note: I am indeed using a runner in --privileged mode, as the documentation instructs.
How do I fix this?
If you want to be able to use Docker-in-Docker (dind) in GitLab CI behind a proxy, you will also need to set the NO_PROXY variable in your gitlab-ci.yml file, with the host "docker" in it, so that calls to the dind service bypass the proxy.
This is the gitlab-ci.yml that works with my dind:

image: docker:19.03.12

variables:
  DOCKER_TLS_CERTDIR: "/certs"
  HTTPS_PROXY: "http://my_proxy:3128"
  HTTP_PROXY: "http://my_proxy:3128"
  NO_PROXY: "docker"

services:
  - docker:19.03.12-dind

before_script:
  - docker info

build:
  stage: build
  script:
    - docker run hello-world
Good luck!
Oddly, the solution was to use a special dind (docker-in-docker) image provided by GitLab instead; it works without setting up services or anything else. The .gitlab-ci.yml that worked was as follows:
image: gitlab/dind:latest

before_script:
  - wget -O - www.google.com
  - docker search node
  - docker info

build:
  stage: build
  script:
    - docker build -t my-docker-image .
Don't forget that the gitlab-runner must be registered with the --privileged flag.
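For completeness, registering a runner with privileged mode enabled looks roughly like this. A sketch only: the URL and token are placeholders, and the flags follow the classic gitlab-runner register syntax:

```shell
gitlab-runner register \
  --executor docker \
  --docker-image docker:stable \
  --docker-privileged \
  --url https://gitlab.example.com/ \
  --registration-token "$REGISTRATION_TOKEN"
```

The --docker-privileged flag is what writes privileged = true into the runner's config.toml, which dind needs in order to start its own daemon.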
I was unable to get Docker-in-Docker (dind) working behind our corporate proxy.
In particular, even when following the instructions here, a docker build command would still fail when executing FROM <some_image>, as it was not able to download the image.
I had far more success using kaniko, which appears to be GitLab's current recommendation for doing Docker builds.
A simple build script for a .NET Core project then looks like:

build:
  stage: build
  image: $BUILD_IMAGE
  script:
    - dotnet build
    - dotnet publish Console --output publish
  artifacts:
    # Upload all build artifacts to make them available for the deploy stage.
    when: always
    paths:
      - "publish/*"
    expire_in: 1 week
kaniko:
  stage: dockerise
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Construct a Dockerfile
    - echo "FROM $RUNTIME_IMAGE" > Dockerfile
    - echo "WORKDIR /app" >> Dockerfile
    - echo "COPY /publish ." >> Dockerfile
    - echo "CMD [\"dotnet\", \"Console.dll\"]" >> Dockerfile
    # Authenticate against the GitLab Docker repository.
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    # Run kaniko
    - /kaniko/executor --context . --dockerfile Dockerfile --destination $CI_REGISTRY_IMAGE:$VersionSuffix
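Behind the same corporate proxy, the build steps inside kaniko may also need the proxy passed through as build args. A hedged sketch, assuming the proxy URL from earlier in this thread and that the Dockerfile declares matching ARGs for any RUN steps that reach the network:

```yaml
  script:
    - /kaniko/executor --context . --dockerfile Dockerfile
      --build-arg HTTP_PROXY=http://my_proxy:3128
      --build-arg HTTPS_PROXY=http://my_proxy:3128
      --destination $CI_REGISTRY_IMAGE:$VersionSuffix
```

Since kaniko runs in the job container itself rather than talking to a separate daemon, the job's own proxy environment variables also apply when it pulls the base image.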