I've tried to pull an image through the GitLab Dependency Proxy. I've read the documentation at https://docs.gitlab.com/14.10/ee/user/packages/dependency_proxy/
# .gitlab-ci.yml
image: docker:19.03.12

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""

services:
  - docker:19.03.12-dind

build:
  image: docker:19.03.12
  before_script:
    - docker login -u $TOKEN_USERNAME -p $TOKEN_PASSWORD $CI_DEPENDENCY_PROXY_SERVER
  script:
    - docker pull ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/php:7-fpm-alpine3.15
I've used a token created in my group, but the console shows this error:
Error response from daemon: unauthorized: authentication required
Are $TOKEN_USERNAME and $TOKEN_PASSWORD defined? The documentation says to use the predefined variables $CI_DEPENDENCY_PROXY_USER and $CI_DEPENDENCY_PROXY_PASSWORD.
docker login -u $CI_DEPENDENCY_PROXY_USER -p $CI_DEPENDENCY_PROXY_PASSWORD $CI_DEPENDENCY_PROXY_SERVER
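For reference, a minimal sketch of the corrected job using those predefined variables (the image tag and dind version are just carried over from the question):

build:
  image: docker:19.03.12
  before_script:
    # The predefined dependency-proxy variables are available to CI jobs
    # of projects that live inside a group.
    - docker login -u $CI_DEPENDENCY_PROXY_USER -p $CI_DEPENDENCY_PROXY_PASSWORD $CI_DEPENDENCY_PROXY_SERVER
  script:
    - docker pull ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/php:7-fpm-alpine3.15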
Related
I have images on Google Container Registry that I moved from Docker Hub, and I have my docker-compose.yml. The compose file successfully pulled the images from Docker Hub, but I can't pull from Google Container Registry.
Steps to log in to the container registry:
gcloud auth revoke --all
gcloud auth login
gcloud config set project projectId
gcloud auth activate-service-account deploy@projectId.iam.gserviceaccount.com --key-file=service-account.json
gcloud auth configure-docker
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://asia.gcr.io
The login result is success.
docker-compose up
ERROR: pull access denied for [my_image_name], repository does not exist or may require 'docker login': denied: requested access to the resource is denied
I can pull the image with the command below:
docker pull asia.gcr.io/projectid/myimagename/data-api:latest
My docker-compose.yml:
version: "3.3"
services:
  data_api:
    container_name: myimagename-data-api
    image: myimagename/data-api
    expose:
      - 4000
    ports:
      - "4001:4000"
    depends_on:
      - db
    environment:
      DATABASE_URL: mysql://root:root@db:3306/myimagename
      ACCESS_TOKEN_SECRET: xxxxxxxxxx
      REFRESH_TOKEN_SECRET: xxxxxxxxx
    networks:
      - db-api
  db:
    container_name: myimagename-db
    image: myimagename/db
    restart: always
    volumes:
      - ./db/data/:/var/lib/mariadb/data
    environment:
      MARIADB_ROOT_PASSWORD: root
      MARIADB_DATABASE: myimagename
    expose:
      - 3306
    ports:
      - "3307:3306"
    networks:
      - db-api
networks:
  db-api:
If you look at the service-account.json file, you will see that it's not your "password" in the traditional sense, so piping it in as a stdin password will not work. EDIT: TIL, you can pipe a credentials file in as a password, per the docs.
I would recommend using the gcloud credential helper: you can log in as yourself if you have the permissions, or you can use a service account with its credentials.json file, which appears to be your case here. Be sure to have the correct IAM permissions on your service account (see the sketch after the role list below):
Pull (read) only:
roles/storage.objectViewer
Push (write) and Pull:
roles/storage.legacyBucketWriter
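Here is a hedged sketch of both login options (the key file name and the asia.gcr.io host are taken from the question; adjust as needed):

# Option 1: use gcloud as a Docker credential helper for the gcr.io hosts.
gcloud auth activate-service-account --key-file=service-account.json
gcloud auth configure-docker

# Option 2: log in directly with the JSON key; the special username _json_key
# tells GCR that the "password" piped on stdin is a service-account key file.
docker login -u _json_key --password-stdin https://asia.gcr.io < service-account.json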
OK, I finally found the issue: it's the image name. You can't use the same short image name as on Docker Hub; you need the full registry path.
image: asia.gcr.io/projectid/myimagename/data-api:latest
instead of myimagename/data-api
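In the compose file that means referencing the full path, something like this (a sketch based on the file above; the tag and region are assumptions):

  data_api:
    container_name: myimagename-data-api
    image: asia.gcr.io/projectid/myimagename/data-api:latest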
I'm trying to build a CI pipeline in GitLab, and I'd like to ask about making Docker work in GitLab CI.
From this issue: https://gitlab.com/gitlab-org/gitlab-runner/issues/4501#note_195033385
I followed the instructions for both approaches, with TLS and without TLS.
But it's still stuck on the same error:
Cannot connect to the Docker daemon at tcp://localhost:2375/. Is the docker daemon running
I've tried to troubleshoot this problem as follows.
Enable TLS
This uses .gitlab-ci.yml and config.toml to enable TLS in the runner.
This is my .gitlab-ci.yml:
image: docker:19.03

variables:
  DOCKER_HOST: tcp://localhost:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
  IMAGE_NAME: image_name

services:
  - docker:19.03-dind

stages:
  - build

publish:
  stage: build
  script:
    - docker build -t $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10) .
    - docker push $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10)
  only:
    - master
And this is my config.toml:
[[runners]]
  name = MY_RUNNER
  url = MY_HOST
  token = MY_TOKEN_RUNNER
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:stable"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/certs/client", "/cache"]
    shm_size = 0
Disable TLS
.gitlab-ci.yml:
image: docker:18.09

variables:
  DOCKER_HOST: tcp://localhost:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  IMAGE_NAME: image_name

services:
  - docker:18.09-dind

stages:
  - build

publish:
  stage: build
  script:
    - docker build -t $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10) .
    - docker push $IMAGE_NAME:$(echo $CI_COMMIT_SHA | cut -c1-10)
  only:
    - master
And this is my config.toml:
[[runners]]
  environment = ["DOCKER_TLS_CERTDIR="]
Does anyone have an idea?
Solution
See the accepted answer. Moreover, in my case and another one, it looks like the root cause was that the Linux server hosting GitLab didn't have permission to connect to Docker. Check the permissions and connectivity between GitLab and Docker on your server.
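A quick hedged check of that connectivity on the runner host (assuming a Linux box where the runner runs as the gitlab-runner user; adjust to your setup):

# Can the runner's user reach the local Docker daemon at all?
sudo -u gitlab-runner docker info

# If not, adding that user to the docker group is the usual fix.
sudo usermod -aG docker gitlab-runner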
You want to set DOCKER_HOST to tcp://docker:2375. It's a "service", i.e. running in a separate container, reachable by default under the image name rather than at localhost.
Here's a .gitlab-ci.yml snippet that should work:
# Build and push the Docker image off of merges to master; based off
# of Gitlab CI support in https://pythonspeed.com/products/pythoncontainer/
docker-build:
  stage: build
  image:
    # An alpine-based image with the `docker` CLI installed.
    name: docker:stable
  # This will run a Docker daemon in a container (Docker-In-Docker), which will
  # be available at thedockerhost:2375. If you make e.g. port 5000 public in Docker
  # (`docker run -p 5000:5000 yourimage`) it will be exposed at thedockerhost:5000.
  services:
    - name: docker:dind
      alias: thedockerhost
  variables:
    # Tell docker CLI how to talk to Docker daemon; see
    # https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-executor
    DOCKER_HOST: tcp://thedockerhost:2375/
    # Use the overlayfs driver for improved performance:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  script:
    # Download bash:
    - apk add --no-cache bash python3
    # GitLab has a built-in Docker image registry, whose parameters are set automatically.
    # See https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#using-the-gitlab-contai
    #
    # CHANGEME: You can use some other Docker registry though by changing the
    # login and image name.
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  # Only build off of master branch:
  only:
    - master
You can try to disable TLS:
services:
  - name: docker:dind
    entrypoint: ["dockerd-entrypoint.sh", "--tls=false"]

script:
  - export DOCKER_HOST=tcp://127.0.0.1:2375 && docker build --pull -t ${CI_REGISTRY_IMAGE} .
There is an interesting read at https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27300:
docker:dind v20 sleeps for 16 seconds if you don't have TLS explicitly disabled, and that causes a race condition where the build container starts earlier than the dockerd container.
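If you hit that race condition, one hedged workaround, besides disabling TLS explicitly with DOCKER_TLS_CERTDIR: "", is simply to wait for the daemon before using it, for example:

before_script:
  # Keep retrying until the dind daemon answers instead of failing on the first call.
  - until docker info > /dev/null 2>&1; do echo "waiting for docker daemon..."; sleep 1; done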
Try this .gitlab-ci.yml file. It worked for me once I specified DOCKER_HOST:
docker-build:
  stage: build
  image:
    # An alpine-based image with the `docker` CLI installed.
    name: docker:stable
  # This will run a Docker daemon in a container (Docker-In-Docker), which will
  # be available at thedockerhost:2375. If you make e.g. port 5000 public in Docker
  # (`docker run -p 5000:5000 yourimage`) it will be exposed at thedockerhost:5000.
  services:
    - name: docker:dind
      alias: thedockerhost
  variables:
    DOCKER_HOST: tcp://thedockerhost:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  script:
    # Download bash:
    - apk add --no-cache bash python3
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master
For me, the accepted answer didn't work. Instead, I configured the TLS certificate volume for the runner:
[[runners]]
  ...
  [runners.docker]
    ...
    volumes = ["/certs/client", "/cache"]
and I added a variable for the certificate directory in my .gitlab-ci.yml:
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
according to this article:
https://about.gitlab.com/blog/2019/07/31/docker-in-docker-with-docker-19-dot-03/#configure-tls
and this one:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#docker-in-docker-with-tls-enabled-in-the-docker-executor
You can remove DOCKER_HOST from the .gitlab-ci.yml file. That trick will do the magic.
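Putting those pieces together, a hedged sketch of a TLS-enabled job could look like this (the docker:19.03 tags are assumptions; the runner must mount /certs/client as shown above):

build:
  image: docker:19.03
  services:
    - docker:19.03-dind
  variables:
    DOCKER_DRIVER: overlay2
    # Share the certificates generated by dind; the docker image's entrypoint
    # then finds the dind service and the TLS settings on its own, which is
    # why DOCKER_HOST can be dropped.
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker info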
My pipeline can't log in to my private registry, which doesn't have SSL, so when the pipeline does docker login it tries to connect over HTTPS.
I added the command option as seen in other answers on Stack Overflow:
services:
  - name: docker:dind
    command: ["--insecure-registry=$REGISTRY_URL"]
I also added this to /etc/docker/daemon.json:
{
  "insecure-registries" : ["myregistry:5000"]
}
stages:
  - build
  - test
  - build_container
  - deploy

variables:
  REGISTRY_URL: myregistry:5000
  CONTAINER_TAG: latest
  REGISTRY_PROJECT: hello-world
  TEST_TAG: teste

services:
  - name: docker:dind
    command: ["--insecure-registry=$REGISTRY_URL"]

before_script:
  - uname -a

build:
  stage: build
  image: gcc
  script:
    - make -f Makefile
  artifacts:
    paths:
      - i386/hello-world/
    expire_in: 1 week

deploy:   # <---- PROBLEM STARTS HERE
  stage: deploy
  image: docker:latest
  environment:
    name: deploy
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin $REGISTRY_URL
    - docker pull $REGISTRY_URL/$REGISTRY_PROJECT:$CONTAINER_TAG
    - docker tag $REGISTRY_URL/$REGISTRY_PROJECT:$TEST_TAG
    - docker push $REGISTRY_URL/$REGISTRY_PROJECT:$TEST_TAG
I'm getting this error message:
time="2019-05-07T14:08:47Z" level=info msg="Error logging in to v2 endpoint, trying next endpoint: Get https://myregistry:5000/v2/: dial tcp: lookup myregistry on 193.XX.XX.XX:53: no such host"
If I remove $REGISTRY_URL from:
- echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin $REGISTRY_URL
then I can log in, but then I can't do a pull because the Docker client doesn't reach the registry, I think.
Regarding the --insecure-registry command option, you also have to define the variable DOCKER_TLS_CERTDIR with an empty string: DOCKER_TLS_CERTDIR: ''
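Combined, a hedged sketch of the relevant parts would be (the registry host comes from the question, the rest is an assumption):

variables:
  REGISTRY_URL: myregistry:5000
  # Disable TLS between the docker CLI and the dind daemon.
  DOCKER_TLS_CERTDIR: ""

services:
  - name: docker:dind
    # Let the dind daemon talk plain HTTP to the private registry.
    command: ["--insecure-registry=myregistry:5000"]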
I am trying to set up a job with GitLab CI to build a Docker image from a Dockerfile, but I am behind a proxy.
My .gitlab-ci.yml is as follows:
image: docker:stable

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  HTTP_PROXY: $http_proxy
  HTTPS_PROXY: $http_proxy
  http_proxy: $http_proxy
  https_proxy: $http_proxy

services:
  - docker:dind

before_script:
  - wget -O - www.google.com # just to test
  - docker search node # just to test
  - docker info # just to test

build:
  stage: build
  script:
    - docker build -t my-docker-image .
wget works, meaning that the proxy setup is correct, in theory.
But the commands docker search, docker info and docker build do not work, apparently because of a proxy issue.
An excerpt from the job output:
$ docker search node
Warning: failed to get default registry endpoint from daemon (Error response from daemon:
[and here comes a huge raw HTML output including the following message: "504 - server did not respond to proxy"]
It appears Docker does not read the environment variables to set up the proxy.
Note: I am indeed using a runner in --privileged mode, as the documentation instructs to do.
How do I fix this?
If you want to be able to use docker-in-docker (dind) in GitLab CI behind a proxy, you will also need to set the no_proxy variable in your .gitlab-ci.yml file: NO_PROXY for the host "docker".
This is the .gitlab-ci.yml that works with my dind setup:
image: docker:19.03.12

variables:
  DOCKER_TLS_CERTDIR: "/certs"
  HTTPS_PROXY: "http://my_proxy:3128"
  HTTP_PROXY: "http://my_proxy:3128"
  NO_PROXY: "docker"

services:
  - docker:19.03.12-dind

before_script:
  - docker info

build:
  stage: build
  script:
    - docker run hello-world
Good luck!
Oddly, the solution was to use a special dind (docker-in-docker) image provided by GitLab instead, and it works without setting up services or anything else. The .gitlab-ci.yml that worked was as follows:
image: gitlab/dind:latest

before_script:
  - wget -O - www.google.com
  - docker search node
  - docker info

build:
  stage: build
  script:
    - docker build -t my-docker-image .
Don't forget that the gitlab-runner must be registered with the --privileged flag.
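For reference, a hedged sketch of such a registration (the URL and token are placeholders):

gitlab-runner register \
  --url https://gitlab.example.com/ \
  --registration-token YOUR_REGISTRATION_TOKEN \
  --executor docker \
  --docker-image docker:stable \
  --docker-privileged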
I was unable to get docker-in-docker (dind) working behind our corporate proxy.
In particular, even when following the instructions here, a docker build command would still fail when executing FROM <some_image>, as it was not able to download the image.
I had far more success using kaniko, which appears to be GitLab's current recommendation for doing Docker builds.
A simple build script for a .NET Core project then looks like:
build:
  stage: build
  image: $BUILD_IMAGE
  script:
    - dotnet build
    - dotnet publish Console --output publish
  artifacts:
    # Upload all build artifacts to make them available for the deploy stage.
    when: always
    paths:
      - "publish/*"
    expire_in: 1 week

kaniko:
  stage: dockerise
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Construct a docker-file
    - echo "FROM $RUNTIME_IMAGE" > Dockerfile
    - echo "WORKDIR /app" >> Dockerfile
    - echo "COPY /publish ." >> Dockerfile
    - echo "CMD [\"dotnet\", \"Console.dll\"]" >> Dockerfile
    # Authenticate against the Gitlab Docker repository.
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    # Run kaniko
    - /kaniko/executor --context . --dockerfile Dockerfile --destination $CI_REGISTRY_IMAGE:$VersionSuffix
The problem
I have made a project with Docker Compose. It works well on localhost. I want to use this base to test and analyze code with GitLab Runner. I solved a lot of problems, like installing Docker Compose, running and building selected containers, and running commands in a container. The first job ran with success (!!!), but the following jobs failed before "before_script":
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
...
Error response from daemon: Conflict.
...
Error response from daemon: Conflict.
I don't understand why. What am I doing wrong? I repeat: the first job of the pipeline runs well with a "success" message! Every other job of the pipeline fails.
Full output:
Running with gitlab-ci-multi-runner 9.4.0 (ef0b1a6)
on XXX Runner (fdc0d656)
Using Docker executor with image docker:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image docker:dind ID=sha256:5096e5a0cba00693905879b09e24a487dc244b56e8e15349fd5b71b432c6ec9f for docker service...
ERROR: Preparation failed: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Will be retried in 3s ...
Using Docker executor with image docker:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image docker:dind ID=sha256:5096e5a0cba00693905879b09e24a487dc244b56e8e15349fd5b71b432c6ec9f for docker service...
ERROR: Preparation failed: Error response from daemon: Conflict. The container name "/runner-fdc0d656-project-35-concurrent-0-docker" is already in use by container "80918876ffe53e33ce1f069e6e545f03a15469af6596852457f11dbc7a6c5b58". You have to remove (or rename) that container to be able to reuse that name.
Will be retried in 3s ...
Using Docker executor with image docker:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image docker:dind ID=sha256:5096e5a0cba00693905879b09e24a487dc244b56e8e15349fd5b71b432c6ec9f for docker service...
ERROR: Preparation failed: Error response from daemon: Conflict. The container name "/runner-fdc0d656-project-35-concurrent-0-docker" is already in use by container "80918876ffe53e33ce1f069e6e545f03a15469af6596852457f11dbc7a6c5b58". You have to remove (or rename) that container to be able to reuse that name.
Will be retried in 3s ...
ERROR: Job failed (system failure): Error response from daemon: Conflict. The container name "/runner-fdc0d656-project-35-concurrent-0-docker" is already in use by container "80918876ffe53e33ce1f069e6e545f03a15469af6596852457f11dbc7a6c5b58". You have to remove (or rename) that container to be able to reuse that name.
Files
.gitlab-ci.yml
# Select image from https://hub.docker.com/r/_/php/
image: docker:latest

# Services
services:
  - docker:dind

stages:
  - build
  - test
  - deploy

cache:
  key: ${CI_BUILD_REF_NAME}
  untracked: true
  paths:
    - vendor
    - var

variables:
  DOCKER_CMD: docker exec --user user bin
  COMPOSE_HTTP_TIMEOUT: 300

before_script:
  - apk add --no-cache py-pip bash
  - pip install docker-compose
  - touch ~/.gitignore
  - bin/docker-init.sh
  - cp app/config/parameters.gitlab-ci.yml app/config/parameters.yml
  - cp app/config/nodejs_parameters.yml.dist app/config/nodejs_paramteres.yml
  - chmod -R 777 app/cache app/logs var
  # Load only binary and mysql
  - docker-compose up -d binary mysql

build:
  stage: build
  script:
    - ${DOCKER_CMD} composer install -n
    - ${DOCKER_CMD} php app/console doctrine:database:create --env=test --if-not-exists
    - ${DOCKER_CMD} php app/console doctrine:migrations:migrate --env=test

codeSniffer:
  stage: test
  script:
    - ${DOCKER_CMD} bin/php-cs-fixer fix --dry-run --config-file=.php_cs

database:
  stage: test
  script:
    - ${DOCKER_CMD} php app/console doctrine:mapping:info --env=test
    - ${DOCKER_CMD} php app/console doctrine:schema:validate --env=test
    - ${DOCKER_CMD} php app/console doctrine:fixtures:load --env=test

unittest:
  stage: test
  script:
    - ${DOCKER_CMD} bin/phpunit -c app --debug

deploy_demo:
  stage: deploy
  script:
    - echo "Deploy to staging server"
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - develop

deploy_prod:
  stage: deploy
  script:
    - echo "Deploy to production server"
  environment:
    name: production
    url: https://example.com
  when: manual
  only:
    - master
docker-compose.yml
version: "2"
services:
  web:
    image: nginx:latest
    ports:
      - "${HTTP_PORT}:80"
    depends_on:
      - mysql
      - elasticsearch
      - binary
    links:
      - binary:php
    volumes:
      - ".:/var/www"
      - "./app/config/docker/vhost.conf:/etc/nginx/conf.d/site.conf"
      - "${BASE_LOG_DIR}/nginx:/var/log/nginx"
  mysql:
    image: mysql:5.6
    environment:
      MYSQL_USER: test
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: test
    ports:
      - "${MYSQL_PORT}:3306"
    volumes:
      - "${BASE_LOG_DIR}/mysql:/var/log/mysql"
      - "${BASE_MYSQL_DATA_DIR}:/var/lib/mysql"
      - "./app/config/docker/mysql.cnf:/etc/mysql/conf.d/mysql.cnf"
  elasticsearch:
    image: elasticsearch:1.7.6
    ports:
      - "${ELASTICSEARCH_PORT}:9200"
    volumes:
      - "${BASE_ELASTICSEARCH_DATA_DIR}:/usr/share/elasticsearch/data"
  binary:
    image: fchris82/kunstmaan-test
    container_name: bin
    volumes:
      - ".:/var/www"
      - "${BASE_LOG_DIR}/php:/var/log/php"
      - "~/.ssh:/home/user/.ssh"
    tty: true
    environment:
      LOCAL_USER_ID: ${LOCAL_USER_ID}
config.toml
[[runners]]
  name = "XXX Runner"
  url = "https://gitlab.xxx.xx/"
  token = "xxxxxxxxxxx"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:latest"
    privileged = true
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
  [runners.cache]
OK, I found the problem: I had broken the configuration. If you use the dind service in .gitlab-ci.yml, then don't use the /var/run/docker.sock volume in the config.toml file; or, vice versa, if you use the "socket" method, don't use the dind service.
More information: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html
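In other words, pick exactly one of the two approaches. A hedged sketch of the socket-binding variant used here (drop "services: - docker:dind" from .gitlab-ci.yml and keep the socket volume in config.toml):

[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    # Socket binding: jobs talk to the host's Docker daemon directly,
    # so no dind service (and no privileged mode) is needed.
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]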