Docker error during connect: Post http://docker:2375/v1.40/build? - docker

I am using docker+machine to run my GitLab CI/CD jobs, so my .gitlab-ci.yml looks like this:
stages:
  - RUN_TESTS

image:
  name: docker:stable

services:
  - name: docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

build-docker:
  stage: RUN_TESTS
  script:
    - echo "Running the tests..."
    - docker build -t run-tests .
This works totally fine with docker:dind set as the service, as shown above.
Now here comes the fun part: I need some other packages inside the docker:dind image, so I wrote the Dockerfile below:
FROM docker:dind
RUN apk update
ENV PYTHONUNBUFFERED=1
RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python
RUN python3 -m ensurepip
RUN pip3 install --no-cache --upgrade pip setuptools
RUN apk add groff
RUN pip3 install awscli
RUN apk --purge -v del py-pip
RUN rm /var/cache/apk/*
So I built the above image and pushed it to my Docker Hub.
Up to this point everything is fine: the image built and pushed successfully.
Then I changed the services in the .gitlab-ci.yml to my new image, as below:
services:
  - name: 199508/dind-new:latest
When I ran the pipeline, I got the strange error below:
error during connect: Post http://docker:2375/v1.40/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&session=n6fvaaoisom3ny2cfozrlom50&shmsize=0&t=run-tests&target=&ulimits=null&version=1: dial tcp: lookup docker on : no such host
The only change I made was installing some extra packages in the Dockerfile above, so why does it stop working? How come it works when I use docker:dind directly, but not when I build a new image from the same docker:dind base?
Can someone please help me?

Actually, I just ran into this problem yesterday.
The main thing is to switch the Docker image version. In your case that means the base image in your Dockerfile, not the CI file as here:
FROM docker:18.09
And change the port. The commented-out lines are the ones that didn't work for me:
image: 199508/dind-new:v5

services:
  # - docker:19.03.12-dind
  - docker:18.09-dind

variables:
  # Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
  DOCKER_HOST: tcp://docker:2375/
  # DOCKER_HOST: tcp://docker:2376
  # DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_TLS_CERTDIR: ""
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE:latest
  DOCKER_DRIVER: overlay2
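One more thing worth checking: GitLab exposes a service under hostnames derived from the image name, so using 199508/dind-new as the service image (as in the question) means it is not reachable as docker by default. A minimal sketch of giving it an explicit alias, keeping the variables from the question:

services:
  - name: 199508/dind-new:latest
    alias: docker            # make the custom dind service resolvable as "docker"

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_TLS_CERTDIR: ""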

Related

no such host error while doing docker build from gitlab CI

I am trying to build a CI pipeline to build and publish my application's Docker image; however, during the build I am getting the error shown below.
.gitlab-ci.yml:
image: "docker:dind"
before_script:
- apk add --update python3 py3-pip
- pip3 install -r requirements.txt
- python3 --version
...
docker-build:
stage: Docker
script:
- docker build -t "$CI_REGISTRY_IMAGE" .
- docker ps
However, this gets me the following error:
$ docker build -t "$CI_REGISTRY_IMAGE" .
error during connect: Post "http://docker:2375/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&t=registry.gitlab.com%2Fmaven123%2Frest-api&target=&ulimits=null&version=1": dial tcp: lookup docker on 169.254.169.xxx:53: no such host
Any idea what's the issue here?
You are missing the docker:dind service.
The image you should use for the job is the normal docker:latest image.
image: docker

services:
  - "docker:dind"

variables: # not strictly needed, depending on runner configuration
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_TLS_CERTDIR: ""
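Put together with the build job from the question, a minimal complete file might look like the following sketch (the stages declaration is assumed, since the question elides that part):

image: docker

services:
  - "docker:dind"

variables:
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_TLS_CERTDIR: ""

stages:
  - Docker        # assumed, since the question's job uses stage: Docker

docker-build:
  stage: Docker
  script:
    - docker build -t "$CI_REGISTRY_IMAGE" .
    - docker ps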

Is it possible to copy a file from a Docker container to the GitLab repository via gitlab-ci.yml?

I created a Docker image with automated tests that generates an XML report file after the test run. I want to copy this file to the repository because the pipeline needs it to show the test results.
My gitlab script:
stages:
  - test

test:
  image: docker:latest
  services:
    - name: docker:dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  stage: test
  before_script:
    - docker login -u "xxxx" -p "yyyy" docker.io
  script:
    - docker run --name authContainer "xxxx/dockerImage:0.0.1"
  after_script:
    - docker cp authContainer:/artifacts/test-result.xml .
  artifacts:
    when: always
    paths:
      - test-result.xml
    reports:
      junit:
        - test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /AutomaticTests
WORKDIR /Spinelle.AutomaticTests
RUN apt-get update -y
RUN apt install unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /Spinelle.AutomaticTests
RUN chmod 777 /Spinelle.AutomaticTests
CMD dotnet vstest /Parallel AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=/artifacts/test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
Your .gitlab-ci file looks fine. You can have the XML report as an artifact and GitLab will populate the results from it. Below is the script that I've used and could see the results with.
script:
  - pytest -o junit_family=xunit2 --junitxml=report.xml --cov=. --cov-report html
  - coverage report
coverage: '/^TOTAL.+?(\d+\%)$/'
artifacts:
  paths:
    - coverage
  reports:
    junit: report.xml
  when: always

How to install docker-compose along with openjdk in gitlab-ci file?

I have a Spring Boot application I want to test via .gitlab-ci.yml.
It's already set up like this:
image: openjdk:12

# services:
#   - docker:dind

stages:
  - build

before_script:
  # - apk add --update python py-pip python-dev && pip install docker-compose
  # - docker version
  # - docker-compose version
  - chmod +x mvnw

build:
  stage: build
  script:
    # - docker-compose up -d
    - ./mvnw package
  artifacts:
    paths:
      - target/rest-SNAPSHOT.jar
The commented-out portions are from the answer to Run docker-compose build in .gitlab-ci.yml, which I noticed uses a completely different Docker image.
Obviously I need Java installed to run my Spring Boot application, so does that mean Docker is just not an option?
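One possible approach (a sketch only, with assumed Alpine package names): keep an Alpine-based docker image for the job, run the daemon via the docker:dind service, and install docker-compose plus a JDK at the start of the job.

image: docker:19.03

services:
  - docker:19.03-dind

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_TLS_CERTDIR: ""

stages:
  - build

build:
  stage: build
  before_script:
    # assumed package names from the Alpine community repository
    - apk add --no-cache docker-compose openjdk11
    # if java is not picked up automatically, point PATH at the default JVM
    - export PATH="$PATH:/usr/lib/jvm/default-jvm/bin"
    - chmod +x mvnw
  script:
    - docker-compose up -d
    - ./mvnw package
  artifacts:
    paths:
      - target/rest-SNAPSHOT.jar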

apt not found when I use apt in gitlab ci before_script

I use GitLab CI to build a Docker image, and I want to install Python. This is my gitlab-ci.yml:
image: docker:stable

stages:
  - test
  - build

before_script:
  - apt install -y python-dev python pip

test1:
  stage: test
  script:
    ...
    - pytest

build:
  stage: build
  script:
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
but I got a failed job:
/bin/sh: eval: line : apt: not found
ERROR: Job failed: exit code 127
I also tried apt-get install but the result is the same.
How do I install Python?
It's actually not a problem; it comes down to the package manager. The image you are using, docker:stable (like many other images, e.g. Tomcat or Django ones), is based on Alpine Linux, which keeps the image minimal in size.
image: docker:stable

stages:
  - test
  - build

before_script:
  - apk add python python-dev py-pip

test1:
  stage: test
  script:
    ...
    - pytest

build:
  stage: build
  script:
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
apk is the Alpine Linux package manager.
The image that you are using, docker:stable, is based on Alpine Linux, which uses apk as its package manager. Installation with apk looks like this: apk add python.
The error you see is because apt doesn't exist in the Alpine-based docker image.
This line solved the problem for me:
apk update && apk add python

Run docker-compose build in .gitlab-ci.yml

I have a .gitlab-ci.yml file which contains the following:
image: docker:latest

services:
  - docker:dind

before_script:
  - docker info
  - docker-compose --version

buildJob:
  stage: build
  tags:
    - docker
  script:
    - docker-compose build
But in the CI log I receive this message:
$ docker-compose --version
/bin/sh: eval: line 46: docker-compose: not found
What am I doing wrong?
Docker also provides an official image: docker/compose
This is the ideal solution if you don't want to install it on every pipeline run.
Note that in recent versions of GitLab CI/Docker you will likely need to give privileged access to your GitLab CI runner and configure/disable TLS (a config.toml excerpt follows the example below). See Use docker-in-docker workflow with Docker executor.
variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

# Official docker compose image.
image:
  name: docker/compose:latest

services:
  - docker:dind

before_script:
  - docker version
  - docker-compose version

build:
  stage: build
  script:
    - docker-compose down
    - docker-compose build
    - docker-compose up tester-image
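For the privileged access mentioned above: that is configured on the runner itself, not in .gitlab-ci.yml. A hedged excerpt of what the relevant part of the runner's config.toml typically looks like (name and URL are illustrative):

[[runners]]
  name = "docker-runner"            # illustrative
  url = "https://gitlab.com/"       # illustrative
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true               # required for the docker:dind service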
Note that for docker-compose versions earlier than 1.25: since the image uses docker-compose-entrypoint.sh as its entrypoint, you'll need to override it back to /bin/sh -c in your .gitlab-ci.yml. Otherwise your pipeline will fail with No such command: sh.
image:
  name: docker/compose:latest
  entrypoint: ["/bin/sh", "-c"]
Following the official documentation:
# .gitlab-ci.yml
image: docker

services:
  - docker:dind

build:
  script:
    - apk add --no-cache docker-compose
    - docker-compose up -d
Sample docker-compose.yml:
version: "3.7"
services:
foo:
image: alpine
command: sleep 3
bar:
image: alpine
command: sleep 3
We personally do not follow this flow anymore, because you lose control over the running containers and they might end up running endlessly. This is because of the docker-in-docker executor. We developed a Python script as a workaround to kill all old containers in our CI, which can be found here. But I no longer suggest starting containers like this.
I created a simple docker container which has docker-compose installed on top of docker:latest. See https://hub.docker.com/r/tmaier/docker-compose/
Your .gitlab-ci.yml file would look like this:
image: tmaier/docker-compose:latest

services:
  - docker:dind

before_script:
  - docker info
  - docker-compose --version

buildJob:
  stage: build
  tags:
    - docker
  script:
    - docker-compose build
EDIT: I added another answer providing a minimal example of a .gitlab-ci.yml configuration supporting docker-compose.
docker-compose can be installed as a Python package, which is not shipped with your image. The image you chose does not even provide an installation of Python:
$ docker run --rm -it docker sh
/ # find / -iname "python"
/ #
Looking for Python gives an empty result, so you have to choose a different image that fits your needs and ideally has docker-compose installed, or manually create one.
The docker image you chose uses Alpine Linux. You can use it as a base for your own image, or try a different one first if you are not familiar with Alpine Linux.
I had the same issue, so I created a Dockerfile in a public GitHub repository, connected it to my Docker Hub account, and chose an automated build to build my image on each push to the GitHub repository. Then you can easily access your own images from GitLab CI.
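For instance, a hedged sketch of such a custom image, assuming a docker tag whose Alpine release already ships a docker-compose package (see the later answers in this thread):

# Dockerfile for a docker + docker-compose CI image (tag is an assumption)
FROM docker:19.03
RUN apk add --no-cache docker-compose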
If you don't want to provide a custom docker image with docker-compose preinstalled, you can get it working by installing Python during build time. With Python installed you can finally install docker-compose ready for spinning up your containers.
image: docker:latest

services:
  - docker:dind

before_script:
  - apk add --update python py-pip python-dev && pip install docker-compose # install docker-compose
  - docker version
  - docker-compose version

test:
  cache:
    paths:
      - vendor/
  script:
    - docker-compose up -d
    - docker-compose exec -T php-fpm composer install --prefer-dist
    - docker-compose exec -T php-fpm vendor/bin/phpunit --coverage-text --colors=never --whitelist src/ tests/
Use docker-compose exec with -T if you receive this or a similar error:
$ docker-compose exec php-fpm composer install --prefer-dist
Traceback (most recent call last):
  File "/usr/bin/docker-compose", line 9, in <module>
    load_entry_point('docker-compose==1.8.1', 'console_scripts', 'docker-compose')()
  File "/usr/lib/python2.7/site-packages/compose/cli/main.py", line 62, in main
    command()
  File "/usr/lib/python2.7/site-packages/compose/cli/main.py", line 114, in perform_command
    handler(command, command_options)
  File "/usr/lib/python2.7/site-packages/compose/cli/main.py", line 442, in exec_command
    pty.start()
  File "/usr/lib/python2.7/site-packages/dockerpty/pty.py", line 338, in start
    io.set_blocking(pump, flag)
  File "/usr/lib/python2.7/site-packages/dockerpty/io.py", line 32, in set_blocking
    old_flag = fcntl.fcntl(fd, fcntl.F_GETFL)
ValueError: file descriptor cannot be a negative integer (-1)
ERROR: Build failed: exit code 1
I think most of the above are helpful; however, I needed to apply them collectively to solve this problem. Below is the script which worked for me, and I hope it works for you too.
Also note that in your docker-compose file this is the format you have to use for the image name (see the sketch after the configuration below):
<registry base url>/<username>/<repo name>/<image name>:<tag>
image:
  name: docker/compose:latest
  entrypoint: ["/bin/sh", "-c"]

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

stages:
  - build_images

before_script:
  - docker version
  - docker-compose version
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  stage: build_images
  script:
    - docker-compose down
    - docker-compose build
    - docker-compose push
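As for the image-name format noted above, a hedged docker-compose.yml sketch with placeholder names could look like this:

version: "3.7"
services:
  app:                 # placeholder service name
    build: .
    # <registry base url>/<username>/<repo name>/<image name>:<tag>
    image: registry.gitlab.com/<username>/<repo name>/app:latest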
There is tiangolo/docker-with-compose, which works:
image: tiangolo/docker-with-compose

stages:
  - build
  - test
  - release
  - clean

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com

build:
  stage: build
  script:
    - docker-compose -f docker-compose-ci.yml build --pull

test1:
  stage: test
  script:
    - docker-compose -f docker-compose-ci.yml up -d
    - docker-compose -f docker-compose-ci.yml exec -T php ...
It really took me some time to get this working with GitLab.com shared runners.
I'd like to say "use docker/compose:latest and that's it", but unfortunately I was not able to make it work; I kept getting the Cannot connect to the Docker daemon at tcp://docker:2375/. Is the docker daemon running? error even with all the env variables set.
Nor do I like the option of installing five thousand dependencies just to get docker-compose via pip.
Fortunately, for recent Alpine versions (3.10+) there is a docker-compose package in the Alpine repository. It means that #n2o's answer can be simplified to:
test:
  image: docker:19.03.0
  variables:
    DOCKER_DRIVER: overlay2
    # Create the certificates inside this directory for both the server
    # and client. The certificates used by the client will be created in
    # /certs/client so we only need to share this directory with the
    # volume mount in `config.toml`.
    DOCKER_TLS_CERTDIR: "/certs"
  services:
    - docker:19.03.0-dind
  before_script:
    - apk --no-cache add docker-compose # <---------- Mind this line
    - docker info
    - docker-compose --version
  stage: test
  script:
    - docker-compose build
This worked perfectly on the first try for me. Maybe the reason other answers didn't work lies in some configuration of GitLab.com shared runners, I don't know...
Alpine Linux now has a docker-compose package in its "edge" branch, so you can install it this way in .gitlab-ci.yml:
a-job-with-docker-compose:
  image: docker
  services:
    - docker:dind
  script:
    - apk add docker-compose --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ --allow-untrusted
    - docker-compose -v
