I am trying to build a CI pipeline to build and publish my application's Docker image; however, the build fails with an error.
.gitlab-ci.yml:
image: "docker:dind"
before_script:
  - apk add --update python3 py3-pip
  - pip3 install -r requirements.txt
  - python3 --version
...
docker-build:
  stage: Docker
  script:
    - docker build -t "$CI_REGISTRY_IMAGE" .
    - docker ps
However, this gets me the following error:
$ docker build -t "$CI_REGISTRY_IMAGE" .
error during connect: Post "http://docker:2375/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&t=registry.gitlab.com%2Fmaven123%2Frest-api&target=&ulimits=null&version=1": dial tcp: lookup docker on 169.254.169.xxx:53: no such host
Any idea what the issue is here?
You are missing the docker:dind service.
The image you should use for the job is the normal docker:latest image.
image: docker
services:
  - "docker:dind"
variables: # not strictly needed, depending on runner configuration
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_TLS_CERTDIR: ""
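Combining this with the job from the question, a minimal sketch might look as follows (the job and stage names are kept from the question; whether the variables are needed depends on the runner configuration):

```yaml
image: docker:latest

services:
  - docker:dind

variables:
  # Point the docker CLI at the daemon running in the dind service container.
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_TLS_CERTDIR: ""

docker-build:
  stage: Docker
  script:
    - docker info  # fails fast if the daemon is unreachable
    - docker build -t "$CI_REGISTRY_IMAGE" .
```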
I'm trying to run a pipeline in GitLab using a gitlab-ci.yml file and a runner that can run Docker images, but I get an error because the runner cannot find the right path to the Dockerfile.
This is my yml file:
image: docker:latest
services:
  - docker:dind

stages:
  - build
  - test
  - release

variables:
  TEST_IMAGE: 193.206.43.98:5555/apfeed/apserver:$CI_COMMIT_REF_NAME
  RELEASE_IMAGE: 193.206.43.98:5555/ap:latest

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"

build:
  stage: build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE

test:
  stage: test
  services:
    - mongo:bionic
  script:
    - docker pull $TEST_IMAGE
    - docker run $TEST_IMAGE npm test

release:
  stage: release
  script:
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
  only:
And this is the error I get:
$ docker build --pull -t $TEST_IMAGE .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/gitlab-runner/builds/WsYiLtmC/0/al/apfeed/Dockerfile: no such file or directory
ERROR: Job failed: exit status 1
I tried several different ways of writing the path in the TEST_IMAGE line, but none seems to work.
You must have the Dockerfile in the project root directory,
OR
you can pass the relative path to your Dockerfile if it exists in a subdirectory of the project repo,
e.g. docker build --pull -t $TEST_IMAGE -f ./some-dir/Dockerfile .
where some-dir is the directory inside your project repo where the Dockerfile is located.
The project repo is first cloned into CI_PROJECT_DIR before each job is executed.
CI_PROJECT_DIR is the directory where the .gitlab-ci.yml lives, and the job scripts run from that directory as well.
https://docs.gitlab.com/ee/ci/variables/README.html
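To see where the job actually runs and whether the Dockerfile ends up where you expect, a hypothetical debugging job along these lines can help (the job name is made up):

```yaml
debug-paths:
  stage: build
  script:
    - echo "$CI_PROJECT_DIR"                    # directory the repo is cloned into
    - ls -la "$CI_PROJECT_DIR"                  # is a Dockerfile at the root?
    - find "$CI_PROJECT_DIR" -name Dockerfile   # locate it if it's in a subdirectory
```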
I'm setting up Travis to push images to Docker Hub after running a test script.
sudo: required
services:
  - docker
before_install:
  - docker build -t oskygh/react-test -f ./client/Dockerfile.dev ./client
script:
  - docker run oskygh/react-test npm test -- --coverage
after_success:
  - docker build -t osbee/client ./client
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
  - docker push osbee/client
Dockerfile.dev:
FROM node:alpine
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm","run","start"]
As explained here, you could use the travis_wait function by adding it before the command that failed. You could also read this Stack Overflow question, which applies it in another way.
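For example, if the docker push step is the one timing out, wrapping it with travis_wait (here with a hypothetical 30-minute allowance) would look like:

```yaml
after_success:
  - docker build -t osbee/client ./client
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
  # travis_wait <minutes> keeps the job alive even if the command
  # produces no output for longer than Travis' default timeout.
  - travis_wait 30 docker push osbee/client
```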
I'm practicing with GitLab CI to understand how to build an application and then use it within a Docker image. For now, my repo consists simply of helloworld.txt, a dockerfile, and gitlab-ci.yml.
PROBLEM: During the build stage, I use a shell executor to run 'zip helloworld.zip helloworld.txt'. Then I run 'docker build -t myproject/myapp .', where I expect 'COPY helloworld.zip /' to work, but it seems that the zip file I created is not available in the docker build context. Am I not saving the helloworld.zip file to the right location? Or is it something else? My long-term intent is to write a Python application, and during the build stage to compile it into a single executable and copy it into a Docker container.
#cat helloworld.txt
hello world
#cat dockerfile
FROM centos:7
COPY helloworld.zip /
CMD ["/bin/bash"]
#cat gitlab-ci.yml
stages:
  - build
  - test
  - release
  - deploy

variables:
  IMAGE_TEST_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  IMAGE_RELEASE_NAME: $CI_REGISTRY_IMAGE:latest

before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin

build:
  stage: build
  script:
    - echo "compile the program"
    - zip zipfile.zip helloworld.txt
    - docker build --pull -t $IMAGE_TEST_NAME .
    - docker push $IMAGE_TEST_NAME

test:
  stage: test
  script:
    - docker pull $IMAGE_TEST_NAME
    - docker run $IMAGE_TEST_NAME yum install unzip -y && unzip /helloworld.zip && cat /helloworld.txt

release:
  stage: release
  script:
    - docker pull $IMAGE_TEST_NAME
    - docker tag $IMAGE_TEST_NAME $IMAGE_RELEASE_NAME
    - docker push $IMAGE_RELEASE_NAME
  only:
    - master

deploy:
  stage: deploy
  script:
    - ./deploy.sh
  only:
    - master
  when: manual
I expect that within the same stage (in this case build), I can run a program such as zip and then COPY that zip file into a given directory within a newly built Docker image during the docker build process.
EDIT
After learning that I can't do this, I've created two different stages: build_app and build_container. Also, knowing that artifacts are passed to following stages by default, I didn't add an artifacts section to the first stage or a dependencies section to the next stage. This is the gitlab-ci.yml below, and it still produces the same error.
stages:
  - build_app
  - build_container
  - test
  - release
  - deploy

# you can delete this line if you're not using Docker
#image: centos:latest

variables:
  IMAGE_TEST_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  IMAGE_RELEASE_NAME: $CI_REGISTRY_IMAGE:latest

before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin

build_app:
  stage: build_app
  script:
    - echo "compile the program"
    - zip zipfile.zip helloworld.txt

build_container:
  stage: build_container
  script:
    - docker build --pull -t $IMAGE_TEST_NAME .
    - docker push $IMAGE_TEST_NAME

test:
  stage: test
  script:
    - docker pull $IMAGE_TEST_NAME
    - docker run $IMAGE_TEST_NAME yum install unzip -y && unzip /helloworld.zip && cat /helloworld.txt

release:
  stage: release
  script:
    - docker pull $IMAGE_TEST_NAME
    - docker tag $IMAGE_TEST_NAME $IMAGE_RELEASE_NAME
    - docker push $IMAGE_RELEASE_NAME
  only:
    - master

deploy:
  stage: deploy
  script:
    - ./deploy.sh
  only:
    - master
  when: manual
Job Status:
Build App: Passed
Build Container: Failed
Running with gitlab-runner 11.6.1 (8d829975)
on gitrunner-shell trtHcQTS
Using Shell executor...
Running on gitrunner.example.com...
Fetching changes...
Removing zipfile.zip
HEAD is now at e0a0a95 Update .gitlab-ci.yml
Checking out e0a0a952 as newFeature...
Skipping Git submodules setup
$ echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin
WARNING! Your password will be stored unencrypted in /home/gitlab-runner/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ docker build --pull -t $IMAGE_TEST_NAME .
Sending build context to Docker daemon 112.1kB
Step 1/3 : FROM centos:7
7: Pulling from library/centos
Digest: sha256:184e5f35598e333bfa7de10d8fb1cebb5ee4df5bc0f970bf2b1e7c7345136426
Status: Image is up to date for centos:7
---> 1e1148e4cc2c
Step 2/3 : COPY helloworld.zip /
COPY failed: stat /var/lib/docker/tmp/docker-builder312764301/helloworld.zip: no such file or directory
ERROR: Job failed: exit status 1
This is not possible: GitLab CI's job model assumes that jobs of the same stage are independent.
See the manual for the dependencies keyword in .gitlab-ci.yml:
This feature [...] allows you to define the artifacts to pass between different jobs.
Note that artifacts from all previous stages are passed by default.
[...] You can only define jobs from stages that are executed before the current one.
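Note also that the job zips to zipfile.zip while the Dockerfile does COPY helloworld.zip, so the file names need to agree. A sketch of the two jobs with the artifact declared explicitly, assuming the Dockerfile is left as posted:

```yaml
build_app:
  stage: build_app
  script:
    - zip helloworld.zip helloworld.txt   # name must match what the Dockerfile COPYs
  artifacts:
    paths:
      - helloworld.zip

build_container:
  stage: build_container
  # helloworld.zip from build_app is restored into the workspace before
  # this job's script runs, so it becomes part of the docker build context.
  script:
    - docker build --pull -t $IMAGE_TEST_NAME .
    - docker push $IMAGE_TEST_NAME
```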
I have a .gitlab-ci.yml file which contains the following:
image: docker:latest
services:
  - docker:dind

before_script:
  - docker info
  - docker-compose --version

buildJob:
  stage: build
  tags:
    - docker
  script:
    - docker-compose build
But in the CI log I receive this message:
$ docker-compose --version
/bin/sh: eval: line 46: docker-compose: not found
What am I doing wrong?
Docker also provides an official image: docker/compose.
This is the ideal solution if you don't want to install docker-compose in every pipeline run.
Note that in the latest version of GitLab CI/Docker you will likely need to give privileged access to your GitLab CI Runner and configure/disable TLS. See Use docker-in-docker workflow with Docker executor
variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

# Official docker compose image.
image:
  name: docker/compose:latest

services:
  - docker:dind

before_script:
  - docker version
  - docker-compose version

build:
  stage: build
  script:
    - docker-compose down
    - docker-compose build
    - docker-compose up tester-image
Note that in versions of docker-compose earlier than 1.25:
Since the image uses docker-compose-entrypoint.sh as its entrypoint, you'll need to override it back to /bin/sh -c in your .gitlab-ci.yml. Otherwise your pipeline will fail with No such command: sh
image:
  name: docker/compose:latest
  entrypoint: ["/bin/sh", "-c"]
Following the official documentation:
# .gitlab-ci.yml
image: docker
services:
  - docker:dind
build:
  script:
    - apk add --no-cache docker-compose
    - docker-compose up -d
Sample docker-compose.yml:
version: "3.7"
services:
  foo:
    image: alpine
    command: sleep 3
  bar:
    image: alpine
    command: sleep 3
We personally do not follow this flow anymore, because you lose control of the running containers and they might end up running endlessly. This is because of the docker-in-docker executor. As a workaround, we developed a Python script that kills all old containers in our CI, which can be found here. But I do not suggest starting containers like this anymore.
I created a simple docker container which has docker-compose installed on top of docker:latest. See https://hub.docker.com/r/tmaier/docker-compose/
Your .gitlab-ci.yml file would look like this:
image: tmaier/docker-compose:latest

services:
  - docker:dind

before_script:
  - docker info
  - docker-compose --version

buildJob:
  stage: build
  tags:
    - docker
  script:
    - docker-compose build
EDIT: I added another answer providing a minimal example of a .gitlab-ci.yml configuration supporting docker-compose.
docker-compose can be installed as a Python package, which is not shipped with your image. The image you chose does not even provide an installation of Python:
$ docker run --rm -it docker sh
/ # find / -iname "python"
/ #
Looking for Python gives an empty result. So you have to choose a different image that fits your needs and ideally has docker-compose installed, or you create one manually.
The Docker image you chose uses Alpine Linux. You can use it as a base for your own image, or try a different one first if you are not familiar with Alpine Linux.
I had the same issue, so I created a Dockerfile in a public GitHub repository, connected it with my Docker Hub account, and chose an automated build to build my image on each push to the GitHub repository. Then you can easily access your own images from GitLab CI.
If you don't want to provide a custom Docker image with docker-compose preinstalled, you can get it working by installing Python at build time. With Python installed, you can finally install docker-compose, ready for spinning up your containers.
image: docker:latest
services:
  - docker:dind

before_script:
  - apk add --update python py-pip python-dev && pip install docker-compose # install docker-compose
  - docker version
  - docker-compose version

test:
  cache:
    paths:
      - vendor/
  script:
    - docker-compose up -d
    - docker-compose exec -T php-fpm composer install --prefer-dist
    - docker-compose exec -T php-fpm vendor/bin/phpunit --coverage-text --colors=never --whitelist src/ tests/
Use docker-compose exec with -T if you receive this or a similar error:
$ docker-compose exec php-fpm composer install --prefer-dist
Traceback (most recent call last):
File "/usr/bin/docker-compose", line 9, in <module>
load_entry_point('docker-compose==1.8.1', 'console_scripts', 'docker-compose')()
File "/usr/lib/python2.7/site-packages/compose/cli/main.py", line 62, in main
command()
File "/usr/lib/python2.7/site-packages/compose/cli/main.py", line 114, in perform_command
handler(command, command_options)
File "/usr/lib/python2.7/site-packages/compose/cli/main.py", line 442, in exec_command
pty.start()
File "/usr/lib/python2.7/site-packages/dockerpty/pty.py", line 338, in start
io.set_blocking(pump, flag)
File "/usr/lib/python2.7/site-packages/dockerpty/io.py", line 32, in set_blocking
old_flag = fcntl.fcntl(fd, fcntl.F_GETFL)
ValueError: file descriptor cannot be a negative integer (-1)
ERROR: Build failed: exit code 1
I think most of the above are helpful; however, I needed to apply them collectively to solve this problem. Below is the script which worked for me.
I hope it works for you too.
Also note that in your docker-compose file, this is the format you have to provide for the image name:
<registry base url>/<username>/<repo name>/<image name>:<tag>
image:
  name: docker/compose:latest
  entrypoint: ["/bin/sh", "-c"]

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

stages:
  - build_images

before_script:
  - docker version
  - docker-compose version
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  stage: build_images
  script:
    - docker-compose down
    - docker-compose build
    - docker-compose push
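For illustration, a hypothetical docker-compose.yml service following that naming scheme (the user, repo, and image names are made up):

```yaml
services:
  web:
    build: .
    # <registry base url>/<username>/<repo name>/<image name>:<tag>
    image: registry.gitlab.com/myuser/myrepo/web:latest
```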
There is tiangolo/docker-with-compose, which works:
image: tiangolo/docker-with-compose

stages:
  - build
  - test
  - release
  - clean

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com

build:
  stage: build
  script:
    - docker-compose -f docker-compose-ci.yml build --pull

test1:
  stage: test
  script:
    - docker-compose -f docker-compose-ci.yml up -d
    - docker-compose -f docker-compose-ci.yml exec -T php ...
It really took me some time to get it working with GitLab.com shared runners.
I'd like to say "use docker/compose:latest and that's it", but unfortunately I was not able to make it work; I was getting the Cannot connect to the Docker daemon at tcp://docker:2375/. Is the docker daemon running? error even when all the env variables were set.
Nor do I like the option of installing five thousand dependencies just to get docker-compose via pip.
Fortunately, for recent Alpine versions (3.10+) there is a docker-compose package in the Alpine repository. It means that @n2o's answer can be simplified to:
test:
  image: docker:19.03.0
  variables:
    DOCKER_DRIVER: overlay2
    # Create the certificates inside this directory for both the server
    # and client. The certificates used by the client will be created in
    # /certs/client so we only need to share this directory with the
    # volume mount in `config.toml`.
    DOCKER_TLS_CERTDIR: "/certs"
  services:
    - docker:19.03.0-dind
  before_script:
    - apk --no-cache add docker-compose # <---------- Mind this line
    - docker info
    - docker-compose --version
  stage: test
  script:
    - docker-compose build
This worked perfectly on the first try for me. Maybe the reason the other answers didn't work lies in some configuration of the GitLab.com shared runners, I don't know...
Alpine Linux now has a docker-compose package in its "edge" branch, so you can install it this way in .gitlab-ci.yml:
a-job-with-docker-compose:
  image: docker
  services:
    - docker:dind
  script:
    - apk add docker-compose --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ --allow-untrusted
    - docker-compose -v