Run docker commands from gitlab-ci - docker

I have this gitlab-ci file:
services:
  - docker:18.09.7-dind

variables:
  SONAR_TOKEN: "$PROJECT_SONAR_TOKEN"
  GIT_DEPTH: 0
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_DRIVER: overlay2

sonarqube-check:
  image: maven:latest
  stage: test
  before_script:
    - "docker version"
    - "mkdir $PWD/.m2"
    - "cp -f /cache/settings.xml $PWD/.m2/settings.xml"
  script:
    - mvn $MAVEN_CLI_OPTS clean verify sonar:sonar -Dsonar.qualitygate.wait=true -Dsonar.login=$SONAR_TOKEN -Dsonar.projectKey="project-key"
  after_script:
    - "rm -rf $PWD/.m2"
  allow_failure: false
  only:
    - merge_requests
For some reason the docker binaries are not found even with the docker-in-docker service (the docker version command in before_script fails):
/bin/bash: line 111: docker: command not found
I'm wondering if there is a way of doing this inside the gitlab-ci file, because I need to run docker for the tests: is there an image that contains both the maven and docker binaries, or will I have to create my own docker image?
It all has to be in one stage; I cannot divide it into two stages (or at least I don't know how to compile with Maven in one stage and run the tests with a docker image in another).
Thank you!

As you correctly pointed out, you need the mvn and docker binaries in the image you are using for that GitLab CI job.
The quickest win is probably to install docker into your maven:latest build image at run time in the before_script section:
before_script:
  - apt-get update && apt-get install -y docker.io
  - docker version
If that slows down your job too much, you might want to build your own custom docker image that contains both Maven and Docker.
Also have a look at the GitLab article about dind if you end up moving to Docker 19.03+.
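Putting it together, the whole job could look roughly like this (a sketch of the run-time install approach; it assumes the global services and variables blocks from the question, and that maven:latest is Debian-based so the docker.io package is available via apt):
sonarqube-check:
  image: maven:latest
  stage: test
  before_script:
    - apt-get update && apt-get install -y docker.io # install the docker CLI at run time
    - docker version # now talks to the dind service via DOCKER_HOST
    - mkdir $PWD/.m2
    - cp -f /cache/settings.xml $PWD/.m2/settings.xml
  script:
    - mvn $MAVEN_CLI_OPTS clean verify sonar:sonar -Dsonar.qualitygate.wait=true -Dsonar.login=$SONAR_TOKEN -Dsonar.projectKey="project-key"
  after_script:
    - rm -rf $PWD/.m2
  allow_failure: false
  only:
    - merge_requests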

Related

docker: command not found in gitlab-ci

Background
In my gitlab-ci file I am trying to build a docker image; however, even though I have docker:dind as a service, it is failing.
.gitlab-ci
---
stages:
  - build
  - docker

build:
  stage: build
  image: fl4m3ph03n1x/my-app:1.0
  variables:
    MIX_ENV: prod
  script:
    - mix deps.get
    - mix deps.compile
    - mix compile
  artifacts:
    paths:
      - .hex/
      - _build/
      - deps/
      - mix.lock

build_image:
  stage: docker
  image: fl4m3ph03n1x/my-app:1.0
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: tcp://docker:2375/
  services:
    - docker:dind
  script:
    - echo ${CI_JOB_TOKEN} | docker login --password-stdin -u ${CI_REGISTRY_USER} ${CI_REGISTRY}
    - docker build . -t ${CI_REGISTRY_IMAGE}:latest
    - docker push ${CI_REGISTRY_IMAGE}:latest
The problematic stage is docker.
As you can see, I am trying to:
log in to gitlab's docker registry
build an image
push that image
Error
However, I am getting the following error:
$ echo ${CI_JOB_TOKEN} | docker login --password-stdin -u ${CI_REGISTRY_USER} ${CI_REGISTRY}
/bin/bash: line 110: docker: command not found
Which is confusing, because docker:dind is supposed to actually prevent this from happening:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#enable-registry-mirror-for-dockerdind-service
Question
So clearly I am missing something here. What am I doing wrong?
EDIT
This is my Dockerfile
FROM elixir:1.10
# Install Hex + Rebar
RUN mix do local.hex --force, local.rebar --force
COPY . /
WORKDIR /
ENV MIX_ENV=prod
RUN mix do deps.get --only $MIX_ENV, deps.compile
RUN mix release
EXPOSE 8080
ENV PORT=8080
ENV SHELL=/bin/bash
CMD ["_build/prod/rel/my_app/bin/my_app", "start"]
image is used to specify the image in which the script runs. Here you want to run the script in a docker image, in order to build your application image.
The image keyword is the name of the Docker image the Docker executor runs to perform the CI tasks.
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-image-and-services-from-gitlab-ciyml
After all, isn't CI_REGISTRY_IMAGE your application image here? You don't want to build the image inside itself:
- docker build . -t ${CI_REGISTRY_IMAGE}:latest
- docker push ${CI_REGISTRY_IMAGE}:latest
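A corrected build_image job could therefore look like this (a sketch; docker:latest provides the docker CLI in the job, while the dind service provides the daemon):
build_image:
  stage: docker
  image: docker:latest # a docker client image instead of the application image
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: tcp://docker:2375/
  services:
    - docker:dind
  script:
    - echo ${CI_JOB_TOKEN} | docker login --password-stdin -u ${CI_REGISTRY_USER} ${CI_REGISTRY}
    - docker build . -t ${CI_REGISTRY_IMAGE}:latest
    - docker push ${CI_REGISTRY_IMAGE}:latest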

Caching Maven dependencies in Gitlab-CI between different stages and images

I have built the following CI in gitlab in order to execute unit-tests and integration tests.
stages:
  - build
  - test

variables:
  MAVEN_OPTS: "-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"

cache:
  key: "$CI_BUILD_REF"
  paths:
    - .m2/repository/

unit-tests:
  image: maven:latest
  stage: test
  script:
    - cd source_code
    - mvn test -P test

integration-tests:
  image: docker
  stage: test
  services:
    - docker:dind
  script:
    - apk add --no-cache docker-compose
    - docker-compose up -d
    - docker exec -t account_service_container sh "integration_tests.sh"
    - docker-compose down --rmi all
The point is that when I use the maven image between stages, I am able to cache the maven dependencies in the .m2 repository. However, for the integration tests I use a different image, and the docker:dind container creates an isolated set of containers via docker-compose, which has no access to the previously defined and cached mvn repository. Is there any solution for this case? Should I create a customized image including all fetched and required mvn dependencies, keep it on Docker Hub, and then use that image for each stage and in docker-compose?
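One detail worth noting about the cache configuration above: $CI_BUILD_REF is a per-commit key (it is the deprecated predecessor of $CI_COMMIT_SHA), so the cached repository is only shared between jobs running for the same commit. A sketch of a per-branch cache key using the current variable names:
cache:
  key: "$CI_COMMIT_REF_SLUG" # one cache per branch instead of per commit
  paths:
    - .m2/repository/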

How to conditionally update a CI/CD job image?

I just got into the (wonderful) world of CI/CD and have working pipelines. They are not optimal, though.
The application is a dockerized website:
the source needs to be compiled by webpack and end up in dist
this dist directory is copied to a docker container
which is then remotely built and deployed
My current setup is quite naïve (I added some comments to show why I believe the various elements are needed/useful):
# I start with a small image
image: alpine

# before the job I need to have npm and docker
# the problem: I need one in one job, and the second one in the other
# I do not need both on both jobs but do not see how to split them
before_script:
  - apk add --update npm
  - apk add docker
  - npm install
  - npm install webpack -g

stages:
  - create_dist
  - build_container
  - stop_container
  - deploy_container

# the dist directory is preserved for the other job which will make use of it
create_dist:
  stage: create_dist
  script: npm run build
  artifacts:
    paths:
      - dist

# the following three jobs are remote and need to be daisy chained
build_container:
  stage: build_container
  script: docker -H tcp://eu13:51515 build -t widgets-sentinels .

stop_container:
  stage: stop_container
  script: docker -H tcp://eu13:51515 stop widgets-sentinels
  allow_failure: true

deploy_container:
  stage: deploy_container
  script: docker -H tcp://eu13:51515 run --rm -p 8880:8888 --name widgets-sentinels -d widgets-sentinels
This setup works, but npm and docker are installed in both jobs. This is not needed and slows down the deployment. Is there a way to state that such and such packages need to be added for specific jobs (and not globally to all of them)?
To make it clear: this is not a show stopper (and in reality not likely to be an issue at all), but I fear that my approach to such job automation is incorrect.
You don't necessarily need to use the same image for all jobs. Let me show you one of our pipelines (partially) which does a similar thing, just with composer for php instead of npm:
cache:
  paths:
    - vendor/

build:composer:
  # use our custom base image, where only composer is installed, to build the dependencies
  image: registry.example.com/base-images/php-composer:latest
  stage: build dependencies
  script:
    - php composer.phar install --no-scripts
  artifacts:
    paths:
      - vendor/
  only:
    changes:
      - composer.{json,lock,phar} # build the vendor folder only when relevant files change, otherwise use the cached folder from the s3 bucket (configured in the runner config)

build:api:
  image: docker:18 # use the docker image to build the actual application image
  stage: build api
  dependencies:
    - build:composer # reference the dependency dir
  script:
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" "$CI_REGISTRY"
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest
The composer base image contains all necessary packages to run composer, so in your case you'd create a base image for npm:
FROM alpine:latest
RUN apk add --update npm
Then use this image in your create_dist stage, and use image: docker:latest in the other stages.
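Applied to the pipeline above, that could look roughly like this (a sketch; the npm base image name is a placeholder for wherever you push your custom image):
create_dist:
  image: registry.example.com/base-images/npm:latest # hypothetical custom npm base image
  stage: create_dist
  script: npm run build
  artifacts:
    paths:
      - dist

build_container:
  image: docker:latest
  stage: build_container
  script: docker -H tcp://eu13:51515 build -t widgets-sentinels .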
As well as referencing different images for different jobs, you may also try GitLab YAML anchors, which provide reusable templates for the jobs:
.install-npm-template: &npm-template
  before_script:
    - apk add --update npm
    - npm install
    - npm install webpack -g

.install-docker-template: &docker-template
  before_script:
    - apk add docker

create_dist:
  <<: *npm-template
  stage: create_dist
  script: npm run build
  ...

deploy_container:
  <<: *docker-template
  stage: deploy_container
  ...
You could also try a multi-stage build: you create intermediate temporary images and copy the generated content into the final docker image. Likewise, npm should be part of a docker image: create one npm image and use it in the final docker image as the builder image.

Gitlab Mocha tests and Docker Tag problems

I am trying to create a correct .gitlab-ci.yml file. This is for the online gitlab.com, not for a self-hosted GitLab instance. Most (if not all) documentation is about self-hosted instances.
What I want is to run my Mocha-Chai tests on the built container, and when the tests pass, build an image and store it in the Gitlab Registry with a tag that matches my latest git tag.
Test part
I cannot get the tests running; whatever I try, I always get mocha not found.
Below is my .yml file. The build section is working.
The problem is in the test section and in the docker tag part of release-image. I got the yml file from the official gitlab documentation.
image: docker:latest

services:
  - docker:dind

stages:
  - build
  - test
  - release
  - deploy

variables:
  CONTAINER_TEST_IMAGE: registry.gitlab.com/edelacruz/cloudtrader-microservices:$CI_COMMIT_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/edelacruz/cloudtrader-microservices:latest

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com/edelacruz/cloudtrader-microservices

build:
  stage: build
  script:
    - docker build --pull -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE npm install && npm test
I also tried
test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE npm test
and
test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE npm install mocha -g
    - docker run $CONTAINER_TEST_IMAGE npm install chai -g
    - docker run $CONTAINER_TEST_IMAGE npm test
all with the same result:
sh: mocha: not found
the test script in package.json is
"test": "mocha ./Test",
I tried both putting mocha and chai in the devDependencies and in dependencies.
"devDependencies": {
"chai": "^4.0.2",
"mocha": "^3.4.2"
}
"dependencies": {
"chai": "^4.0.2",
"mocha": "^3.4.2"
},
Tag part
variables:
  CONTAINER_TEST_IMAGE: registry.gitlab.com/edelacruz/cloudtrader-microservices:$CI_COMMIT_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/edelacruz/cloudtrader-microservices:latest

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com/edelacruz/cloudtrader-microservices

release-image:
  stage: release
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE:$CI_COMMIT_TAG
    - docker push $CONTAINER_RELEASE_IMAGE
  only:
    - master
The release-image works if I leave out the tag part.
But I really want to have my image tagged with the git tag, not with latest or master.
$ docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE:$CI_COMMIT_TAG
Error parsing reference: "registry.gitlab.com/edelacruz/cloudtrader-microservices:" is not a valid repository/tag: invalid reference format
ERROR: Job failed: exit code 1
Use this in the first approach:
test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE sh -c 'npm install -g mocha && npm install && npm test'
I've added the globally installed mocha. What you tried in the later approaches didn't work because every docker run starts a new container based on the image, not on the previous container.
In your first try (the line docker run $CONTAINER_TEST_IMAGE npm install && npm test), the shell splits the command at the && into docker run $CONTAINER_TEST_IMAGE npm install and npm test. As you may notice, the second command isn't run within a docker container at all.
For your second try, docker run $CONTAINER_TEST_IMAGE npm test requires that mocha already be installed in the docker image.
For your third try:
  docker run $CONTAINER_TEST_IMAGE npm install mocha -g
  docker run $CONTAINER_TEST_IMAGE npm install chai -g
  docker run $CONTAINER_TEST_IMAGE npm test
each of the commands is actually run in a separate docker container (i.e. there's nothing indicating that the commands need to be run within the same docker container), so the globally installed mocha is gone by the time npm test runs.
So, what's the easiest way to resolve this? Your first try is actually pretty close. You just have to make sure that the command does not get split in two.
Something like the following should work:
test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE /bin/bash -c "npm install --only=dev; npm test"
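As for the tag error in release-image: $CI_COMMIT_TAG is only set in pipelines triggered by pushing a git tag, so with only: master the variable expands to nothing and the reference ends in a bare colon. Note also that $CONTAINER_RELEASE_IMAGE already contains :latest, so appending another :$CI_COMMIT_TAG would be invalid even in a tag pipeline. A sketch of a tag-triggered release job (assuming the release should happen when a git tag is pushed):
release-image:
  stage: release
  variables:
    # redefined here without the :latest suffix so the git tag can be appended
    CONTAINER_RELEASE_IMAGE: registry.gitlab.com/edelacruz/cloudtrader-microservices
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE:$CI_COMMIT_TAG
    - docker push $CONTAINER_RELEASE_IMAGE:$CI_COMMIT_TAG
  only:
    - tags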

Run docker-compose build in .gitlab-ci.yml

I have a .gitlab-ci.yml file which contains the following:
image: docker:latest

services:
  - docker:dind

before_script:
  - docker info
  - docker-compose --version

buildJob:
  stage: build
  tags:
    - docker
  script:
    - docker-compose build
But in the CI log I receive the message:
$ docker-compose --version
/bin/sh: eval: line 46: docker-compose: not found
What am I doing wrong?
Docker also provides an official image: docker/compose
This is the ideal solution if you don't want to install it on every pipeline run.
Note that in the latest version of GitLab CI/Docker you will likely need to give privileged access to your GitLab CI Runner and configure/disable TLS. See Use docker-in-docker workflow with Docker executor
variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

# Official docker compose image.
image:
  name: docker/compose:latest

services:
  - docker:dind

before_script:
  - docker version
  - docker-compose version

build:
  stage: build
  script:
    - docker-compose down
    - docker-compose build
    - docker-compose up tester-image
Note that in versions of docker-compose earlier than 1.25, the image uses docker-compose-entrypoint.sh as its entrypoint, so you'll need to override it back to /bin/sh -c in your .gitlab-ci.yml. Otherwise your pipeline will fail with No such command: sh:
image:
  name: docker/compose:latest
  entrypoint: ["/bin/sh", "-c"]
Following the official documentation:
# .gitlab-ci.yml
image: docker

services:
  - docker:dind

build:
  script:
    - apk add --no-cache docker-compose
    - docker-compose up -d
Sample docker-compose.yml:
version: "3.7"
services:
foo:
image: alpine
command: sleep 3
bar:
image: alpine
command: sleep 3
We personally do not follow this flow anymore, because you lose control over the running containers and they might end up running endlessly. This is because of the docker-in-docker executor. As a workaround we developed a python script to kill all old containers in our CI, which can be found here. But I no longer suggest starting containers like this.
I created a simple docker image which has docker-compose installed on top of docker:latest. See https://hub.docker.com/r/tmaier/docker-compose/
Your .gitlab-ci.yml file would look like this:
image: tmaier/docker-compose:latest

services:
  - docker:dind

before_script:
  - docker info
  - docker-compose --version

buildJob:
  stage: build
  tags:
    - docker
  script:
    - docker-compose build
EDIT: I added another answer providing a minimal example of a .gitlab-ci.yml configuration supporting docker-compose.
docker-compose can be installed as a Python package, which is not shipped with your image. The image you chose does not even provide an installation of Python:
$ docker run --rm -it docker sh
/ # find / -iname "python"
/ #
Looking for Python gives an empty result. So you have to choose a different image that fits your needs and ideally has docker-compose installed, or you create one manually.
The docker image you chose uses Alpine Linux. You can use it as a base for your own image or try a different one first if you are not familiar with Alpine Linux.
I had the same issue, so I created a Dockerfile in a public GitHub repository, connected it with my Docker Hub account, and chose an automated build to build my image on each push to the GitHub repository. Then you can easily access your own images with GitLab CI.
If you don't want to provide a custom docker image with docker-compose preinstalled, you can get it working by installing Python during build time. With Python installed you can finally install docker-compose ready for spinning up your containers.
image: docker:latest

services:
  - docker:dind

before_script:
  - apk add --update python py-pip python-dev && pip install docker-compose # install docker-compose
  - docker version
  - docker-compose version

test:
  cache:
    paths:
      - vendor/
  script:
    - docker-compose up -d
    - docker-compose exec -T php-fpm composer install --prefer-dist
    - docker-compose exec -T php-fpm vendor/bin/phpunit --coverage-text --colors=never --whitelist src/ tests/
Use docker-compose exec with -T if you receive this or a similar error:
$ docker-compose exec php-fpm composer install --prefer-dist
Traceback (most recent call last):
  File "/usr/bin/docker-compose", line 9, in <module>
    load_entry_point('docker-compose==1.8.1', 'console_scripts', 'docker-compose')()
  File "/usr/lib/python2.7/site-packages/compose/cli/main.py", line 62, in main
    command()
  File "/usr/lib/python2.7/site-packages/compose/cli/main.py", line 114, in perform_command
    handler(command, command_options)
  File "/usr/lib/python2.7/site-packages/compose/cli/main.py", line 442, in exec_command
    pty.start()
  File "/usr/lib/python2.7/site-packages/dockerpty/pty.py", line 338, in start
    io.set_blocking(pump, flag)
  File "/usr/lib/python2.7/site-packages/dockerpty/io.py", line 32, in set_blocking
    old_flag = fcntl.fcntl(fd, fcntl.F_GETFL)
ValueError: file descriptor cannot be a negative integer (-1)
ERROR: Build failed: exit code 1
I think most of the above are helpful; however, I needed to apply them collectively to solve this problem.
Also note that in your docker-compose file, this is the format you have to use for the image name:
<registry base url>/<username>/<repo name>/<image name>:<tag>
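For instance, a service entry in docker-compose.yml following that format might look like this (a sketch; myuser, myrepo and my-image are placeholder names):
version: "3.7"
services:
  my-service:
    build: .
    image: registry.gitlab.com/myuser/myrepo/my-image:latest
Below is the .gitlab-ci.yml which worked for me. I hope it works for you too: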
image:
  name: docker/compose:latest
  entrypoint: ["/bin/sh", "-c"]

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

stages:
  - build_images

before_script:
  - docker version
  - docker-compose version
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  stage: build_images
  script:
    - docker-compose down
    - docker-compose build
    - docker-compose push
There is tiangolo/docker-with-compose, which works:
image: tiangolo/docker-with-compose

stages:
  - build
  - test
  - release
  - clean

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com

build:
  stage: build
  script:
    - docker-compose -f docker-compose-ci.yml build --pull

test1:
  stage: test
  script:
    - docker-compose -f docker-compose-ci.yml up -d
    - docker-compose -f docker-compose-ci.yml exec -T php ...
It really took me some time to get it working with Gitlab.com shared runners.
I'd like to say "use docker/compose:latest and that's it", but unfortunately I was not able to make it work; I was getting the Cannot connect to the Docker daemon at tcp://docker:2375/. Is the docker daemon running? error even when all the env variables were set.
Nor did I like the option of installing thousands of dependencies just to get docker-compose via pip.
Fortunately, for recent Alpine versions (3.10+) there is a docker-compose package in the Alpine repository. It means that #n2o's answer can be simplified to:
test:
  image: docker:19.03.0
  variables:
    DOCKER_DRIVER: overlay2
    # Create the certificates inside this directory for both the server
    # and client. The certificates used by the client will be created in
    # /certs/client so we only need to share this directory with the
    # volume mount in `config.toml`.
    DOCKER_TLS_CERTDIR: "/certs"
  services:
    - docker:19.03.0-dind
  before_script:
    - apk --no-cache add docker-compose # <---------- Mind this line
    - docker info
    - docker-compose --version
  stage: test
  script:
    - docker-compose build
This worked perfectly on the first try for me. Maybe the reason the other answers didn't work lies in some configuration of the Gitlab.com shared runners, I don't know...
Alpine Linux now has a docker-compose package in its "edge" branch, so you can install it this way in .gitlab-ci.yml:
a-job-with-docker-compose:
  image: docker
  services:
    - docker:dind
  script:
    - apk add docker-compose --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ --allow-untrusted
    - docker-compose -v
