Check docker run in GitLab CI/CD pipeline

I'm using Gitlab CI/CD to build Docker images of our Node server.
I am wondering if there is a way to test that a docker run of the image succeeds.
We've had a few occasions where the Docker image builds but is missing some files/env variables and fails to start the server.
Is there any way to run the docker image and test if it is starting up correctly in the CI/CD pipeline?
Cheers.

With GitLab you are able to use a Docker runner.
When you use the Docker runner rather than a shell runner, the image and its services have to initialize, and GitLab reports an error if something fails.
Check the GitLab docs on this.
This is a typical YAML example from those docs:
default:
  image:
    name: ruby:2.2
    entrypoint: ["/bin/bash"]
  services:
    - name: my-postgres:9.4
      alias: db-postgres
      entrypoint: ["/usr/local/bin/db-postgres"]
      command: ["start"]
  before_script:
    - bundle install

test:
  script:
    - bundle exec rake spec
As you can see, the test section is executed after the image and its services are loaded, so you should not have to worry about it: GitLab should report any error when loading the image.
If you are doing it with the shell gitlab-runner, you should start the Docker image yourself, like this:
stages:
  - dockerStartup
  - build
  - test
  - deploy
  - dockerStop

job 0:
  stage: dockerStartup
  script:
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests

[...] # your jobs here

job 5:
  stage: dockerStop
  script: docker stop whatever
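Going back to the original question (verifying that the image actually starts), a minimal smoke-test job might look like the sketch below. The image name, container name, and wait time are placeholders to adapt to your Node server:

smoke-test:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t my-docker-image .
    - docker run -d --name server my-docker-image
    - sleep 5                       # give the Node server time to boot (placeholder wait)
    - docker logs server            # surface startup output in the job log
    # fail the job if the container already exited
    - test "$(docker inspect -f '{{.State.Running}}' server)" = "true"

If the server exposes a health endpoint, you could additionally curl it from a helper container instead of only checking that the container is still running.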

Related

Recover docker image after a gitlab-ci run

Let's say I build a docker image and then run some CI build like this:
stages:
  - create_builder_image
  - test

Create Builder Image:
  stage: create_builder_image
  script:
    - export DOCKER_BRANCH_TAG=$CI_COMMIT_REF_SLUG
    # do stuff to build the image, using cache to speed it up
    - docker push $GITLAB_IMAGE/builder:$DOCKER_BRANCH_TAG

Run Tests:
  image: $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
  stage: build
  script:
    # build some stuff in the image
Then I want to push the resulting image, with the built artifacts inside:

docker-package:
  stage: package
  script:
    - docker commit ?
    - docker push dockerhub:latest
That may not be possible at all.
Similar to In Gitlab CI/CD, how to commit and publish the docker container that is running our stages
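As noted, committing the container that runs your CI stages is likely not possible. A common workaround, sketched here with placeholder job names, paths, and image tags, is to pass the build output between jobs as artifacts and bake it into a fresh image in a later stage:

Run Tests:
  image: $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
  stage: build
  script:
    - make build                # placeholder for the real build commands
  artifacts:
    paths:
      - dist/                   # placeholder output directory

docker-package:
  stage: package
  image: docker:latest
  services:
    - docker:dind
  script:
    # Dockerfile.package is assumed to COPY dist/ into the final image
    - docker build -f Dockerfile.package -t myorg/myapp:latest .
    - docker push myorg/myapp:latest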

Is it possible to run a Docker Compose command before a job executes in GitLab CI

I am new to GitLab CI; it seems GitLab CI is Docker everywhere.
I was trying to run a MariaDB container before running tests. In GitHub Actions this is very easy: just a docker-compose up -d command before my mvn step.
When it came to GitLab CI, I was trying to use the following job to achieve the same purpose.
test:
  stage: test
  image: maven:3.6.3-openjdk-16
  services:
    - name: docker
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
      - .m2/repository
  script: |
    docker-compose up -d
    sleep 10
    mvn clean verify sonar:sonar
But this does not work: docker-compose is not found.
You can make use of Docker-in-Docker (docker:dind) and run the docker commands inside another Docker container.
But there is a limitation on running docker-compose by default. It is recommended to build a custom image on top of DIND, push it to the GitLab image registry, and reuse it across your jobs.
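A rough sketch of that approach, here installing docker-compose at job time instead of baking it into a custom image. The image names, the wait, the mount path, and the Maven invocation are illustrative assumptions:

test:
  stage: test
  image: docker:latest                    # ships the docker client
  services:
    - docker:dind
  before_script:
    - apk add --no-cache docker-compose   # or bake this into a custom DIND-based image pushed to your registry
  script:
    - docker-compose up -d
    - sleep 10
    # the job image no longer has Maven, so run the build in its own container;
    # the network setting assumes the compose services publish their ports on the dind host
    - docker run --rm -v "$PWD:/work" -w /work --network host maven:3.6.3-openjdk-16 mvn clean verify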

Testing Node server (docker) with GitLab CI

So I wrote a simple one-page server with Node and Express. I wrote a Dockerfile for it and ran it locally. Then I made a Postman collection and tested the endpoints.
I want to do this with GitLab CI using newman, so I came up with the following .gitlab-ci.yml:
image: docker:latest

services:
  - docker:dind

before_script:
  - docker build -t test_img .
  - docker run -d -p 3039:3039 test_img

stages:
  - test

# test
api-test:
  image:
    name: postman/newman:alpine
    entrypoint: [""]
  stage: test
  script:
    - newman run pdfapitest.postman_collection.json
It fails saying:
docker build -t test_img .
/bin/sh: eval: line 86: docker: not found
ERROR: Job failed: exit code 127
full output: https://pastebin.com/raw/C3mmUXKa
What am I doing wrong here? This seems to me like a very common use case, but I haven't found anything useful about it.
The issue is that your api-test job uses the image postman/newman:alpine to run the script.
This means that when GitLab tries to run the before_script section, it has no docker command available.
What you should do is provide the docker command in the image you're using to run the job. You can do that either by installing docker as the first step of your script, or by starting from a custom image which contains the software you're using inside the job plus the docker client itself.
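A sketch of the second suggestion: run the job from an image that already has the Docker client and invoke newman from its own container. The network name is a placeholder, and this assumes the Postman collection addresses the server by the api hostname (not localhost):

api-test:
  stage: test
  image: docker:latest              # the job image now provides the docker client
  services:
    - docker:dind
  script:
    - docker network create test-net
    - docker build -t test_img .
    - docker run -d --name api --network test-net test_img
    # run newman in its own container; assumes the collection calls the server as http://api:3039
    - docker run --rm --network test-net -v "$PWD:/etc/newman" postman/newman:alpine run pdfapitest.postman_collection.json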

Gitlab CI - docker: command not found

I am trying to build my Docker image within the GitLab CI pipeline.
However, it is not able to find the docker command.
/bin/bash: line 69: docker: command not found
ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1
.gitlab-ci.yml
stages:
  - quality
  - test
  - build
  - deploy

image: node:8.11.3

services:
  - mongo
  - docker:dind

before_script:
  - npm install

quality:
  stage: quality
  script:
    - npm run-script lint

test:
  stage: test
  script:
    - npm run-script test

build:
  stage: build
  script:
    - docker build -t server .

deploy:
  stage: deploy
  script:
    - echo "TODO deploy push docker image"
You need to choose an image that includes the Docker binaries:

image: gitlab/dind
services:
  - docker:dind
You have 2 options to fix this. You will need to edit your config.toml file (located wherever you installed your gitlab runner).
OPTION 1
in config.toml:

privileged = true

in .gitlab-ci.yml:

myjob:
  stage: myjob
  image: docker:latest
  services:
    - docker:18.09.7-dind   # older version that does not demand TLS (see below)
OPTION 2
in config.toml:

privileged = true
volumes = ["/certs/client", "/cache"]

in .gitlab-ci.yml:

myjob:
  stage: myjob
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2   # not sure if this is needed
    DOCKER_TLS_CERTDIR: "/certs"
IMPORTANT: once you have made the changes to config.toml you will probably need to restart the GitLab Runner (how to do this varies by OS). I restarted mine; I am not sure what would happen if you did not.
Instructions for restarting the runner are here: https://docs.gitlab.com/runner/commands/ ... basically gitlab-runner restart, but on Windows I had to use Windows "Services" to restart it.
Why this problem?
privileged = true gets rid of the docker: command not found problem.
However, docker:dind now requires TLS certs. If you are happy with an older Docker version you can use OPTION 1. If you want the latest, you need to set up GitLab CI to use the certs, which is OPTION 2.
For more info: https://about.gitlab.com/blog/2019/07/31/docker-in-docker-with-docker-19-dot-03
The problem here is that the Node Docker image does not embed the Docker binaries.
Two possibilities:
split the stages into two jobs: one using the Node image for quality and test, and one using a Docker image for building and deploying (see the sketch below). See the jobs documentation.
build a custom Docker image that embeds both Node and Docker and use that image to build your repo.
Note that in both cases you will have to enable Docker inside your agent. See the documentation.
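For the first possibility, a sketch of the split, reusing the jobs from the question (the Node and Docker image tags are taken from the question and are only illustrative):

quality:
  stage: quality
  image: node:8.11.3
  before_script:
    - npm install
  script:
    - npm run-script lint

test:
  stage: test
  image: node:8.11.3
  services:
    - mongo
  before_script:
    - npm install
  script:
    - npm run-script test

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t server .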

How to rebuild docker image on push before CI script jobs

I want to generate a Dockerfile in the GitLab CI script and build it, then use this newly generated image in the build jobs. How can I do this? I tried using a global before_script, but that already starts inside the default container. I need to do this outside of any container.
before_script runs before every job, so it's not what you want. But you can have a first job do the image build and take advantage of the fact that each job can use a different Docker image. Building the image is covered in the manual.
Option A (uhm... sort of OK)
Have 2 runners, one with a shell executor (tagged shell) and one with a Docker executor (tagged docker). You would then have a first stage with a job dedicated to building the docker image. It would use the shell runner.
image_build:
  stage: image_build
  script:
    - # create dockerfile
    - # run docker build
    - # push image to a registry
  tags:
    - shell
The second job would then run using the runner with docker executor and use this created image:
job_1:
  stage: test
  image: [image you created]
  script:
    - # your tasks
  tags:
    - docker
The problem with this is that the runner would need to be part of the docker group which has security implications.
Option B (better)
The second option does the same but has only one runner, using the Docker executor. The Docker image is built within a running container (the gitlab/dind:latest image), i.e. the "Docker in Docker" solution.
stages:
  - image_build
  - test

image_build:
  stage: image_build
  image: gitlab/dind:latest
  script:
    - # create dockerfile
    - # run docker build
    - # push image to a registry

job_1:
  stage: test
  image: [image you created]
  script:
    - # your tasks
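For illustration, the placeholder script in image_build could be filled in roughly like this, using GitLab's predefined registry variables and the docker:dind service in place of gitlab/dind. The generated Dockerfile line and the ci-base image name are stand-ins:

image_build:
  stage: image_build
  image: docker:latest
  services:
    - docker:dind
  script:
    - echo "FROM node:18" > Dockerfile     # stand-in for however the Dockerfile is generated
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/ci-base:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE/ci-base:$CI_COMMIT_REF_SLUG"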
