GitLab CI template job with different names - docker

Is there a way in GitLab CI to use a generic job template and run multiple instances of that job with different names?
Below I have a generic job definition in build.yml that specifies that the job runs in the build stage and runs docker build with the given Dockerfile.
I want to be able to run several builds (in parallel, but I have left that out for now to keep the configuration as clean as possible) with different names, all based on this build template. Examples are shown in build_a.yml and build_b.yml, where I set a job-specific variable that specifies which Dockerfile to use.
In the main file .gitlab-ci.yml I want to include these specific jobs (build_a and build_b).
common.yml
image:
  name: docker:dind

services:
  - docker:dind

stages:
  - build
build.yml
build:
  stage: build
  script:
    - docker build -f $DOCKER_FILE
build_a.yml
include: 'build.yml'

build:
  variables:
    DOCKER_FILE: DockerfileA
build_b.yml
include: 'build.yml'

build:
  variables:
    DOCKER_FILE: DockerfileB
.gitlab-ci.yml
include:
  - 'common.yml'
  - 'build_a.yml'
  - 'build_b.yml'
The problem is that when I include these build jobs they have the same job name, so the first job (build_a) is overwritten by the second job (build_b) in the resulting YAML.
Is there another way of doing this that I have missed in the documentation or in similar issues?

A way to solve the above issue is to use the extends keyword to extend the job configuration in the specific build files.
The update below defines three entries (.build, build_a and build_b); jobs whose names start with a dot are hidden template jobs and are never run, so only build_a and build_b will be executed.
common.yml
image:
  name: docker:dind

services:
  - docker:dind

stages:
  - build
build.yml
.build:
  stage: build
  script:
    - docker build -f $DOCKER_FILE
build_a.yml
include: 'build.yml'

build_a:
  extends: .build
  variables:
    DOCKER_FILE: DockerfileA
build_b.yml
include: 'build.yml'

build_b:
  extends: .build
  variables:
    DOCKER_FILE: DockerfileB
.gitlab-ci.yml
include:
  - 'common.yml'
  - 'build_a.yml'
  - 'build_b.yml'

Related

How to build Nx monorepo apps in Gitlab CI Runner

I am trying to have a GitLab CI pipeline that performs the following actions:
Install yarn dependencies and cache them so that I don't have to run yarn install in every job
Test all of my modified apps with the nx affected command
Build all of my modified apps with the nx affected command
Build my docker images with my modified apps
I have tried many ways to do this in my CI and none of them worked. I'm quite stuck.
This is my current CI:
default:
  image: registry.gitlab.com/xxxx/xxxx/xxxx

stages:
  - setup
  - test
  - build
  - forge

.distributed:
  interruptible: true
  only:
    - main
    - develop
  cache:
    key:
      files:
        - yarn.lock
    paths:
      - node_modules
      - .yarn
  before_script:
    - yarn install --cache-folder .yarn-cache --immutable --immutable-cache --check-cache
    - NX_HEAD=$CI_COMMIT_SHA
    - NX_BASE=${CI_MERGE_REQUEST_DIFF_BASE_SHA:-$CI_COMMIT_BEFORE_SHA}
  artifacts:
    paths:
      - node_modules

test:
  stage: test
  extends: .distributed
  script:
    - yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=test --parallel=3 --ci --code-coverage

build:
  stage: build
  extends: .distributed
  script:
    - yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=build --parallel=3

forge-docker-landing-staging:
  stage: forge
  services:
    - docker:20.10.16-dind
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
      allow_failure: true
    - exists:
        - "dist/apps/landing/*"
      allow_failure: true
  script:
    - docker build -f Dockerfile.landing -t landing:staging .
Currently, here is what works and what doesn't:
❌ Caching doesn't work; yarn install runs in every job that has extends: .distributed
✅ Nx affected commands work as expected (test and build)
❌ Building the apps with Docker is not working; I have some trouble with docker-in-docker.
Problem #1: You don't cache your .yarn-cache directory, even though you explicitly point yarn install at it in the before_script section. The solution is simple: add .yarn-cache to your cache.paths section.
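For illustration, a minimal sketch of the adjusted cache section of .distributed (same key and paths as before, with .yarn-cache added):

.distributed:
  cache:
    key:
      files:
        - yarn.lock
    paths:
      - node_modules
      - .yarn
      - .yarn-cache   # the folder yarn install is told to use in before_script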
Regarding
yarn install runs in every job that has extends: .distributed
That is the intended behavior in your pipeline: extends basically merges sections of your gitlab-ci config, so the test stage effectively runs the following script in the runner image:
yarn install --cache-folder .yarn-cache --immutable --immutable-cache --check-cache
NX_HEAD=$CI_COMMIT_SHA
NX_BASE=${CI_MERGE_REQUEST_DIFF_BASE_SHA:-$CI_COMMIT_BEFORE_SHA}
yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=test --parallel=3 --ci --code-coverage
and the build stage differs only in the last line.
Once the yarn cache folder is actually cached, the install phase will be much faster.
Also, in this case

artifacts:
  paths:
    - node_modules

is not needed, since node_modules will come from the cache. Removing it from artifacts will also ease the load on your GitLab instance; node_modules is usually huge and doesn't really make sense as an artifact.
Problem #2: What is your artifact?
You haven't provided your Dockerfile or any clue about what exactly is produced by your build step, so I assume your build stage produces something in the dist directory. If you want to use that in your Docker build stage, you should declare it in the artifacts section of your build job:
build:
  stage: build
  extends: .distributed
  script:
    - yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=build --parallel=3
  artifacts:
    paths:
      - dist
After that, your forge-docker-landing-staging job will have access to your build artifacts.
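Artifacts from earlier stages are passed to later jobs by default; if you want to be explicit about it (and avoid downloading artifacts from unrelated jobs), a sketch of an optional dependencies entry:

forge-docker-landing-staging:
  stage: forge
  dependencies:
    - build   # only download artifacts produced by the build job
  # ... rest of the job unchanged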
Problem #3: Docker is not working!
Without any logs from your CI system it is impossible to help you here, and it also stretches SO's "one question per question" policy. If your other stages are running fine, consider using kaniko instead of docker-in-docker: DinD is a security nightmare (you are basically giving root rights on your build machine to anyone who can edit the .gitlab-ci.yml file). See https://docs.gitlab.com/ee/ci/docker/using_kaniko.html , and in your case something like the job below (not tested) should work:
forge-docker-landing-staging:
  stage: forge
  image:
    name: gcr.io/kaniko-project/executor:v1.9.0-debug
    entrypoint: [""]
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
      allow_failure: true
    - exists:
        - "dist/apps/landing/*"
      allow_failure: true
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile.landing"
      --destination "${CI_REGISTRY_IMAGE}/landing:staging"

GitLab CI run docker container of other Repository

I am still relatively new to GitLab CI and unfortunately I cannot test this myself yet, so this is more of a theoretical attempt.
I want to start a Docker container from one of my other GitLab projects in the CI pipeline of my main project.
This container (I'll call it the mock container) is built and published in the GitLab CI pipeline of the corresponding project and contains various mocked services.
In the project in which I want to run the mock container, it should be possible to start that container in the GitLab CI pipeline.
I know it is possible to use a build of the project in a different stage in the same pipeline, like here for example:
variables:
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE:latest
Is it possible, for example, if the $CI_REGISTRY_IMAGE used in the CONTAINER_*_IMAGE variables is something like
registry.gitlab.com/foo/bar/mainproject
to add a variable here like
MOCK_CONTAINER_IMAGE: registry.gitlab.com/foo/bar/mockproject:latest
so that I could, for example, use it in the services list of the test stage:
build:
  stage: build
  image: quay.io/podman/stable
  script:
    - podman login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY --log-level=debug
    - podman build --format docker --pull -t $CONTAINER_TEST_IMAGE .
    - podman push $CONTAINER_TEST_IMAGE

test:
  stage: test
  image:
    name: postman/newman
    entrypoint: [ "" ]
  services:
    - name: $CONTAINER_TEST_IMAGE
      alias: main-project
    - name: $MOCK_CONTAINER_IMAGE
      alias: mock-container
...
Is this possible, or is there a better way to achieve this?
If you're asking whether you can set a variable in the .gitlab-ci.yml file with the registry URL of the other container, like this:

variables:
  MOCK_CONTAINER_IMAGE: registry.gitlab.com/foo/bar/mockproject:latest

then yes, you can, and you can use the variable in different stages of your file as you please. If you want to pull this image from the registry, you can do that in a job as well.
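For illustration, a minimal sketch of both uses; the variable name and aliases are only examples, and pulling another project's registry image requires credentials that are allowed to read that project's registry:

variables:
  MOCK_CONTAINER_IMAGE: registry.gitlab.com/foo/bar/mockproject:latest

test:
  stage: test
  image:
    name: postman/newman
    entrypoint: [ "" ]
  services:
    - name: $MOCK_CONTAINER_IMAGE   # started as a linked service next to the job
      alias: mock-container

pull-mock:
  stage: test
  image: quay.io/podman/stable
  script:
    - podman login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - podman pull $MOCK_CONTAINER_IMAGE   # explicit pull, if the image is needed inside the job itself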
Check this reference for more info: https://docs.gitlab.com/ee/ci/yaml/#variables

Publishing image with docker from gitlab ci

I am trying to create my war artifact with Gradle and push it to my remote image repository. The problem is that I am getting
COPY failed: stat /var/lib/docker/tmp/docker-builder756634785/build/libs/myartifact.war: no such file or directory
so the Docker build cannot find my artifact.
How can I point it to the correct location?
# gitlab-ci.yaml
stages:
  - build

variables:
  GRADLE_OPTS: "-Dorg.gradle.daemon=false -Dorg.gradle.caching=true"

build:
  image: gradle:alpine
  stage: build
  script:
    - ./gradlew clean build -i

docker_build:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    - docker build --pull -t myrepo.io/myimage:latest .
    - docker login myrepo.io -u username -p pass
    - docker push myrepo.io/myimage:latest
You need to export the artifact that you generate in the build job; after that you will be able to download it in the docker_build job (using a dependency).
This doc has a lot of examples of how to handle it: https://docs.gitlab.com/ee/ci/yaml/#artifacts
Also look at this example: https://docs.gitlab.com/ee/ci/yaml/#dependencies
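A minimal sketch of what that could look like, assuming the war ends up under build/libs as the error message suggests; note that docker_build also has to run after the build job (for example in a later stage) to receive its artifacts:

stages:
  - build
  - package   # docker_build moves here so it runs after build

build:
  image: gradle:alpine
  stage: build
  script:
    - ./gradlew clean build -i
  artifacts:
    paths:
      - build/libs/*.war   # expose the war to later jobs

docker_build:
  image: docker:latest
  stage: package
  services:
    - docker:dind
  dependencies:
    - build   # download the war into this job's workspace before docker build
  script:
    - docker build --pull -t myrepo.io/myimage:latest .
    - docker login myrepo.io -u username -p pass
    - docker push myrepo.io/myimage:latest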

Gitlab CI - Build Docker Image With Shared Runner (cannot connect to Docker Daemon)

I am currently using GitLab shared runners to build and deploy my project (at least I'm trying to!).
I have the gitlab-ci.yml below:
image: java:8-jdk

stages:
  - build
  - package

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle
  - docker info

cache:
  paths:
    - .gradle/wrapper
    - .gradle/caches

build:
  stage: build
  script:
    - ./gradlew build
  artifacts:
    paths:
      - build/libs/*.jar
    expire_in: 1 week
  only:
    - master

docker-build:
  image: docker:stable
  services:
    - docker:dind
  stage: package
  script:
    - docker build -t registry.gitlab.com/my-project .
    - docker push registry.gitlab.com/my-project
  after_script:
    - echo "End CI"
First, the build stage works fine, but there is a problem with the second stage when I try to build and push my Docker image.
I get this log:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
It seems that GitLab is using a shared runner that can't build a Docker image, but I don't know how to change that. I cannot change the configuration of my runner, because I'm using shared runners. I also tried adding tags to my second stage, hoping that a more suitable runner would pick up the job, but I'm still getting this error.
Thank you for your help.
I believe you need to set DOCKER_HOST to connect to the DinD running in another container:
docker-build:
  image: docker:stable
  services:
    - docker:dind
  stage: package
  script:
    - export DOCKER_HOST=tcp://docker:2375/
    - docker build -t registry.gitlab.com/my-project .
    - docker push registry.gitlab.com/my-project
If your shared runner's executor is of type docker, you may try this setup:
stages:
  - build
  - package

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle
  - docker info

cache:
  paths:
    - .gradle/wrapper
    - .gradle/caches

build:
  image: java:8-jdk
  stage: build
  script:
    - ./gradlew build
  artifacts:
    paths:
      - build/libs/*.jar
    expire_in: 1 week
  only:
    - master

docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/my-project .
    - docker push registry.gitlab.com/my-project
  after_script:
    - echo "End CI"
We faced the same problem in our org. We found that there is a long-standing issue with the docker-in-docker area of GitLab, which can be tracked in issues #3612, #2408 and #2890.
We found that in our case binding the Docker socket was a better fit for our use case than docker-in-docker, so we used the solution from the official page.
I know this has already been answered, but this may help someone with a similar use case :)
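For reference, a sketch of what the socket-binding approach looks like in the runner's config.toml, assuming the docker executor; the image value is just an example. It mounts the host's Docker socket into each job container instead of running a separate daemon:

[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    # bind the host Docker socket so docker commands in jobs talk to the host daemon
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]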

Gitlab CI - docker: command not found

I am trying to build my Docker image within the GitLab CI pipeline.
However, it is not able to find the docker command.
/bin/bash: line 69: docker: command not found
ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1
.gitlab-ci.yml
stages:
  - quality
  - test
  - build
  - deploy

image: node:8.11.3

services:
  - mongo
  - docker:dind

before_script:
  - npm install

quality:
  stage: quality
  script:
    - npm run-script lint

test:
  stage: test
  script:
    - npm run-script test

build:
  stage: build
  script:
    - docker build -t server .

deploy:
  stage: deploy
  script:
    - echo "TODO deploy push docker image"
You need to choose an image that includes the Docker binaries:

image: gitlab/dind

services:
  - docker:dind
You have two options to fix this. Either way, you will need to edit your config.toml file (located wherever you installed your GitLab Runner).
OPTION 1
in config.toml:

privileged = true

in .gitlab-ci.yml:

myjob:
  stage: myjob
  image: docker:latest
  services:
    - docker:18.09.7-dind # older version that does not demand TLS (see below)
OPTION 2
in config.toml:

privileged = true
volumes = ["/certs/client", "/cache"]

in .gitlab-ci.yml:

myjob:
  stage: myjob
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2 # not sure if this is needed
    DOCKER_TLS_CERTDIR: "/certs"
IMPORTANT: ONCE YOU HAVE MADE THE CHANGES TO config.toml YOU WILL PROBABLY NEED TO RESTART THE GITLAB RUNNER (which may vary depending on OS) - I DID RESTART MINE, NOT SURE WHAT WOULD HAPPEN IF YOU DID NOT RESTART IT!
Instructions for restarting gitlab runner are here ... https://docs.gitlab.com/runner/commands/ ... basically gitlab-runner restart but on Windows I had to use Windows "Services" to restart it
Why this problem?
privileged = true gets rid of the docker: command not found problem.
However, docker:dind now requires TLS certs (whatever they are). If you are happy with an older Docker version then you can use OPTION 1. If you want the latest, you need to set up GitLab CI to use them, which is OPTION 2. J.E.S.U.S loves you :)
For more info ... https://about.gitlab.com/blog/2019/07/31/docker-in-docker-with-docker-19-dot-03
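For orientation, a sketch of where those settings live in config.toml; the executor and image values are examples and should match your own runner:

[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true
    # OPTION 2 additionally mounts the certs volume so the job and the dind service share TLS certs
    volumes = ["/certs/client", "/cache"]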
The problem here is that the node Docker image does not include the Docker binaries.
Two possibilities:
split the stages into two jobs: one using the node image for quality and test, and one using a Docker image for building and deploying (see the jobs documentation and the sketch below);
build a custom Docker image that includes both node and docker and use that image to build your repo.
Note that in both cases you will have to enable Docker inside your runner. See the documentation.
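A minimal sketch of the first option, keeping the original stage and script names (deploy omitted for brevity); whether the docker job also needs DOCKER_HOST/TLS variables or a privileged runner depends on the runner setup discussed in the answers above:

stages:
  - quality
  - test
  - build

quality:
  stage: quality
  image: node:8.11.3
  services:
    - mongo
  before_script:
    - npm install
  script:
    - npm run-script lint

test:
  stage: test
  image: node:8.11.3
  services:
    - mongo
  before_script:
    - npm install
  script:
    - npm run-script test

build:
  stage: build
  image: docker:stable      # image that ships the docker client
  services:
    - docker:dind           # docker daemon provided as a service
  script:
    - docker build -t server .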
