How to share a folder with the host when using `gitlab-runner` with Docker?

On a Linux system I am running a simple test job from the command line using the following command:
gitlab-runner exec docker --builds-dir /home/project/buildsdir test_job
with the following job definition in .gitlab-ci.yml:
test_job:
  image: python:3.8-buster
  script:
    - date > time.dat
However, the build folder is empty after the job has run. I can only imagine that the builds dir refers to a location inside the Docker container.
Also, after running the job successfully, I do
docker image ls
and I do not see a recent image.
So how can I "share"/"mount" the actual build folder of the Docker GitLab job to the host system so I can access all the output files?
I looked at the documentation and found nothing, and the same goes for
gitlab-runner exec docker --help
I also tried to use artifacts:
test_job:
  image: python:3.8-buster
  script:
    - pwd
    - date > time.dat
  artifacts:
    paths:
      - time.dat
but that also did not help. I was not able to find the file time.dat anywhere after the completion of the job.
I also tried to use docker-volumes:
gitlab-runner exec docker --docker-volumes /home/project/buildsdir/:/builds/project-0 test_job
gitlab-runner exec docker --docker-volumes /builds/project-0:/home/project/buildsdir/ test_job
but neither worked (the job failed in both cases).

You have to configure your config.toml file, located at /etc/gitlab-runner/.
Here's the doc: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section
First set a builds_dir, then bind it to a directory on your host machine in the volumes entry of the [runners.docker] section, like this:
builds_dir = "(your builds dir)"
[runners.docker]
  volumes = ["/tmp/build-dir:/(your builds dir):rw"]

Related

Use Paketo.io / CloudNativeBuildpacks (CNB) in GitLab CI with Kubernetes executor & unprivileged Runners (without pack CLI & docker)

We want to use Paketo.io / CloudNativeBuildpacks (CNB) in GitLab CI in the simplest way possible. Our GitLab setup uses an AWS EKS cluster with unprivileged GitLab CI Runners leveraging the Kubernetes executor. We also don't want to introduce security risks by using Docker in our builds, so we neither have our host's /var/run/docker.sock exposed nor want to use docker:dind.
We found some guides on how to use Paketo with GitLab CI, like this one: https://tanzu.vmware.com/developer/guides/gitlab-ci-cd-cnb/. But as described beneath the headline "Use Cloud Native Buildpacks with GitLab in GitLab Build Job WITHOUT Using the GitLab Build Template", the approach relies on Docker and the pack CLI. We tried to replicate this in our .gitlab-ci.yml, which looks like this:
image: docker:20.10.9
stages:
  - build
before_script:
  - |
    echo "install pack CLI (see https://buildpacks.io/docs/tools/pack/)"
    apk add --no-cache curl
    (curl -sSL "https://github.com/buildpacks/pack/releases/download/v0.21.1/pack-v0.21.1-linux.tgz" | tar -C /usr/local/bin/ --no-same-owner -xzv pack)
build-image:
  stage: build
  script:
    - pack --version
    - >
      pack build $REGISTRY_GROUP_PROJECT/$CI_PROJECT_NAME:latest
      --builder paketobuildpacks/builder:base
      --path .
But as outlined, our setup does not support Docker, and we end up with the following error in our logs:
...
$ echo "install pack CLI (see https://buildpacks.io/docs/tools/pack/)" # collapsed multi-line command
install pack CLI (see https://buildpacks.io/docs/tools/pack/)
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/4) Installing brotli-libs (1.0.9-r5)
(2/4) Installing nghttp2-libs (1.43.0-r0)
(3/4) Installing libcurl (7.79.1-r0)
(4/4) Installing curl (7.79.1-r0)
Executing busybox-1.33.1-r3.trigger
OK: 12 MiB in 26 packages
pack
$ pack --version
0.21.1+git-e09e397.build-2823
$ pack build $REGISTRY_GROUP_PROJECT/$CI_PROJECT_NAME:latest --builder paketobuildpacks/builder:base --path .
ERROR: failed to build: failed to fetch builder image 'index.docker.io/paketobuildpacks/builder:base': Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: command terminated with exit code 1
Any idea on how to use Paketo Buildpacks with GitLab CI without having Docker present inside our GitLab Kubernetes runners (which seems to be kind of a best practice)? We also don't want our setup to become too complex - e.g. by adding kpack.
TL;DR:
Use the Buildpack's lifecycle directly inside your .gitlab-ci.yml (here's a fully working example):
image: paketobuildpacks/builder
stages:
  - build
# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json
build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
The details: "using the lifecycle directly"
There are ongoing discussions about this topic. Especially have a look into https://github.com/buildpacks/pack/issues/564 and https://github.com/buildpacks/pack/issues/413#issuecomment-565165832. As stated there:
If you're looking to build images in CI (not locally), I'd encourage
you to use the lifecycle directly for that, so that you don't need
Docker. Here's an example:
The link to the example is broken, but it refers to the Tekton implementation of how to use buildpacks in a Kubernetes environment. Here we get a first clue about what Stephen Levine referred to as "to use the lifecycle directly". The crucial point inside it is the usage of command: ["/cnb/lifecycle/creator"]. So this is the lifecycle everyone is talking about! And there's good documentation about this command in this CNB RFC.
Choosing a good image: paketobuildpacks/builder:base
So how do we develop a working .gitlab-ci.yml? Let's start simple. Digging into the Tekton implementation, you'll see that the lifecycle command is executed inside an environment defined by BUILDER_IMAGE, which itself is documented as "The image on which builds will run (must include lifecycle and compatible buildpacks)". That sounds familiar! Can't we simply pick the builder image paketobuildpacks/builder:base from our pack CLI command? Let's try this locally on our workstation before committing too much noise into our GitLab. Choose a project you want to build (I created an example Spring Boot app at gitlab.com/jonashackt/microservice-api-spring-boot that you can clone, if you like) and run:
docker run --rm -it -v "$PWD":/usr/src/app -w /usr/src/app paketobuildpacks/builder bash
Now, inside the container powered by the paketobuildpacks/builder image, try to run the Paketo lifecycle directly with:
/cnb/lifecycle/creator -app=. microservice-api-spring-boot:latest
I only used the -app parameter out of the many possible parameters for the creator command, since most of them have quite good defaults. But as the default app directory path is /workspace - not the current directory - I had to configure it. We also need to define an <image-name> at the end, which will simply be used as the resulting container image name.
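Putting the two commands together, the same local smoke test can also be run non-interactively in one shot (same image and example project as above):

docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app \
  paketobuildpacks/builder \
  /cnb/lifecycle/creator -app=. microservice-api-spring-boot:latest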
The first .gitlab-ci.yml
Both commands did work at my local workstation, so let's finally create a .gitlab-ci.yml using this approach (here's a fully working example .gitlab-ci.yml):
image: paketobuildpacks/builder
stages:
  - build
build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
docker login without docker
As we don't have Docker available inside our Kubernetes Runners, we can't log in to the GitLab Container Registry as described in the docs. So the following error occurred when using this first approach:
===> ANALYZING
ERROR: failed to get previous image: connect to repo store "gitlab.yourcompanyhere.cloud:4567/yourgroup/microservice-api-spring-boot:latest": GET https://gitlab.yourcompanyhere.cloud/jwt/auth?scope=repository%3Ayourgroup%2Fmicroservice-api-spring-boot%3Apull&service=container_registry: DENIED: access forbidden
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: command terminated with exit code 1
Using the approach described in this SO answer fixed the problem. We need to create a ~/.docker/config.json containing the GitLab Container Registry login information - the Paketo build will then pick it up, as stated in the docs:
If CNB_REGISTRY_AUTH is unset and a docker config.json file is
present, the lifecycle SHOULD use the contents of this file to
authenticate with any matching registry.
Inside our .gitlab-ci.yml this could look like:
# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json
Our final .gitlab-ci.yml
As we're using image: paketobuildpacks/builder at the top of our .gitlab-ci.yml, we can now leverage the lifecycle directly - which is what we wanted to do in the first place. Just remember to use the correct GitLab CI variables to describe your <image-name>, like this:
/cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
Otherwise the Buildpack process's analyze step will break, and the image won't get pushed to the GitLab Container Registry. So finally our .gitlab-ci.yml looks like this (here's the fully working example):
image: paketobuildpacks/builder
stages:
  - build
# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json
build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
Our builds should now run successfully using Paketo/Buildpacks, without the pack CLI and Docker:
See the full log of the example project here.

Is it possible to run a Docker Compose command before a job executes in GitLab CI?

I am new to GitLab CI; it seems with GitLab CI it's Docker everywhere.
I was trying to run a MariaDB instance before running tests. In GitHub Actions it is very easy: just run a docker-compose up -d command before my mvn.
When it came to GitLab CI, I tried to use the following job to achieve the same purpose.
test:
  stage: test
  image: maven:3.6.3-openjdk-16
  services:
    - name: docker
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
      - .m2/repository
  script: |
    docker-compose up -d
    sleep 10
    mvn clean verify sonar:sonar
But this does not work; docker-compose is not found.
You can make use of docker:dind and run the docker commands inside another Docker container.
But by default there is a limitation in running docker-compose there. It is recommended to build a custom image on top of dind that includes docker-compose and push it to the GitLab image registry, so that it can be used across your jobs.
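A minimal sketch of that idea (the image name, registry path, and the assumption that docker-compose is installable via apk are mine, not from the original answer):

# Dockerfile for a custom dind image that also ships docker-compose
FROM docker:20.10-dind
RUN apk add --no-cache docker-compose

After building and pushing it once to your registry, a job could reference it like this (note that mvn would also need to be present in the image, or the Maven part split into its own job):

test:
  stage: test
  image: registry.gitlab.com/yourgroup/dind-compose:latest
  services:
    - docker:dind
  variables:
    # Point the docker CLI at the dind service; disable TLS for simplicity
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker-compose up -d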

Problem running test command inside Docker container in a Gitlab runner

I'm just getting started with Docker and continuous integration with GitLab. I've added the following gitlab-ci.yml file to the root of my repository:
# Official docker image
image: docker:latest
services:
  - docker:dind
build-dev:
  stage: build
  script:
    - docker build -t obikerui/project -f app/Dockerfile.dev ./app
test:
  stage: test
  script:
    - docker run obikerui/project npm run test -- --coverage
The build-dev stage runs and passes but the test stage fails with the following error message:
$ docker run obikerui/project npm run test -- --coverage
Unable to find image 'obikerui/project:latest' locally
docker: Error response from daemon: pull access denied for obikerui/project, repository does not exist or may require 'docker login'.
See 'docker run --help'.
ERROR: Job failed: exit code 125
Can anyone explain what's going wrong and suggest a fix? The repository is private, so do I need to provide some extra configuration to accommodate this?
Each job runs in a different container. You build and tag your image correctly, but it stays in that container.
For the test job a new container starts, and that one does not have the image built by the previous job.
You should push your image to a registry (after tagging it accordingly), and the test job should then use the image from that registry.
You can use a public registry like the one offered by Docker, or you can run a local container based on the registry:2 image provided by Docker. In the latter case you have to make sure that the domain name pointing to the registry is resolvable on your network (it can be an nginx reverse proxy).
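A sketch of that approach using GitLab's built-in Container Registry (assuming it is enabled for the project; $CI_REGISTRY, $CI_REGISTRY_IMAGE, $CI_REGISTRY_USER and $CI_JOB_TOKEN are predefined GitLab CI variables):

build-dev:
  stage: build
  script:
    # Log in to the project's registry and push the freshly built image
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:latest -f app/Dockerfile.dev ./app
    - docker push $CI_REGISTRY_IMAGE:latest
test:
  stage: test
  script:
    # Pull the image pushed by the previous job, then run the tests
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:latest
    - docker run $CI_REGISTRY_IMAGE:latest npm run test -- --coverage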

Gitlab CI docker-in-docker deployment not running commands inside of the container

I am trying to set up a new build pipeline for one of our projects. In a first step, I am building a new Docker image for successive testing. This step works fine. However, when the test jobs are executed, the image is pulled, but the commands run on the host instead of inside the container.
Here's the contents of my gitlab-ci.yml:
stages:
  - build
  - analytics
variables:
  TEST_IMAGE_NAME: 'registry.server.de/testimage'
build_testing_container:
  stage: build
  image: docker:stable
  services:
    - dind
  script:
    - docker build --target=testing -t $TEST_IMAGE_NAME .
    - docker push $TEST_IMAGE_NAME
mess_detection:
  stage: analytics
  image: $TEST_IMAGE_NAME
  script:
    - vendor/bin/phpmd app html tests/md.xml --reportfile mess_detection.html --suffixes php
  artifacts:
    name: "${CI_JOB_NAME}_${CI_COMMIT_REF_NAME}"
    paths:
      - mess_detection.html
    expire_in: 1 week
    when: always
  except:
    - production
  allow_failure: true
What do I need to change to make the GitLab Runner execute the script commands inside the container it successfully pulls?
UPDATE:
It's getting even more interesting:
I just changed the script to sleep for a while so I could attach to the container. When I run pwd from the CI script, it says /builds/namespace/project.
However, running pwd on the server with docker exec, using the exact same container, returns /app, as it is supposed to.
UPDATE2:
After some more research, I learned that GitLab executes four sub-steps for each build step:
Prepare : Create and start the services.
Pre-build : Clone, restore cache and download artifacts from previous stages. This is run on a special Docker Image.
Build : User build. This is run on the user-provided docker image.
Post-build : Create cache, upload artifacts to GitLab. This is run on a special Docker Image.
It seems like in my case, step 3 isn't executed properly and the command is still running inside the gitlab runner docker image.
UPDATE3:
In the meantime I tested executing the mess_detection step on a separate machine using the command gitlab-runner exec docker mess_detection. The behaviour is exactly the same, so it's not GitLab-specific; it has to be some configuration option in either the deployment script or the runner config.
This is the usual behavior: the image keyword is the name of the Docker image the Docker executor will run to perform the CI tasks.
You can use the services keyword, which defines just another Docker image that is run during your job and is linked to the Docker image that the image keyword defines. This allows you to access the service image during build time.
Access can be done through a script or entrypoints. For example, in the Dockerfile of the image you are going to build, add a script that you want to execute:
ADD exemple.sh /
RUN chmod +x exemple.sh
Then add the image as a service in gitlab-ci, and the script step would change to:
docker exec <container_name> /exemple.sh
This will run the script inside the container. Alternatively, specify an entrypoint for the Docker image, and then the script would be:
docker exec <container> /bin/sh -c "cmd1;cmd2;...;cmdn"
Here's a reference:
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html

gitlab-runner locally - No such command sh

I have gitlab-runner installed locally.
km@Karls-MBP ~ $ gitlab-runner --version
Version: 10.4.0
Git revision: 857480b6
Git branch: 10-4-stable
GO version: go1.8.5
Built: Mon, 22 Jan 2018 09:47:12 +0000
OS/Arch: darwin/amd64
Docker:
km@Karls-MBP ~ $ docker --version
Docker version 17.12.0-ce, build c97c6d6
.gitlab-ci.yml:
image: docker/compose:1.19.0
before_script:
  - echo wtf
test:
  script:
    - echo test
Results:
km@Karls-MBP ~ $ sudo gitlab-runner exec docker --docker-privileged test
WARNING: Since GitLab Runner 10.0 this command is marked as DEPRECATED and will be removed in one of upcoming releases
WARNING: You most probably have uncommitted changes.
WARNING: These changes will not be tested.
Running with gitlab-runner 10.4.0 (857480b6)
on ()
Using Docker executor with image docker/compose:1.19.0 ...
Using docker image sha256:be4b46f2adbc8534c7f6738279ebedd6106969695f5e596079e89e815d375d9c for predefined container...
Pulling docker image docker/compose:1.19.0 ...
Using docker image docker/compose:1.19.0 ID=sha256:e06b58ce9de2ea3f11634e022ec814984601ea3a5180440c2c28d9217b713b30 for build container...
Running on runner--project-0-concurrent-0 via x.x.x...
Cloning repository...
Cloning into '/builds/project-0'...
done.
Checking out b5a262c9 as km/ref...
Skipping Git submodules setup
No such command: sh
Commands:
build Build or rebuild services
bundle Generate a Docker bundle from the Compose file
config Validate and view the Compose file
create Create services
down Stop and remove containers, networks, images, and volumes
events Receive real time events from containers
exec Execute a command in a running container
help Get help on a command
images List images
kill Kill containers
logs View output from containers
pause Pause services
port Print the public port for a port binding
ps List containers
pull Pull service images
push Push service images
restart Restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
top Display the running processes
unpause Unpause services
up Create and start containers
version Show the Docker-Compose version information
Don't really know what the issue is.
It seems that the docker/compose image is configured with docker-compose as an entrypoint.
You can override the default entrypoint of the docker/compose image in your .gitlab-ci.yml file:
image:
  name: docker/compose:1.19.0
  entrypoint: [""]
before_script:
  - echo wtf
test:
  script:
    - echo test
The docker/compose image has the command docker-compose as its entrypoint (until version 1.24.x), which enables a usage similar to this (assuming a compatible volume mount):
docker run --rm -t docker/compose -f some-dir/compose-file.yml up
Unfortunately that same feature makes it incompatible with usage within GitLab CI’s Docker Runner. Theoretically you could have a construct like this:
job-name:
  image: docker/compose:1.24.1
  script:
    - up
    - --build
    - --force-recreate
But the GitLab Docker Runner assumes the entrypoint is /bin/bash - or at least functions likewise (many Docker images thoughtfully use a shell script with exec "$@" as its final line for the entrypoint) - and from the array elements that you specify for the script, it creates its own temporary shell script on the fly. It starts with statements like set -e and set -o pipefail and will be used in a statement like sh temporary-script.sh as the container command. That's what causes the unexpected error message you got.
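For illustration, the entrypoint pattern alluded to above typically looks like this (a generic sketch, not the actual docker/compose entrypoint):

#!/bin/sh
# entrypoint.sh - do image-specific setup, then hand control to
# whatever command the container was started with
set -e
# ... setup steps ...
exec "$@"

An image whose entrypoint ends in exec "$@" stays compatible with the Docker Runner, because the generated sh temporary-script.sh command is passed through and executed as expected.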
This behaviour of the Runner was recently documented more clearly:
The Docker executor doesn’t overwrite the ENTRYPOINT of a Docker image.
That means that if your image defines the ENTRYPOINT and doesn’t allow to run scripts with CMD, the image will not work with the Docker executor.
Overriding the entrypoint with [""] will allow usage of docker/docker-compose (before version 1.25.x) with the Docker Runner, but the script that GitLab will create on the fly is not going to run as process 1 and because of that the container will not stop at the end of the script. Example:
job-name:
  image:
    name: docker/docker-compose
    entrypoint: [""]
  script:
    - docker-compose
    - up
    - --build
    - --force-recreate
At the time I write this the latest version of docker/docker-compose is 1.25.0-rc2. Your mileage may vary, but it suffices for my purposes and entirely resolves both problems.
