I am trying to create a build pipeline for a small project I work on in my free time, using Spring Boot and Angular. Locally I build it with ./gradlew clean build, which works perfectly fine on my machine, but on GitLab I run into issues I can't pinpoint. The build runs on GitLab, using its own shared runners.
My .gitlab-ci.yml looks like this:
default:
  image: oasisat777/openjdk-and-node:latest
  # If I comment out the line above and uncomment the line below, everything works fine & dandy
  # image: openjdk:17-jdk-bullseye

stages:
  - build

build-job:
  stage: build
  script:
    - whoami
    - java -version
    - npm -v
    - ./gradlew clean compileTestJava compileJava --stacktrace
In the example above I use a Docker image based on openjdk:17-jdk-bullseye, extended to have npm available. The corresponding Dockerfile:
# goal: build microservices with spring-boot backend and angular frontend in gitlab
# req'd images: latest stable openjdk and latest node
# unfortunately there's no openjdk-with-node:latest available, so i have to build it by hand
# this ought to do the trick, using bullseye as base and then installing node additionally
FROM openjdk:17-jdk-bullseye

# note: for production use node LTS (even numbers)
# https://github.com/nodesource/distributions/blob/master/README.md#deb
RUN curl -fsSL https://deb.nodesource.com/setup_17.x | bash - \
    && apt-get install -y nodejs

USER root
CMD ["bash"]
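For reference, the image is built and pushed to Docker Hub roughly like this (the tag is the one used in the CI config above; the exact commands are my usual routine, not quoted from anywhere):

docker build -t oasisat777/openjdk-and-node:latest .
docker push oasisat777/openjdk-and-node:latest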
I tried to build my project using the resulting Docker container by mounting my code as a volume and then running ./gradlew build - which worked on my machine. My assumption is that this basically simulates what the gitlab-runner does when starting the build.
docker run -it -v "$PWD":/project oasisat777/openjdk-and-node:latest bash
cd /project
./gradlew clean build
Downloading https://services.gradle.org/distributions/gradle-7.4.1-bin.zip
...........10%...........20%...........30%...........40%...........50%...........60%...........70%...........80%...........90%...........100%
Welcome to Gradle 7.4.1!
Here are the highlights of this release:
- Aggregated test and JaCoCo reports
- Marking additional test source directories as tests in IntelliJ
- Support for Adoptium JDKs in Java toolchains
For more details see https://docs.gradle.org/7.4.1/release-notes.html
Starting a Gradle Daemon (subsequent builds will be faster)
[...]
BUILD SUCCESSFUL
This is the output of the build-pipeline:
$ whoami
root
$ java -version
openjdk version "17.0.2" 2022-01-18
OpenJDK Runtime Environment (build 17.0.2+8-86)
OpenJDK 64-Bit Server VM (build 17.0.2+8-86, mixed mode, sharing)
$ npm -v
8.5.5
$ ./gradlew clean compileTestJava compileJava --stacktrace
Downloading https://services.gradle.org/distributions/gradle-7.4.1-bin.zip
...........10%...........20%...........30%...........40%...........50%...........60%...........70%...........80%...........90%...........100%
Could not set executable permissions for: /root/.gradle/wrapper/dists/gradle-7.4.1-bin/58kw26xllvsiedyf3nujyarhn/gradle-7.4.1/bin/gradle
Welcome to Gradle 7.4.1!
Here are the highlights of this release:
- Aggregated test and JaCoCo reports
- Marking additional test source directories as tests in IntelliJ
- Support for Adoptium JDKs in Java toolchains
For more details see https://docs.gradle.org/7.4.1/release-notes.html
Starting a Gradle Daemon (subsequent builds will be faster)
FAILURE: Build failed with an exception.
* What went wrong:
A problem occurred starting process 'Gradle build daemon'
* Try:
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
* Exception is:
org.gradle.process.internal.ExecException: A problem occurred starting process 'Gradle build daemon'
at org.gradle.process.internal.DefaultExecHandle.execExceptionFor(DefaultExecHandle.java:241)
at org.gradle.process.internal.DefaultExecHandle.setEndStateInfo(DefaultExecHandle.java:218)
at org.gradle.process.internal.DefaultExecHandle.failed(DefaultExecHandle.java:369)
at org.gradle.process.internal.ExecHandleRunner.run(ExecHandleRunner.java:87)
at org.gradle.internal.operations.CurrentBuildOperationPreservingRunnable.run(CurrentBuildOperationPreservingRunnable.java:38)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: net.rubygrapefruit.platform.NativeException: Could not start '/usr/local/openjdk-17/bin/java'
at net.rubygrapefruit.platform.internal.DefaultProcessLauncher.start(DefaultProcessLauncher.java:27)
at net.rubygrapefruit.platform.internal.WrapperProcessLauncher.start(WrapperProcessLauncher.java:36)
at org.gradle.process.internal.ExecHandleRunner.startProcess(ExecHandleRunner.java:98)
at org.gradle.process.internal.ExecHandleRunner.run(ExecHandleRunner.java:71)
... 6 more
Caused by: java.io.IOException: Cannot run program "/usr/local/openjdk-17/bin/java" (in directory "/root/.gradle/daemon/7.4.1"): error=0, Failed to exec spawn helper: pid: 106, exit value: 1
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1143)
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1073)
at net.rubygrapefruit.platform.internal.DefaultProcessLauncher.start(DefaultProcessLauncher.java:25)
... 9 more
Caused by: java.io.IOException: error=0, Failed to exec spawn helper: pid: 106, exit value: 1
at java.base/java.lang.ProcessImpl.forkAndExec(Native Method)
at java.base/java.lang.ProcessImpl.<init>(ProcessImpl.java:314)
at java.base/java.lang.ProcessImpl.start(ProcessImpl.java:244)
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1110)
... 11 more
* Get more help at https://help.gradle.org
Cleaning up project directory and file based variables
Now, I made the following observations:
- When using the openjdk:17-jdk-bullseye image, my build works as intended.
- Whenever I use openjdk:17-jdk-bullseye, I don't see this line in the output:
  Could not set executable permissions for: /root/.gradle/wrapper/dists/gradle-7.4.1-bin/58kw26xllvsiedyf3nujyarhn/gradle-7.4.1/bin/gradle
- I know that I am root, so I should be able to set +x on .../bin/gradle.
- When running ll on my project, this is what I see for gradlew: -rwxr-xr-x 1 alex staff [ ... ] gradlew
Unfortunately I have run out of ideas and would be thankful for any follow-up questions or observations that I may have missed. The most common answer to this problem seems to be "Make sure that gradlew is executable!" - well, it is.
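For completeness: if gradlew genuinely lacked the executable bit, the usual fix would be to record the bit in git - not the issue here, since ll already shows -rwxr-xr-x:

git update-index --chmod=+x gradlew
git commit -m "Make gradlew executable"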
While typing this question, I started to wonder whether this could be an x86_64/arm64 issue: I just noticed that the OS/ARCH field of my image is set to linux/arm64/v8 on Docker Hub.
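A quick way to verify an image's architecture locally (a sketch using docker image inspect; the format string just extracts two fields of the inspect output):

docker image inspect oasisat777/openjdk-and-node:latest --format '{{.Os}}/{{.Architecture}}'
# prints e.g. linux/arm64 for an image built on an Apple Silicon machine,
# while GitLab.com's shared runners are linux/amd64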
It worked as sytech suggested: I built the Docker image on GitLab itself and pushed it into the project's container registry. I then used that image in my application build - and it works as expected.
The .gitlab-ci.yml in the Dockerfile repository looks like this:
variables:
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  # Tell 'docker:dind' to enable TLS (recommended)
  # and generate certificates in the specified directory.
  DOCKER_TLS_CERTDIR: "/certs"

build-push-docker-image-job:
  # Specify a Docker image to run the job in.
  image: docker:latest
  # Specify an additional image 'docker:dind' ("Docker-in-Docker") that
  # will start up the Docker daemon when it is brought up by a runner.
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  only:
    - master
(=> source: https://www.shellhacks.com/gitlab-ci-cd-build-docker-image-push-to-registry)
This publishes the image into my repository's container registry. In the application's build I then simply reference that image:
default:
  image: registry.gitlab.com/<GROUP>/<PROJECT>/<SOME_NAME>:master
Start a new pipeline - and the build finally works:
BUILD SUCCESSFUL in 3m 7s
11 actionable tasks: 8 executed, 3 up-to-date
Cleaning up project directory and file based variables
00:00
Job succeeded
I suspect the image architecture was the culprit: my locally built image was linux/arm64/v8, while GitLab's shared runners run on x86_64.
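An alternative that was not tested here: build a multi-platform image locally with Docker buildx, so the same tag serves both an amd64 and an arm64 variant:

# requires a buildx builder, created once via e.g.: docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 \
  -t oasisat777/openjdk-and-node:latest --push .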
Related
We want to use Paketo.io / Cloud Native Buildpacks (CNB) in GitLab CI in the simplest way possible. Our GitLab setup uses an AWS EKS cluster with unprivileged GitLab CI runners leveraging the Kubernetes executor. We also don't want to introduce security risks by using Docker in our builds, so we neither have the host's /var/run/docker.sock exposed nor want to use docker:dind.
We found some guides on how to use Paketo with GitLab CI, like https://tanzu.vmware.com/developer/guides/gitlab-ci-cd-cnb/ . But as described beneath the headline "Use Cloud Native Buildpacks with GitLab in GitLab Build Job WITHOUT Using the GitLab Build Template", the approach relies on Docker and the pack CLI. We tried to replicate this in our .gitlab-ci.yml, which looks like this:
image: docker:20.10.9

stages:
  - build

before_script:
  - |
    echo "install pack CLI (see https://buildpacks.io/docs/tools/pack/)"
    apk add --no-cache curl
    (curl -sSL "https://github.com/buildpacks/pack/releases/download/v0.21.1/pack-v0.21.1-linux.tgz" | tar -C /usr/local/bin/ --no-same-owner -xzv pack)

build-image:
  stage: build
  script:
    - pack --version
    - >
      pack build $REGISTRY_GROUP_PROJECT/$CI_PROJECT_NAME:latest
      --builder paketobuildpacks/builder:base
      --path .
But as outlined, our setup does not support Docker, so we end up with the following error in our logs:
...
$ echo "install pack CLI (see https://buildpacks.io/docs/tools/pack/)" # collapsed multi-line command
install pack CLI (see https://buildpacks.io/docs/tools/pack/)
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/4) Installing brotli-libs (1.0.9-r5)
(2/4) Installing nghttp2-libs (1.43.0-r0)
(3/4) Installing libcurl (7.79.1-r0)
(4/4) Installing curl (7.79.1-r0)
Executing busybox-1.33.1-r3.trigger
OK: 12 MiB in 26 packages
pack
$ pack --version
0.21.1+git-e09e397.build-2823
$ pack build $REGISTRY_GROUP_PROJECT/$CI_PROJECT_NAME:latest --builder paketobuildpacks/builder:base --path .
ERROR: failed to build: failed to fetch builder image 'index.docker.io/paketobuildpacks/builder:base': Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: command terminated with exit code 1
Any idea how to use Paketo Buildpacks with GitLab CI without having Docker present inside our GitLab Kubernetes runners (which seems to be kind of a best practice)? We also don't want our setup to become too complex - e.g. by adding kpack.
TL;DR
Use the Buildpacks' lifecycle directly inside your .gitlab-ci.yml (here's a fully working example):
image: paketobuildpacks/builder

stages:
  - build

# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
The details: "using the lifecycle directly"
There are ongoing discussions about this topic. Especially have a look into https://github.com/buildpacks/pack/issues/564 and https://github.com/buildpacks/pack/issues/413#issuecomment-565165832. As stated there:
If you're looking to build images in CI (not locally), I'd encourage
you to use the lifecycle directly for that, so that you don't need
Docker. Here's an example:
The link to the example is broken, but it refers to the Tekton implementation of how to use buildpacks in a Kubernetes environment. Here we get a first clue about what Stephen Levine referred to as "using the lifecycle directly". The crucial point inside it is the usage of command: ["/cnb/lifecycle/creator"]. So this is the lifecycle everyone is talking about! And good documentation about this command can be found in this CNB RFC.
Choosing a good image: paketobuildpacks/builder:base
So how do we develop a working .gitlab-ci.yml? Let's start simple. Digging into the Tekton implementation, you'll see that the lifecycle command is executed inside an environment defined in BUILDER_IMAGE, which itself is documented as "The image on which builds will run (must include lifecycle and compatible buildpacks)". That sounds familiar! Can't we simply pick the builder image paketobuildpacks/builder:base from our pack CLI command? Let's try this locally on our workstation before committing too much noise into our GitLab. Choose a project you want to build (if you like, you can clone the example Spring Boot app I created at gitlab.com/jonashackt/microservice-api-spring-boot) and run:
docker run --rm -it -v "$PWD":/usr/src/app -w /usr/src/app paketobuildpacks/builder bash
Now, inside the container powered by the paketobuildpacks/builder image, try to run the Paketo lifecycle directly with:
/cnb/lifecycle/creator -app=. microservice-api-spring-boot:latest
Of the many possible parameters for the creator command I only used -app, since most of them have quite good defaults. The default app directory is /workspace, though, and since our sources live in the current directory instead, I configured it explicitly. We also need to define an <image-name> at the end, which will simply be used as the resulting container image name.
The first .gitlab-ci.yml
Both commands worked on my local workstation, so let's finally create a .gitlab-ci.yml using this approach (here's a fully working example .gitlab-ci.yml):
image: paketobuildpacks/builder

stages:
  - build

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
docker login without docker
As we don't have Docker available inside our Kubernetes runners, we can't log into the GitLab Container Registry as described in the docs. So the following error occurred to me using this first approach:
===> ANALYZING
ERROR: failed to get previous image: connect to repo store "gitlab.yourcompanyhere.cloud:4567/yourgroup/microservice-api-spring-boot:latest": GET https://gitlab.yourcompanyhere.cloud/jwt/auth?scope=repository%3Ayourgroup%2Fmicroservice-api-spring-boot%3Apull&service=container_registry: DENIED: access forbidden
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: command terminated with exit code 1
Using the approach described in this SO answer fixed the problem. We need to create a ~/.docker/config.json containing the GitLab Container Registry login information - the Paketo build will then pick it up, as stated in the docs:
If CNB_REGISTRY_AUTH is unset and a docker config.json file is
present, the lifecycle SHOULD use the contents of this file to
authenticate with any matching registry.
Inside our .gitlab-ci.yml this could look like:
# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json
Our final .gitlab-ci.yml
As we're using image: paketobuildpacks/builder at the top of our .gitlab-ci.yml, we can now leverage the lifecycle directly - which is what we wanted to do in the first place. Just remember to use the correct GitLab CI variables to describe your <image-name>, like this:
/cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
Otherwise the Buildpack's analyze step will break and the image won't get pushed to the GitLab Container Registry in the end. So finally our .gitlab-ci.yml looks like this (here's the fully working example):
image: paketobuildpacks/builder

stages:
  - build

# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
Our builds should now run successfully using Paketo Buildpacks without the pack CLI or Docker:
See the full log of the example project here.
I am new to GitLab CI. I want to run integration tests of my app using Cypress. The E2E tests require the frontend (Ionic, Cypress) and the backend (Django, PostGIS). On my local machine everything works fine: first I launch the backend server, and then I run the tests. Now I want to use GitLab CI, but I don't know how to launch the backend server before launching the tests in the Docker container (I'm also new to Docker); I've been googling around but have found nothing matching my case. I'm using a gitlab-ci.yaml I got from the cypress-gitlab network:
stages:
  - build
  - test

# to cache both npm modules and Cypress binary we use environment variables
# to point at the folders we can list as paths in "cache" job settings
variables:
  npm_config_cache: "$CI_PROJECT_DIR/.npm"
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/cache/Cypress"

# cache using branch name
# https://gitlab.com/help/ci/caching/index.md
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - cache/Cypress
    - node_modules

# this job installs NPM dependencies and Cypress
install:
  image: cypress/base:10
  stage: build
  script:
    - npm ci
    # show where the Cypress test runner binaries are cached
    - $(npm bin)/cypress cache path
    # show all installed versions of Cypress binary
    - $(npm bin)/cypress cache list
    - $(npm bin)/cypress verify

# two jobs that run after "install" job finishes
# NPM dependencies and Cypress binary should be already installed
cypress-e2e:
  image: cypress/base:10
  stage: test
  script:
    - $(npm bin)/cypress run
  artifacts:
    expire_in: 1 week
    when: always
    paths:
      - cypress/screenshots
      - cypress/videos
    reports:
      junit:
        - results/TEST-*.xml

cypress-e2e-chrome:
  image: cypress/browsers:chrome67
  stage: test
  script:
    - $(npm bin)/cypress run --browser chrome
  artifacts:
    expire_in: 1 week
    when: always
    paths:
      - cypress/screenshots
      - cypress/videos
    reports:
      junit:
        - results/TEST-*.xml
The install job runs fine. The error occurs in the cypress-e2e job. The error seems self-explanatory and logical to me - no server is running. My problem is that I don't know how to introduce launching the server into the script.
The output of the pipeline:
Running with gitlab-runner 13.8.0-rc1 (28e2e34a)
on docker-auto-scale 0277ea0f
Preparing the "docker+machine" executor
00:37
Using Docker executor with image cypress/base:10 ...
Pulling docker image cypress/base:10 ...
Using docker image sha256:071155d6ed07a321ae5c7a453c1fd1f04ff65bd5eeae97281c1a2088c26acf0a for cypress/base:10 with digest cypress/base#sha256:7fb73651d4a48762d5f370a497a155891eba90605ea395a4d86cafdefb153f8c ...
Preparing environment
00:03
Running on runner-0277ea0f-project-23892759-concurrent-0 via runner-0277ea0f-srm-1611586309-ee0dc093...
Getting source from Git repository
00:03
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/ctavar01/advisor_app/.git/
Created fresh repository.
Checking out 9e8a00a2 as master...
Skipping Git submodules setup
Restoring cache
00:26
Checking cache for master...
Downloading cache.zip from https://storage.googleapis.com/gitlab-com-runners-cache/project/23892759/master
Successfully extracted cache
Executing "step_script" stage of the job script
00:23
$ $(npm bin)/cypress run
Cypress could not verify that this server is running:
> http://localhost:8100
We are verifying this server because it has been configured as your `baseUrl`.
Cypress automatically waits until your server is accessible before running tests.
We will try connecting to it 3 more times...
We will try connecting to it 2 more times...
We will try connecting to it 1 more time...
Cypress failed to verify that your server is running.
Please start this server and then run Cypress again.
Uploading artifacts for failed job
00:05
Uploading artifacts...
cypress/screenshots: found 14 matching files and directories
cypress/videos: found 8 matching files and directories
Uploading artifacts as "archive" to coordinator... ok id=984717879 responseStatus=201 Created token=FCQbsg-q
Uploading artifacts...
WARNING: results/TEST-*.xml: no matching files
ERROR: No files to upload
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
Thanks for any help
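One common pattern (a sketch, not from the original post): start the backend and the frontend dev server in the background inside the test job, then block with wait-on until the baseUrl responds before running Cypress. The database service image, its alias, and the exact start commands below are assumptions that depend on the project, and the job image must provide both Node and Python:

cypress-e2e:
  # assumption: replace with an image that ships Node AND Python for Django
  image: cypress/base:10
  stage: test
  services:
    # hypothetical PostGIS service the Django backend can reach under the 'db' alias
    - name: postgis/postgis:12-3.1
      alias: db
  script:
    # start the Django backend in the background (command and settings are placeholders)
    - python manage.py runserver 0.0.0.0:8000 &
    # start the Ionic frontend dev server in the background
    - npm start &
    # wait until the baseUrl configured for Cypress responds, then run the tests
    - npx wait-on http://localhost:8100
    - $(npm bin)/cypress run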
I'm using GitLab CI for my simple project, and everything is OK: my runner is working on my local machine (Ubuntu 18.04), and I tested it with a simple .gitlab-ci.yml. Now I am trying to use the following yml:
image: ubuntu:18.04

build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"
    - sudo apt-get update
but I get the following error:
/bin/bash: line 110: sudo: command not found
How can I use sudo?
First, note that inside the job's container you already run as root by default, so you can simply drop sudo and call apt-get update directly. Beyond that, you shouldn't have to worry about updating the Ubuntu image used in a GitLab CI pipeline job, because the Docker container is destroyed when the job is finished. Furthermore, the Docker images are frequently updated. If you look at ubuntu:18.04's Docker Hub page, it was updated just 2 days ago: https://hub.docker.com/_/ubuntu?tab=tags&page=1&ordering=last_updated
Since you're doing an update here, I'm going to assume that you next want to install some packages. It's possible to do so, but not advised, since every pipeline you run will have to install those packages, which can really slow things down. Instead, you can create a custom Docker image based on a parent image and customize it that way. Then you can either upload that image to Docker Hub, to GitLab's registry (if using self-hosted GitLab, it has to be enabled by an admin), or build it on all of your gitlab-runners.
Here's a dumb example:
# .../custom_ubuntu:18.04/Dockerfile
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y git
Next you can build the image (docker build /path/to/directory/that/has/dockerfile) and tag it so you can reference it in your pipeline config file (docker tag aaaaafffff59 my_org/custom_ubuntu:18.04). Then, if needed, you can upload the tagged image: docker push my_org/custom_ubuntu:18.04.
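Condensed, the same steps with the tag applied directly at build time (the image name is the placeholder from above):

docker build -t my_org/custom_ubuntu:18.04 /path/to/directory/that/has/dockerfile
docker push my_org/custom_ubuntu:18.04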
In your .gitlab-ci.yml file, reference this custom Ubuntu image:
image: my_org/custom_ubuntu:18.04

build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"
    - git --version # ensures the package you need is available
You can read more about using custom images in Gitlab CI here: https://docs.gitlab.com/charts/advanced/custom-images/
I have configured my project to run on a pipeline. Here is the content of my .gitlab-ci.yml file:
image: markhobson/maven-chrome:latest

stages:
  - flow1

execute job1:
  stage: flow1
  tags:
    - QaFunctional
  script:
    - export
    - set +e -x
    - mvn --update-snapshots --define updateResults=${updateResults} clean test
Error after executing the pipeline:
bash: line 328: mvn: command not found
Running after_script
00:00
Uploading artifacts for failed job
00:00
ERROR: Job failed: exit status 127
Can anyone help me spot the error, please? Is it not able to load the Docker image? When I use a shared runner I am able to execute the same.
The error you get means there is no Maven installed on the job executor: mvn: command not found.
The image you specified (markhobson/maven-chrome:latest) does have the mvn command:
# docker run markhobson/maven-chrome:latest mvn --version
Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
The other thing you specified is tags:
...
  tags:
    - QaFunctional
...
When both image and tags are specified in your YAML, the tags determine which runner picks up the job - and if that runner uses the shell executor, the image keyword is simply ignored. It looks like your custom runner tagged with QaFunctional is a shell runner without mvn installed.
As a solution, either install mvn on the QaFunctional runner or run the job on a Docker runner (the shared runners should do). To avoid such confusion, don't specify image when you want to run your job on a tagged shell runner.
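For illustration, a sketch of the job pinned to the Docker-based shared runners, simply by dropping tags: so that image: takes effect:

execute job1:
  stage: flow1
  image: markhobson/maven-chrome:latest
  # no 'tags:' entry, so the job is picked up by shared Docker runners
  # and runs inside the image above, where mvn is available
  script:
    - mvn --update-snapshots --define updateResults=${updateResults} clean test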
I am trying to run Robot Framework tests in GitLab CI and download the generated report as an artifact. So far I have succeeded in running the tests in the pipeline and generating the artifact, but the generated zip is empty. What am I missing?
This is my Dockerfile:
FROM ppodgorsek/robot-framework:latest
COPY resources /opt/robotframework/resources
COPY tests /opt/robotframework/tests
COPY libs /opt/robotframework/libs
And this is my stage in the gitlab-ci.yml:
run robot tests dev:
  variables:
    # more variables
    ROBOT_OPTIONS: "--variable ENV:dev -e FAIL -e PENDING"
  allow_failure: true
  services:
    - name: docker:dind
  stage: run-robot-tests
  image: docker:latest
  script:
    - mkdir -p reports
    # more docker run commands
    - docker -H $DOCKER_HOST run --rm --network localnet --env "ROBOT_OPTIONS=${ROBOT_OPTIONS}" -v reports:/opt/robotframework/reports --name robot $CONTAINER_DEV_IMAGE
  artifacts:
    name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}
    paths:
      - reports/
    when: always
  tags:
    - d-i-d
  only:
    refs:
      - dev
I have omitted some details that are specific to our project. But just to give you an idea of our setup: we pull the ppodgorsek/robot-framework Docker image and use it to run the tests against another Docker container that runs the front-end of our project. To make sure that all containers are on the same network, we use docker-in-docker; our back-end container and our DB live on that network as well.
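Purely as an illustration of that setup (the image name is a hypothetical placeholder, not one of the omitted project-specific commands; the localnet network name is the one referenced in the job above):

docker -H $DOCKER_HOST network create localnet
docker -H $DOCKER_HOST run -d --network localnet --name frontend $CONTAINER_FRONTEND_IMAGE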
This is the tail of my job's output.
==============================================================================
Tests | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==============================================================================
Output: /opt/robotframework/reports/output.xml
Log: /opt/robotframework/reports/log.html
Report: /opt/robotframework/reports/report.html
Uploading artifacts...
reports/: found 1 matching files
Trying to load /builds/automation/system-tests.tmp/CI_SERVER_TLS_CA_FILE ...
Dialing: tcp gitlab.surfnet.nl:443 ...
Uploading artifacts to coordinator... ok id=42435 responseStatus=201 Created token=g8cWYYun
Job succeeded
You can see the console output from running the tests, and then you can see where Robot stores the generated output. Next it shows that the artifact is generated - which it is; the only problem is that it is empty.
OK, I was indeed very close. People from the Robot Framework community pointed me in the right direction! :D
The problem was in the command:
- docker -H $DOCKER_HOST run --rm --network localnet --env "ROBOT_OPTIONS=${ROBOT_OPTIONS}" -v reports:/opt/robotframework/reports --name robot $CONTAINER_DEV_IMAGE
and more specifically, on the relative path for the volume:
-v reports:/opt/robotframework/reports
Thus, the solution was to use an absolute path:
-v $PWD/reports:/opt/robotframework/reports
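The reason: when the host side of -v does not start with a /, Docker treats it as a named volume on the daemon (here, the docker:dind service) instead of a bind mount of the job's working directory, so the reports never land in the reports/ directory that the artifacts: section collects. The full corrected command from the job above thus becomes:

docker -H $DOCKER_HOST run --rm --network localnet \
  --env "ROBOT_OPTIONS=${ROBOT_OPTIONS}" \
  -v $PWD/reports:/opt/robotframework/reports \
  --name robot $CONTAINER_DEV_IMAGE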