I am trying to run the Robot Framework tests in a GitLab CI pipeline and download the generated report as an artifact. So far, I have succeeded in running the tests in the pipeline and generating the artifact, but the generated zip is empty. What am I missing?
This is my Dockerfile:
FROM ppodgorsek/robot-framework:latest
COPY resources /opt/robotframework/resources
COPY tests /opt/robotframework/tests
COPY libs /opt/robotframework/libs
And this is my stage in the gitlab-ci.yml:
run robot tests dev:
  variables:
    # more variables
    ROBOT_OPTIONS: "--variable ENV:dev -e FAIL -e PENDING"
  allow_failure: true
  services:
    - name: docker:dind
  stage: run-robot-tests
  image: docker:latest
  script:
    - mkdir -p reports
    # more docker run commands
    - docker -H $DOCKER_HOST run --rm --network localnet --env "ROBOT_OPTIONS=${ROBOT_OPTIONS}" -v reports:/opt/robotframework/reports --name robot $CONTAINER_DEV_IMAGE
  artifacts:
    name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}
    paths:
      - reports/
    when: always
  tags:
    - d-i-d
  only:
    refs:
      - dev
I have omitted some details that are specific to our project.
But to give you an idea of our setup: we pull the ppodgorsek/robot-framework Docker image and use it to run the tests against another Docker container that runs the front-end of our project. To make sure that all containers are on the same network, we use docker-in-docker. Our back-end container and our DB also live on that network.
This is the tail of my job's output.
==============================================================================
Tests | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==============================================================================
Output: /opt/robotframework/reports/output.xml
Log: /opt/robotframework/reports/log.html
Report: /opt/robotframework/reports/report.html
Uploading artifacts...
reports/: found 1 matching files
Trying to load /builds/automation/system-tests.tmp/CI_SERVER_TLS_CA_FILE ...
Dialing: tcp gitlab.surfnet.nl:443 ...
Uploading artifacts to coordinator... ok id=42435 responseStatus=201 Created token=g8cWYYun
Job succeeded
You can see the console output from running the tests, and then where Robot Framework stores the generated output.
Next it shows that the artifact is generated, which it is; the only problem is that it is empty.
OK, I was indeed very close. People from the Robot Framework community pointed me in the right direction! :D
The problem was in the command:
- docker -H $DOCKER_HOST run --rm --network localnet --env "ROBOT_OPTIONS=${ROBOT_OPTIONS}" -v reports:/opt/robotframework/reports --name robot $CONTAINER_DEV_IMAGE
more specifically, in the relative path used for the volume:
-v reports:/opt/robotframework/reports
Thus, the solution was using an absolute path:
-v $PWD/reports:/opt/robotframework/reports
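For context on why the relative path failed: Docker treats a bare name in -v as a named volume managed by the daemon (here, the dind daemon), not as a directory in the job's workspace, so the reports/ folder that GitLab archived stayed empty. A small shell sketch of the distinction (the classify helper and paths are illustrative, not part of Docker):

```shell
# Docker's -v source becomes a bind mount only when it is an absolute path;
# a bare name such as "reports" names a volume stored inside the daemon.
classify() {
  case "$1" in
    /*) echo "bind mount" ;;
    *)  echo "named volume" ;;
  esac
}

classify "reports"        # named volume: lives inside the (dind) daemon
classify "$PWD/reports"   # bind mount: visible to the artifact step
```

This is why switching to $PWD/reports makes the report files land in the build directory where the artifacts step can find them.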
Related
We have a Spring Boot project with integration tests that run locally. To run them on GitLab CI, we have a docker-compose file which spins up a few images, one of which hosts a graphQL engine on 8080.
The integration tests work fine locally, but when we run them as part of the pipeline, the tests fail to connect to the graphQL image, even though docker ps says the image is up and running.
GitLab CI file - integration test
integration-test:
  stage: test
  artifacts:
    when: always
    reports:
      junit: build/test-results/test/**/TEST-*.xml
  before_script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker-compose --version
    - docker-compose -f docker-compose.yml up -d --force-recreate
    - docker-compose ps
    - docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' springboot-streamengine_metadata-db_1
    - docker logs springboot-streamengine_graphql-engine_1
  tags:
    - docker
  services:
    - alias: docker
      command: [ "dockerd", "--host=tcp://0.0.0.0:2375" ]
      name: docker:19.03-dind
  script:
    - ./gradlew integrationTest -Pgraphql_url=http://docker:8080/v1/graphql -Pmodelservice_url=docker:8010 -Ptimeseries_db_url=jdbc:postgresql://docker:5435/postgres --info
  after_script:
    - docker-compose down -v
Logs showing the image is up (screenshot omitted).
Error while connecting to the graphQL engine (screenshot omitted).
Has anyone seen this error before? We have played around with different versions of dind, all report same issue.
Any leads will be helpful.
TIA
I am trying to create a build pipeline for a small project I work on in my free time. For this, I use Spring Boot and Angular. Locally I build it with ./gradlew clean build. This works perfectly fine on my local machine, but I run into issues I can't pinpoint on GitLab. The build is done on GitLab, using its own shared runners.
My .gitlab-ci.yml looks like this:
default:
  image: oasisat777/openjdk-and-node:latest
  # If I comment out above line and comment in the line below, everything works fine & dandy
  # image: openjdk:17-jdk-bullseye

stages:
  - build

build-job:
  stage: build
  script:
    - whoami
    - java -version
    - npm -v
    - ./gradlew clean compileTestJava compileJava --stacktrace
In the above example I use a docker image based on openjdk:17-jdk-bullseye but extended to have npm available. The corresponding Dockerfile:
# goal: build microservices with spring-boot backend and angular frontend in gitlab
# req'd images: latest stable openjdk and latest node
# unfortunately there's no openjdk-with-node:latest available, so i have to build it by hand
# this ought to do the trick, using bullseye as base and then install node additionally
FROM openjdk:17-jdk-bullseye
# note: for production use node LTS (even numbers)
# https://github.com/nodesource/distributions/blob/master/README.md#deb
RUN curl -fsSL https://deb.nodesource.com/setup_17.x | bash - \
&& apt-get install -y nodejs
USER root
CMD ["bash"]
I tried to build my project using the resulting Docker container by adding my code as a volume and then running ./gradlew build, which worked on my machine. My assumption is that by this I basically simulated the behavior of what the gitlab-runner does when starting the build.
docker run -it -v "$PWD":/project oasisat777/openjdk-and-node:latest bash
cd /project
./gradlew clean build
Downloading https://services.gradle.org/distributions/gradle-7.4.1-bin.zip
...........10%...........20%...........30%...........40%...........50%...........60%...........70%...........80%...........90%...........100%
Welcome to Gradle 7.4.1!
Here are the highlights of this release:
- Aggregated test and JaCoCo reports
- Marking additional test source directories as tests in IntelliJ
- Support for Adoptium JDKs in Java toolchains
For more details see https://docs.gradle.org/7.4.1/release-notes.html
Starting a Gradle Daemon (subsequent builds will be faster)
[...]
BUILD SUCCESSFUL
This is the output of the build-pipeline:
$ whoami
root
$ java -version
openjdk version "17.0.2" 2022-01-18
OpenJDK Runtime Environment (build 17.0.2+8-86)
OpenJDK 64-Bit Server VM (build 17.0.2+8-86, mixed mode, sharing)
$ npm -v
8.5.5
$ ./gradlew clean compileTestJava compileJava --stacktrace
Downloading https://services.gradle.org/distributions/gradle-7.4.1-bin.zip
...........10%...........20%...........30%...........40%...........50%...........60%...........70%...........80%...........90%...........100%
Could not set executable permissions for: /root/.gradle/wrapper/dists/gradle-7.4.1-bin/58kw26xllvsiedyf3nujyarhn/gradle-7.4.1/bin/gradle
Welcome to Gradle 7.4.1!
Here are the highlights of this release:
- Aggregated test and JaCoCo reports
- Marking additional test source directories as tests in IntelliJ
- Support for Adoptium JDKs in Java toolchains
For more details see https://docs.gradle.org/7.4.1/release-notes.html
Starting a Gradle Daemon (subsequent builds will be faster)
FAILURE: Build failed with an exception.
* What went wrong:
A problem occurred starting process 'Gradle build daemon'
* Try:
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
* Exception is:
org.gradle.process.internal.ExecException: A problem occurred starting process 'Gradle build daemon'
at org.gradle.process.internal.DefaultExecHandle.execExceptionFor(DefaultExecHandle.java:241)
at org.gradle.process.internal.DefaultExecHandle.setEndStateInfo(DefaultExecHandle.java:218)
at org.gradle.process.internal.DefaultExecHandle.failed(DefaultExecHandle.java:369)
at org.gradle.process.internal.ExecHandleRunner.run(ExecHandleRunner.java:87)
at org.gradle.internal.operations.CurrentBuildOperationPreservingRunnable.run(CurrentBuildOperationPreservingRunnable.java:38)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: net.rubygrapefruit.platform.NativeException: Could not start '/usr/local/openjdk-17/bin/java'
at net.rubygrapefruit.platform.internal.DefaultProcessLauncher.start(DefaultProcessLauncher.java:27)
at net.rubygrapefruit.platform.internal.WrapperProcessLauncher.start(WrapperProcessLauncher.java:36)
at org.gradle.process.internal.ExecHandleRunner.startProcess(ExecHandleRunner.java:98)
at org.gradle.process.internal.ExecHandleRunner.run(ExecHandleRunner.java:71)
... 6 more
Caused by: java.io.IOException: Cannot run program "/usr/local/openjdk-17/bin/java" (in directory "/root/.gradle/daemon/7.4.1"): error=0, Failed to exec spawn helper: pid: 106, exit value: 1
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1143)
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1073)
at net.rubygrapefruit.platform.internal.DefaultProcessLauncher.start(DefaultProcessLauncher.java:25)
... 9 more
Caused by: java.io.IOException: error=0, Failed to exec spawn helper: pid: 106, exit value: 1
at java.base/java.lang.ProcessImpl.forkAndExec(Native Method)
at java.base/java.lang.ProcessImpl.<init>(ProcessImpl.java:314)
at java.base/java.lang.ProcessImpl.start(ProcessImpl.java:244)
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1110)
... 11 more
* Get more help at https://help.gradle.org
Cleaning up project directory and file based variables
Now, I made the following observations:
When using the openjdk:17-jdk-bullseye image my build works as intended.
Whenever I use the openjdk:17-jdk-bullseye, I don't see this line in the output:
Could not set executable permissions for: /root/.gradle/wrapper/dists/gradle-7.4.1-bin/58kw26xllvsiedyf3nujyarhn/gradle-7.4.1/bin/gradle
I know that I am root, so I should be able to set +x on .../bin/gradle
When running ll on my project, this is what I see on gradlew: -rwxr-xr-x 1 alex staff [ ... ] gradlew
Unfortunately I have run out of ideas and would be thankful for any follow-up questions or observations that I may have missed. The most common answer to this problem seems to be "Make sure that gradlew is executable!" - well, it is.
While typing this question, I began to wonder whether this could be an x86/x64/arm64 issue: I just noticed the OS/ARCH field is set to linux/arm64/v8 on Docker Hub.
It worked as sytech suggested: I built the Docker image with GitLab CI and pushed it into the project's container registry. I then used it in my application build, and it works as expected.
The .gitlab-ci.yml in the Dockerfile repository looks like this:
variables:
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  # Tell 'docker:dind' to enable TLS (recommended)
  # and generate certificates in the specified directory.
  DOCKER_TLS_CERTDIR: "/certs"

build-push-docker-image-job:
  # Specify a Docker image to run the job in.
  image: docker:latest
  # Specify an additional image 'docker:dind' ("Docker-in-Docker") that
  # will start up the Docker daemon when it is brought up by a runner.
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  only:
    - master
(=> source: https://www.shellhacks.com/gitlab-ci-cd-build-docker-image-push-to-registry)
This then publishes the image into my repository's container registry.
In the other build I simply reference the build:
default:
  image: registry.gitlab.com/<GROUP>/<PROJECT>/<SOME_NAME>:master
Start a new build, and then the build finally works:
BUILD SUCCESSFUL in 3m 7s
11 actionable tasks: 8 executed, 3 up-to-date
Cleaning up project directory and file based variables
00:00
Job succeeded
I suspect the architecture to be the culprit.
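If the architecture really is the culprit, one possible fix (a sketch, not something verified in this thread) is to build and push the custom image explicitly for the runners' platform, for example with buildx:

```shell
# Build the custom image for linux/amd64 even on an arm64 (Apple Silicon) host,
# so GitLab's amd64 shared runners don't receive an arm64 image.
# The image name matches the one referenced in the .gitlab-ci.yml above.
docker buildx build --platform linux/amd64 -t oasisat777/openjdk-and-node:latest --push .
```

Building the image on GitLab's own runners, as done above, sidesteps the problem entirely because the image is then built on the same architecture it runs on.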
I am not able to use the kubectl command inside my gitlab-ci.yml file.
I have already gone through the steps mentioned in the doc to add an existing cluster.
Nowhere do they mention how to use kubectl.
I tried the below configuration.
stages:
  - docker-build
  - deploy

docker-build-master:
  image: docker:latest
  stage: docker-build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:prod" .
    - docker push "$CI_REGISTRY_IMAGE:prod"
  only:
    - master

deploy-prod:
  stage: deploy
  image: roffe/kubectl
  script:
    - kubectl apply -f scheduler-deployment.yaml
  only:
    - master
But I am getting the error below:
Executing "step_script" stage of the job script
00:01
Using docker image sha256:c8d24d490701efec4c8d544978b3e4ecc4855475a221b002a8f9e5e473398805 for roffe/kubectl with digest roffe/kubectl#sha256:ba13f8ffc55c83a7ca98a6e1337689fad8a5df418cb160fa1a741c80f42979bf ...
$ kubectl apply -f scheduler-deployment.yaml
error: unable to recognize "scheduler-deployment.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
Cleaning up file based variables
00:00
ERROR: Job failed: exit code 1
Clearly, it is not able to connect to the cluster, or maybe it is trying to connect to a cluster inside the roffe/kubectl container itself.
When I remove the image, I get this error.
/bin/bash: line 117: kubectl: command not found
I have gone through the whole doc and couldn't find a single example or reference that explains this part.
Please suggest how I can deploy to the existing k8s cluster.
Update
I went through this doc and I am using the defined variables in my gitlab-ci.yml to update the context of kubectl.
But it still doesn't work.
deploy-prod:
  stage: deploy
  image: roffe/kubectl
  script:
    - echo $HOME
    - echo $KUBECONFIG
    - echo $KUBE_URL
    - mkdir -p $HOME/.kube
    - echo -n $KUBECONFIG | base64 -d > $HOME/.kube/config
    - kubectl get pods
  only:
    - master
The error that I get:
$ echo $HOME
/root
$ echo $KUBECONFIG
$ echo $KUBE_URL
$ mkdir -p $HOME/.kube
$ echo -n $KUBECONFIG | base64 -d > $HOME/.kube/config
$ kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
To automate deployment to an existing cluster, you need to follow the steps below:
1. Add your cluster to the GitLab project
Follow this doc and add your existing cluster.
2. Build your project and push the image to the GitLab registry (or any registry)
stages:
  - docker-build
  - deploy

docker-build-master:
  image: docker:latest
  stage: docker-build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:prod" .
    - docker push "$CI_REGISTRY_IMAGE:prod"
  only:
    - master
3. Apply your deployment.yml to the cluster
To use kubectl inside gitlab-ci.yml, you need an image that has kubectl; the one you used will work.
But the kubectl inside the container has no idea about the context of the cluster that you added earlier.
This is where GitLab's environment variables play their role.
The default environment scope is *, which means all jobs, regardless of their environment, use that cluster. Each scope can be used only by a single cluster in a project, and a validation error occurs if otherwise. Also, jobs that don’t have an environment keyword set can’t access any cluster. see here
So, when you added your cluster, it went into the * scope by default, and its variables will be passed to every job, provided the job uses some environment.
Also, when you add a cluster, GitLab creates environment variables for it by default, see here.
The important thing to notice is that these include an environment variable KUBECONFIG.
In order to access your Kubernetes cluster, kubectl uses a configuration file. The default kubectl configuration file is located at ~/.kube/config and is referred to as the kubeconfig file.
kubeconfig files organize information about clusters, users, namespaces, and authentication mechanisms. The kubectl command uses these files to find the information it needs to choose a cluster and communicate with it.
The loading order follows these rules:
If the --kubeconfig flag is set, then only the given file is loaded. The flag may only be set once and no merging takes place.
If the $KUBECONFIG environment variable is set, then it is parsed as a list of filesystem paths according to the normal path delimiting rules for your system.
Otherwise, the ${HOME}/.kube/config file is used and no merging takes place.
So, the kubectl command can use the KUBECONFIG variable to set the context, see here.
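As a minimal sketch of the decode-and-write step from the update above (the file content here is a stand-in for a real base64-encoded kubeconfig, and KUBECONFIG_B64 is an illustrative variable name):

```shell
# Simulate the CI variable: a base64-encoded kubeconfig.
KUBECONFIG_B64=$(printf 'apiVersion: v1\nkind: Config\n' | base64)

# Decode it to the default location kubectl looks at.
mkdir -p "$HOME/.kube"
printf '%s' "$KUBECONFIG_B64" | base64 -d > "$HOME/.kube/config"

head -n 1 "$HOME/.kube/config"   # → apiVersion: v1
```

When GitLab's cluster integration sets KUBECONFIG for the job, even this step becomes unnecessary: kubectl reads the variable directly.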
So, your deployment job will look like this:
deploy-prod:
  stage: deploy
  image: roffe/kubectl
  script:
    - kubectl get pods
    - kubectl get all
    - kubectl get namespaces
    - kubectl apply -f scheduler-deployment.yaml
  environment:
    name: production
    kubernetes:
      namespace: default
  only:
    - master
You can also set a namespace for the job with environment.kubernetes.namespace
This image (roffe/kubectl) has the kubectl binary but no cluster configuration, so you need to add the configuration needed to connect to your Kubernetes cluster.
So I wrote a simple one-page server with Node and Express. I wrote a Dockerfile for it and ran it locally. Then I made a Postman collection and tested the endpoints.
I want to do this in GitLab CI using newman, so I came up with the following .gitlab-ci.yml:
image: docker:latest

services:
  - docker:dind

before_script:
  - docker build -t test_img .
  - docker run -d -p 3039:3039 test_img

stages:
  - test

# test
api-test:
  image:
    name: postman/newman:alpine
    entrypoint: [""]
  stage: test
  script:
    - newman run pdfapitest.postman_collection.json
It fails saying:
docker build -t test_img .
/bin/sh: eval: line 86: docker: not found
ERROR: Job failed: exit code 127
full output: https://pastebin.com/raw/C3mmUXKa
What am I doing wrong here? This seems to me like a very common use case, but I haven't found anything useful about it.
The issue is that your api-test job uses the image postman/newman:alpine to run the script.
This means that when GitLab tries to run the before_script section, it has no docker command available.
What you should do is provide the docker command in the image you use to run the job. You can do that either by installing docker as the first step of your script, or by starting from a custom image which contains the software you use inside the job plus the docker client itself.
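A sketch of the first option, assuming the file names from the question (postman/newman:alpine is Alpine-based, so the Docker client can be installed via apk; the DOCKER_HOST value assumes a non-TLS dind service):

```yaml
api-test:
  image:
    name: postman/newman:alpine
    entrypoint: [""]
  stage: test
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
  before_script:
    - apk add --no-cache docker-cli          # make the docker client available
    - docker build -t test_img .
    - docker run -d -p 3039:3039 test_img
  script:
    - newman run pdfapitest.postman_collection.json
```

Note that with dind, the started container is reachable via the docker service host, not localhost, so the Postman collection's base URL may need adjusting accordingly.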
I tested a gitlab-runner on a virtual machine and it worked perfectly. I followed this tutorial, at the part "Use docker-in-docker executor":
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
When I register a runner with exactly the same configuration on my dev server, the runner is called when there is a commit, but I get a lot of errors:
*** WARNING: Service runner-XXX-project-XX-concurrent-X-docker-X probably didn't start properly.
ContainerStart: Error response from daemon: Cannot link to a non running container: /runner-XXX-project-XX-concurrent-X-docker-X AS /runner-XXX-project-XX-concurrent-X-docker-X-wait-for-service/service (executor_docker.go:1337:1s)
DEPRECATION: this GitLab server doesn't support refspecs, gitlab-runner 12.0 will no longer work with this version of GitLab
$ docker info
error during connect: Get http://docker:2375/v1.39/info: dial tcp: lookup docker on MY.DNS.IP:53: no such host
ERROR: Job failed: exit code 1
I believe all these errors are due to the first warning. I tried to:
- add a second DNS server (8.8.8.8) to my machine: same error
- add privileged = true manually in /etc/gitlab-runner/config.toml: same error, so it's not due to the privileged = true parameter
- replace tcp://docker:2375 with tcp://localhost:2375: then docker info can't find a docker daemon on the machine
gitlab-ci.yml content:
image: docker:stable

stages:
  - build

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

before_script:
  - docker info

build-folder1:
  stage: build
  script:
    - docker build -t image1 folder1/
    - docker run --name docker1 -p 3001:5000 -d image1
  only:
    refs:
      - dev
    changes:
      - folder1/**/*

build-folder2:
  stage: build
  script:
    - docker build -t image2 folder2/
    - docker run --name docker2 -p 3000:3000 -d image2
  only:
    refs:
      - dev
    changes:
      - folder2/**/*
If folder1 of branch dev is modified, we build and run the docker1
If folder2 of branch dev is modified, we build and run the docker2
docker version on dev server :
docker -v
Docker version 17.03.0-ce, build 3a232c8
gitlab-runner version on dev server :
gitlab-runner -v
Version: 11.10.1
I will try to provide an answer for you, as I ran into this same problem when trying to run DinD.
This message:
*** WARNING: Service runner-XXX-project-XX-concurrent-X-docker-X probably didn't start properly.
Means that either you have not properly configured your runner, or it is not referenced by the gitlab-ci.yml file. You should be able to check the ID of the runner used on the job's log page in GitLab.
To start with, verify that you entered the gitlab-runner register command correctly, with the proper registration token.
Second, since you are registering a specific runner manually, verify that you have given it a unique tag (e.g. build_docker) and reference that tag from your gitlab-ci.yml file. For example:
...
build-folder1:
  stage: build
  script:
    - docker build -t image1 folder1/
    - docker run --name docker1 -p 3001:5000 -d image1
  tags:
    - build_docker
...
That way it should work.
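For reference, a runner registered for DinD typically ends up with a config.toml entry along these lines (all values are placeholders; the key point is privileged = true under [runners.docker]):

```toml
# Sketch of /etc/gitlab-runner/config.toml for a DinD-capable runner.
[[runners]]
  name = "dev-server-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    privileged = true      # required for the docker:dind service to start
    volumes = ["/cache"]
```

If privileged mode is missing, the dind service fails to start and produces exactly the "probably didn't start properly" warning followed by the `dial tcp: lookup docker` error.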