I need a failed test in my pipeline to fail the job so that I can act on it. The problem is that the tests run in "Docker in Docker", so the job doesn't fail: the outer container runs correctly, and no error code reaches the job even when a test fails.
The script "docker:test" runs my test suite in a container, and my pipeline looks like this:
image: docker:dind # Alpine

stages:
  - install
  - test
  # - build
  - deploy

env:
  stage: install
  script:
    - chmod +x ./setup_env.sh
    - ./setup_env.sh
  artifacts:
    paths:
      - .env
    expire_in: 1 days

tests:
  stage: test
  before_script:
    - docker rm extractos-bancarios-test || true
  script:
    - apk add --update nodejs npm
    - npm run docker:test
    - docker cp extractos-bancarios-test:/usr/src/coverage .
    - docker cp extractos-bancarios-test:/usr/src/junit.xml .
  cache:
    paths:
      - coverage/
  artifacts:
    when: always
    paths:
      - coverage/
    reports:
      junit:
        - junit.xml

# docker image:
#   stage: build
#   script:
#     - npm run docker:build

remove .env:
  stage: deploy
  script:
    - rm .env

pages:
  stage: deploy
  script:
    - mkdir .public
    - cp -r coverage/* .public
    - mv .public public
  artifacts:
    paths:
      - public
  # only:
  #   - main
And my npm script is:
"docker:test": "npm i && tsc && docker build -t extractos-bancarios-test --target test . && docker run -d --name extractos-bancarios-test extractos-bancarios-test && docker logs -f extractos-bancarios-test >> logs.log",
I need to fail the pipeline when a test fails while using docker in docker
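To illustrate why the job passes (a minimal local sketch, assuming any small image with a shell): docker run -d only prints the container ID and exits 0, and docker logs -f also exits 0 once the container stops, so a failing test never changes the job's exit status. For example:

docker run -d --name demo alpine sh -c 'exit 1'   # detaches; the docker command itself exits 0
docker logs -f demo                                # streams output, exits 0 when the container stops
docker wait demo                                   # prints 1, the container's real exit code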
I was able to solve the problem on my own, and I'm documenting it here so that no one wastes as much time as I did.
For the failure of the container inside the first container to reach the job, I needed the job to exit with code 1 whenever there is a failure in the report. So I added a conditional with a grep to the script section of my .gitlab-ci.yml:
tests:
  stage: test
  before_script:
    - docker rm extractos-bancarios-test || true
    - rm junit.xml || true
    - rm -r coverage || true
  script:
    - apk add --update nodejs npm
    - npm run docker:test
    - docker cp extractos-bancarios-test:/usr/src/coverage .
    - docker cp extractos-bancarios-test:/usr/src/junit.xml .
    - if grep '<failure' junit.xml; then exit 1; else exit 0; fi
  cache:
    paths:
      - coverage/
  artifacts:
    when: always
    paths:
      - coverage/
    reports:
      junit:
        - junit.xml
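An alternative sketch (not the approach above, and it assumes the test container exits with a non-zero code when a test fails): let docker wait hand the container's real exit code to the job instead of grepping the XML:

  script:
    - apk add --update nodejs npm
    - npm run docker:test
    - docker cp extractos-bancarios-test:/usr/src/coverage .
    - docker cp extractos-bancarios-test:/usr/src/junit.xml .
    # docker wait prints the container's exit code; exit with it so the job fails when the tests do
    - exit $(docker wait extractos-bancarios-test)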
I have tried many approaches found while searching for a solution, but I think my problem is different.
I want a Docker image that has the environment installed, activated, and ready for shell commands like flake8, pylint, black, isort, and coverage.
Dockerfile
FROM continuumio/miniconda3
# Create the environment:
COPY conda_env_unit_tests.yml .
RUN conda env create -f conda_env_unit_tests.yml
RUN echo "conda activate up-and-down-pytorch" >> ~/.bashrc
conda_env_unit_tests.yml
name: up-and-down-pytorch
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.9
  - pytest
  - pytest-cov
  - black
  - flake8
  - isort
  - pylint
.gitlab-ci.yml (slimmed down)
stages:
  - docker
  - linting
  - test

build_unit_test_docker:
  stage: docker
  tags:
    - docker
  image: docker:stable
  services:
    - docker:dind
  variables:
    IMAGE_NAME: "miniconda3-up-and-down-unit-tests"
  script:
    - cp /builds/upanddown1/mldl/up_and_down_pytorch/conda_env_unit_tests.yml /builds/upanddown1/mldl/up_and_down_pytorch/docker/unit_tests/
    - docker -D login $CI_REGISTRY -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
    - docker -D build -t $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME docker/unit_tests/
    - docker -D push $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME
  rules:
    - changes:
        - docker/unit_tests/Dockerfile
        - conda_env_unit_tests.yml

unit-test:
  stage: test
  # image: continuumio/miniconda3:latest
  image: $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/miniconda3-up-and-down-unit-tests
  script:
    # - conda env create --file conda_env.yml
    # - source activate up-and-down-pytorch
    - coverage run --source=. -m pytest --verbose
    - coverage report
    - coverage xml
  coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
The Docker image gets uploaded to the GitLab registry and the unit-test stage uses that image; however:
/bin/bash: line 127: coverage: command not found
(The ultimate goal was to avoid creating the conda environment every time I wanted to lint or run unit tests.)
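My reading of the failure (an assumption, since the job log only shows the missing command): GitLab CI runs the job script in a non-interactive shell, and non-interactive shells never read ~/.bashrc, so the conda activate line appended in the Dockerfile has no effect and the environment's binaries are not on PATH. A minimal check of that shell behaviour:

echo 'echo "bashrc was read"' >> ~/.bashrc
bash -c 'true'      # non-interactive: prints nothing, ~/.bashrc is skipped
bash -i -c 'true'   # interactive: prints "bashrc was read"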
Figured it out today; the duration of the unit tests dropped as well. The change was to source the environment in the unit-test job, so activating it in the Dockerfile wasn't needed.
Dockerfile
FROM continuumio/miniconda3
# Create the environment:
COPY conda_env_unit_tests.yml .
RUN conda env create -f conda_env_unit_tests.yml
conda_env_unit_tests.yml
name: up-and-down-pytorch
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.9
  - pandas
  - pytest
  - pytest-cov
  - black
  - flake8
  - isort
  - pylint
.gitlab-ci.yml (slimmed down)
stages:
  - docker
  - linting
  - test

build_unit_test_docker:
  stage: docker
  tags:
    - docker
  image: docker:stable
  services:
    - docker:dind
  variables:
    IMAGE_NAME: "miniconda3-up-and-down-unit-tests"
  script:
    - cp /builds/upanddown1/mldl/up_and_down_pytorch/conda_env_unit_tests.yml /builds/upanddown1/mldl/up_and_down_pytorch/docker/unit_tests/
    - docker -D login $CI_REGISTRY -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
    - docker -D build -t $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME docker/unit_tests/
    - docker -D push $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME
  rules:
    - changes:
        - docker/unit_tests/Dockerfile
        - conda_env_unit_tests.yml

unit-test:
  stage: test
  image: $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/miniconda3-up-and-down-unit-tests
  script:
    - source activate up-and-down-pytorch
    - coverage run --source=. -m pytest --verbose
    - coverage report
    - coverage xml
  coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
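For reference, a sketch of an alternative that skips the activation step entirely (assuming the same image and environment name as above): conda run executes a command inside a named environment without sourcing anything:

unit-test:
  stage: test
  image: $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/miniconda3-up-and-down-unit-tests
  script:
    # conda run puts the environment's binaries on PATH just for the given command
    - conda run -n up-and-down-pytorch coverage run --source=. -m pytest --verbose
    - conda run -n up-and-down-pytorch coverage report
    - conda run -n up-and-down-pytorch coverage xml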
Please tell me: is it possible to somehow run the docker-compose script via gitlab-ci.yml?
My docker-compose.yml:
version: '3.8'

services:
  app:
    image: USER/test-web_app:latest
    ports:
      - "9876:80"
My gitlab-ci.yml
variables:

stages:
  - build_project
  - make_image
  - deploy_image

build_project:
  stage: build_project
  image: node:16.15.0-alpine
  services:
    - docker:20.10.14-dind
  script:
    - npm cache clean --force
    - npm install --legacy-peer-deps
    - npm run build
  artifacts:
    expire_in: 15 mins
    paths:
      - build
      - node_modules
  only:
    - main

make_image:
  stage: make_image
  image: docker:20.10.14-dind
  services:
    - docker:20.10.14-dind
  before_script:
    - docker login -u $REGISTER_USER -p $REGISTER_PASSWORD $REGISTER
  script:
    - docker build -t $REGISTER/$REGISTER_USER/$PROJECT_NAME:latest $DOCKER_FILE_LOCATION
    - docker push $REGISTER_USER/$PROJECT_NAME:latest
  after_script:
    - docker logout
  only:
    - main

deploy_image:
  stage: deploy_image
  image: alpine:latest
  services:
    - docker:20.10.14-dind
  before_script:
    - chmod og= $ID_RSA
    - apk update && apk add openssh-client
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP \
      docker login -u $REGISTER_USER -p $REGISTER_PASSWORD $REGISTER
  script:
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP \
      docker-compose down
    **?????????**
  after_script:
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP docker logout
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP exit
  only:
    - main
How can I use a docker-compose script inside gitlab-ci to run it on a remote server?
Is it possible to use several different docker-compose files for different build versions?
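For illustration, a sketch of one possible approach using only standard scp/ssh and docker-compose commands (the remote path /srv/test-web_app is just a placeholder): copy the compose file to the server and run docker-compose there over ssh:

deploy_image:
  stage: deploy_image
  image: alpine:latest
  before_script:
    - chmod og= $ID_RSA
    - apk update && apk add openssh-client
  script:
    # ship the compose file to the server, then pull and restart the stack remotely
    - scp -i $ID_RSA -o StrictHostKeyChecking=no docker-compose.yml root@$SERVER_IP:/srv/test-web_app/docker-compose.yml
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP "cd /srv/test-web_app && docker-compose pull && docker-compose up -d"
  only:
    - main

Different compose files for different build versions could then be selected on the remote side with docker-compose -f <file>.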
I created a Docker image with automated tests that generates an XML report file after the test run. I want to copy this file back into the repository checkout (the job's working directory), because the pipeline needs it to show the test results:
My gitlab script:
stages:
  - test

test:
  image: docker:latest
  services:
    - name: docker:dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  stage: test
  before_script:
    - docker login -u "xxxx" -p "yyyy" docker.io
  script:
    - docker run --name authContainer "xxxx/dockerImage:0.0.1"
  after_script:
    - docker cp authContainer:/artifacts/test-result.xml .
  artifacts:
    when: always
    paths:
      - test-result.xml
    reports:
      junit:
        - test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /AutomaticTests
WORKDIR /Spinelle.AutomaticTests
RUN apt-get update -y
RUN apt-get install -y unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /Spinelle.AutomaticTests
RUN chmod 777 /Spinelle.AutomaticTests
CMD dotnet vstest /Parallel AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=/artifacts/test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
Your .gitlab-ci file looks fine. You can have the XML report as an artifact and GitLab will populate the results from it. Below is the script that I've used and could see the results with.
script:
  - pytest -o junit_family=xunit2 --junitxml=report.xml --cov=. --cov-report html
  - coverage report
coverage: '/^TOTAL.+?(\d+\%)$/'
artifacts:
  paths:
    - coverage
  reports:
    junit: report.xml
  when: always
I have a basic quasar page that is created using $ quasar create .
I want to deploy the application with GitLab CI, but the deployment keeps giving me errors. I have managed to fix the build and test errors but can't figure out the deployment part.
.gitlab-ci.yml
build site:
  image: node:10
  stage: build
  script:
    - npm install -g @quasar/cli
    - npm install --progress=false
    - quasar build
  artifacts:
    expire_in: 1 week
    paths:
      - dist

unit test:
  image: node:10
  stage: test
  script:
    - npm install --progress=false

deploy:
  image: alpine
  stage: deploy
  script:
    - apk add --no-cache rsync openssh
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" >> ~/.ssh/id_dsa
    - chmod 600 ~/.ssh/id_dsa
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - rsync -rav --delete dist/ user@server.com:/your/project/path/
Error during the deployment phase.
I tried adding rsync -av -e "ssh -vv" --delete ...
This is the error I get.
Try doing your rsync with ssh verbose mode active, in order to see more about the error:
rsync -av -e "ssh -vv" --delete ...
Also check the permissions on your ssh files.
For instance:
chmod 700 ~/.ssh
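A sketch of the deploy job with those permissions applied and the verbose ssh option added (reusing the variables and paths from the question; the key file name id_dsa is kept as-is):

deploy:
  image: alpine
  stage: deploy
  script:
    - apk add --no-cache rsync openssh
    # ssh refuses keys and config directories that are readable by others
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_dsa
    - chmod 600 ~/.ssh/id_dsa
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - chmod 600 ~/.ssh/config
    # -e "ssh -vv" prints the ssh handshake, which usually shows why authentication fails
    - rsync -rav --delete -e "ssh -vv" dist/ user@server.com:/your/project/path/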
Hello, I get an exit code 1 error during npm install in my Docker container, during the GitLab CI build. I have a JavaScript web application in Node.js and AngularJS hosted on GitLab, with two repositories: one for the front end and one for the back end. For the front end, I use a base image that includes node 7.7.1 and nginx with its configuration, hosted on an Amazon registry; the runner then executes npm install for the front end according to the package.json.
Here is the .gitlab-ci.yml:
image: docker:1.13.1

stages:
  - build
  - test
  - deploy

variables:
  BUILD_IMG: $CI_REGISTRY_IMAGE:$CI_BUILD_REF
  TEST_IMG: $CI_REGISTRY_IMAGE:$CI_BUILD_REF_NAME
  RELEASE_IMG: $CI_REGISTRY_IMAGE:latest
  AWS_STAGING_ENV: "argalisformation-prod-env"
  AWS_PROD_ENV: "argalisformation-prod-env"
  DOCKERRUN: Dockerrun.aws.json
  DEPLOY_ARCHIVE: ${AWS_APP}-${CI_BUILD_REF}.zip

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - .ci/before_script

build:
  stage: build
  script:
    - docker build --pull -t $BUILD_IMG .
    - docker push $BUILD_IMG

test:
  stage: test
  script:
    - docker pull $BUILD_IMG
    - docker run --rm $BUILD_IMG npm run test
    - docker tag $BUILD_IMG $TEST_IMG
    - docker push $TEST_IMG

deploy:staging:
  stage: deploy
  environment: Staging
  variables:
    DOCKER_IMG: ${CI_REGISTRY_IMAGE}:${CI_BUILD_REF}
  script:
    - ./.ci/create-deploy-archive $DOCKER_IMG $AWS_BUCKET $DOCKERRUN $DEPLOY_ARCHIVE
    - ./.ci/aws-deploy $DEPLOY_ARCHIVE $CI_BUILD_REF $AWS_STAGING_ENV
  artifacts:
    paths:
      - $DEPLOY_ARCHIVE
  except:
    - production

deploy:production:
  stage: deploy
  environment: Production
  variables:
    DOCKER_IMG: ${CI_REGISTRY_IMAGE}:latest
  script:
    - .ci/push-new-image $TEST_IMG $RELEASE_IMG
    - .ci/create-deploy-archive $DOCKER_IMG $AWS_BUCKET $DOCKERRUN $DEPLOY_ARCHIVE
    - .ci/aws-deploy $DEPLOY_ARCHIVE $CI_BUILD_REF $AWS_PROD_ENV
  artifacts:
    paths:
      - $DEPLOY_ARCHIVE
  only:
    - production
  when: manual
Here is the output of the runner error:
npm info lifecycle node-sass@3.13.1~install: node-sass@3.13.1
> node-sass@3.13.1 install /src/node_modules/sasslint-webpack-plugin/node_modules/node-sass
> node scripts/install.js
The command '/bin/sh -c npm set progress=false && npm install node-sass --save-dev && npm install' returned a non-zero code: 1
ERROR: Job failed: exit status 1
My npm version is 4.1.2.
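The log only shows that node-sass's install script failed, so the following is an assumption rather than a confirmed fix: node-sass compiles a native binding when no prebuilt binary matches the Node version, and that build needs python and a C++ toolchain inside the image. Assuming the base image is Debian-based, the Dockerfile would need something like this before the failing npm step:

# build dependencies for node-sass's native binding (assumption: Debian-based base image)
RUN apt-get update && apt-get install -y python build-essential
RUN npm set progress=false && npm install node-sass --save-dev && npm install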