I have searched around and tried many things, but I think my problem is different.
I want a Docker image that has the conda environment installed and already active, ready for shell commands like: flake8, pylint, black, isort, coverage
Dockerfile
FROM continuumio/miniconda3
# Create the environment:
COPY conda_env_unit_tests.yml .
RUN conda env create -f conda_env_unit_tests.yml
RUN echo "conda activate up-and-down-pytorch" >> ~/.bashrc
conda_env_unit_tests.yml
name: up-and-down-pytorch
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.9
  - pytest
  - pytest-cov
  - black
  - flake8
  - isort
  - pylint
.gitlab-ci.yml (slimmed down)
stages:
  - docker
  - linting
  - test

build_unit_test_docker:
  stage: docker
  tags:
    - docker
  image: docker:stable
  services:
    - docker:dind
  variables:
    IMAGE_NAME: "miniconda3-up-and-down-unit-tests"
  script:
    - cp /builds/upanddown1/mldl/up_and_down_pytorch/conda_env_unit_tests.yml /builds/upanddown1/mldl/up_and_down_pytorch/docker/unit_tests/
    - docker -D login $CI_REGISTRY -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
    - docker -D build -t $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME docker/unit_tests/
    - docker -D push $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME
  rules:
    - changes:
        - docker/unit_tests/Dockerfile
        - conda_env_unit_tests.yml

unit-test:
  stage: test
  # image: continuumio/miniconda3:latest
  image: $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/miniconda3-up-and-down-unit-tests
  script:
    # - conda env create --file conda_env.yml
    # - source activate up-and-down-pytorch
    - coverage run --source=. -m pytest --verbose
    - coverage report
    - coverage xml
  coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
The Docker image gets uploaded to the GitLab registry and the unit-test stage uses that image, however:
/bin/bash: line 127: coverage: command not found
(The ultimate goal was to not have to create the conda environment every time I want to lint or run unit tests.)
Figured it out today, and it dropped the duration of the unit-test jobs as well.
The change was to source the environment in the unit-test job. Activating it in the Dockerfile wasn't needed: the conda activate line appended to ~/.bashrc never takes effect in CI, because the job's shell is non-interactive and doesn't read that file.
Dockerfile
FROM continuumio/miniconda3
# Create the environment:
COPY conda_env_unit_tests.yml .
RUN conda env create -f conda_env_unit_tests.yml
conda_env_unit_tests.yml
name: up-and-down-pytorch
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.9
  - pandas
  - pytest
  - pytest-cov
  - black
  - flake8
  - isort
  - pylint
.gitlab-ci.yml (slimmed down)
stages:
  - docker
  - linting
  - test

build_unit_test_docker:
  stage: docker
  tags:
    - docker
  image: docker:stable
  services:
    - docker:dind
  variables:
    IMAGE_NAME: "miniconda3-up-and-down-unit-tests"
  script:
    - cp /builds/upanddown1/mldl/up_and_down_pytorch/conda_env_unit_tests.yml /builds/upanddown1/mldl/up_and_down_pytorch/docker/unit_tests/
    - docker -D login $CI_REGISTRY -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
    - docker -D build -t $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME docker/unit_tests/
    - docker -D push $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME
  rules:
    - changes:
        - docker/unit_tests/Dockerfile
        - conda_env_unit_tests.yml

unit-test:
  stage: test
  image: $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/miniconda3-up-and-down-unit-tests
  script:
    - source activate up-and-down-pytorch
    - coverage run --source=. -m pytest --verbose
    - coverage report
    - coverage xml
  coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
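If you'd rather not source the environment in every job, conda can also run a single command inside a named environment via the conda run subcommand. A minimal sketch of the test job using it (untested in this pipeline, but conda run is standard conda):

unit-test:
  stage: test
  image: $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/miniconda3-up-and-down-unit-tests
  script:
    # run each tool inside the up-and-down-pytorch environment without activating it
    - conda run -n up-and-down-pytorch coverage run --source=. -m pytest --verbose
    - conda run -n up-and-down-pytorch coverage report
    - conda run -n up-and-down-pytorch coverage xml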
Related
I need a failed test in my pipeline to fail the job so that I can have control over it. The problem is that the tests run in a "docker in docker" setup, so the job doesn't fail: the outer container runs correctly, and the test run never propagates an error code (even if a test fails).
The script "docker:test" runs my test suite in a container, and my pipeline looks like this:
image: docker:dind # Alpine

stages:
  - install
  - test
  # - build
  - deploy

env:
  stage: install
  script:
    - chmod +x ./setup_env.sh
    - ./setup_env.sh
  artifacts:
    paths:
      - .env
    expire_in: 1 days

tests:
  stage: test
  before_script:
    - docker rm extractos-bancarios-test || true
  script:
    - apk add --update nodejs npm
    - npm run docker:test
    - docker cp extractos-bancarios-test:/usr/src/coverage .
    - docker cp extractos-bancarios-test:/usr/src/junit.xml .
  cache:
    paths:
      - coverage/
  artifacts:
    when: always
    paths:
      - coverage/
    reports:
      junit:
        - junit.xml

# docker image:
#   stage: build
#   script:
#     - npm run docker:build

remove .env:
  stage: deploy
  script:
    - rm .env

pages:
  stage: deploy
  script:
    - mkdir .public
    - cp -r coverage/* .public
    - mv .public public
  artifacts:
    paths:
      - public
  # only:
  #   - main
And my npm script is:
"docker:test": "npm i && tsc && docker build -t extractos-bancarios-test --target test . && docker run -d --name extractos-bancarios-test extractos-bancarios-test && docker logs -f extractos-bancarios-test >> logs.log",
I need to fail the pipeline when a test fails while using docker in docker
I was able to solve the problem on my own, and I am documenting it here so that nobody wastes as much time as I did.
For the container inside the first container to fail the job, I needed the script to return exit code 1 when there is a failure in the report. So I added a conditional with a grep to the script section of my .gitlab-ci.yml:
tests:
  stage: test
  before_script:
    - docker rm extractos-bancarios-test || true
    - rm junit.xml || true
    - rm -r coverage || true
  script:
    - apk add --update nodejs npm
    - npm run docker:test
    - docker cp extractos-bancarios-test:/usr/src/coverage .
    - docker cp extractos-bancarios-test:/usr/src/junit.xml .
    - if grep '<failure' junit.xml; then exit 1; else exit 0; fi
  cache:
    paths:
      - coverage/
  artifacts:
    when: always
    paths:
      - coverage/
    reports:
      junit:
        - junit.xml
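For context on why the grep is needed: the npm script starts the container with docker run -d and then follows the logs, and docker logs -f exits 0 no matter how the tests went, so the job never sees the failure. An alternative sketch (untested, same container name, assuming the test runner inside the container exits non-zero on failure) propagates the container's own exit code with docker wait:

  script:
    - apk add --update nodejs npm
    - npm run docker:test
    # copy the reports out before evaluating the result, so artifacts survive a failure
    - docker cp extractos-bancarios-test:/usr/src/coverage .
    - docker cp extractos-bancarios-test:/usr/src/junit.xml .
    # docker wait blocks until the container stops and prints its exit code
    - exit $(docker wait extractos-bancarios-test)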
I created a Docker image with automated tests that generates an XML report file after the test run. I want to copy this file into the job workspace, because the pipeline needs it to show the test results:
My GitLab script:
stages:
  - test

test:
  image: docker:latest
  services:
    - name: docker:dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  stage: test
  before_script:
    - docker login -u "xxxx" -p "yyyy" docker.io
  script:
    - docker run --name authContainer "xxxx/dockerImage:0.0.1"
  after_script:
    - docker cp authContainer:/artifacts/test-result.xml .
  artifacts:
    when: always
    paths:
      - test-result.xml
    reports:
      junit:
        - test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /AutomaticTests
WORKDIR /Spinelle.AutomaticTests
RUN apt-get update -y
RUN apt-get install -y unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /Spinelle.AutomaticTests
RUN chmod 777 /Spinelle.AutomaticTests
CMD dotnet vstest /Parallel AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=/artifacts/test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
Your .gitlab-ci file looks fine. You can have the XML report as an artifact and GitLab will populate the results from it. Below is the script that I've used and could see the results with.
script:
  - pytest -o junit_family=xunit2 --junitxml=report.xml --cov=. --cov-report html
  - coverage report
coverage: '/^TOTAL.+?(\d+\%)$/'
artifacts:
  paths:
    - coverage
  reports:
    junit: report.xml
  when: always
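One detail worth keeping in mind with the setup above: artifacts paths are resolved in the runner's workspace, so the docker cp in the question's after_script is what actually puts test-result.xml where GitLab can collect it. A quick sanity check (my addition, not from the original job):

after_script:
  - docker cp authContainer:/artifacts/test-result.xml .
  - ls -l test-result.xml  # confirm the report reached the runner workspace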
I run automated tests on GitLab CI with a GitLab runner, and everything works well except the reports. After the tests, the JUnit reports are not updated: they always show the same passed and failed tests, even though the console output shows a different number of passed tests.
GitLab script:
stages:
  - build
  - test

docker-build-master:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - docker build ./AutomaticTests --pull -t "dockerImage"
    - docker image tag dockerImage xxx/dockerImage:0.0.1
    - docker push "xxx/dockerImage:0.0.1"

test:
  image: docker:latest
  services:
    - docker:dind
  stage: test
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - docker run "xxx/dockerImage:0.0.1"
  artifacts:
    when: always
    paths:
      - AutomaticTests/bin/Release/artifacts/test-result.xml
    reports:
      junit:
        - AutomaticTests/bin/Release/artifacts/test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /AutomaticTests
WORKDIR /AutomaticTests
RUN apt-get update -y
RUN apt-get install -y unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /AutomaticTests
RUN chmod 777 /AutomaticTests
CMD dotnet vstest /Parallel AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=..\artifacts\test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
I had a similar issue when using docker-in-docker for my GitLab pipeline. You run your tests inside your container, so the test results are stored inside your "container-under-test". However, the gitlab-ci paths reference not the "container-under-test" but the outer container of your docker-in-docker environment.
You could try to copy the test results from the image directly to the outer container with something like this:
mkdir reports
docker cp $(docker create --rm DOCKER_IMAGE):/ABSOLUTE/FILEPATH/IN/DOCKER/CONTAINER reports/.
In your case, this would be something like the following (untested...!):
...

test:
  image: docker:latest
  services:
    - docker:dind
  stage: test
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - mkdir reports
    - docker cp $(docker create --rm xxx/dockerImage:0.0.1):/AutomaticTests/bin/Release/artifacts/test-result.xml reports/.
  artifacts:
    when: always
    reports:
      junit:
        - reports/test-result.xml

...
Also, see this post for further explanation of the docker cp command: https://stackoverflow.com/a/59055906/6603778
Keep in mind that docker cp requires an absolute path to the file you want to copy from your container.
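One caveat to the approach above (my observation, not from the original answer): docker create prepares a container without starting it, so docker cp from it only sees files already baked into the image. If the report is produced at runtime by the container's CMD, as in the question, the container has to run first, and the file can then be copied out of the stopped container. A sketch (untested, with a hypothetical container name testrun):

# run the tests; '|| true' lets the job continue even if the test run fails
docker run --name testrun "xxx/dockerImage:0.0.1" || true
mkdir -p reports
# copy the report out of the now-stopped container
docker cp testrun:/AutomaticTests/bin/Release/artifacts/test-result.xml reports/
docker rm testrun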
I am working on a GitLab CI/CD project to build an ASP.NET Core application into a Docker image.
Currently I have two possible implementations in mind. The first one has the full logic in the Dockerfile, but I can't visualize the stages (build, test, publish) in GitLab that way. So I thought about moving the main logic to the gitlab-ci.yml file. But what bothers me now is that I have to manage the dotnet Docker image versions in two places (sdk:3.1, aspnet:3.1.1-alpine3.10). Is it a good idea to deliver the version via build-arg, or is there a more elegant solution?
.gitlab-ci.yml
stages:
  - build
  - test
  - docker

build:
  stage: build
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  only:
    - master
  script:
    - cd src
    - dotnet restore --interactive
    - dotnet build --configuration Release
    - dotnet publish --configuration Release --output ../publish/
  artifacts:
    paths:
      - ./publish/*.*
    expire_in: 1 week
  tags:
    - docker

test:
  stage: test
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  only:
    - master
  script:
    - cd src
    - dotnet test --test-adapter-path:. --logger:"junit;LogFilePath=../../MyProject.xml"
  artifacts:
    paths:
      - ./MyProject.xml
    reports:
      junit: ./MyProject.xml
  tags:
    - docker

docker:
  stage: docker
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  only:
    - master
  script:
    - docker login -u "gitlab-ci-token" -p "$CI_JOB_TOKEN" $CI_REGISTRY
    - docker build --tag "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" --tag "$CI_REGISTRY_IMAGE:latest" --build-arg EXECUTABLE=Test.WebApi.dll .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
    - docker push "$CI_REGISTRY_IMAGE:latest"
  tags:
    - docker
Dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1.1-alpine3.10
ARG EXECUTABLE
WORKDIR /app
COPY /publish .
ENV ASPNETCORE_URLS "http://*:5000"
ENV ASPNETCORE_ENVIRONMENT "Staging"
CMD ["dotnet", "$EXECUTABLE"]
Here is my solution: I define the variables at the top of the file and replace them in the Dockerfile with sed.
My solution has these two projects:
Test.WebApi (WebApi project)
Test.WebApi.UnitTest (unit test project)
@ThomasBrüggemann, thanks for the inspiration.
.gitlab-ci.yml
variables:
  PROJECT_NAME: "Test.WebApi"
  BUILD_IMAGE: "mcr.microsoft.com/dotnet/core/sdk:3.1"
  RUNTIME_IMAGE: "mcr.microsoft.com/dotnet/core/aspnet:3.1.1-alpine3.10"

stages:
  - build
  - test
  - docker

build:
  stage: build
  image: $BUILD_IMAGE
  only:
    - master
  script:
    - cd src/$PROJECT_NAME
    - dotnet restore --interactive
    - dotnet build --configuration Release
    - dotnet publish --configuration Release --output ../../publish/
  artifacts:
    paths:
      - ./publish/*
    expire_in: 1 week
  tags:
    - docker

test:
  stage: test
  image: $BUILD_IMAGE
  only:
    - master
  script:
    - cd src/$PROJECT_NAME.UnitTest
    - dotnet test --test-adapter-path:. --logger:"junit;LogFilePath=../../UnitTestResult.xml"
  artifacts:
    paths:
      - ./UnitTestResult.xml
    reports:
      junit: ./UnitTestResult.xml
  tags:
    - docker

docker:
  stage: docker
  image: docker:stable
  services:
    - docker:18.09.7-dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  only:
    - master
  script:
    # Prepare Dockerfile
    - sed -i "s~\$DOCKERIMAGE~$RUNTIME_IMAGE~g" Dockerfile
    - sed -i 's/$ENVIRONMENT/Staging/g' Dockerfile
    - sed -i "s/\$ENTRYPOINT/$PROJECT_NAME.dll/g" Dockerfile
    # Process Dockerfile
    - docker login -u "gitlab-ci-token" -p "$CI_JOB_TOKEN" $CI_REGISTRY
    - docker build --tag "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" --tag "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
    - docker push "$CI_REGISTRY_IMAGE:latest"
  tags:
    - docker
Dockerfile
FROM $DOCKERIMAGE
WORKDIR /app
COPY /publish .
EXPOSE 5000/tcp
ENV ASPNETCORE_URLS "http://*:5000"
ENV ASPNETCORE_ENVIRONMENT "$ENVIRONMENT"
CMD ["dotnet", "$ENTRYPOINT"]
Here is some version handling for tagged builds in a GitLab pipeline:
script:
  - COMMIT_DATE=$(git log -1 --format=%cd --date=iso-strict | grep -o '\([0-9]*\)' | tr -d '\n')
  - VERSION_PREFIX=$CI_COMMIT_TAG
  - VERSION_SUFFIX="${COMMIT_DATE::-6}"
  - echo $VERSION_PREFIX-$VERSION_SUFFIX
  - sed -i "s:<VersionPrefix>.*</VersionPrefix>:<VersionPrefix>$VERSION_PREFIX</VersionPrefix>:g" [PROJECT].csproj
  - dotnet publish --version-suffix $VERSION_SUFFIX -c Release -o ./out
  - docker build --tag "$CI_REGISTRY_IMAGE:$VERSION_PREFIX" .
only:
  - tags
In the project file, the following must be set:
<!-- Version is set by CI script; do not modify manually -->
<VersionPrefix>0.0.0</VersionPrefix>
<Deterministic>False</Deterministic>
Maybe this is helpful. Something similar can be done when building without tagging, as sketched below.
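A sketch of that untagged variant (my assumption: using the short commit SHA as both version suffix and image tag, since $CI_COMMIT_TAG is empty on non-tag pipelines):

script:
  - VERSION_SUFFIX=$CI_COMMIT_SHORT_SHA
  - dotnet publish --version-suffix $VERSION_SUFFIX -c Release -o ./out
  - docker build --tag "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
except:
  - tags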
I'm trying to set up continuous delivery for a dockerized Rails project hosted on GitLab.com. I followed this article, which is not directly related to a Rails environment, and tried to adapt it, obviously without any success :(
For context, I created three different services: db, webpacker and app.
Following the above article, here are my .gitlab-ci.yml and docker-compose.staging2.yml (autodeploy):
image: docker

services:
  - docker:dind

cache:
  paths:
    - node_modules

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  CONTAINER_CURRENT_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  CONTAINER_LATEST_IMAGE: $CI_REGISTRY_IMAGE:latest
  CONTAINER_STABLE_IMAGE: $CI_REGISTRY_IMAGE:stable

stages:
  - test
  - build
  - release
  - deploy

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  - apk add --no-cache py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
  - pip install docker-compose
  - docker-compose --version

test:
  stage: test
  script:
    - docker-compose build --pull
    # Here we will run tests when available...
  after_script:
    - docker-compose down
    - docker volume rm `docker volume ls -qf dangling=true`

build:
  stage: build
  script:
    - docker build -t $CONTAINER_CURRENT_IMAGE . --pull
    - docker push $CONTAINER_CURRENT_IMAGE

release-latest-image:
  stage: release
  only:
    - feat-dockerisation
  script:
    - docker pull $CONTAINER_CURRENT_IMAGE
    - docker tag $CONTAINER_CURRENT_IMAGE $CONTAINER_LATEST_IMAGE
    - docker push $CONTAINER_LATEST_IMAGE

release-stable-image:
  stage: release
  only:
    - feat-dockerisation
  script:
    - docker pull $CONTAINER_CURRENT_IMAGE
    - docker tag $CONTAINER_CURRENT_IMAGE $CONTAINER_STABLE_IMAGE
    - docker push $CONTAINER_STABLE_IMAGE

deploy_staging:
  stage: deploy
  only:
    - feat-dockerisation
  environment: production
  before_script:
    - mkdir -p ~/.ssh
    - echo "$DEPLOY_SERVER_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - which ssh-agent || (apk add openssh-client)
    - eval $(ssh-agent -s)
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H $DEPLOYMENT_SERVER_IP >> ~/.ssh/known_hosts
  script:
    - scp -rp ./docker-compose.staging2.yml root@${DEPLOYMENT_SERVER_IP}:~/
    - ssh root@$DEPLOYMENT_SERVER_IP "docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY};
      docker-compose -f docker-compose.staging2.yml down;
      docker pull $CONTAINER_LATEST_IMAGE;
      docker-compose -f docker-compose.staging2.yml up -d"
docker-compose.staging2.yml
version: '3'
services:
  db:
    image: postgres:11-alpine
    ports:
      - 5433:5432
    environment:
      POSTGRES_PASSWORD: postgres
  webpacker:
    image: registry.gitlab.com/soykje/beweeg-ror:latest
    command: [sh, -c, "yarn && bin/webpack-dev-server"]
    ports:
      - 3035:3035
  app:
    image: registry.gitlab.com/soykje/beweeg-ror:latest
    links:
      - db
      - webpacker
    ports:
      - 3000:3000
I'm getting started with Docker and CI/CD, so I can't find what I am doing wrong :/
After all the jobs complete successfully on GitLab CI/CD, when I try to access my app on the Docker droplet I get nothing. When I SSH into the droplet everything seems OK, but I still cannot browse anything. Does anyone have an idea of what I am missing?
I feel I'm pretty close (maybe I'm wrong too...), so any help would be very welcome!
Thanks in advance!
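Not an answer from the thread, but one assumption worth ruling out first: Rails' development server binds to localhost by default, which is unreachable from outside the container even when the port is published. Making the bind address explicit for the app service would look like this (hypothetical command; adjust to the image's actual entrypoint):

app:
  image: registry.gitlab.com/soykje/beweeg-ror:latest
  command: [sh, -c, "bundle exec rails server -b 0.0.0.0 -p 3000"]
  ports:
    - 3000:3000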