Hello, I'm getting an exit code 1 error during npm install in my Docker container, during the GitLab CI build. I have a JavaScript web application in Node.js and AngularJS hosted on GitLab, with two repositories: one for the front end and one for the back end. For the front end, I use a base image that includes Node 7.7.1 and nginx with its configuration, hosted on an Amazon registry; the runner then executes npm install for the front end according to package.json.
Here is the .gitlab-ci.yml:
image: docker:1.13.1

stages:
  - build
  - test
  - deploy

variables:
  BUILD_IMG: $CI_REGISTRY_IMAGE:$CI_BUILD_REF
  TEST_IMG: $CI_REGISTRY_IMAGE:$CI_BUILD_REF_NAME
  RELEASE_IMG: $CI_REGISTRY_IMAGE:latest
  AWS_STAGING_ENV: "argalisformation-prod-env"
  AWS_PROD_ENV: "argalisformation-prod-env"
  DOCKERRUN: Dockerrun.aws.json
  DEPLOY_ARCHIVE: ${AWS_APP}-${CI_BUILD_REF}.zip

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - .ci/before_script

build:
  stage: build
  script:
    - docker build --pull -t $BUILD_IMG .
    - docker push $BUILD_IMG

test:
  stage: test
  script:
    - docker pull $BUILD_IMG
    - docker run --rm $BUILD_IMG npm run test
    - docker tag $BUILD_IMG $TEST_IMG
    - docker push $TEST_IMG

deploy:staging:
  stage: deploy
  environment: Staging
  variables:
    DOCKER_IMG: ${CI_REGISTRY_IMAGE}:${CI_BUILD_REF}
  script:
    - ./.ci/create-deploy-archive $DOCKER_IMG $AWS_BUCKET $DOCKERRUN $DEPLOY_ARCHIVE
    - ./.ci/aws-deploy $DEPLOY_ARCHIVE $CI_BUILD_REF $AWS_STAGING_ENV
  artifacts:
    paths:
      - $DEPLOY_ARCHIVE
  except:
    - production

deploy:production:
  stage: deploy
  environment: Production
  variables:
    DOCKER_IMG: ${CI_REGISTRY_IMAGE}:latest
  script:
    - .ci/push-new-image $TEST_IMG $RELEASE_IMG
    - .ci/create-deploy-archive $DOCKER_IMG $AWS_BUCKET $DOCKERRUN $DEPLOY_ARCHIVE
    - .ci/aws-deploy $DEPLOY_ARCHIVE $CI_BUILD_REF $AWS_PROD_ENV
  artifacts:
    paths:
      - $DEPLOY_ARCHIVE
  only:
    - production
  when: manual
Here is the runner's error output:
npm info lifecycle node-sass@3.13.1~install: node-sass@3.13.1
> node-sass@3.13.1 install /src/node_modules/sasslint-webpack-plugin/node_modules/node-sass
> node scripts/install.js
The command '/bin/sh -c npm set progress=false && npm install node-sass --save-dev && npm install' returned a non-zero code: 1
ERROR: Job failed: exit status 1
My npm version is 4.1.2.
I have tried many of the solutions I found while searching, but I think my problem is different.
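One way to narrow this down is to re-run the failing Dockerfile step outside CI, against the same base image, to see node-sass's full build log. A minimal sketch; <ecr-base-image> is a placeholder for the node 7.7.1/nginx image named in the Dockerfile's FROM line:

# run from the front-end repo root so package.json is inside the mount
docker run --rm -it -v "$PWD":/src -w /src <ecr-base-image> \
  /bin/sh -c 'npm set progress=false && npm install node-sass --save-dev --loglevel verbose'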
I want a Docker image that has the environment installed, activated, and ready for shell commands like flake8, pylint, black, isort, and coverage.
Dockerfile
FROM continuumio/miniconda3
# Create the environment:
COPY conda_env_unit_tests.yml .
RUN conda env create -f conda_env_unit_tests.yml
RUN echo "conda activate up-and-down-pytorch" >> ~/.bashrc
conda_env_unit_tests.yml
name: up-and-down-pytorch
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.9
  - pytest
  - pytest-cov
  - black
  - flake8
  - isort
  - pylint
.gitlab-ci.yml (slimmed down)
stages:
  - docker
  - linting
  - test

build_unit_test_docker:
  stage: docker
  tags:
    - docker
  image: docker:stable
  services:
    - docker:dind
  variables:
    IMAGE_NAME: "miniconda3-up-and-down-unit-tests"
  script:
    - cp /builds/upanddown1/mldl/up_and_down_pytorch/conda_env_unit_tests.yml /builds/upanddown1/mldl/up_and_down_pytorch/docker/unit_tests/
    - docker -D login $CI_REGISTRY -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
    - docker -D build -t $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME docker/unit_tests/
    - docker -D push $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME
  rules:
    - changes:
        - docker/unit_tests/Dockerfile
        - conda_env_unit_tests.yml

unit-test:
  stage: test
  # image: continuumio/miniconda3:latest
  image: $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/miniconda3-up-and-down-unit-tests
  script:
    # - conda env create --file conda_env.yml
    # - source activate up-and-down-pytorch
    - coverage run --source=. -m pytest --verbose
    - coverage report
    - coverage xml
  coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
The Docker image gets uploaded to the GitLab registry and the unit-test stage uses that image; however:
/bin/bash: line 127: coverage: command not found
(The ultimate goal was to avoid creating the conda environment every time I want to lint or run unit tests.)
Figured it out today, and it also cut the duration of the unit tests.
The change was to source the environment in the unit-test job; activating it in the Dockerfile wasn't needed. The conda activate line appended to ~/.bashrc never takes effect anyway, because GitLab runs the job script in a non-interactive shell that doesn't read ~/.bashrc.
Dockerfile
FROM continuumio/miniconda3
# Create the environment:
COPY conda_env_unit_tests.yml .
RUN conda env create -f conda_env_unit_tests.yml
conda_env_unit_tests.yml
name: up-and-down-pytorch
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.9
  - pandas
  - pytest
  - pytest-cov
  - black
  - flake8
  - isort
  - pylint
.gitlab-ci.yml (slimmed down)
stages:
  - docker
  - linting
  - test

build_unit_test_docker:
  stage: docker
  tags:
    - docker
  image: docker:stable
  services:
    - docker:dind
  variables:
    IMAGE_NAME: "miniconda3-up-and-down-unit-tests"
  script:
    - cp /builds/upanddown1/mldl/up_and_down_pytorch/conda_env_unit_tests.yml /builds/upanddown1/mldl/up_and_down_pytorch/docker/unit_tests/
    - docker -D login $CI_REGISTRY -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
    - docker -D build -t $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME docker/unit_tests/
    - docker -D push $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/$IMAGE_NAME
  rules:
    - changes:
        - docker/unit_tests/Dockerfile
        - conda_env_unit_tests.yml

unit-test:
  stage: test
  image: $CI_REGISTRY/upanddown1/mldl/up_and_down_pytorch/miniconda3-up-and-down-unit-tests
  script:
    - source activate up-and-down-pytorch
    - coverage run --source=. -m pytest --verbose
    - coverage report
    - coverage xml
  coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
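A variant worth knowing (a sketch, not what was done above): instead of sourcing the environment in every job, the Dockerfile can put the environment's bin directory on PATH, assuming the default prefix /opt/conda/envs/<name> used by continuumio/miniconda3:

FROM continuumio/miniconda3
COPY conda_env_unit_tests.yml .
RUN conda env create -f conda_env_unit_tests.yml
# put the environment's executables (pytest, coverage, flake8, ...) first on
# PATH so they resolve without any activation step in the CI script
ENV PATH=/opt/conda/envs/up-and-down-pytorch/bin:$PATH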
I need a failed test in my pipeline to fail the job so that I have control over it. The problem is that the tests run "docker in docker", so the job doesn't fail: the outer container ran correctly, and the inner tests don't propagate an error code (even if one fails).
The script docker:test runs my test suite in a container, and my pipeline looks like this:
image: docker:dind # Alpine

stages:
  - install
  - test
  # - build
  - deploy

env:
  stage: install
  script:
    - chmod +x ./setup_env.sh
    - ./setup_env.sh
  artifacts:
    paths:
      - .env
    expire_in: 1 days

tests:
  stage: test
  before_script:
    - docker rm extractos-bancarios-test || true
  script:
    - apk add --update nodejs npm
    - npm run docker:test
    - docker cp extractos-bancarios-test:/usr/src/coverage .
    - docker cp extractos-bancarios-test:/usr/src/junit.xml .
  cache:
    paths:
      - coverage/
  artifacts:
    when: always
    paths:
      - coverage/
    reports:
      junit:
        - junit.xml

# docker image:
#   stage: build
#   script:
#     - npm run docker:build

remove .env:
  stage: deploy
  script:
    - rm .env

pages:
  stage: deploy
  script:
    - mkdir .public
    - cp -r coverage/* .public
    - mv .public public
  artifacts:
    paths:
      - public
  # only:
  #   - main
And my npm script is:
"docker:test": "npm i && tsc && docker build -t extractos-bancarios-test --target test . && docker run -d --name extractos-bancarios-test extractos-bancarios-test && docker logs -f extractos-bancarios-test >> logs.log",
I need to fail the pipeline when a test fails while using docker in docker. (Note that docker run -d returns as soon as the container starts, and docker logs -f exits once the container stops, regardless of the tests' result, so the job script never sees a failing exit code.)
I was able to solve the problem on my own, and I'm documenting it so that no one wastes as much time as I did.
For the container inside the first container to make the job fail, I needed the script to return exit code 1 when there is an error in the report. So I added a conditional with a grep to the script section of my .gitlab-ci.yml:
tests:
  stage: test
  before_script:
    - docker rm extractos-bancarios-test || true
    - rm junit.xml || true
    - rm -r coverage || true
  script:
    - apk add --update nodejs npm
    - npm run docker:test
    - docker cp extractos-bancarios-test:/usr/src/coverage .
    - docker cp extractos-bancarios-test:/usr/src/junit.xml .
    - if grep '<failure' junit.xml; then exit 1; else exit 0; fi
  cache:
    paths:
      - coverage/
  artifacts:
    when: always
    paths:
      - coverage/
    reports:
      junit:
        - junit.xml
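An alternative to grepping the report (a sketch, assuming the test process inside the container exits non-zero when a test fails): let docker wait propagate the container's own exit status instead of re-parsing junit.xml:

tests:
  stage: test
  script:
    - npm run docker:test
    - docker cp extractos-bancarios-test:/usr/src/coverage .
    - docker cp extractos-bancarios-test:/usr/src/junit.xml .
    # docker wait blocks until the container stops and prints its exit code;
    # using that as the script's exit status fails the job on failing tests
    - exit "$(docker wait extractos-bancarios-test)"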
For the stage "deploy" I need a proxy. But stage "test" does not work from the point, on where the Karma test is starting. Is there a way, where I can define: Use proxy settings for stage "Deploy" but not for "test"?
I tried to exclude the IP, Karma is using, from proxy but the Ip is changing every time.
variables:
  http_proxy: "$CODE_PROXY"
  https_proxy: "$CODE_PROXY"
  no_proxy: "127.0.0.1,localhost"

stages:
  - test
  - deploy

test:
  stage: test
  image: node:erbium
  services:
    - selenium/standalone-chrome:3.141.59
  script:
    - npm ci
    - npm run lint
    - npm run lint:sass
    - npm run lint:editorconfig
    - npm run test -- --progress=false --code-coverage
    - npm run e2e -- --host=$(hostname -i)
    - npm run build:prod -- --progress=false
  coverage: '/Statements\s*:\s*(\d+\.?\d+)\%/'
  artifacts:
    expire_in: 3h
    paths:
      - dist/
    reports:
      junit: dist/reports/app-name/test-*.xml
      cobertura: dist/coverage/app-name/cobertura-coverage.xml
  tags:
    - DOCKER

deploy:
  stage: deploy
  image: python:latest
  script:
    - pip install awscli
    - aws s3 rm s3://$S3_BUCKET_NAME --recursive
    - aws s3 cp ./dist/app-name s3://$S3_BUCKET_NAME/ --recursive
  only:
    - master
There are two ways to do this.
Mixin variables
.proxy-variables: &proxy-variables
  http_proxy: "$CODE_PROXY"
  https_proxy: "$CODE_PROXY"
  no_proxy: "127.0.0.1,localhost"

deploy:
  stage: deploy
  image: python:latest
  variables:
    <<: *proxy-variables
  script:
    - pip install awscli
    - aws s3 rm s3://$S3_BUCKET_NAME --recursive
    - aws s3 cp ./dist/app-name s3://$S3_BUCKET_NAME/ --recursive
  only:
    - master
(Since variables is a mapping, the anchor is pulled in with the YAML merge key <<:, not as a list item.)
Extend job template
.proxied-job:
  variables:
    http_proxy: "$CODE_PROXY"
    https_proxy: "$CODE_PROXY"
    no_proxy: "127.0.0.1,localhost"

deploy:
  extends: .proxied-job
  stage: deploy
  image: python:latest
  script:
    - pip install awscli
    - aws s3 rm s3://$S3_BUCKET_NAME --recursive
    - aws s3 cp ./dist/app-name s3://$S3_BUCKET_NAME/ --recursive
  only:
    - master
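Both variants assume the top-level variables block is removed, so the test job no longer inherits the proxy settings. If the global block has to stay for other jobs, a third option (a sketch) is to blank the proxy variables in the test job, since job-level variables take precedence over global ones:

test:
  stage: test
  variables:
    http_proxy: ""
    https_proxy: ""
  # ... rest of the test job unchanged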
I'm practicing with GitLab CI to understand how to build an application and then use it within a Docker image. For now, my repo consists simply of helloworld.txt, a dockerfile, and .gitlab-ci.yml.
PROBLEM: During the build stage, I use a shell executor to run "zip helloworld.zip helloworld.txt". Then I run "docker build -t myproject/myapp ." where I expect "COPY helloworld.zip /" to work, but it seems the zip file I created is not available in the docker build context. Am I not saving helloworld.zip to the right location, or is it something else? My long-term intent is to write a Python application, compile it into a single executable during the build stage, and copy it into a Docker container.
# cat helloworld.txt
hello world

# cat dockerfile
FROM centos:7
COPY helloworld.zip /
CMD ["/bin/bash"]

# cat .gitlab-ci.yml
stages:
  - build
  - test
  - release
  - deploy

variables:
  IMAGE_TEST_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  IMAGE_RELEASE_NAME: $CI_REGISTRY_IMAGE:latest

before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin

build:
  stage: build
  script:
    - echo "compile the program"
    - zip zipfile.zip helloworld.txt
    - docker build --pull -t $IMAGE_TEST_NAME .
    - docker push $IMAGE_TEST_NAME

test:
  stage: test
  script:
    - docker pull $IMAGE_TEST_NAME
    - docker run $IMAGE_TEST_NAME yum install unzip -y && unzip /helloworld.zip && cat /helloworld.txt

release:
  stage: release
  script:
    - docker pull $IMAGE_TEST_NAME
    - docker tag $IMAGE_TEST_NAME $IMAGE_RELEASE_NAME
    - docker push $IMAGE_RELEASE_NAME
  only:
    - master

deploy:
  stage: deploy
  script:
    - ./deploy.sh
  only:
    - master
  when: manual
I expected that within the same stage (in this case build) I could run a program such as zip and then COPY the resulting zip file into a given directory of the newly built Docker image during the docker build process.
EDIT
After learning that I can't do this, I've created two different stages: build_app and build_container. Also, knowing that artifacts from previous stages are used by default, I didn't add an artifacts section to the first stage or a dependencies section to the next one. This is the gitlab-ci.yml below, and it still produces the same error.
stages:
  - build_app
  - build_container
  - test
  - release
  - deploy

# you can delete this line if you're not using Docker
#image: centos:latest

variables:
  IMAGE_TEST_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  IMAGE_RELEASE_NAME: $CI_REGISTRY_IMAGE:latest

before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin

build_app:
  stage: build_app
  script:
    - echo "compile the program"
    - zip zipfile.zip helloworld.txt

build_container:
  stage: build_container
  script:
    - docker build --pull -t $IMAGE_TEST_NAME .
    - docker push $IMAGE_TEST_NAME

test:
  stage: test
  script:
    - docker pull $IMAGE_TEST_NAME
    - docker run $IMAGE_TEST_NAME yum install unzip -y && unzip /helloworld.zip && cat /helloworld.txt

release:
  stage: release
  script:
    - docker pull $IMAGE_TEST_NAME
    - docker tag $IMAGE_TEST_NAME $IMAGE_RELEASE_NAME
    - docker push $IMAGE_RELEASE_NAME
  only:
    - master

deploy:
  stage: deploy
  script:
    - ./deploy.sh
  only:
    - master
  when: manual
Job Status:
Build App: Passed
Build Container: Failed
Running with gitlab-runner 11.6.1 (8d829975)
on gitrunner-shell trtHcQTS
Using Shell executor...
Running on gitrunner.example.com...
Fetching changes...
Removing zipfile.zip
HEAD is now at e0a0a95 Update .gitlab-ci.yml
Checking out e0a0a952 as newFeature...
Skipping Git submodules setup
$ echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin
WARNING! Your password will be stored unencrypted in /home/gitlab-runner/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ docker build --pull -t $IMAGE_TEST_NAME .
Sending build context to Docker daemon 112.1kB
Step 1/3 : FROM centos:7
7: Pulling from library/centos
Digest: sha256:184e5f35598e333bfa7de10d8fb1cebb5ee4df5bc0f970bf2b1e7c7345136426
Status: Image is up to date for centos:7
---> 1e1148e4cc2c
Step 2/3 : COPY helloworld.zip /
COPY failed: stat /var/lib/docker/tmp/docker-builder312764301/helloworld.zip: no such file or directory
ERROR: Job failed: exit status 1
This is not possible: GitLab CI's job model assumes that jobs of the same stage are independent.
See the manual for the dependencies keyword in .gitlab-ci.yml:
This feature [...] allows you to define the artifacts to pass between different jobs.
Note that artifacts from all previous stages are passed by default.
[...] You can only define jobs from stages that are executed before the current one.
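For the two-stage variant from the edit, note that the default passing of artifacts only applies to files the producing job declares under artifacts, and the file names have to line up (the build script zips to zipfile.zip while the Dockerfile copies helloworld.zip). A minimal sketch of the two jobs with a declared artifact and matching names:

build_app:
  stage: build_app
  script:
    - zip helloworld.zip helloworld.txt
  artifacts:
    paths:
      - helloworld.zip

build_container:
  stage: build_container
  # helloworld.zip is restored into the working directory before this job's
  # script runs, so it is part of the docker build context
  script:
    - docker build --pull -t $IMAGE_TEST_NAME .
    - docker push $IMAGE_TEST_NAME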
I am trying to create a correct .gitlab-ci.yml file. This is for the online gitlab.com, not for a self-hosted GitLab; most (if not all) documentation is about self-hosted instances.
What I want is to run my Mocha/Chai tests on the built container, and when the tests pass, to build an image and store it in the GitLab Registry with a tag that matches my latest Git tag.
Test part
I cannot get the tests running; whatever I try, I always get "Mocha not found".
Below is my .yml file. The build section is working.
The problem is in the test section and in the docker tag part of release-image. I got the yml file from the official GitLab documentation.
image: docker:latest
services:
  - docker:dind

stages:
  - build
  - test
  - release
  - deploy

variables:
  CONTAINER_TEST_IMAGE: registry.gitlab.com/edelacruz/cloudtrader-microservices:$CI_COMMIT_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/edelacruz/cloudtrader-microservices:latest

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com/edelacruz/cloudtrader-microservices

build:
  stage: build
  script:
    - docker build --pull -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE npm install && npm test
I also tried
test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE npm test
and
test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE npm install mocha -g
    - docker run $CONTAINER_TEST_IMAGE npm install chai -g
    - docker run $CONTAINER_TEST_IMAGE npm test
all with the same result:
sh: mocha: not found
The test script in package.json is
"test": "mocha ./Test",
I tried putting mocha and chai both in devDependencies and in dependencies:
"devDependencies": {
"chai": "^4.0.2",
"mocha": "^3.4.2"
}
"dependencies": {
"chai": "^4.0.2",
"mocha": "^3.4.2"
},
Tag part
variables:
  CONTAINER_TEST_IMAGE: registry.gitlab.com/edelacruz/cloudtrader-microservices:$CI_COMMIT_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/edelacruz/cloudtrader-microservices:latest

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com/edelacruz/cloudtrader-microservices

release-image:
  stage: release
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE:$CI_COMMIT_TAG
    - docker push $CONTAINER_RELEASE_IMAGE
  only:
    - master
The release-image job works if I leave out the tag part, but I really want to have my image tagged with the Git tag, not with latest or master.
$ docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE:$CI_COMMIT_TAG
Error parsing reference: "registry.gitlab.com/edelacruz/cloudtrader-microservices:" is not a valid repository/tag: invalid reference format
ERROR: Job failed: exit code 1
Use this in the first approach:
test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE sh -c 'npm install -g mocha && npm install && npm test'
I've added a global install of mocha. What you tried in the later approaches didn't work because every docker run starts a new container based on the image, not on the previous container.
In your first try (with the line docker run $CONTAINER_TEST_IMAGE npm install && npm test), the runner's shell splits the command into docker run $CONTAINER_TEST_IMAGE npm install and npm test. As you may notice, the second command isn't run within a docker container at all.
For your second try, docker run $CONTAINER_TEST_IMAGE npm test requires mocha to already be installed in the docker image.
For your third try:
docker run $CONTAINER_TEST_IMAGE npm install mocha -g
docker run $CONTAINER_TEST_IMAGE npm install chai -g
docker run $CONTAINER_TEST_IMAGE npm test
Each of those commands runs in a separate docker container (i.e. nothing makes them run within the same container), so the npm test container never sees the installs.
So, what's the easiest way to resolve this? Your first try is actually pretty close; you just have to make sure the command isn't split in two.
Something like the following should work:
test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE /bin/bash -c "npm install --only=dev && npm test"
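As a usage note: npm install --only=dev installs just the devDependencies, which is where mocha and chai live in the first package.json variant above; if they sit in the regular dependencies instead, a plain npm install in the same shell invocation works as well.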