Below is my pipeline, in which I'm trying to get the Cypress job to run tests against the Nginx service (which points to the main app) built during the build stage.
It is based on the official template from https://gitlab.com/cypress-io/cypress-example-docker-gitlab/-/blob/master/.gitlab-ci.yml:
image: docker:stable

services:
  - docker:dind

stages:
  - build
  - test

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - cache/Cypress
    - node_modules

job:
  stage: build
  script:
    - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1
    - apk add --update --no-cache gcc g++ make python2 python2-dev py-pip python3-dev docker-compose npm
    - docker-compose up -d --build

e2e:
  image: cypress/included:9.1.1
  stage: test
  script:
    - export CYPRESS_VIDEO=false
    - export CYPRESS_baseUrl=http://nginx:80
    - npm i randomstring
    - $(npm bin)/cypress run -t -v $PWD/e2e -w /e2e -e CYPRESS_VIDEO -e CYPRESS_baseUrl --network testdriven_default
    - docker-compose down
Error output:
Cypress encountered an error while parsing the argument config
You passed: if [ -x /usr/local/bin/bash ]; then
exec /usr/local/bin/bash
elif [ -x /usr/bin/bash ]; then
exec /usr/bin/bash
elif [ -x /bin/bash ]; then
exec /bin/bash
elif [ -x /usr/local/bin/sh ]; then
exec /usr/local/bin/sh
elif [ -x /usr/bin/sh ]; then
exec /usr/bin/sh
elif [ -x /bin/sh ]; then
exec /bin/sh
elif [ -x /busybox/sh ]; then
exec /busybox/sh
else
echo shell not found
exit 1
fi
The error was: Cannot read properties of undefined (reading 'split')
What is wrong with this setup?
From @jparkrr on GitHub: https://github.com/cypress-io/cypress-docker-images/issues/300#issuecomment-626324350
I had the same problem. You can specify entrypoint: [""] for the image in .gitlab-ci.yml.
Read more about it here: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#overriding-the-entrypoint-of-an-image
In your case:
e2e:
  image:
    name: cypress/included:9.1.1
    entrypoint: [""]
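For completeness, a minimal sketch of the fixed job (the rest of the pipeline stays as above). With the entrypoint cleared, the script lines run in a regular shell, and the cypress binary bundled in the image is on the PATH; the docker-run style flags (-v, -w, --network) from the original command no longer apply, since the job itself already runs inside the image. Assuming the Cypress project lives in ./e2e:

e2e:
  image:
    name: cypress/included:9.1.1
    entrypoint: [""]
  stage: test
  script:
    - export CYPRESS_VIDEO=false
    - export CYPRESS_baseUrl=http://nginx:80
    - cypress run --project ./e2e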
Related
I can push images to ECR, but I'm not at all sure what I should do next (what the flow should be) to make my images run on Kubernetes on EKS.
jobs:
  create-deployment:
    executor: aws-eks/python3
    parameters:
      cluster-name:
        description: |
          Name of the EKS cluster
        type: string
    steps:
      - checkout
      - aws-eks/update-kubeconfig-with-authenticator:
          cluster-name: << parameters.cluster-name >>
          install-kubectl: true
      - kubernetes/create-or-update-resource:
          get-rollout-status: true
          resource-file-path: tests/nginx-deployment/deployment.yaml
          # resource-file-path: configs/k8s/prod-deployment.yaml
          resource-name: deployment/prod-deployment

orbs:
  aws-ecr: circleci/aws-ecr@6.15.0
  aws-eks: circleci/aws-eks@1.1.0
  kubernetes: circleci/kubernetes@0.4.0

version: 2.1

workflows:
  deployment:
    jobs:
      - aws-ecr/build-and-push-image:
          repo: bwtc-backend
          tag: "${CIRCLE_BRANCH}-v0.1.${CIRCLE_BUILD_NUM}"
          dockerfile: configs/Docker/Dockerfile.prod
          path: .
          filters:
            branches:
              ignore:
                - master
      - aws-eks/create-cluster:
          cluster-name: eks-demo-deployment
          requires:
            - aws-ecr/build-and-push-image
      - create-deployment:
          cluster-name: eks-demo-deployment
          requires:
            - aws-eks/create-cluster
      - aws-eks/update-container-image:
          cluster-name: eks-demo-deployment
          container-image-updates: 'nginx=nginx:1.9.1'
          post-steps:
            - kubernetes/delete-resource:
                resource-names: nginx-deployment
                resource-types: deployment
                wait: true
          record: true
          requires:
            - create-deployment
          resource-name: deployment/nginx-deployment
      - aws-eks/delete-cluster:
          cluster-name: eks-demo-deployment
That's what I've got in my config for now.
The problem I am facing at the moment is:
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
Exited with code exit status 2
CircleCI received exit code 2
I am using a snippet from the CircleCI documentation, so I assume it should work.
As far as I can see I've passed in all the parameters, but I can't figure out what I've missed here.
I need your help, guys!
The last update has a bug: it fails because of an incorrect URL (the gzip/tar errors above are tar choking on the 404 page that the bad URL returns instead of an archive). The issue is still open, see here. The correct URL should be
https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz
unlike the current
"https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz"
While you wait for the fix to be approved, use the following to install eksctl first:
- run:
    name: Install the eksctl tool
    command: |
      if which eksctl > /dev/null; then
        echo "eksctl is already installed"
        exit 0
      fi
      mkdir -p eksctl_download
      curl --silent --location --retry 5 "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
        | tar xz -C eksctl_download
      chmod +x eksctl_download/eksctl
      SUDO=""
      if [ $(id -u) -ne 0 ] && which sudo > /dev/null ; then
        SUDO="sudo"
      fi
      $SUDO mv eksctl_download/eksctl /usr/local/bin/
      rmdir eksctl_download
and then run the job:

- aws-eks/create-cluster:
    cluster-name: eks-demo-deployment
That should solve the issue.
An example:

# Creation of Cluster
create-cluster:
  executor: aws-eks/python3
  parameters:
    cluster-name:
      description: |
        Name of the EKS cluster
      type: string
  steps:
    - run:
        name: Install the eksctl tool
        command: |
          if which eksctl > /dev/null; then
            echo "eksctl is already installed"
            exit 0
          fi
          mkdir -p eksctl_download
          curl --silent --location --retry 5 "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
            | tar xz -C eksctl_download
          chmod +x eksctl_download/eksctl
          SUDO=""
          if [ $(id -u) -ne 0 ] && which sudo > /dev/null ; then
            SUDO="sudo"
          fi
          $SUDO mv eksctl_download/eksctl /usr/local/bin/
          rmdir eksctl_download
    - aws-eks/create-cluster:
        cluster-name: eks-demo-deployment
I'm getting some unexpected behaviour when trying to run a GitLab CI job inside a custom Docker container; it runs as expected outside GitLab. I don't even know what it is I'm not understanding: perhaps there's something about how GitLab runs Docker containers that I'm missing, or about how Docker ENTRYPOINT and CMD work together, or is it just a mistake in my shell script?
My custom Docker image is intended to run Percy CLI commands. It's based on node:12-slim and basically just installs Puppeteer and Percy. It sets the ENTRYPOINT to a custom shell script:
ENTRYPOINT ["/usr/bin/local/percy-entrypoint.sh"]
but doesn't override the base image CMD.
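As a refresher on how the two directives interact (I believe this is relevant here): with an exec-form ENTRYPOINT, whatever CMD resolves to at run time is appended to the entrypoint as arguments. A hypothetical illustration:

# Dockerfile (hypothetical)
FROM alpine
ENTRYPOINT ["/entrypoint.sh"]
CMD ["default-arg"]

# docker run img            runs: /entrypoint.sh default-arg
# docker run img other-arg  runs: /entrypoint.sh other-arg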
My entrypoint script essentially just checks for a couple of env vars (and chucks out some debug stuff), then runs a "sitemap" function:
echo "All args:"
echo "$#"
if [ "$1" = "sitemap" ] ; then
percy exec -- node /sitemap.js
else
exec "$#"
fi
I'm running it from a GitLab-CI job as:
snapshot:
  stage: test
  image: my/percy-buildbox:latest
  script:
    - sitemap
  variables:
    PERCY_TOKEN: ${PERCY_TOKEN}
    SITEMAP_URL: percy-sitemap.xml
The output generated by this job is:
All args:
sh -c if [ -x /usr/local/bin/bash ]; then
exec /usr/local/bin/bash
elif [ -x /usr/bin/bash ]; then
exec /usr/bin/bash
elif [ -x /bin/bash ]; then
exec /bin/bash
elif [ -x /usr/local/bin/sh ]; then
exec /usr/local/bin/sh
elif [ -x /usr/bin/sh ]; then
exec /usr/bin/sh
elif [ -x /bin/sh ]; then
exec /bin/sh
elif [ -x /busybox/sh ]; then
exec /busybox/sh
else
echo shell not found
exit 1
fi
$ sitemap
/bin/bash: line 134: sitemap: command not found
I don't even... where's all that if stuff coming from? What's it doing? Where am I going wrong?
I've got pipelines for dev, staging, and production.
The staging pipeline is where I have the issue. The code builds just fine on dev (both on and off the CI runner), and the staging code builds locally and on the live server, but it fails in the CI runner. I've marked the suspect line with <--.
I've checked whether the database container is running at the time of testing, and it is up and running. The logs show nothing unusual.
Cypress tests fail on the tests where interaction with the database is exercised:
test-ci.sh:
#!/bin/bash

env=$1
fails=""

inspect() {
  if [ $1 -ne 0 ]; then
    fails="${fails} $2"
  fi
}

# run server-side tests
dev() {
  docker-compose up -d --build
  docker-compose exec -T users python manage.py recreate_db
  docker-compose exec -T users python manage.py test
  inspect $? users
  docker-compose exec -T client npm test -- --coverage --watchAll --watchAll=false
  inspect $? client
  docker-compose down
}

# run e2e tests
e2e() {
  if [ "${env}" = "staging" ]; then
    docker-compose -f docker-compose-stage.yml up -d --build
    docker-compose -f docker-compose-stage.yml exec -T users python manage.py recreate_db # <--
    docker run -e REACT_APP_USERS_SERVICE_URL=$REACT_APP_USERS_SERVICE_URL -v $PWD:/e2e -w /e2e -e CYPRESS_VIDEO=$CYPRESS_VIDEO --network flaskondocker_default cypress/included:6.0.0 --config baseUrl=http://nginx
    inspect $? e2e
    docker-compose -f docker-compose-stage.yml down
  else
    docker-compose -f docker-compose-prod.yml up -d --build
    docker-compose -f docker-compose-prod.yml exec -T users python manage.py recreate_db
    docker run -e REACT_APP_USERS_SERVICE_URL=$REACT_APP_USERS_SERVICE_URL -v $PWD:/e2e -w /e2e -e CYPRESS_VIDEO=$CYPRESS_VIDEO --network flaskondocker_default cypress/included:6.0.0 --config baseUrl=http://nginx
    inspect $? e2e
    docker-compose -f docker-compose-prod.yml down
  fi
}

# run specific tests
if [ "${env}" = "staging" ]; then
  echo "****************************************"
  echo "Running e2e tests ..."
  echo "****************************************"
  e2e
elif [ "${env}" = "production" ]; then
  echo "****************************************"
  echo "Running e2e tests ..."
  echo "****************************************"
  e2e
else
  echo "****************************************"
  echo "Running client and server-side tests ..."
  echo "****************************************"
  dev
fi

if [ -n "${fails}" ]; then
  echo "Test failed: ${fails}"
  exit 1
else
  echo "Tests passed!"
  exit 0
fi
The tests behave as though docker-compose -f docker-compose-stage.yml exec -T users python manage.py recreate_db failed or was never executed, but the logs show no errors.
gitlab-ci.yml file:
image: docker:stable

services:
  - docker:19.03.12-dind

variables:
  COMMIT: ${CI_COMMIT_SHORT_SHA}
  MAIN_REPO: https://gitlab.com/coding_hedgehog/flaskondocker.git
  USERS: training-users
  USERS_REPO: ${MAIN_REPO}#${CI_COMMIT_BRANCH}:services/users
  USERS_DB: training-users-db
  USERS_DB_REPO: ${MAIN_REPO}#${CI_COMMIT_BRANCH}:services/users-db
  CLIENT: training-client
  CLIENT_REPO: ${MAIN_REPO}#${CI_COMMIT_BRANCH}:services/client
  SWAGGER: training-swagger
  SWAGGER_REPO: ${MAIN_REPO}#${CI_COMMIT_BRANCH}:services/swagger

stages:
  - build
  - push

before_script:
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1
  - export CYPRESS_VIDEO=false
  - export SECRET_KEY=pythonrocks
  - export AWS_ACCOUNT_ID=nada
  - export AWS_ACCESS_KEY_ID=nada
  - export AWS_SECRET_ACCESS_KEY=nada
  - apk add --no-cache py-pip python2-dev python3-dev libffi-dev openssl-dev gcc libc-dev make npm
  - pip install docker-compose
  - npm install

compile:
  stage: build
  script:
    - docker pull cypress/included:6.0.0
    - sh test-ci.sh $CI_COMMIT_BRANCH

deployment:
  stage: push
  script:
    - sh ./docker-push.sh
  when: on_success
Let me just emphasize that the tests pass locally on my computer as well as on the live server. The database-related e2e tests fail only when run headlessly within CI.
What debugging steps can I take, knowing that no containers are crashing, the logs show no errors, and the same code builds locally and runs fine live, yet fails in CI?
We have had some issues where database checks worked locally but not in headless CI. We found out that it was because of datetime fields: the markup in the CI response was different from the local one, so all assertions that checked dates failed. We fixed this by writing MySQL queries that format the datetime result, and then adjusting the assertions in Cypress accordingly. Maybe your problem has to do with this issue.
SELECT DATE_FORMAT(columnname, "%d-%c-%Y") as columnname FROM table
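A related cause worth ruling out is a timezone difference between the CI containers and your local machine, since that also changes rendered datetimes. A sketch of pinning the zone via docker-compose (the service name and zone here are hypothetical):

services:
  users-db:
    environment:
      TZ: Europe/Amsterdam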
So for further debugging, do you have any simple tests that run correctly in CI? Or does nothing work?
I've successfully set up the Container Scanning feature from GitLab for a single Docker image. Now I'd like to scan another image using the same CI/CD configuration in .gitlab-ci.yml.
Problem
It looks like it is not possible to have multiple Container Scanning reports on the Merge Request detail page.
The following screenshot shows the result of both Container Scanning jobs in the configuration below.
We scan two Docker images, both of which have CVEs to report:
iojs:1.6.3-slim (355 vulnerabilities)
golang:1.3 (1139 vulnerabilities)
Expected result
The Container Scanning report would show a total of 1494 vulnerabilities (355 + 1139). Currently it looks like only the results for the golang image are being included.
Relevant parts of the configuration
container_scanning_first_image:
  script:
    - docker pull golang:1.3
    - ./clair-scanner -c http://docker:6060 --ip $(hostname -i) -r gl-container-scanning-report-first-image.json -l clair.log golang:1.3 || true
  artifacts:
    reports:
      container_scanning: gl-container-scanning-report-first-image.json

container_scanning_second_image:
  script:
    - docker pull iojs:1.6.3-slim
    - ./clair-scanner -c http://docker:6060 --ip $(hostname -i) -r gl-container-scanning-report-second-image.json -l clair.log iojs:1.6.3-slim || true
  artifacts:
    reports:
      container_scanning: gl-container-scanning-report-second-image.json
Full configuration for reference
image: docker:stable

stages:
  - scan

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

container_scanning_first_image:
  stage: scan
  variables:
    GIT_STRATEGY: none
    DOCKER_SERVICE: docker
    DOCKER_HOST: tcp://${DOCKER_SERVICE}:2375/
    CLAIR_LOCAL_SCAN_VERSION: v2.0.8_fe9b059d930314b54c78f75afe265955faf4fdc1
    NO_PROXY: ${DOCKER_SERVICE},localhost
  allow_failure: true
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker run -d --name db arminc/clair-db:latest
    - docker run -p 6060:6060 --link db:postgres -d --name clair --restart on-failure arminc/clair-local-scan:${CLAIR_LOCAL_SCAN_VERSION}
    - apk add -U wget ca-certificates
    - docker pull golang:1.3
    - wget https://github.com/arminc/clair-scanner/releases/download/v8/clair-scanner_linux_amd64
    - mv clair-scanner_linux_amd64 clair-scanner
    - chmod +x clair-scanner
    - touch clair-whitelist.yml
    - retries=0
    - echo "Waiting for clair daemon to start"
    - while( ! wget -T 10 -q -O /dev/null http://${DOCKER_SERVICE}:6060/v1/namespaces ) ; do sleep 1 ; echo -n "." ; if [ $retries -eq 10 ] ; then echo " Timeout, aborting." ; exit 1 ; fi ; retries=$(($retries+1)) ; done
    - ./clair-scanner -c http://${DOCKER_SERVICE}:6060 --ip $(hostname -i) -r gl-container-scanning-report-first-image.json -l clair.log golang:1.3 || true
  artifacts:
    paths:
      - gl-container-scanning-report-first-image.json
    reports:
      container_scanning: gl-container-scanning-report-first-image.json
  dependencies: []
  only:
    refs:
      - branches
    variables:
      - $GITLAB_FEATURES =~ /\bcontainer_scanning\b/
  except:
    variables:
      - $CONTAINER_SCANNING_DISABLED

container_scanning_second_image:
  stage: scan
  variables:
    GIT_STRATEGY: none
    DOCKER_SERVICE: docker
    DOCKER_HOST: tcp://${DOCKER_SERVICE}:2375/
    CLAIR_LOCAL_SCAN_VERSION: v2.0.8_fe9b059d930314b54c78f75afe265955faf4fdc1
    NO_PROXY: ${DOCKER_SERVICE},localhost
  allow_failure: true
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker run -d --name db arminc/clair-db:latest
    - docker run -p 6060:6060 --link db:postgres -d --name clair --restart on-failure arminc/clair-local-scan:${CLAIR_LOCAL_SCAN_VERSION}
    - apk add -U wget ca-certificates
    - docker pull iojs:1.6.3-slim
    - wget https://github.com/arminc/clair-scanner/releases/download/v8/clair-scanner_linux_amd64
    - mv clair-scanner_linux_amd64 clair-scanner
    - chmod +x clair-scanner
    - touch clair-whitelist.yml
    - retries=0
    - echo "Waiting for clair daemon to start"
    - while( ! wget -T 10 -q -O /dev/null http://${DOCKER_SERVICE}:6060/v1/namespaces ) ; do sleep 1 ; echo -n "." ; if [ $retries -eq 10 ] ; then echo " Timeout, aborting." ; exit 1 ; fi ; retries=$(($retries+1)) ; done
    - ./clair-scanner -c http://${DOCKER_SERVICE}:6060 --ip $(hostname -i) -r gl-container-scanning-report-second-image.json -l clair.log iojs:1.6.3-slim || true
  artifacts:
    paths:
      - gl-container-scanning-report-second-image.json
    reports:
      container_scanning: gl-container-scanning-report-second-image.json
  dependencies: []
  only:
    refs:
      - branches
    variables:
      - $GITLAB_FEATURES =~ /\bcontainer_scanning\b/
  except:
    variables:
      - $CONTAINER_SCANNING_DISABLED
Question
How should the GitLab Container Scanning feature be configured in order to report the results of both Docker images?
My Dockerfile:
FROM mm_php:7.1
ADD ./docker/test/source/entrypoint.sh /work/entrypoint.sh
ADD ./docker/wait-for-it.sh /work/wait-for-it.sh
RUN chmod 755 /work/entrypoint.sh \
&& chmod 755 /work/wait-for-it.sh
ENTRYPOINT ["/work/entrypoint.sh"]
entrypoint.sh:
#!/bin/bash -e
/work/wait-for-it.sh db:5432 -- echo "PostgreSQL started"
./vendor/bin/parallel-phpunit --pu-cmd="./vendor/bin/phpunit -c phpunit-docker.xml" tests
docker-compose.yml:
version: '2'
services:
  test:
    build:
      context: .
      args:
        ssh_prv_key: ${ssh_prv_key}
        application_env: ${application_env}
      dockerfile: docker/test/source/Dockerfile
    links:
      - db
  db:
    build:
      context: .
      dockerfile: docker/test/postgres/Dockerfile
    environment:
      PGDATA: /tmp
.gitlab-ci.yml:
image: docker:latest

services:
  - name: docker:dind
    command: ["--insecure-registry=my.domain:5000 --registry-mirror=http://my.domain"]

before_script:
  - apk add --no-cache py-pip
  - pip install docker-compose
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

test:
  stage: test
  script:
    - export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build test
All works well. But if the tests fail, the job status in GitLab is SUCCESS instead of FAILED.
How do I get a FAILED status when the tests fail?
UPD
If I run docker-compose up locally, it returns no error code:
$ export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build test
Building db
Step 1/2 : FROM mm_postgres:9.6
...
test_1 | FAILURES!
test_1 | Tests: 1, Assertions: 1, Failures: 1.
test_1 | Success: 2 Fail: 2 Error: 0 Skip: 2 Incomplete: 0
mmadmin_test_1 exited with code 1
$ echo $?
0
It looks to me like the test run reports a failure without that failure propagating to the return value of the docker-compose call. Have you tried capturing the return value of docker-compose when tests fail locally?
In order to get docker-compose to return the exit code from a specific service, try this:
docker-compose up --exit-code-from=service
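Applied to the job above, that would look something like this (a sketch; test is the service name from your docker-compose.yml):

export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build --exit-code-from test test

Note that --exit-code-from implies --abort-on-container-exit, so the db container is stopped as soon as the test service exits, and docker-compose exits with test's exit code.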
When GitLab CI runs something, if the executed process returns a non-zero exit code, your build fails.
In your case you are running docker-compose, and that program returns zero when the container finishes, which is correct.
What you are actually after is phpunit's failure status.
I think it is better to split your build into steps and not use docker-compose in this case:
gitlab.yml:
stages:
  - build
  - test

build:
  image: docker:latest
  stage: build
  script:
    - docker build -t ${NAME_OF_IMAGE} .
    - docker push ${NAME_OF_IMAGE}

test:
  image: ${NAME_OF_IMAGE}
  stage: test
  script:
    - ./execute_your.sh
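One detail this sketch leaves out: the build job needs to be logged in to a registry for docker push to work, and ${NAME_OF_IMAGE} must include the registry path. With GitLab's built-in registry that is typically:

docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY

(the same login the container-scanning jobs earlier on this page use).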