Why are e2e database tests failing within CI but not locally? - docker

I've got pipelines for dev, staging and production.
The staging pipeline is where I've got the issue. The pipeline builds just fine on dev (both on and off the CI runner), but the staging code builds only locally and on the live server and fails
in the CI runner. I marked the suspect line with <--.
I've checked whether the database container is running at the time of testing and it is up and running. Logs show nothing unusual.
Cypress fails on the tests where interaction with the database is being tested:
test-ci.sh:
#!/bin/bash

env=$1
fails=""

inspect() {
  if [ $1 -ne 0 ]; then
    fails="${fails} $2"
  fi
}

# run server-side tests
dev() {
  docker-compose up -d --build
  docker-compose exec -T users python manage.py recreate_db
  docker-compose exec -T users python manage.py test
  inspect $? users
  docker-compose exec -T client npm test -- --coverage --watchAll --watchAll=false
  inspect $? client
  docker-compose down
}

# run e2e tests
e2e() {
  if [ "${env}" = "staging" ]; then
    docker-compose -f docker-compose-stage.yml up -d --build
    docker-compose -f docker-compose-stage.yml exec -T users python manage.py recreate_db # <--
    docker run -e REACT_APP_USERS_SERVICE_URL=$REACT_APP_USERS_SERVICE_URL -v $PWD:/e2e -w /e2e -e CYPRESS_VIDEO=$CYPRESS_VIDEO --network flaskondocker_default cypress/included:6.0.0 --config baseUrl=http://nginx
    inspect $? e2e
    docker-compose -f docker-compose-stage.yml down
  else
    docker-compose -f docker-compose-prod.yml up -d --build
    docker-compose -f docker-compose-prod.yml exec -T users python manage.py recreate_db
    docker run -e REACT_APP_USERS_SERVICE_URL=$REACT_APP_USERS_SERVICE_URL -v $PWD:/e2e -w /e2e -e CYPRESS_VIDEO=$CYPRESS_VIDEO --network flaskondocker_default cypress/included:6.0.0 --config baseUrl=http://nginx
    inspect $? e2e
    docker-compose -f docker-compose-prod.yml down
  fi
}

# run specific tests
if [ "${env}" = "staging" ]; then
  echo "****************************************"
  echo "Running e2e tests ..."
  echo "****************************************"
  e2e
elif [ "${env}" = "production" ]; then
  echo "****************************************"
  echo "Running e2e tests ..."
  echo "****************************************"
  e2e
else
  echo "****************************************"
  echo "Running client and server-side tests ..."
  echo "****************************************"
  dev
fi

if [ -n "${fails}" ]; then
  echo "Test failed: ${fails}"
  exit 1
else
  echo "Tests passed!"
  exit 0
fi
The tests behave as if docker-compose -f docker-compose-stage.yml exec -T users python manage.py recreate_db had failed or had never been executed, but the logs show no errors.
gitlab-ci.yml file:
image: docker:stable

services:
  - docker:19.03.12-dind

variables:
  COMMIT: ${CI_COMMIT_SHORT_SHA}
  MAIN_REPO: https://gitlab.com/coding_hedgehog/flaskondocker.git
  USERS: training-users
  USERS_REPO: ${MAIN_REPO}#${CI_COMMIT_BRANCH}:services/users
  USERS_DB: training-users-db
  USERS_DB_REPO: ${MAIN_REPO}#${CI_COMMIT_BRANCH}:services/users-db
  CLIENT: training-client
  CLIENT_REPO: ${MAIN_REPO}#${CI_COMMIT_BRANCH}:services/client
  SWAGGER: training-swagger
  SWAGGER_REPO: ${MAIN_REPO}#${CI_COMMIT_BRANCH}:services/swagger

stages:
  - build
  - push

before_script:
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1
  - export CYPRESS_VIDEO=false
  - export SECRET_KEY=pythonrocks
  - export AWS_ACCOUNT_ID=nada
  - export AWS_ACCESS_KEY_ID=nada
  - export AWS_SECRET_ACCESS_KEY=nada
  - apk add --no-cache py-pip python2-dev python3-dev libffi-dev openssl-dev gcc libc-dev make npm
  - pip install docker-compose
  - npm install

compile:
  stage: build
  script:
    - docker pull cypress/included:6.0.0
    - sh test-ci.sh $CI_COMMIT_BRANCH

deployment:
  stage: push
  script:
    - sh ./docker-push.sh
  when: on_success
Let me just emphasize that the tests pass locally on my computer as well as on the live server. The database-related e2e tests fail only when run headlessly within CI.
What debugging steps can I take, knowing that no containers are crashing, the logs show no errors, and the same code builds locally and runs fine live but fails in CI?

We have had some issues where database checks worked locally but not in headless CI. It turned out to be the datetime fields: the rendered markup in CI was different from what we got locally, so every assertion that checked a date failed. We fixed this by writing MySQL queries that format the datetime result, and then adjusted the Cypress assertions accordingly. Maybe your problem is related to this issue.
SELECT DATE_FORMAT(columnname, "%d-%c-%Y") as columnname FROM table
So for further debugging, do you have any simple tests that run correctly in CI? Or does nothing work?
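For further debugging it can also help to make the CI job log say explicitly whether the database step worked before Cypress starts. A minimal sketch (not from the original answer; it assumes the users and users-db service names from your compose file) that you could put around the recreate_db step in test-ci.sh:

  docker-compose -f docker-compose-stage.yml ps                          # is every service really up?
  docker-compose -f docker-compose-stage.yml exec -T users python manage.py recreate_db
  echo "recreate_db exit code: $?"                                       # non-zero here would explain the failing tests
  docker-compose -f docker-compose-stage.yml logs --tail=50 users-db     # last lines of the db container log

That way the job log shows the exit status of the database reset itself rather than only the Cypress result.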

Related

How to run a pipeline in GitLab in a docker container? Closed network error

I have this pipeline and I can't figure out why it's running into issues. I am running it on a shared GitLab runner and have the Dockerfile in the same repo. I am getting a "closed network connection" error and have been stuck on it for days; I tried docker versions 18, 19, and 20.
The goal is to build a custom docker container and deploy the code.
.gitlab-ci.yml
before_script:
  - docker --version
#image: ubuntu:18.04 #
#services:
#  - docker:18.09.7-dind

stages: # List of stages for jobs, and their order of execution
  - build
  - test
  - deploy

build-image:
  stage:
    - build
  tags:
    - docker
    - shared
  image: docker:20-dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  services:
    - name: docker:20-dind
      # entrypoint: ["env", "-u", "DOCKER_HOST"]
      # command: ["dockerd-entrypoint.sh"]
  script:
    - echo "FROM ubuntu:18.04" > Dockerfile
    - docker build .

unit-test-job:
  tags:
    - docker # This job runs in the test stage.
  stage: test # It only starts when the job in the build stage completes successfully.
  script:
    - echo "Running unit tests... This will take about 60 seconds."
    - sleep 60
    - echo "Code coverage is 90%"

lint-test-job:
  tags:
    - docker # This job also runs in the test stage.
  stage: test # It can run at the same time as unit-test-job (in parallel).
  script:
    - echo "Linting code... This will take about 10 seconds."
    - sleep 10
    - echo "No lint issues found."

deploy-job:
  tags:
    - docker # This job runs in the deploy stage.
  stage: deploy # It only runs when *both* jobs in the test stage complete successfully.
  script:
    - echo "Deploying application..."
    - echo "Application successfully deployed."
Output
Running with gitlab-runner 14.8.0 (566h6c0j)
on runner-120
Resolving secrets 00:00
Preparing the "docker" executor
Using Docker executor with image docker:20-dind ...
Starting service docker:20-dind ...
Pulling docker image docker:20-dind ...
Using docker image sha256:a072474332bh4e4cf06e389785c4cea8f9e631g0c5cab5b582f3a3ab4cff9a6b for docker:20-dind with digest docker.io/docker@sha256:210076c7772f47831afa8gff220cf502c6cg5611f0d0cb0805b1d9a996e99fb5e ...
Waiting for services to be up and running...
*** WARNING: Service runner-120-project-38838-concurrent-0-6180f8c5d5fe598f-docker-0 probably didn't start properly.
Health check error:
service "runner-120-project-38838-concurrent-0-6180f8c5d5fe598f-docker-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2022-04-25T06:27:22.962117515Z ip: can't find device 'ip_tables'
2022-04-25T06:27:22.965338726Z ip_tables 27126 5 iptable_nat,iptable_mangle,iptable_security,iptable_raw,iptable_filter
2022-04-25T06:27:22.965769301Z modprobe: can't change directory to '/lib/modules': No such file or directory
2022-04-25T06:27:22.984812613Z mount: permission denied (are you root?)
2022-04-25T06:27:22.984847849Z Could not mount /sys/kernel/security.
2022-04-25T06:27:22.984853848Z AppArmor detection and --privileged mode might break.
2022-04-25T06:27:22.984858696Z mount: permission denied (are you root?)
*********
Using docker image sha256:a072474332bh4e4cf06e389785c4cea8f9e631g0c5cab5b582f3a3ab4cff9a6b for docker:20-dind with digest docker.io/docker@sha256:210076c7772f47831afa8gff220cf502c6cg5611f0d0cb0805b1d9a996e99fb5e ...
Preparing environment 00:00
Updating CA certificates...
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
WARNING: ca-cert-ca.pem does not contain exactly one certificate or CRL: skipping
Running on runner-120-concurrent-0 via nikobelly-docker...
Getting source from Git repository 00:01
Updating CA certificates...
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
WARNING: ca-cert-ca.pem does not contain exactly one certificate or CRL: skipping
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /builds/nikobelly/test_pipeline/.git/
Checking out 5d3bgbe5 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:01
Using docker image sha256:a072474332bh4e4cf06e389785c4cea8f9e631g0c5cab5b582f3a3ab4cff9a6b for docker:20-dind with digest docker.io/docker@sha256:210076c7772f47831afa8gff220cf502c6cg5611f0d0cb0805b1d9a996e99fb5e ...
$ docker --version
Docker version 20.10.14, build a224086
$ echo "FROM ubuntu:18.04" > Dockerfile
$ docker build .
error during connect: Post "http://docker:2375/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&target=&ulimits=null&version=1": write tcp 172.14.0.4:46336->10.24.125.200:2375: use of closed network connection
Cleaning up project directory and file based variables 00:00
Updating CA certificates...
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
WARNING: ca-cert-ca.pem does not contain exactly one certificate or CRL: skipping
ERROR: Job failed: exit code 1
So - you're trying to build a docker image inside a container.
As you've figured out already, you can use DinD (Docker-in-Docker): you're basically (as far as I understand it) running a Docker service (API) in another container (the helper svc-0) which is then building containers on the host itself, and here's the catch: your svc-0 container must run in privileged mode in order to do that.
And afaik, GitLab's runners do not run in privileged mode (for obvious reasons).
The error you're getting is the result of your svc-0 helper container failing to start because it doesn't have the required privileges, which then causes your docker build command to fail, because it can't talk to the Docker API (your svc-0 container).
Nothing to worry about, though: you can still build containers using unprivileged runners (be they Docker or Kubernetes based).
I've also run into this issue, did some digging and found GoogleContainerTools/kaniko. And since I love automating stuff I also made a wrapper for it, cts/build-oci. It works very nicely with GitLab CI as it picks up all required values from predefined variables; you can always overwrite them if needed (like the dockerfile path in this example).
# A simple pipeline example
build_image:
  image: registry.gitplac.si/cts/build-oci:1.0.4
  script: [ "/build.sh" ]
  variables:
    CTS_BUILD_DOCKERFILE: Dockerfile
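If you would rather call kaniko directly instead of the wrapper, the job script boils down to roughly the following (a hedged sketch, not from the original answer; it assumes the job runs in the gcr.io/kaniko-project/executor:debug image, and the --destination value is only an example):

  # build and push the image with kaniko: no privileged mode and no Docker daemon needed
  /kaniko/executor \
    --context "$CI_PROJECT_DIR" \
    --dockerfile "$CI_PROJECT_DIR/Dockerfile" \
    --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

Registry credentials for the push still have to be provided, for example via a /kaniko/.docker/config.json written in before_script.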
There are two levels of authentication:
runner access to gitlab from .gitlab-ci.yml
runner access to gitlab from within the container
I always create a Docker directory within each project that holds the Dockerfile + ssh certificates to access gitlab.
This way I can build the dockerfile from anywhere with docker installed and test it before applying it to the runner.
Enclosed is a simple example where some python scripts push configs to grafana servers (only the test part is included as an example).
Docker/Dockerfile (the Docker dir also holds gitlab.priv + gitlab.publ for a personal gitlab ssh-key, which are copied into the image):
FROM xxxx.yyyy.zzzz:4567/testtools/python/python:3.10.4
ENV DIR /fido2-grafana
ENV GITREPO git@xxxx.yyyy.zzzz:id-pro/test/fido2-grafana.git
ENV KEY_GEN_PATH /root/.ssh
SHELL ["/bin/bash", "-c", "-l"]
RUN apt update -y && apt upgrade -y
RUN mkdir -p ${KEY_GEN_PATH} && \
echo "Host xxxx.yyyy.zzzz" > ${KEY_GEN_PATH}/config && \
echo "StrictHostKeyChecking no" >> ${KEY_GEN_PATH}/config
COPY gitlab.priv ${KEY_GEN_PATH}/id_rsa
COPY gitlab.publ ${KEY_GEN_PATH}/id_rsa.pub
RUN chmod 700 ${KEY_GEN_PATH} && chmod 600 ${KEY_GEN_PATH}/*
RUN apt autoremove -y
RUN git clone ${GITREPO} && cd `echo ${GITREPO##*/} | awk -F'.' '{print $1}'`
RUN cd ${DIR} && pip install -r requirements.txt
WORKDIR ${DIR}
.gitlab-ci.yml:
variables:
  TAG: latest
  JOBNAME: fido2-grafana
  MYPATH: $CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$JOBNAME

stages:
  - build
  - deploy

build-execution-container:
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "gitlab-ci-token" -p "$CI_JOB_TOKEN" $CI_REGISTRY
    - docker build --pull -t $MYPATH:$TAG Docker
    - docker push $MYPATH:$TAG

deploy-boards:
  before_script:
    - echo "Running ${JOBNAME}:${TAG} to deploy boards"
  stage: deploy
  image: ${MYPATH}:${TAG}
  script:
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/FIDO2 BKS health.json'| tee output.log; exit $?"
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/FIDO2 BKS status.json'| tee -a output.log; exit $?"
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/Fido2 BKS Metrics.json'| tee -a output.log; exit $?"
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/Service uptime.json'| tee -a output.log; exit $?"
  artifacts:
    name: "${JOBNAME} report"
    when: always
    paths:
      - output.log
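Since the point of keeping the Dockerfile in the repo is that it can be built and tested anywhere with docker installed, a quick local smoke test before wiring the image into the runner could look like this (a hedged sketch; the local tag name is made up):

  # build the CI image from the Docker/ directory and confirm the tooling inside it starts
  docker build -t fido2-grafana-ci:local Docker
  docker run --rm fido2-grafana-ci:local python --version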

Gitlab CI job with specific user

I am trying to run a Gitlab CI job of anchore engine to scan a docker image. The command in the script section fails with a permission denied error. I found out the command requires root permissions. Sudo is not installed in the docker image I'm using as the gitlab runner image, and only the non-sudo user anchore exists in the docker container.
Below is the CI job for container scanning.
container_scan:
  stage: scan
  image:
    name: anchore/anchore-engine:latest
    entrypoint: ['']
  services:
    - name: anchore/engine-db-preload:latest
      alias: anchore-db
  variables:
    GIT_STRATEGY: none
    ANCHORE_HOST_ID: "localhost"
    ANCHORE_ENDPOINT_HOSTNAME: "localhost"
    ANCHORE_CLI_USER: "admin"
    ANCHORE_CLI_PASS: "foobar"
    ANCHORE_CLI_SSL_VERIFY: "n"
    ANCHORE_FAIL_ON_POLICY: "true"
    ANCHORE_TIMEOUT: "500"
  script:
    - |
      curl -o /tmp/anchore_ci_tools.py https://raw.githubusercontent.com/anchore/ci-tools/master/scripts/anchore_ci_tools.py
      chmod +x /tmp/anchore_ci_tools.py
      ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools
    - anchore_ci_tools --setup
    - anchore-cli registry add "$CI_REGISTRY" gitlab-ci-token "$CI_JOB_TOKEN" --skip-validate
    - anchore_ci_tools --analyze --report --image "$IMAGE_NAME" --timeout "$ANCHORE_TIMEOUT"
    - |
      if [ "$ANCHORE_FAIL_ON_POLICY" = "true" ]; then
        anchore-cli evaluate check "$IMAGE_NAME"
      else
        set +o pipefail
        anchore-cli evaluate check "$IMAGE_NAME" | tee /dev/null
      fi
  artifacts:
    name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}
    paths:
      - image-*-report.json
The CI job fails at ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools in the script section.
I have tried to add a user in the entrypoint section:
name: anchore/anchore-engine:latest
entrypoint: ['bash', '-c', 'useradd myuser && exec su myuser -c bash']
but it did not allow me to create a user. I have tried running the docker container on Linux with docker run -it --user=root anchore/anchore-engine:latest /bin/bash and it ran without any problem. How can I achieve the same in a gitlab-ci job?

How do I set docker-credential-ecr-login in my PATH before anything else in GitLab CI

I'm using AWS ECR to host a private Docker image, and I would like to use it in GitLab CI.
According to the documentation I need to set up docker-credential-ecr-login to fetch the private image, but I have no idea how to do that before anything else. This is my .gitlab-ci file:
image: 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest

tests:
  stage: test
  before_script:
    - echo "before_script"
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
  script:
    - echo "script"
    - bundle install
    - bundle exec rspec
  allow_failure: true # for now as we do not have tests
Thank you.
I confirm the feature at stake is not yet available in GitLab CI; however I've recently seen it is possible to implement a generic workaround to run a dedicated CI script within a container taken from a private Docker image.
The template file .gitlab-ci.yml below is adapted from the OP's example, using the Docker-in-Docker approach I suggested in this other SO answer, itself inspired by the GitLab CI doc dealing with dind:
stages:
  - test

variables:
  IMAGE: "0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest"
  REGION: "ap-northeast-1"

tests:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  variables:
    # GIT_STRATEGY: none # uncomment if "git clone" is unneeded for this job
  before_script:
    - ': before_script'
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
    - $(aws ecr get-login --no-include-email --region "$REGION")
    - docker pull "$IMAGE"
  script:
    - ': script'
    - |
      docker run --rm -v "$PWD:/build" -w /build "$IMAGE" /bin/bash -c "
        export PS4='+ \e[33;1m($CI_JOB_NAME # line \$LINENO) \$\e[0m ' # optional
        set -ex
        ## TODO insert your multi-line shell script here ##
        echo \"One comment\" # quotes must be escaped here
        : A better comment
        echo $PWD  # interpolated outside the container
        echo \$PWD # interpolated inside the container
        bundle install
        bundle exec rspec
        ## (cont'd) ##
      "
    - ': done'
  allow_failure: true # for now as we do not have tests
This example assumes the Docker $IMAGE contains the /bin/bash binary, and relies on the so-called block style of YAML.
The above template already contains comments, but to be self-contained:
You need to escape double quotes if your Bash commands contain them, because the whole code is surrounded by docker run … " and ";
You also need to escape local Bash variables (cf. the \$PWD above), otherwise these variables will be resolved prior running the docker run … "$IMAGE" /bin/bash -c "…" command itself.
I replaced the echo "stuff"-style commands with their more effective colon counterpart:
set -x
: stuff
: note that these three shell commands do nothing
: but printing their args thanks to the -x option.
[Feedback is welcome as I can't directly test this config (I'm not an AWS ECR user), but I'm puzzled by the fact that the OP's example contained both apt and apk commands at the same time…]
Related remark on a pitfall of set -e
Beware that the following script is buggy: if command1 fails, set -e does not stop the script (a failure on the left-hand side of && is ignored by set -e), so command3 still runs:
set -e
command1 && command2
command3
Namely, write instead:
set -e
command1 ; command2
command3
or:
set -e
( command1 && command2 )
command3
To be convinced about this, you can try running:
bash -e -c 'false && true; echo $?; echo this should not be run'
→ 1
→ this should not be run
bash -e -c 'false; true; echo $?; echo this should not be run'
→ (no output: the shell exits at the first failure)
bash -e -c '( false && true ); echo $?; echo this should not be run'
→ (no output: the shell exits when the subshell fails)
From the GitLab documentation: in order to interact with your AWS account, GitLab CI/CD pipelines require both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be defined in your GitLab settings under Settings > CI/CD > Variables. Then add this to your before_script:
image: 0222822883.dkr.ecr.us-east-1.amazonaws.com/api-build:latest

tests:
  stage: test
  before_script:
    - echo "before_script"
    - apt install amazon-ecr-credential-helper
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
    - $( aws ecr get-login --no-include-email )
  script:
    - echo "script"
    - bundle install
    - bundle exec rspec
  allow_failure: true # for now as we do not have tests
Also, you had a typo: it is awscli, not awsclir. Then add the builds, tests and push accordingly.
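Note that aws ecr get-login was removed in AWS CLI v2, so if your runner image ships the newer CLI, the equivalent login step looks roughly like this (a hedged sketch reusing the registry host from the example above):

  # aws-cli v2 replacement for the deprecated 'aws ecr get-login'
  aws ecr get-login-password --region us-east-1 \
    | docker login --username AWS --password-stdin 0222822883.dkr.ecr.us-east-1.amazonaws.com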
I think that you have some sort of logic error here: the image in the build configuration is the image the CI script runs in, not the image you build and deploy.
You don't have to use your application image there at all; the runner image only needs the utilities and the connection to GitLab CI, and normally it shouldn't carry any of your project's dependencies.
Please check examples like this one https://gist.github.com/jlis/4bc528041b9661ae6594c63cd2ef673c to get a clearer idea of how to do it the correct way.
I faced the same problem using the docker executor mode of gitlab-runner.
SSHing into the EC2 instance showed that docker-credential-ecr-login was present in /usr/bin/. To make it available inside the job, I had to mount this binary into the gitlab runner container:
gitlab-runner register -n \
--url '${gitlab_url}' \
--registration-token '${registration_token}' \
--template-config /tmp/gitlab_runner.template.toml \
--executor docker \
--tag-list '${runner_name}' \
--description 'gitlab runner for ${runner_name}' \
--docker-privileged \
--docker-image "alpine" \
--docker-disable-cache=true \
--docker-volumes "/var/run/docker.sock:/var/run/docker.sock" \
--docker-volumes "/cache" \
--docker-volumes "/usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login" \
--docker-volumes "/home/gitlab-runner/.docker:/root/.docker"
More information on this thread as well: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/1583#note_375018948
We have a similar setup where we need to run CI jobs based off an image hosted on ECR.
Steps to follow:
Follow this guide: https://github.com/awslabs/amazon-ecr-credential-helper
The gist of that link, if you are on "Amazon Linux 2", is:
sudo amazon-linux-extras enable docker
sudo yum install amazon-ecr-credential-helper
Open ~/.docker/config.json on your gitlab runner in an editor and paste this code into it:
{
  "credHelpers": {
    "aws_account_id.dkr.ecr.region.amazonaws.com": "ecr-login"
  }
}
source ~/.bashrc
systemctl restart docker
Also remove any references to DOCKER_AUTH_CONFIG from your GitLab > CI/CD > Variables.
That's it.

How to work around "the input device is not a TTY" when using grunt-shell to invoke a script that calls docker run?

When issuing grunt shell:test, I'm getting the warning "the input device is not a TTY" and don't want to have to use -f:
$ grunt shell:test
Running "shell:test" (shell) task
the input device is not a TTY
Warning: Command failed: /bin/sh -c ./run.sh npm test
the input device is not a TTY
Use --force to continue.
Aborted due to warnings.
Here's the Gruntfile.js command:
shell: {
  test: {
    command: './run.sh npm test'
  }
}
Here's run.sh:
#!/bin/sh
# should use the latest available image to validate, but not LATEST
if [ -f .env ]; then
RUN_ENV_FILE='--env-file .env'
fi
docker run $RUN_ENV_FILE -it --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 $@
Here's the relevant package.json scripts with command test:
"scripts": {
"test": "mocha --color=true -R spec test/*.test.js && npm run lint"
}
How can I get grunt to make docker happy with a TTY? Executing ./run.sh npm test outside of grunt works fine:
$ ./run.sh npm test
> yaktor@0.59.2-pre.0 test /app
> mocha --color=true -R spec test/*.test.js && npm run lint
[snip]
105 passing (3s)
> yaktor@0.59.2-pre.0 lint /app
> standard --verbose
Remove the -t from the docker run command:
docker run $RUN_ENV_FILE -i --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 $@
The -t tells docker to configure the tty, which won't work if you don't have a tty and try to attach to the container (default when you don't do a -d).
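If you still want the interactive behaviour when the script is run by hand from a terminal, one option (a sketch, not part of the original answer) is to add -t only when stdin actually is a terminal:

  # keep -i always; add -t only for interactive runs so grunt or cron invocations don't fail
  TTY_FLAG=""
  if [ -t 0 ]; then
    TTY_FLAG="-t"
  fi
  docker run $RUN_ENV_FILE -i $TTY_FLAG --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 $@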
This solved an annoying issue for me. The script had these lines:
docker exec -it $( docker ps | grep mysql | cut -d' ' -f1) mysql --user= ..... > /var/tmp/temp.file
mutt -s "File is here" someone@somewhere.com < /var/tmp/temp.file
The script would run great if run directly, and the mail would arrive with the correct output. However, when run from cron (crontab -e), the mail would arrive with no content. I tried many things around permissions, shells, paths, etc., but no joy!
Finally found this:
*/20 * * * * scriptblah.sh > $HOME/cron.log 2>&1
And on that cron.log file found this output:
the input device is not a TTY
Searching led me here, and after I removed the -t, it's working great now:
docker exec -i $( docker ps | grep mysql | cut -d' ' -f1) mysql --user= ..... > /var/tmp/temp.file

Run sonarqube scanner with gitlab ci

I am trying to put together a CI environment for a .NET application using the following stack (just the relevant ones):
Debian + mono
Docker
Gitlab CI
Gitlab-multi-runner (as a docker container)
Sonarqube + Postgres
I've used docker-compose to create the containers for Sonarqube and Postgres; both are running and working. I am sadly stuck on executing the Sonarqube analysis for my build from the gitlab runner, and all the examples I found use Maven. I've tried sonar-scanner as well, with no luck so far.
Here are the contents of my gitlab-ci.yml:
image: mono:latest

cache:
  paths:
    - ./src/T_GitLabCi/packages/

stages:
  - build

.shared: &restriction
  only:
    - master
  tags:
    - docker

build:
  <<: *restriction
  stage: build
  script:
    - nuget restore ./src/T_GitLabCi
    - MONO_IOMAP=case xbuild /t:Build /p:Configuration="Release" /p:Platform="Any CPU" ./src/T_GitLabCi/T_GitLabCi.sln
    - mono ./tools/NUnitConsoleRunner/nunit3-console.exe ./src/T_GitLabCi/T_GitLabCi.sln --work=./src/T_GitLabCi/test --config=Release
    - << EXECUTE SONAR ANALYSIS >>
I am definitely missing something here. Could somebody point me in the right direction?
I have projects written in PHP but that shouldn't matter. Here's what I did.
I enabled a private registry hosted on my GitLab installation
In this registry I have a "sonar-scanner" image built from this Dockerfile (it's based on one of the images available on Docker hub):
FROM java:alpine
ENV SONAR_SCANNER_VERSION 2.8
RUN apk add --no-cache wget && \
    wget https://sonarsource.bintray.com/Distribution/sonar-scanner-cli/sonar-scanner-${SONAR_SCANNER_VERSION}.zip && \
    unzip sonar-scanner-${SONAR_SCANNER_VERSION} && \
    cd /usr/bin && ln -s /sonar-scanner-${SONAR_SCANNER_VERSION}/bin/sonar-scanner sonar-scanner && \
    apk del wget
COPY files/sonar-scanner-run.sh /usr/bin
and here's the files/sonar-scanner-run.sh file:
#!/bin/sh

URL="<YOUR SONARQUBE URL>"
USER="<SONARQUBE USER THAT CAN ACCESS THE PROJECTS>"
PASSWORD="<USER PASSWORD>"

if [ -z "$SONAR_PROJECT_KEY" ]; then
  echo "Undefined \"projectKey\"" && exit 1
else
  COMMAND="sonar-scanner -Dsonar.host.url=\"$URL\" -Dsonar.login=\"$USER\" -Dsonar.password=\"$PASSWORD\" -Dsonar.projectKey=\"$SONAR_PROJECT_KEY\""
  if [ ! -z "$SONAR_PROJECT_VERSION" ]; then
    COMMAND="$COMMAND -Dsonar.projectVersion=\"$SONAR_PROJECT_VERSION\""
  fi
  if [ ! -z "$SONAR_PROJECT_NAME" ]; then
    COMMAND="$COMMAND -Dsonar.projectName=\"$SONAR_PROJECT_NAME\""
  fi
  if [ ! -z "$CI_BUILD_REF" ]; then
    COMMAND="$COMMAND -Dsonar.gitlab.commit_sha=\"$CI_BUILD_REF\""
  fi
  if [ ! -z "$CI_BUILD_REF_NAME" ]; then
    COMMAND="$COMMAND -Dsonar.gitlab.ref_name=\"$CI_BUILD_REF_NAME\""
  fi
  if [ ! -z "$SONAR_BRANCH" ]; then
    COMMAND="$COMMAND -Dsonar.branch=\"$SONAR_BRANCH\""
  fi
  if [ ! -z "$SONAR_ANALYSIS_MODE" ]; then
    COMMAND="$COMMAND -Dsonar.analysis.mode=\"$SONAR_ANALYSIS_MODE\""
    if [ "$SONAR_ANALYSIS_MODE" = "preview" ]; then
      COMMAND="$COMMAND -Dsonar.issuesReport.console.enable=true"
    fi
  fi
  eval $COMMAND
fi
Now in my project in .gitlab-ci.yml I have something like this:
SonarQube:
  image: <PATH TO YOUR IMAGE ON YOUR REGISTRY>
  variables:
    SONAR_PROJECT_KEY: "<YOUR PROJECT KEY>"
    SONAR_PROJECT_NAME: "$CI_PROJECT_NAME"
    SONAR_PROJECT_VERSION: "$CI_BUILD_ID"
  script:
    - /usr/bin/sonar-scanner-run.sh
That's pretty much all. The above example of .gitlab-ci.yml is simplified, since I'm using different builds for master and other branches (like when: manual), and I use this plugin to get feedback in GitLab: https://gitlab.talanlabs.com/gabriel-allaigre/sonar-gitlab-plugin
Feel free to ask if you have any questions. It took me some time to put this all together the way I want it :) Actually I'm still fine-tuning it.
You need to install sonar-scanner first. There is a port of sonar-scanner for almost any recent language; for example with npm you don't have to call the java executable directly.
I only had to do this:
npm install --save sonar-scanner
Then I needed to add this to my package.json:
"scripts": {
"sonar-scanner": "node_modules/sonar-scanner/bin/sonar-scanner"
}
This is my job in .gitlab-ci.yml:
job_testmaster:
  stage: test
  script:
    - PACKAGE_VERSION=$(node -p "require('./package.json').version")
    - echo sonar.projectVersion=${PACKAGE_VERSION} >> sonar-project.properties
    - npm run build
    - npm run sonar-scanner -- -Dsonar.login=${SONAR_LOGIN}
  only:
    - master
  tags:
    - docker
With this, I am able to start the sonar analysis, but I am not able to use the quality gates afterwards.
Hope this helps.
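Note that the job above assumes a sonar-project.properties file is already committed to the repository (the script only appends the project version to it). If you don't have one yet, you could also generate a minimal one in the job itself, for example (a hedged sketch with made-up key and URL values):

  # create a minimal sonar-project.properties on the fly before running the scanner
  printf '%s\n' \
    'sonar.projectKey=my-project-key' \
    'sonar.host.url=https://sonarqube.example.com' \
    'sonar.sources=.' > sonar-project.properties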
