Coverity scan on travis matrix build - travis-ci

I am adding Coverity Scan to my project, but I'm running into its quota limits because my Travis build uses a matrix.
I managed to run a custom Coverity script (via the build_script_url option) to restrict the analysis to a single job:
#!/bin/bash
set -e
if [[ $TRAVIS_OS_NAME != osx || $JOB != BUILD_RELEASE_JOKER ]]; then
  echo "Skip build configuration: $TRAVIS_OS_NAME / $JOB"
  exit 1
fi
curl -s https://scan.coverity.com/scripts/travisci_build_coverity_scan.sh | bash
The problem is that if the analysis quota is exceeded, it stops the build job with the following message:
Coverity Scan analysis selected for branch coverity.
Coverity Scan analysis NOT authorized until Fri, 08 Jan 2016 18:00:52 +0000 UTC.
The second bad side effect is that the build job shows up green even though it hasn't actually been run!

What I've been doing for all my recent projects is to optionally (depending on the quota) run the Coverity scan in my before_script:
before_script:
- autoreconf -fiv
- ./configure --disable-silent-rules
# implement Coverity Scan with before_script instead of addons.coverity_scan
# to work around too-early quota check by the coverity_scan addon
- if [[ -n $COVERITY_SCAN_PROJECT_NAME ]] ; then curl -s 'https://scan.coverity.com/scripts/travisci_build_coverity_scan.sh' | bash || true ; fi
script:
- make
This checks whether I can still submit a new build to Coverity and, if so, runs the COVERITY_SCAN_BUILD_COMMAND (which is just make). If this succeeds, make runs again in the script stage, but thanks to how make works that second run does very little.
If it fails (because the build fails), make also runs again in the script stage, where it will quickly fail again, failing the entire build.
If, however, my Coverity quota is exhausted, the travisci_build_coverity_scan.sh script fails, but that failure is turned into a pseudo-success via || true.
Then, in the script stage, the project is built using make, and the final result depends on the outcome of that build.
So in short (a sketch of the corresponding .travis.yml follows below):
- if the quota allows it, the build is submitted to Coverity
- if the build succeeds (with or without Coverity), the Travis CI state becomes green ("build passing")
- if the build fails (with or without Coverity), the Travis CI state becomes red ("build failed")
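For reference, here is a minimal sketch of how this can be wired together in .travis.yml. The project name and e-mail are placeholders, and the COVERITY_SCAN_* names are the variables the travisci_build_coverity_scan.sh script is commonly configured with (double-check against the script itself); the token should be supplied as an encrypted variable:
env:
  global:
    - COVERITY_SCAN_PROJECT_NAME="your/project"            # placeholder
    - COVERITY_SCAN_NOTIFICATION_EMAIL="you@example.com"   # placeholder
    - COVERITY_SCAN_BRANCH_PATTERN="coverity"
    - COVERITY_SCAN_BUILD_COMMAND="make"
    # COVERITY_SCAN_TOKEN: provide as an encrypted/secure environment variable
before_script:
  - autoreconf -fiv
  - ./configure --disable-silent-rules
  - if [[ -n $COVERITY_SCAN_PROJECT_NAME ]] ; then curl -s 'https://scan.coverity.com/scripts/travisci_build_coverity_scan.sh' | bash || true ; fi
script:
  - make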

Related

Can Jenkins return 0 and 1 after test suites completed?

My query relates to a Jenkins server.
I have made an API call to hit the Jenkins server, which starts the test suites.
My question is: can the Jenkins server return 0 if any test case fails, and 1 otherwise?
The API URL is in the form
JENKINS_URL/job/Encore_Automation/build?token=TOKEN_NAME
Looking at Build Triggers / Trigger builds remotely (e.g., from scripts), it seems this option only supports queuing a project and does not let you retrieve results.
Jenkins REST API
After the build has been triggered via a REST API call, you can make consecutive REST API calls to check its status.
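As a rough sketch of that polling (assuming curl and python are available; the job name and token come from the question, and polling lastBuild is a simplification, since a queued build may not have started yet):
#!/bin/bash
JOB_URL="$JENKINS_URL/job/Encore_Automation"

# Trigger the build, as in the question
curl -s -X POST "$JOB_URL/build?token=TOKEN_NAME"

# Poll the last build until it is no longer running
while true; do
  STATUS=$(curl -s "$JOB_URL/lastBuild/api/json" \
    | python -c 'import json,sys; d=json.load(sys.stdin); print("BUILDING" if d["building"] else d["result"])')
  [ "$STATUS" != "BUILDING" ] && break
  sleep 10
done

# Conventional mapping: exit 0 on SUCCESS, non-zero otherwise (invert if you really want 0 on failure)
[ "$STATUS" = "SUCCESS" ] && exit 0 || exit 1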
Jenkins CLI
However, Jenkins also offers the jenkins-cli tool, which lets you not only trigger the build but also wait for its completion:
java -jar jenkins-cli.jar -s http://localhost:8080/ build JOB [-c] [-f] [-p] [-r N] [-s] [-v] [-w]
Starts a build, and optionally waits for a completion.
Aside from general scripting use, this command can be
used to invoke another job from within a build of one job.
With the -s option, this command changes the exit code based on
the outcome of the build (exit code 0 indicates a success)
and interrupting the command will interrupt the job.
With the -f option, this command changes the exit code based on
the outcome of the build (exit code 0 indicates a success)
however, unlike -s, interrupting the command will not interrupt
the job (exit code 125 indicates the command was interrupted).
With the -c option, a build will only run if there has been
an SCM change.
JOB : Name of the job to build
-c : Check for SCM changes before starting the build, and if there's no
change, exit without doing a build
-f : Follow the build progress. Like -s only interrupts are not passed
through to the build.
-p : Specify the build parameters in the key=value format.
-s : Wait until the completion/abortion of the command. Interrupts are passed
through to the build.
-v : Prints out the console output of the build. Use with -s
-w : Wait until the start of the command
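So, for the question above, a minimal sketch (assuming jenkins-cli.jar is available and the server accepts the call; authentication flags are omitted here) would be:
java -jar jenkins-cli.jar -s http://localhost:8080/ build Encore_Automation -s -v
echo "Exit code: $?"   # 0 when the triggered build (and therefore its test suites) succeeded, non-zero otherwise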

What are the differences between the {before_,}{install,script} .travis.yml options?

Inside the .travis.yml configuration file what is the practical difference between before_install, install, before_script and script options?
I have found no documentation explaining the differences between these options.
You don't need to use these sections, but if you do, you communicate the intent of what you're doing:
before_install:
# execute all of the commands which need to be executed
# before installing dependencies
- composer self-update
- composer validate
install:
# install all of the dependencies you need here
- composer install --prefer-dist
before_script:
# execute all of the commands which need to be executed
# before running actual tests
- mysql -u root -e 'CREATE DATABASE test'
- bin/doctrine-migrations migrations:migrate
script:
# execute all of the commands which
# should make the build pass or fail
- vendor/bin/phpunit
- vendor/bin/php-cs-fixer fix --verbose --diff --dry-run
See, for example, https://github.com/localheinz/composer-normalize/blob/0.8.0/.travis.yml.
The difference is in the state of the job when something goes wrong.
Git 2.17 (Q2 2018) illustrates that in commit 3c93b82 (08 Jan 2018) by SZEDER Gábor (szeder).
(Merged by Junio C Hamano -- gitster -- in commit c710d18, 08 Mar 2018)
That commit illustrates the practical difference between the before_install, install, before_script and script phases:
travis-ci: build Git during the 'script' phase
Ever since we started building and testing Git on Travis CI (522354d: Add Travis CI support, 2015-11-27, Git v2.7.0-rc0), we build Git in the
'before_script' phase and run the test suite in the 'script' phase
(except in the later introduced 32 bit Linux and Windows build jobs,
where we build in the 'script' phase).
Contrarily, the Travis CI practice is to build and test in the
'script' phase; indeed Travis CI's default build command for the
'script' phase of C/C++ projects is:
./configure && make && make test
The reason why Travis CI does it this way and why it's a better
approach than ours lies in how unsuccessful build jobs are
categorized. After something went wrong in a build job, its state can
be:
- 'failed', if a command in the 'script' phase returned an error.
  This is indicated by a red 'X' on the Travis CI web interface.
- 'errored', if a command in the 'before_install', 'install', or
  'before_script' phase returned an error, or the build job exceeded
  the time limit.
  This is shown as a red '!' on the web interface.
This makes it easier, both for humans looking at the Travis CI web
interface and for automated tools querying the Travis CI API, to
decide when an unsuccessful build is our responsibility requiring
human attention, i.e. when a build job 'failed' because of a compiler
error or a test failure, and when it's caused by something beyond our
control and might be fixed by restarting the build job, e.g. when a
build job 'errored' because a dependency couldn't be installed due to
a temporary network error or because the OSX build job exceeded its
time limit.
The drawback of building Git in the 'before_script' phase is that one
has to check the trace log of all 'errored' build jobs, too, to see
what caused the error, as it might have been caused by a compiler
error.
This requires additional clicks and page loads on the web interface and additional complexity and API requests in automated tools.
Therefore, move building Git from the 'before_script' phase to the
'script' phase, updating the script's name accordingly as well.
'ci/run-builds.sh' now becomes basically empty, remove it.
Several of our build job configurations override our default 'before_script' to do nothing; with this change our default 'before_script' won't do
anything, either, so remove those overriding directives as well.
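To make the practical effect concrete, here is a simplified sketch (not Git's actual .travis.yml) of the two layouts:
# Before: a compile error in 'make' surfaces as 'errored' (red '!')
before_script:
  - make
script:
  - make test

# After: a compile error surfaces as 'failed' (red 'X'), just like a test failure
script:
  - make
  - make test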

Is there a way to control when puppet sends its report to Jenkins?

As a newbie to Puppet, I'm trying to track deployment of my files through the Puppet plugin in Jenkins. Since Puppet tracks file resources by default, I'm able to do that.
My question is whether there is a way to tell Puppet when to send the report to Jenkins.
In my scenario, I'm getting the file from the Jenkins archive, then stopping a service, unzipping the file, copying the content to the install location and restarting the service.
My requirement is that Puppet waits until all those resource tasks have run and, if and only if they all succeed, sends the report to Jenkins; then I'll know the deployment is 100% complete.
I'd also like to know whether there is a way to notify Jenkins about deployment failures.
If I understand it correctly, you can do the following:
When the Jenkins job starts, run on the Puppet agent:
puppet agent --disable
then:
puppet agent -t
EXIT_CODE=$?
if [[ ${EXIT_CODE} == 0 ]] || [[ ${EXIT_CODE} == 2 ]]; then
  # 0 or 2 = successful run (unchanged or changed), per the explanation below
  exit 0
else
  exit ${EXIT_CODE}
fi
That way, if the Puppet exit code is 0 or 2 - meaning the run either applied changes successfully or finished unchanged - the step succeeds.
Otherwise, the shell step exits with the failing code and the job fails.
You can also print messages and use the Log Parser Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Log+Parser+Plugin) to mark the build green / yellow / red as you like.

Disable Jenkins job for set amount of time

I have a Jenkins job that deploys an artifact to a server. I want to give the job runner the ability to disable the job for a set amount of time - could be one hour, could be one month. I also want them to have to enter a reason. The reason (along with various other info - build name, runner name etc.) then needs to be emailed to a distribution list.
Is there a way to disable a job for a set amount of time from within itself, on successful completion?
I'm guessing I need the parameterised build plugin, which I'm already successfully using for a couple of jobs.
EDIT:
I'm thinking I could do this by checking for a lock file in a pre-step, and writing a lock file either containing or named for the time at which the build becomes unlocked. I thought there might be a plugin or something I could use instead though.
Ok, so I went with my method in the end. Using a parameterised build (using the Parameterised Build Plugin), I ask the builder to enter the date they wish the job to be locked until. This is then written to a .lock file as a post-build step.
Subsequently, as a pre-build step, I look for a .lock file and read the contents. I then compare the date in the file to the current date, and either fail the build if the job is locked, or continue.
The code is below.
Pre-Step:
#!/bin/bash
echo ''
EXIT_CODE=0
LOCK_FILE_NAME="$DEPLOYMENT_ENV.lock"

# Check the chosen branch isn't master
if [ "$BRANCH_NAME" = 'master' ]
then
  echo 'This job is for building feature branches. To build master, use the Deploy Master Artifact To Staging job.'
  EXIT_CODE=666
else
  # Check if a lock file exists for the selected environment
  if [ -f "$LOCK_FILE_NAME" ]
  then
    echo 'Lock file found for' "$DEPLOYMENT_ENV"
    FILE_CONTENTS=`head -n 1 "$LOCK_FILE_NAME"`
    LOCK_DATE=`date -d "$FILE_CONTENTS" +'%Y%m%d'`
    NOW=`date +'%Y%m%d'`
    # Compare the lock date to today. Has the lock expired?
    if [ "$NOW" -gt "$LOCK_DATE" ]
    then
      echo "$DEPLOYMENT_ENV" 'no longer locked. Proceeding with deployment...'
    else
      echo 'Job locked until:' `date -d "$LOCK_DATE" +'%d-%m-%Y'`
      echo 'Aborting job...'
      EXIT_CODE=666
    fi
  else
    echo 'No lock file present for' "$DEPLOYMENT_ENV"
    echo 'Proceeding with deployment...'
  fi
fi

echo ''
exit $EXIT_CODE
Post-Step:
#!/bin/bash
echo '**********' $WORKSPACE
#Copy artifact to web server
#Start web server
#Set lock
echo 'Now lock the environment until:' $LOCK_UNTIL;
echo $LOCK_UNTIL > $WORKSPACE'/'$DEPLOYMENT_ENV'.lock'
Scripting isn't really my forte, so if anyone has any better suggestions I'd be pleased to hear them :)

Jenkins Job fails when pytest test fails

I just wanted to explore pytest and integrate it into Jenkins. My sample pytest test cases are:
def a(x):
    return x + 1

def test_answer():
    assert a(2) == 3

def test_answer2():
    assert a(0) == 2
I then generated a standalone pytest script which I run in Jenkins, generating an xml to be parsed for results.
As test_answer2 fails, the Jenkins job also fails. I'm assuming this is because the exit code returned is non-zero. How would I get around this, i.e. have the Jenkins job not fail even if 1 or more tests do indeed fail? Thanks
If you are calling this test execution from a batch file, shell script, or directly via the command execution step in Jenkins, you can do it the following way:
Windows:
<your test execution calls>
exit 0
Linux:
set +e
<your test execution calls>
set -e
This will ignore any error from the test execution call within the batch/shell script, and Jenkins will show the status as successful.
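A related variant for a Linux "Execute shell" step, sketched under the assumption that you also want Jenkins to see the results: let pytest write a JUnit XML report and swallow its exit code, then publish the report with "Publish JUnit test result report" so failed tests mark the build unstable rather than failed:
# Jenkins "Execute shell" build step; the report file name is arbitrary
py.test --junitxml=results.xml || true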
In addition to the already posted answers:
You can also mark your test as xfail, which means you know it will fail, or skip it entirely:
@pytest.mark.skip(reason="no way of currently testing this")
def test_the_unknown():
    ...
More about skipping can be found in the pytest documentation.
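For the xfail case mentioned above, a minimal sketch using the failing test from the question could look like this:
import pytest

# a() is the helper function defined in the question
@pytest.mark.xfail(reason="known failure, not fixed yet")
def test_answer2():
    assert a(0) == 2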
On the Jenkins side you can also manipulate the state of your build via a simple try/catch statement (in a Pipeline script):
try {
    bat "python -m pytest ..."
} catch (pytestError) {
    // rewrite the state of your build as you want, SUCCESS or FAILED
    // currentBuild.result = "FAILED"
    currentBuild.result = "SUCCESS" // your case
    println pytestError
}
But be aware that this will mark the whole build as a success every time for that pytest step. Best practice is just to skip tests via @pytest.mark.skip as described above.
The suggestion above of wrapping the test execution calls in set +e / set -e on Linux did NOT work for us.
We use Jenkins running on a Windows system so our tests are listed in the Jenkins "Execute Windows Batch command" section.
I was able to solve this by separating the tests that might have failures with a single & (rather than &&). For example:
"C:\Program Files\Git\bin\sh.exe" -c python -m venv env && pip3 install -r requirements.txt && py.test --tap-files test_that_may_fail.py & py.test --tap-files next_test.py & py.test
Since we use pytest, any failures are flagged in Python with an assert. If you use &&, a failure will cause the Jenkins job to abort and not run the other tests.
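As a simplified illustration of that difference in a Windows batch step (the test file names are placeholders):
REM '&&' stops the chain at the first failing command:
py.test test_that_may_fail.py && py.test next_test.py

REM a single '&' always runs the next command, regardless of the previous exit code:
py.test test_that_may_fail.py & py.test next_test.py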
