SonarQube in Jenkinsfile - stop if quality check fails - jenkins

I'm using newtmitch/sonar-scanner to scan my code, as a stage in my Jenkinsfile. My question is whether it is possible to stop the CI (stop the Jenkins pipeline build) if the quality check does not pass.
In this article they used waitForQualityGate abortPipeline: true to stop the pipeline when the quality gate fails. Can I do the same from the Docker container?
My Docker stage looks like this:
stage('Run sonar scanner docker') {
    sh(script: """
        sudo docker run -v \$(pwd):/usr/src --network host .../newtmitch-scanner
    """)
}
Plus I have sonar-project.properties file with properties in the path.

To fail a pipeline, your command needs to return a non-zero exit code (0 means success). If newtmitch-scanner always returns a success code but prints meaningful output, what you could do is pipe that output to grep with the -q option and look for the success message: grep exits non-zero when the pattern is not found, which fails the step.
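For example, a rough sketch building on the stage above (the "EXECUTION SUCCESS" marker is an assumption about what the scanner prints at the end of a run; check your own log, and note that a successful scan is not the same thing as a passed quality gate):
stage('Run sonar scanner docker') {
    sh(script: """
        sudo docker run -v \$(pwd):/usr/src --network host .../newtmitch-scanner | tee scanner.log
        # grep -q exits non-zero when the pattern is missing, which fails the step
        grep -q "EXECUTION SUCCESS" scanner.log
    """)
}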

Related

Triggering the Jenkins job from the GitLab pipeline stage and on successful completion of the job move to the next stage

Can you please help? I have the following scenario and have gone through many videos and blogs, but could not find anything matching my use case.
Requirement:
To write a CI/CD pipeline in GitLab which can facilitate the following stages in this order:
- verify # unit test, sonarqube, pages
- build # package
- publish # copy artifact in repository
- deploy # Deploy artifact on runtime in an test environment
- integration # run postman/integration tests
All other stages are fine and working, but for the deploy stage, because of a few restrictions, I have to trigger an existing Jenkins job through the Jenkins remote API with the following script. The problem is that the call returns an asynchronous response and just starts the Jenkins job, so the deploy stage completes and the pipeline moves on to the next stage (integration).
Run Jenkins Job:
  image: maven:3-jdk-8
  tags:
    - java
  environment: development
  stage: deploy
  script:
    - artifact_no=$(grep -m1 '<version>' pom.xml | grep -oP '(?<=>).*(?=<)')
    - curl -X POST http://myhost:8081/job/fpp/view/categorized/job/fpp_PREP_party/build --user mkumar:1121053c6b6d19bf0b3c1d6ab604f22867 --data-urlencode json="{\"parameter\":[{\"name\":\"app_version\",\"value\":\"$artifact_no\"}]}"
Note: I am using the GitLab CE edition, and the Jenkins CI project service integration is not available.
I am looking for a way to trigger the Jenkins job from the pipeline such that my integration stage starts executing only after the Jenkins job completes successfully.
Thanks for the help!
Retrieving the status of a Jenkins job that is triggered programmatically through the remote access API is notoriously convoluted.
Normally you would expect to receive, under the Location attribute of the response header, a URL that you can poll to get the status of your request, but unfortunately there are some in-between steps to reach that point. You can find a guide in this post. You may also have a look at this older post.
Once you have the URL, you can poll and parse the job status and then either sh "exit 1" or sh "exit 0" in your script to force the job that is invoking the external job to fail or succeed, depending on how you want to assert the result of the remote job.
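For instance, a rough shell sketch of such a poll (JENKINS_USER, JENKINS_TOKEN and BUILD_URL are placeholders for values you already have at that point, and the JSON parsing is deliberately crude):
# poll the started build until Jenkins reports a final result
while true; do
  result=$(curl -s --user "$JENKINS_USER:$JENKINS_TOKEN" "$BUILD_URL/api/json" | grep -o '"result":"[A-Z]*"')
  case "$result" in
    *SUCCESS*) exit 0 ;;                          # remote job passed
    *FAILURE*|*UNSTABLE*|*ABORTED*) exit 1 ;;     # remote job failed
    *) sleep 10 ;;                                # "result" is still null, keep waiting
  esac
done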

Can I speed up Gradle daemon startup on Jenkins CI?

Every time I push my gradle build to Jenkins, it spends a considerable amount of time on this step:
Starting a Gradle Daemon (subsequent builds will be faster)
The relevant part of my Jenkinsfile looks like this:
stage('Build') {
    steps {
        withGradle() {
            sh 'chmod +x gradlew'
            sh './gradlew build jar'
        }
    }
}
I assumed withGradle() would try to persistently run a Gradle daemon in the background on Jenkins to avoid this sort of thing, but at this point I'm not entirely sure it does anything - the docs for it are incredibly vague.
How do I improve build times with this system?
withGradle is contributed by Jenkins' Gradle Plugin and provides console output highlighting and build scan URL capturing (showing the build scan URL in the Jenkins UI). It certainly doesn't do anything with the Gradle daemon. You do not need withGradle to run your Gradle builds in Jenkins, unless you rely on build scans, of course. Doing just
stage('Build') {
    steps {
        sh 'chmod +x gradlew'
        sh './gradlew build jar'
    }
}
is perfectly fine.
Gradle daemons stop themselves after being idle for 3 hours (FAQ). If a build runs just once a day the daemon will be dead for sure. This is usually the reason why the daemon is absent and needs to be started.
Gradle might also decide to start a new daemon instance if the running daemon is classified as incompatible (the build environment, e.g. heap memory settings, changed). As far as I know, this is explicitly highlighted in the build output.
With regards to slow daemon startup performance, the usual advice is to run the build on the latest Gradle and Java versions.
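If daemons keep disappearing or being replaced, it can also help to pin consistent JVM settings and a longer idle timeout in gradle.properties on the Jenkins node, for example (the values below are only illustrative):
# gradle.properties on the Jenkins node (illustrative values)
org.gradle.daemon=true
# keep JVM args identical across builds so the daemon stays compatible
org.gradle.jvmargs=-Xmx2g
# raise the idle timeout above the default 3 hours (value in milliseconds)
org.gradle.daemon.idletimeout=28800000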
One last tip though. Should you be using Git as version control system, you can get rid of the sh 'chmod +x gradlew' by letting Git set the executable flag via update-index:
git update-index --chmod=+x gradlew

Redirecting log output of Jenkins Pipeline Docker Plugin

When I run a Jenkins Pipeline Docker Plugin step
docker.image('the-image').inside {
    sh 'thing'
    doSomeComplexStep()
}
and I want to capture the output of the whole inside block to a log file instead of writing it to the Jenkins log, is there any way to do that?
I read the sources of the Jenkins Pipeline Docker Plugin without finding anything obvious, but that doesn't mean there's no way.

Jenkins pipeline: SSH to a server gets stuck on job

I need to SSH to a server from a simple Jenkins pipeline and do a deploy, which simply means moving to a directory and doing a git fetch and some other commands (npm install among others). The thing is that when the Jenkins job SSHes to the remote server it connects fine, but then it gets stuck and I have to stop it. For now I have modified the script to simply do an "ssh to server" and a "pwd" command to keep things as easy as possible, but it still connects and gets stuck until I abort. What am I missing? Here is the simple pipeline script and the output in a screenshot:
pipeline {
    agent any
    stages {
        stage('Connect to server') {
            steps {
                sh "ssh -t -t jenkins@10.x.x.xx"
                sh "pwd"
            }
        }
        stage('branch status') {
            steps {
                sh "git status"
            }
        }
    }
}
Jenkins executes each "sh" step as a separate shell script. The content is written to a temporary file on the Jenkins node and only then executed. Each command is executed in a separate session and is not aware of the previous one, so neither the SSH session nor changes in environment variables will persist between the two.
More importantly though, you are forcing pseudo-terminal allocation with the -t flag. This is pretty much the opposite of what you want to achieve, i.e. run shell commands non-interactively. Simply
sh "ssh jenkins@10.x.x.xx pwd"
is enough for your example to work. Placing the commands on separate lines would not work in a regular shell script either, regardless of Jenkins. However, you still need to have the private key available on the node, otherwise the job will hang waiting for you to provide a password interactively. Normally you will want to use the SSH Agent Plugin to provide the private key at runtime.
script {
    sshagent(["your-ssh-credentials"]) {
        sh "..."
    }
}
For executing longer command sequences, see What is the cleanest way to ssh and run multiple commands in Bash?
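Putting the pieces together, a sketch of such a deploy stage could look like this (the credential ID, host, directory and remote commands are placeholders; untested):
script {
    sshagent(["your-ssh-credentials"]) {
        // run all remote commands in a single, non-interactive ssh session
        sh '''
            ssh -o StrictHostKeyChecking=no jenkins@10.x.x.xx '
                cd /path/to/app &&
                git fetch &&
                npm install
            '
        '''
    }
}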

Jenkins job status failing despite running correctly

I have a Jenkins job that executes with 'Publish over SSH'. The job connects to the remote server, transfers files, and runs an Ansible playbook.
The playbook runs as intended, as confirmed by the logs. However, at the end of the job an error is returned, failing the job. It's causing problems as it's preventing the pipeline from working correctly.
SSH: EXEC: completed after 402,593 ms
SSH: Disconnecting configuration [server] ...
ERROR: Exception when publishing, exception message [Exec exit status not zero. Status [2]]
Build step 'Send files or execute commands over SSH' changed build result to UNSTABLE
[Run Playbook] $ /bin/sh -xe /tmp/jenkins1528195779014969962.sh
+ echo Finished
Finished
Finished: UNSTABLE
Is there a setting missing to allow this to pass?
I have never used the 'Publish over SSH' step you are referring to, but I can recommend the Jenkins Ansible Plugin. I am running several playbooks in pipeline stages here successfully from labeled build slaves (one dedicated slave that has Ansible installed), targeting Linux hosts on cloud infrastructure via SSH.
Especially in combination with the ANSI color plugin, the output is very readable.
If you cannot try that plugin, check what the return code of the playbook shell call is.
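For example, wrapping the playbook call along these lines in the remote exec command makes the exit status explicit (the playbook and inventory names are placeholders):
# ansible-playbook returns non-zero when a task or host fails, which is what
# 'Publish over SSH' turns into the non-zero exec status that marks the build
ansible-playbook -i inventory.ini site.yml
rc=$?
echo "ansible-playbook exited with $rc"
exit $rc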
