Read data in a Jenkinsfile from an XML file created in the current workspace - jenkins

Some time ago I tried to connect Jenkins and Gerrit and send the cppcheck output from Jenkins to Gerrit as a comment:
I installed the required plugins for Jenkins and Gerrit (that part is OK, it works).
In the Jenkinsfile I run cppcheck and save its output to an XML file (this works too).
The problem is that when I try to read the XML file, I am told that there is no such file. I can see that the script has a different working directory (in Groovy I printed the dir). I think the code of my experimental Jenkinsfile explains the problem:
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                gerritReview labels: [Verified: 0]
                sh 'pwd'
                sh 'cppcheck --enable=all --inconclusive --xml --xml-version=2 *.c* 2> cppcheck.xml'
                script {
                    def parser = new XmlParser()
                    def doc = parser.parse("cppcheck.xml"); // No xml file here because script {}
                                                            // is run in a different place
                }
            }
        }
    }
    post {
        always {
            gerritReview score: 1
            step([$class: 'CppcheckPublisher', pattern: 'cppcheck.xml', ignoreBlankFiles: false, treshold: "0"])
        }
    }
}
How can I load this file? Or am I doing it all wrong? (I mean the integration of Gerrit with Jenkins, whose purpose is to run cppcheck and cpplint and show the results in Gerrit.)

If the file is in your repo, you need to check out the repo first
https://www.jenkins.io/doc/pipeline/steps/workflow-scm-step/
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                checkout scm // ADD THIS LINE
                gerritReview labels: [Verified: 0]
                sh 'pwd'
                sh 'cppcheck --enable=all --inconclusive --xml --xml-version=2 *.c* 2> cppcheck.xml'
                script {
                    def parser = new XmlParser()
                    def doc = parser.parse("cppcheck.xml");
                }
            }
        }
    }
    post {
        always {
            gerritReview score: 1
            step([$class: 'CppcheckPublisher', pattern: 'cppcheck.xml', ignoreBlankFiles: false, treshold: "0"])
        }
    }
}
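Since cppcheck.xml is generated during the build rather than stored in the repo, another option (just a sketch, not from the original answer) is to read the file with the readFile step, which resolves paths inside the agent workspace, and parse the returned text. Note that constructing an XmlSlurper may require script approval in a sandboxed pipeline, and the element names below assume cppcheck's --xml-version=2 layout:
script {
    // readFile resolves the path inside the agent workspace, so the report
    // produced by the cppcheck step above is found.
    def xml = readFile 'cppcheck.xml'
    // Extract what is needed right away: XmlSlurper results are not serializable,
    // so they should not be kept around across further step calls.
    int issueCount = new XmlSlurper().parseText(xml).errors.error.size()
    echo "cppcheck reported ${issueCount} issue(s)"
}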

Related

Jenkins pipeline - skip next stage on conditional failure of pylint

I have a Jenkinsfile with two different stages: Pre-Build and Build. Pre-Build executes pylint and uses the warnings-ng plugin to report the results back to Jenkins.
Something like this:
stages {
    stage('Pre-build') {
        steps {
            script {
                sh """#!/usr/bin/env bash
                pip install .
                pylint --exit-zero --output-format=parseable --reports=n myProject > reports/pylint.log
                """
            }
        }
        post {
            always {
                recordIssues(
                    enabledForFailure: true,
                    tool: pyLint(pattern: '**/pylint.log'),
                    unstableTotalAll: 20,
                    failedTotalAll: 30,
                )
            }
            failure {
                cleanWs()
            }
        }
    }
    stage('Build') {
        steps {
            script {
                sh """#!/usr/bin/env bash
                set -e
                echo 'I AM STAGE TWO AND I SHOULD NOT BE EXECUTED'
                """
            }
        }
        post {
            always {
                cleanWs()
            }
        }
    }
}
I'm running into a couple of issues here. Currently I'm setting pylint to --exit-zero, as I want the warnings-ng plugin to decide, based on the report, whether it is good to go or not.
Currently this is set to fail at a total of 30 issues. Now, myProject has 45 issues and I want to prevent the next stage, Build, from being entered. But I can't seem to prevent this behaviour, as it always continues to the Build stage.
The build is flagged as a failure because of the results determined by recordIssues, but it doesn't abort the job.
I've found a ticket on https://issues.jenkins-ci.org (Ticket), but I can't seem to make sense of all of this.
I've found a solution to your problem, and I think this is a bug in the pipeline workflow. warnings-ng correctly sets the build status to failed, but the next stages are started despite the status in the ${currentBuild.currentResult} variable.
You can use when { expression { return currentBuild.currentResult == "SUCCESS" } } to skip later stages, or throw an error. But I think this should be the default behaviour. Your file should then look like:
stages {
    stage('Pre-build') {
        steps {
            script {
                sh """#!/usr/bin/env bash
                pip install .
                pylint --exit-zero --output-format=parseable --reports=n myProject > reports/pylint.log
                """
            }
        }
        post {
            always {
                recordIssues(
                    enabledForFailure: true,
                    tool: pyLint(pattern: '**/pylint.log'),
                    unstableTotalAll: 20,
                    failedTotalAll: 30,
                )
            }
        }
    }
    stage('Build') {
        when { expression { return currentBuild.currentResult == "SUCCESS" } }
        steps {
            script {
                echo "currentResult: ${currentBuild.currentResult}"
                sh """#!/usr/bin/env bash
                set -e
                echo 'I AM STAGE TWO AND I SHOULD NOT BE EXECUTED'
                """
            }
        }
    }
}
post {
    always {
        cleanWs()
    }
}
I've created an issue in their Jira.
My environment:
Jenkins ver.: 2.222.1
warnings-ng ver.: 8.1
workflow-api ver.: 2.40
You have used post twice, which is the wrong implementation, as post is designed to be executed only once after all stages are done. It should be written after all the stages, just before the end of the pipeline.
To stop or skip the execution of the second Build stage, you can create a global variable at the top, capture the output of pylint in it, and use an if or when condition at the start of the stage. Something similar to:
pipeline {
    def result
    stages {
        stage('Pre-build') {
            steps {
                script {
                    sh """#!/usr/bin/env bash
                    pip install .
                    pylint --exit-zero --output-format=parseable --reports=n myProject > reports/pylint.log
                    """
                }
            }
        }
        stage('Pylint result') { // Not sure how recordIssues works. This is just an example.
            result = recordIssues(
                enabledForFailure: true,
                tool: pyLint(pattern: '**/pylint.log'),
                unstableTotalAll: 20,
                failedTotalAll: 30,
            )
        }
        stage('Build') {
            if ( result == "pass") {
                steps {
                    script {
                        sh """#!/usr/bin/env bash
                        set -e
                        echo 'I AM STAGE TWO AND I SHOULD NOT BE EXECUTED'
                        """
                    }
                }
            }
        }
    }
    post { // this should be used after stages
        always {
            cleanWs()
        }
        failure {
            cleanWs()
        }
    }
}
Also, stages are designed in such a way that if one fails, the next stage will not be executed, so it's a good idea to have pylint executed inside a stage instead of in a post condition.
Note: The code above is just an example. Please modify it according to your need.
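As a rough sketch of that last suggestion (running pylint in its own stage and letting its exit code gate the pipeline, rather than using --exit-zero plus a shared variable; the stage name and the mkdir are assumptions, not taken from the answers above):
stage('Lint') {
    steps {
        // Without --exit-zero, any pylint finding produces a non-zero exit code,
        // which fails this stage, so later stages are skipped automatically.
        sh '''#!/usr/bin/env bash
        mkdir -p reports
        pylint --output-format=parseable --reports=n myProject > reports/pylint.log
        '''
    }
    post {
        always {
            // Still publish the log so warnings-ng shows the issues and their trend.
            recordIssues(tool: pyLint(pattern: '**/pylint.log'))
        }
    }
}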
One option that you may consider is to fail the build explicitly with the following code:
post {
    always {
        recordIssues(
            enabledForFailure: true,
            tool: pyLint(pattern: '**/pylint.log'),
            unstableTotalAll: 20,
            failedTotalAll: 30
        )
        script {
            if (currentBuild.currentResult == 'FAILURE') {
                error('Ensure that the build fails if the quality gates fail')
            }
        }
    }
}
Here, after you record the issues, you also check whether the value of currentBuild.currentResult is FAILURE, and in that case you explicitly call the error() function, which fails the build correctly.

Declarative Jenkins Pipeline; How to declare a variable and use it in script or mail notification?

(update below)
I have a declarative pipeline job which can take an argument VERSION.
pipeline {
    parameters {
        string(name: VERSION, defaultValue: '')
    }
    // ...
}
If no VERSION is given, like when GitLab sends a hook to this job, I want to compute it from git, so I do something like this:
stages {
    stage('Prepare') {
        steps {
            // ...
            if (! env.VERSION) {
                VERSION = sh(script: "git describe", returnStdout: true).trim()
            }
        }
    }
}
Now I want to "inject" this variable into:
- my build script, which needs to find "VERSION" in the environment variables;
- the Jenkins mail notifier, which needs to retrieve ${VERSION} for the subject or body text.
I tried changing the above code to:
stages {
    stage('Prepare') {
        steps {
            // ...
            if (! env.VERSION) {
                env.VERSION = sh(script: "git describe", returnStdout: true).trim()
            }
        }
    }
}
I got this error: groovy.lang.MissingPropertyException: No such property: VERSION for class: groovy.lang.Binding
I then tried to add an "environment" block below:
environment {
    VERSION = ${VERSION}
}
but it didn't solve my problem.
I'm looking for any help to solve it.
UPDATE
I now have a working pipeline which looks like this:
pipeline {
    agent any
    parameters {
        string(name: 'VERSION', defaultValue: '')
    }
    environment {
        def VERSION = "${params.VERSION}"
    }
    stages {
        stage('Prepare & Checkout') {
            steps {
                script {
                    if (! env.VERSION) {
                        VERSION = sh(script: "date", returnStdout: true).trim()
                    }
                    echo "** version: ${VERSION} **"
                }
            }
        }
        stage('Build') {
            steps {
                // sh "./build.sh"
                echo "** version2: ${VERSION} **"
            }
        }
    } // stages
    post {
        always {
            mail to: 'foo@example.com',
                subject: "SUCCESS: ${VERSION}",
                body: """<html><body><p>SUCCESS</p></body></html>""",
                mimeType: 'text/html',
                charset: 'UTF-8'
            deleteDir()
        }
    }
} // pipeline
I needed to add the "environment" block to be able to get $VERSION in all stages (not only in the one where it is manipulated).
I still need to find a way to inject this $VERSION variable into the environment variables, so that my build script can find it.
If you want to inject the variable into the environment so that you can use it later, you could define another variable that is equal to env.VERSION or to the output of the shell script, then use that variable in your pipeline, e.g.:
pipeline {
    parameters {
        string(name: VERSION, defaultValue: '')
    }
    def version = env.VERSION
    stages {
        stage('Prepare') {
            steps {
                // ...
                if (!version) {
                    version = sh(script: "git describe", returnStdout: true).trim()
                }
            }
        }
    }
    mail subject: "$version build succeeded", ...
}
If you want other jobs to be able to access the value of VERSION after the build is run, you can write it in a file and archive it.
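A minimal sketch of that write-and-archive idea, assuming the version variable from the example above (the file name version.txt is just an example):
script {
    // Persist the computed version and archive it so other jobs can pick it
    // up from this build's artifacts (for example with the Copy Artifact plugin).
    writeFile file: 'version.txt', text: version
    archiveArtifacts artifacts: 'version.txt'
}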
Edit:
In order for your script to be able to use the version variable, you can either make your script take version as a parameter or you can use the withEnv step.
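A sketch of the withEnv variant, assuming a build.sh (hypothetical) that reads $VERSION from its environment:
script {
    // withEnv exposes VERSION to every process launched inside the block.
    withEnv(["VERSION=${version}"]) {
        sh './build.sh' // build.sh can now read $VERSION
    }
}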
Assuming you are using parameterized pipelines, you should reference the variable as ${params.parameterName}.
Although parameters are available in env, they currently are created before the first time the pipeline is run, so you should access them via params:
In your case:
${params.VERSION}
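A minimal sketch of reading the parameter through params in a declarative pipeline:
pipeline {
    agent any
    parameters {
        string(name: 'VERSION', defaultValue: '')
    }
    stages {
        stage('Show version') {
            steps {
                // params.VERSION holds the value supplied when the build was started.
                echo "VERSION parameter: ${params.VERSION}"
            }
        }
    }
}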

Declarative pipeline when condition in post

As far as declarative pipelines go in Jenkins, I'm having trouble with the when keyword.
I keep getting the error No such DSL method 'when' found among steps. I'm sort of new to Jenkins 2 declarative pipelines and don't think I am mixing up scripted pipelines with declarative ones.
The goal of this pipeline is to run mvn deploy after a successful Sonar run and send out mail notifications of a failure or success. I only want the artifacts to be deployed when on master or a release branch.
The part I'm having difficulties with is in the post section. The Notifications stage is working great. Note that I got this to work without the when clause, but really need it or an equivalent.
pipeline {
    agent any
    tools {
        maven 'M3'
        jdk 'JDK8'
    }
    stages {
        stage('Notifications') {
            steps {
                sh 'mkdir tmpPom'
                sh 'mv pom.xml tmpPom/pom.xml'
                checkout([$class: 'GitSCM', branches: [[name: 'origin/master']], doGenerateSubmoduleConfigurations: false, submoduleCfg: [], userRemoteConfigs: [[url: 'https://repository.git']]])
                sh 'mvn clean test'
                sh 'rm pom.xml'
                sh 'mv tmpPom/pom.xml ../pom.xml'
            }
        }
    }
    post {
        success {
            script {
                currentBuild.result = 'SUCCESS'
            }
            when {
                branch 'master|release/*'
            }
            steps {
                sh 'mvn deploy'
            }
            sendNotification(recipients,
                null,
                'https://link.to.sonar',
                currentBuild.result,
            )
        }
        failure {
            script {
                currentBuild.result = 'FAILURE'
            }
            sendNotification(recipients,
                null,
                'https://link.to.sonar',
                currentBuild.result
            )
        }
    }
}
In the documentation of declarative pipelines it's mentioned that you can't use when in the post block; when is allowed only inside a stage directive.
So what you can do is test the condition with an if inside a script block:
post {
    success {
        script {
            if (env.BRANCH_NAME == 'master')
                currentBuild.result = 'SUCCESS'
        }
    }
    // failure block
}
Using a GitHub Repository and the Pipeline plugin I have something along these lines:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh '''
                make
                '''
            }
        }
    }
    post {
        always {
            sh '''
            make clean
            '''
        }
        success {
            script {
                if (env.BRANCH_NAME == 'master') {
                    emailext (
                        to: 'engineers@green-planet.com',
                        subject: "${env.JOB_NAME} #${env.BUILD_NUMBER} master is fine",
                        body: "The master build is happy.\n\nConsole: ${env.BUILD_URL}.\n\n",
                        attachLog: true,
                    )
                } else if (env.BRANCH_NAME.startsWith('PR')) {
                    // also send email to tell people their PR status
                } else {
                    // this is some other branch
                }
            }
        }
    }
}
And that way, notifications can be sent based on the type of branch being built. See the pipeline model definition and also the global variable reference available on your server at http://your-jenkins-ip:8080/pipeline-syntax/globals#env for details.
Ran into the same issue with post. Worked around it by annotating the variable with @groovy.transform.Field. This was based on info I found in the Jenkins docs for defining global variables.
e.g.
#!groovy
pipeline {
    agent none
    stages {
        stage("Validate") {
            parallel {
                stage("Ubuntu") {
                    agent {
                        label "TEST_MACHINE"
                    }
                    steps {
                        sh "run tests command"
                        recordFailures('Ubuntu', 'test-results.xml')
                        junit 'test-results.xml'
                    }
                }
            }
        }
    }
    post {
        unsuccessful {
            notify()
        }
    }
}

// Make testFailures global so it can be accessed from a 'post' step
@groovy.transform.Field
def testFailures = [:]

def recordFailures(key, resultsFile) {
    def failures = ... parse test-results.xml script for failures ...
    if (failures) {
        testFailures[key] = failures
    }
}

def notify() {
    if (testFailures) {
        ... do something here ...
    }
}

Set the build name and description from a Jenkins Declarative Pipeline

I would like to set the build name and description from a Jenkins Declarative Pipeline, but can't find the proper way of doing it. I tried using an environment block after the pipeline, using a node block in an agent block, etc. I always get a syntax error.
The last version of my Jenkinsfile goes like so:
pipeline {
    stages {
        stage("Build") {
            steps {
                echo "Building application..."
                bat "%ANT_HOME%/bin/ant.bat clean compile"
                currentBuild.name = "MY_VERSION_NUMBER"
                currentBuild.description = "MY_PROJECT MY_VERSION_NUMBER"
            }
        }
        stage("Unit Tests") {
            steps {
                echo "Testing (JUnit)..."
                echo "Testing (pitest)..."
                bat "%ANT_HOME%/bin/ant.bat run-unit-tests"
            }
        }
        stage("Functional Test") {
            steps {
                echo "Selenium..."
            }
        }
        stage("Performance Test") {
            steps {
                echo "JMeter.."
            }
        }
        stage("Quality Analysis") {
            steps {
                echo "Running SonarQube..."
                bat "%ANT_HOME%/bin/ant.bat run-sonarqube-analysis"
            }
        }
        stage("Security Assessment") {
            steps {
                echo "ZAP..."
            }
        }
        stage("Approval") {
            steps {
                echo "Approval by a CS03"
            }
        }
        stage("Deploy") {
            steps {
                echo "Deploying..."
            }
        }
    }
    post {
        always {
            junit '/test/reports/*.xml'
        }
        failure {
            emailext attachLog: true, body: '', compressLog: true, recipientProviders: [[$class: 'CulpritsRecipientProvider'], [$class: 'DevelopersRecipientProvider']], subject: '[JENKINS] MY_PROJECT build failed', to: '...recipients...'
        }
        success {
            emailext attachLog: false, body: '', compressLog: false, recipientProviders: [[$class: 'DevelopersRecipientProvider']], subject: '[JENKINS] MY_PROJECT build succeeded', to: '...recipients...'
        }
    }
}
Error is:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 11: Expected a step @ line 11, column 5.
    currentBuild.name = "MY_VERSION_NUMBER"
    ^
WorkflowScript: 12: Expected a step @ line 12, column 5.
    currentBuild.description = "MY_PROJECT MY_VERSION_NUMBER"
    ^
Ideally, I'd like to be able to read MY_PROJECT and MY_VERSION_NUMBER from the build.properties file, or from the Jenkins build log. Any guidance about that requirement would be appreciated as well.
UPDATE
Based on the answer given below, the following worked:
stage("Build") {
steps {
echo "Building application..."
bat "%ANT_HOME%/bin/ant.bat clean compile"
script {
def props = readProperties file: 'build.properties'
currentBuild.displayName = "v" + props['application.version']
}
}
Now the build version is automatically set during the pipeline by reading the build.properties file.
I think this will do what you want. I was able to do it inside a script block:
pipeline {
    stages {
        stage("Build") {
            steps {
                script {
                    currentBuild.displayName = "The name."
                    currentBuild.description = "The best description."
                }
                // ... do whatever.
            }
        }
    }
}
The script block is kind of an escape hatch to get out of a declarative pipeline. There is probably a declarative way to do it, but I couldn't find it. One more note: I think you want currentBuild.displayName instead of currentBuild.name; in the documentation for the Jenkins globals I didn't see a name property under currentBuild.
If you want to set the build name of a job from a parameter, you can use
currentBuild.displayName = "${nameOfYourParameter}".
Make sure you use double quotes instead of single quotes.
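The quoting matters because Groovy only interpolates ${...} inside double-quoted strings; a small sketch (BUILD_LABEL is a hypothetical string parameter):
script {
    // Double quotes: the ${...} expression is interpolated, so the build is
    // renamed to the value of the parameter.
    currentBuild.displayName = "${params.BUILD_LABEL}"
    // With single quotes the literal text ${params.BUILD_LABEL} would become the name instead.
}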
(Screenshots in the original answer: job configuration, build job with parameter, build history.)
REFERENCE: How to set build name in Pipeline job?

Jenkins Declarative Pipeline: How to inject properties

I have Jenkins 2.19.4 with Pipeline: Declarative Agent API 1.0.1. How does one use readProperties if you cannot define a variable to assign the read properties to?
For example, to capture the SVN revision number, I currently capture it with the following in scripted style:
```
echo "SVN_REVISION=\$(svn info ${svnUrl}/projects | \
grep Revision | \
sed 's/Revision: //g')" > svnrev.txt
```
def svnProp = readProperties file: 'svnrev.txt'
Then I can access it using:
${svnProp['SVN_REVISION']}
Since it is not legal to def svnProp in Declarative style, how is readProperties used?
You can use the script step inside the steps tag to run arbitrary pipeline code.
So, something along the lines of:
pipeline {
    agent any
    stages {
        stage('A') {
            steps {
                writeFile file: 'props.txt', text: 'foo=bar'
                script {
                    def props = readProperties file: 'props.txt';
                    env['foo'] = props['foo'];
                }
            }
        }
        stage('B') {
            steps {
                echo env.foo
            }
        }
    }
}
Here I'm using env to propagate the value between stages, but other solutions are possible.
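For instance (just a sketch, not part of the original answer, and assuming stage B runs in the same workspace where props.txt was written), the later stage could simply read the properties file again instead of mutating env:
stage('B') {
    steps {
        script {
            // Re-read the properties file written in stage A instead of relying on env.
            def props = readProperties file: 'props.txt'
            echo "foo is ${props['foo']}"
        }
    }
}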
Jon S's answer requires granting script approval, because it is setting environment variables. This is not needed when running in the same stage.
pipeline {
    agent any
    stages {
        stage('A') {
            steps {
                writeFile file: 'props.txt', text: 'foo=bar'
                script {
                    def props = readProperties file: 'props.txt';
                    sh "echo ${props['foo']}"
                }
            }
        }
    }
}
To define general variables available to all stages, define values for example in props.txt as:
version=1.0
fix=alfa
and mix scripted and declarative Jenkins pipeline syntax like this:
def props
def VERSION
def FIX
def RELEASE

node {
    props = readProperties file: 'props.txt'
    VERSION = props['version']
    FIX = props['fix']
    RELEASE = VERSION + "_" + FIX
}

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo "${RELEASE}"
            }
        }
    }
}
