Jenkins pipeline parameter being evaluated to previous value

I've got a pipeline which builds software, with a parameter used for the version.
The parameter defaults to a Groovy expression evaluating to the current date.
But when I run it, the value it's using is actually the date of the previous build.
Example:
Build #17 (25 Mar 2022, 10:37:57) prints 2022-03-25T10:37:51.471369100
Build #18 (25 Mar 2022, 11:08:33) prints 2022-03-25T10:37:57.857506500
Build #19 (25 Mar 2022, 11:09:52) prints Build version 2022-03-25T11:08:33.802312
Pipeline Script:
pipeline {
    agent any
    parameters {
        string(
            name: "BUILD_VERSION",
            defaultValue: "Build version " + java.time.LocalDateTime.now()
        )
    }
    stages {
        stage("Print") {
            steps {
                echo params.BUILD_VERSION
            }
        }
    }
}
What am I missing? How can I default the parameter to the date it's executed?

When you use the defaultValue attribute of the string parameter, you are actually setting the default value for the next execution of the project, not for the current one, because the default value is only updated once the build has started running with the given parameters.
Therefore the next build is executed with the value set by the previous one.
To overcome this you need to define a parameter that is updated before the build starts to run, so that the build uses that value during its execution.
One way to do it is with the Extended Choice Parameter Plugin, which generates the default value at runtime when you click Build with Parameters on your job. This way the default time value is used by the currently running build.
Here is the code example:
pipeline {
    agent any
    parameters {
        extendedChoice(name: 'BUILD_VERSION', type: 'PT_TEXTBOX',
            defaultGroovyScript: 'return java.time.LocalDateTime.now().toString()')
    }
    stages {
        stage("Print") {
            steps {
                echo params.BUILD_VERSION
            }
        }
    }
}
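If installing a plugin is not an option, a plugin-free sketch of the same idea is to leave the default blank and compute the timestamp inside the pipeline, falling back to it when the field was left empty. EFFECTIVE_VERSION is an illustrative name, and calling java.time.LocalDateTime.now() may need script approval in a sandboxed pipeline:
pipeline {
    agent any
    parameters {
        // left blank on purpose; the pipeline fills it in at run time
        string(name: 'BUILD_VERSION', defaultValue: '')
    }
    stages {
        stage("Print") {
            steps {
                script {
                    // use the submitted value if present, otherwise "now"
                    env.EFFECTIVE_VERSION = params.BUILD_VERSION ?: "Build version ${java.time.LocalDateTime.now()}"
                }
                echo env.EFFECTIVE_VERSION
            }
        }
    }
}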

Related

Jenkins - Is it possible to use two different versions of terraform in a Jenkinsfile

Currently we are using terraform 11 but we would like to start moving to 12. The idea is to move module by module, which means some modules will be using terraform version 11 and those that can run on 12 will be using version 12.
My question now is: in our Jenkinsfile we have a stage which downloads terraform 11 and then different stages that run terraform. Is it possible to download terraform 12 as well and have some stages use 11 and others use 12?
stage('Download Terraform') {
    steps {
        sh "wget path/terraform/0.11.8/terraform-0.11.8.zip"
        sh "unzip -o terraform-0.11.8.zip"
        sh "rm terraform-0.11.8.zip"
    }
}
stage('Create .terraformrc') {
    steps {
        sh "echo ~"
        writeFile file: "/home/user/.terraformrc", text: """
credentials "" {
    token = ""
}
"""
    }
}
stage('Enable CloudTrail') {
    steps {
        {code}
    }
}
stage('Create Automation Lambdas') {
    steps {
        {code}
    }
}
In the above example, I would like the "Enable CloudTrail" stage to run terraform 12 and the "Create Automation Lambdas" stage to run with terraform 11.
This is how I solved this (sorry if this is not exactly what you needed):
Install the Terraform plugin.
In the Jenkins UI, under Global Tool Configuration, add multiple Terraform installations and name them in a consistent, predictable way.
In your pipeline you can now 'pick' the version to use:
pipeline {
    agent { label 'terraformdevagent' }
    environment {
        TF_HOME = tool('terraform-0.14.4')
        PATH = "$TF_HOME:$PATH"
    }
}
You can add an optional parameter and a small function that returns the parameter when it is set and a hardcoded value when it is null.
This way you can have hundreds of jobs under one TF version and still test the new version in isolation.
It also lets you handle exceptions, so a handful of projects that need to be refactored for the new version don't block your rollouts.
TF_HOME = tool(params.newTerraformVersion ?: 'terraform-0.14.4')
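Putting both ideas together, here is a sketch of how the original question's stages could each pick their own version. The tool names terraform-0.11.8 and terraform-0.12.31 are assumptions and must match the installations configured under Global Tool Configuration; the stage bodies are illustrative:
pipeline {
    agent any
    parameters {
        // leave empty to stay on the pinned default version
        string(name: 'newTerraformVersion', defaultValue: '')
    }
    stages {
        stage('Enable CloudTrail') {
            environment {
                // runs on 0.12, or on whatever override was passed in
                TF_HOME = tool(params.newTerraformVersion ?: 'terraform-0.12.31')
                PATH = "$TF_HOME:$PATH"
            }
            steps {
                sh 'terraform version'
            }
        }
        stage('Create Automation Lambdas') {
            environment {
                // stays pinned to 0.11 until this module is migrated
                TF_HOME = tool('terraform-0.11.8')
                PATH = "$TF_HOME:$PATH"
            }
            steps {
                sh 'terraform version'
            }
        }
    }
}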

Can't use changeset in Jenkinsfile script

I have a problem with a step in my Jenkinsfile script. I am trying to use when changeset to determine whether a particular set of files has changed, as I want to build only when certain files are changed. I added this step to call a separate build job if the files are changed.
stage('File check') {
    when { changeset "**/files" }
    steps {
        build 'Build and deploy'
    }
}
However, I get an error:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 5: Unknown conditional changeset. Valid conditionals are: allOf, anyOf, branch, environment, expression, not # line 5, column 21.
when { changeset "**/files"}
What am I missing? Is it a problem with my version of Jenkins/Groovy? I'm using Jenkins ver. 2.73.3.
Here is what works for me:
String projectDirectory = "terraform"
String changesetPathRegex = "**/${projectDirectory}/**"

stage('Build Dockerfile') {
    when { changeset changesetPathRegex }
    steps {
        dir(projectDirectory) {
            sh 'terraform plan'
        }
    }
}
This should look in the current repo's "terraform" folders and run the steps only if it sees a change there. As a side note, the "Unknown conditional changeset" error on a Jenkins as old as 2.73.3 usually means the Pipeline: Declarative plugin predates the changeset condition, so upgrading that plugin (or Jenkins itself) should make the conditional available.

How to get output of jenkins pipeline in a specific format?

I am trying to implement machine learning in my Jenkins pipeline.
For that I need the output data of the pipeline for each build.
Some parameters that I need are:
Which user triggered the pipeline
Duration of the pipeline
Build number with its details
Pipeline pass/fail
If it failed, at which stage it failed
Error in the failed stage (why it failed)
Time required to execute each stage
Specific output of each stage (e.g. if a stage runs SonarQube, output such as the percentage of code smells or the code coverage)
I need to fetch these details for all builds. How can I get them?
There is a Jenkins API that can be used from Python, but I was only able to get JOB_NAME, the description of the job, and whether the job is enabled.
Those details weren't useful.
There are 2 ways to get some of the data from your list.
1. Jenkins API
For the first 4 points from the list, you can use the JSON REST API for a specific build to get those data. Example API endpoint:
https://[JENKINS_HOST]/job/[JOB_NAME]/[BUILD_NUMBER]/api/json?pretty=true
1. Which user triggered the pipeline
This will be under the actions array in the response; identify the object in the array by "_class": "hudson.model.CauseAction", and in it you will find a shortDescription key which holds that information:
"actions": [
{
"_class": "hudson.model.CauseAction",
"causes": [
{
"_class": "hudson.triggers.SCMTrigger$SCMTriggerCause",
"shortDescription": "Started by an SCM change"
}
]
},
2. Duration of pipeline
It can be found under the "duration" key. Example:
"duration": 244736,
3. Build number with its details
I don't know what details you need, but for the build number look for the "number" key:
"number": 107,
4. Pipeline pass/fail
"result": "SUCCESS",
If you need to extract this information for all builds, run a GET request against the job API https://[JENKINS_HOST]/job/[JOB_NAME]/api/json?pretty=true and extract all builds, then run the above-mentioned request for each build you have extracted.
I will write a dummy Python script later to do just that.
2. Dump data in Jenkinsfile
There is also the possibility to dump some of that information from the Jenkinsfile in a post action.
pipeline {
    agent any
    stages {
        stage('stage 1') {
            steps {
                // ">" starts the file fresh for this build
                sh 'echo "Stage 1 time: ${YOUR_TIME_VAR}" > job_data.txt'
            }
        }
    }
    post {
        always {
            // these are Groovy globals, not shell variables, so they need
            // interpolated (double-quoted) Groovy strings; ">>" appends so
            // each line doesn't overwrite the previous one
            sh "echo 'Result: ${currentBuild.result}' >> job_data.txt"
            sh "echo 'Job name: ${currentBuild.displayName}' >> job_data.txt"
            sh "echo 'Build number: ${currentBuild.number}' >> job_data.txt"
            sh "echo 'Duration: ${currentBuild.duration}' >> job_data.txt"
            archiveArtifacts artifacts: 'job_data.txt', onlyIfSuccessful: false
        }
    }
}
A list of the available global variables for a pipeline job can be found at:
https://[JENKINS_HOST]/pipeline-syntax/globals#env
For the rest, you will need to implement your own logic in the Jenkinsfile.
Ad. 5
Create a variable which holds information about the current stage. At the beginning of each stage, change its value to the ongoing stage, and at the end dump it to the file like the rest of the variables. If the pipeline fails, say on stage foo, the variable will still have exactly that value in the post action, because a failed pipeline does not advance to the next stage. A minimal sketch is shown below.
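This sketch assumes the same job_data.txt file as above; the stage names and make commands are placeholders:
pipeline {
    agent any
    environment {
        CURRENT_STAGE = 'none'
    }
    stages {
        stage('build') {
            steps {
                script { env.CURRENT_STAGE = 'build' }
                sh 'make'          // placeholder build step
            }
        }
        stage('test') {
            steps {
                script { env.CURRENT_STAGE = 'test' }
                sh 'make test'     // placeholder test step
            }
        }
    }
    post {
        always {
            // if the build failed, this names the stage it failed in
            sh 'echo "Last stage reached: ${CURRENT_STAGE}" >> job_data.txt'
        }
    }
}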
Ad. 6
I'm not sure what you want: a traceback, an error code?
I guess you will probably need to implement your own logging function, for example along the lines of the sketch below.
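One way to do it is to catch the exception in the stage, dump its message to the same data file, and rethrow it so the build still fails. The deploy.sh script here is a placeholder:
stage('deploy') {
    steps {
        script {
            try {
                sh './deploy.sh'   // placeholder for the real work
            } catch (err) {
                // record why the stage failed, then keep the failure
                sh "echo 'deploy failed: ${err.getMessage()}' >> job_data.txt"
                throw err
            }
        }
    }
}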
Ad. 7
Make a function that measures the time of each stage and dump the value at the end, something like the sketch below.
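A sketch in scripted-pipeline style; timedStage is a made-up helper name, and System.currentTimeMillis() may need script approval in a sandboxed pipeline:
def timedStage(String name, Closure body) {
    stage(name) {
        long start = System.currentTimeMillis()
        try {
            body()
        } finally {
            // integer division is enough for whole seconds
            long seconds = (System.currentTimeMillis() - start).intdiv(1000)
            sh "echo 'Stage ${name} took ${seconds}s' >> job_data.txt"
        }
    }
}

timedStage('build') {
    sh 'make'   // placeholder build step
}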
Ad. 8
Also not sure what you mean. Like, build artifacts?
At the end of each build, the job_data.txt file will be archived as a build artifact which can be downloaded later.
If I find a more elegant and simpler solution, I'll edit this post.
Hope it helps in any way.
EDIT 1
Here is the script I mentioned earlier.
import requests

username = "USERNAME"
password = "PASSWORD"
jenkins_host = "JENKINS_HOST"
jenkins_job = "JOBNAME"

request_url = "{0:s}/job/{1:s}/api/json".format(
    jenkins_host,
    jenkins_job,
)
job_data = requests.get(request_url, auth=(username, password)).json()

builds = []
for build in job_data.get('builds'):
    builds.append(build.get('number'))

for build in builds:
    build_url = "{0:s}/job/{1:s}/{2:d}/api/json".format(
        jenkins_host,
        jenkins_job,
        build,
    )
    build_data = requests.get(build_url, auth=(username, password)).json()
    build_name = build_data.get('fullDisplayName')
    build_number = build_data.get('number')
    build_status = build_data.get('result')
    build_duration = build_data.get('duration')
    for action in build_data.get('actions'):
        if action.get("_class") == "hudson.model.CauseAction":
            build_trigger = action.get('causes')

    print(build_name)
    print(build_status)
    print(build_duration)
    print(build_number)
    print(build_trigger)
Please note you might need to authenticate with an API token, depending on your security settings.

Jenkins pipeline cannot access parameters once installed in GUI

Jenkins ver. 2.121.1
pipeline {
    parameters {
        string(
            name: 'repo',
            defaultValue: "foo",
            description: "repo to build from")
    }
    agent any
    stages {
        stage('Checkout') {
            steps {
                echo params.repo
            }
        }
    }
}
Expected:
echo prints "foo", or whatever value was set at "Build with Parameters", every time.
Actual:
The first time the pipeline is run the default value is printed.
Once the parameters are "installed" and visible in the GUI, every run after that the echo line prints an empty string.
Does anyone know what is causing this interference with the parameters being accessible?

Jenkins continuous delivery pipeline skip stage based on input

A simplified pipeline will look something like:
1. build
2. unit test
3. deploy to dev
4. integration tests
5. deploy to prod
For step #5 I've set up a Jenkins pipeline input step. We won't be deploying to prod on every commit, so if we abort all those jobs we will end up with a big list of grey builds. Is it possible to have a skip option so the build can still be shown as green/blue?
There is a better solution I just found. You can access the result of the input step by using its return value. The user has to check the checkbox to run the optional stage; otherwise the steps of the stage are skipped. If you skip the whole stage, the stage will disappear, which "cleans up" the stage view history.
stage('do optional stuff?') {
    userInput = input(
        id: 'userInput', message: "Some important question?", parameters: [
            booleanParam(defaultValue: false, description: 'really?', name: 'myValue')
        ])
}
stage('optional: do magic') {
    if (userInput) {
        echo "do magic"
    } else {
        // do whatever you want when skipping this build
        currentBuild.result = "UNSTABLE"
    }
}
How about:
stage('Deploy') {
    when { branch 'master' }
    steps {
        sh '...'
    }
}
The stage will be skipped for commits on other branches and will be green.
Can't you do something like this? It will be blue/green whichever you choose from the input, and you can then run the deployment depending on it too:
def deployToProduction = true
try {
    input 'Deploy to Production'
} catch (e) {
    deployToProduction = false
}
if (deployToProduction) {
    println "Deploying to production"
}
Instead of using the pipeline-as-code feature of Jenkins 2, you can set up jobs with an upstream/downstream configuration:
Build -> Unit test -> Deploy to Dev -> Integration tests -> Promote to Prod -> Deploy to Prod
At present this gives more control over choosing which version of the pipeline you wish to promote to Prod.
For greater visibility you can configure a delivery pipeline view using the Delivery Pipeline Plugin.
