Is it possible to define a non-blocking input step in Jenkinsfile?

I have a Jenkins pipeline set up for Git branches, with an optional last step that deploys to stage:
stage('Stage') {
    if (gitBranch != "master") {
        timeout(time: 1, unit: 'DAYS') {
            input message: "Do you want to deploy ${shortCommit} from branch ${gitBranch} to STAGE?"
        }
    }
    node {
        stage('Deploy Stage') {
            echo("Deploying to STAGE ${gitCommit}")
            sh "NODE_ENV=stage yarn lerna-run --since ${sinceSha} deploy"
        }
    }
}
The problem is that deploying a branch to stage is optional, but Jenkins doesn't report a success status to GitHub until the input step is resolved.
Is there any syntax to mark the step as optional?

You can combine the timeout step with the input step, like we have it here:
import org.jenkinsci.plugins.workflow.steps.FlowInterruptedException
import org.jenkinsci.plugins.workflow.support.steps.input.Rejection

/**
 * Generates a pipeline {@code input} step that times out after a specified amount of time.
 *
 * The options for the timeout are supplied via {@code timeoutOptions}.
 * The options for the input dialog are supplied via {@code inputOptions}.
 *
 * The returned Map contains the following keys:
 *
 * - proceed: true if the Proceed button was clicked, false if manually aborted or timed out
 * - reason: 'user' if the user hit Proceed or Abort; 'timeout' if the input dialog timed out
 * - submitter: name of the user that submitted or canceled the dialog
 * - additional keys for every parameter submitted via {@code inputOptions.parameters}
 *
 * @param args Map containing inputOptions and timeoutOptions, both passed to the respective step
 * @return Map containing the keys proceed/reason/submitter described above, plus those for the parameters
 */
Map inputWithTimeout(Map args) {
    def returnData = [:]
    // see https://go.cloudbees.com/docs/support-kb-articles/CloudBees-Jenkins-Enterprise/Pipeline---How-to-add-an-input-step,-with-timeout,-that-continues-if-timeout-is-reached,-using-a-default-value.html
    try {
        timeout(args.timeoutOptions) {
            def inputOptions = args.inputOptions
            inputOptions.submitterParameter = "submitter"

            // As we ask for the submitter, we get a Map back instead of a String.
            // Besides the parameters supplied via args.inputOptions, this will include "submitter".
            def responseValues = input inputOptions
            echo "Response values: ${responseValues}"

            // BlueOcean currently drops the submitterParameter
            // https://issues.jenkins-ci.org/browse/JENKINS-41421
            if (responseValues instanceof String) {
                echo "Response is a String. BlueOcean? Mimicking the correct behavior."
                String choiceValue = responseValues
                String choiceKey = args.inputOptions.parameters.first().getName()
                responseValues = [(choiceKey): choiceValue, submitter: null]
            }
            echo "Submitted by ${responseValues.submitter}"

            returnData = [proceed: true, reason: 'user'] + responseValues
        }
    } catch (FlowInterruptedException err) { // thrown on timeout or manual abort
        // err.getCauses() returns [org.jenkinsci.plugins.workflow.support.steps.input.Rejection]
        Rejection rejection = err.getCauses().first()
        if ('SYSTEM' == rejection.getUser().toString()) { // user == SYSTEM means timeout
            returnData = [proceed: false, reason: 'timeout']
        } else { // explicitly aborted
            echo rejection.getShortDescription()
            returnData = [proceed: false, reason: 'user', submitter: rejection.getUser().toString()]
        }
    } catch (err) {
        // try to figure out what's wrong when the pipeline is aborted manually
        returnData = [proceed: false, reason: err.getMessage()]
    }
    returnData
}
In addition to your requirements, this also returns who submitted the dialog.
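For the pipeline from the question, wiring the helper in could look roughly like this (a sketch: gitBranch, shortCommit and sinceSha are the variables from the question; the DEPLOY parameter is added here only so the input step returns a Map, as the helper expects):

stage('Stage') {
    def doDeploy = true
    if (gitBranch != "master") {
        def response = inputWithTimeout(
            inputOptions: [
                message: "Do you want to deploy ${shortCommit} from branch ${gitBranch} to STAGE?",
                parameters: [booleanParam(name: 'DEPLOY', defaultValue: true, description: 'Deploy to stage?')]
            ],
            timeoutOptions: [time: 1, unit: 'DAYS']
        )
        // proceed is false on timeout or abort, so the build can still end green
        doDeploy = response.proceed && response.DEPLOY
    }
    if (doDeploy) {
        node {
            stage('Deploy Stage') {
                sh "NODE_ENV=stage yarn lerna-run --since ${sinceSha} deploy"
            }
        }
    }
}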

Related

Jenkinsfile syntax error: No such property

I'm trying to test a binary with the latest commit ID in Jenkins. The error happens at the send-Slack-message stage:
def kResPath = "/tmp/res.json" // global variable, where the JSON file is dumped to; declared at the very beginning of the Jenkinsfile

def check_result_and_notify(tier, result) {
    def kExpected = 3
    def kSuccessSlackColor = "#00CC00"
    message = "Test ${tier} result: ${result}\n"
    def test_result = readJSON file: kResPath
    // 0 is the index; "benchmarks" and "real_time" are the keys
    real_time = test_result["benchmarks"][0]["real_time"]
    if (real_time > kExpected) {
        message += String.format("real time = %f, expected time = %f", real_time, kExpected)
    }
    slackSend(color: ${kSuccessSlackColor}, message: message.trim(), channel: "test-result")
}
The JSON file looks like:
{
    "benchmarks": [
        {
            "real_time": 4
        },
        {
            "real_time": 5
        }
    ]
}
The error message I've received is hudson.remoting.ProxyException: groovy.lang.MissingPropertyException: No such property: kResPath for class: WorkflowScript
Can someone tell me what's wrong with my code?
Is there any way I could test it locally, so that I don't need to commit it every single time? I googled and found that it needs a server ID and password, which I don't think are accessible to me :(
Your kResPath variable is undefined in the scope of that function or method (unsure which based on context). You can pass it as an argument:
def check_result_and_notify(tier, result, kResPath) {
...
}
check_result_and_notify(myTier, myResult, kResPath)
and even specify a default if you want:
def check_result_and_notify(tier, result, kResPath = '/tmp/res.json')
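Another option (a sketch, not from the original answer) is to make the variable visible to methods instead of passing it around: a top-level def in a Jenkinsfile is local to the script body, so methods can't see it, while a @Field annotation (or dropping def so the variable lands in the script binding) makes it accessible:

import groovy.transform.Field

// Visible inside check_result_and_notify() and any other method of this script
@Field String kResPath = "/tmp/res.json"

def check_result_and_notify(tier, result) {
    def test_result = readJSON file: kResPath   // kResPath resolves now
    // ...
}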

get String value from Jenkins console output

In the Jenkins output, I had an assert error,
and I need to get the error String from the assert (or any other text). In my Jenkinsfile I'm using:
def matcher = manager.getLogMatcher('.*Delete organization Account failed: *')
but this generates an error.
So I just want to check that the log contains a specific string; if the text exists, mark the build as failed (currentBuild.result = "FAILED") and save the text to send it via Slack.
You can put the condition in the following way:
if (manager.logContains('.*Delete organization Account failed:*')) {
    error("Build failed because of Delete organization Account..")
}
This is how it worked for me:
import hudson.model.*
node {
    .....
    if (Slack.toBoolean()) {
        def matcher = manager.getLogMatcher(".*Error.*")
        if (matcher.matches()) {
            pbn = matcher.group(0)
            println pbn
            slack_message = "`BUILD ERROR`: ${pbn} "
            println slack_message
            matcher = null // fix NotSerializableException
            slackSend(channel: "#reports", message: slack_message, color: '#172530')
        }
    }
}
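If nulling out the matcher feels fragile, another approach (a sketch, assuming currentBuild.rawBuild.getLog() has been approved in the script security settings) is to keep the non-serializable Matcher entirely inside a @NonCPS method:

// The Matcher never crosses a CPS boundary, so there is no NotSerializableException to work around.
@NonCPS
String firstMatch(List<String> logLines, String pattern) {
    for (line in logLines) {
        def m = (line =~ pattern)
        if (m.find()) {
            return m.group(0)
        }
    }
    return null
}

// Usage: scan the last 500 log lines and notify Slack on a hit.
def hit = firstMatch(currentBuild.rawBuild.getLog(500), '.*Error.*')
if (hit) {
    currentBuild.result = 'FAILURE'
    slackSend(channel: "#reports", message: "`BUILD ERROR`: ${hit}", color: '#172530')
}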

Jenkins Groovy scripting for live fetching of Docker image tags + authentication

I have a Groovy script for live fetching of Docker image tags.
I want to add authentication against a private repository, but I am not familiar with Groovy. Can anyone help? Thanks.
import groovy.json.JsonSlurper

// Set the URL we want to read from, it is MySQL from official Library for this example, limited to 20 results only.
docker_image_tags_url = "https://registry.adx.abc/v2/mysql/tags/list"

try {
    // Set requirements for the HTTP GET request, you can add Content-Type headers and so on...
    def http_client = new URL(docker_image_tags_url).openConnection() as HttpURLConnection
    http_client.setRequestMethod('GET')

    // Run the HTTP request
    http_client.connect()

    // Prepare a variable where we save parsed JSON as a HashMap, it's good for our use case, as we just need the 'name' of each tag.
    def dockerhub_response = [:]

    // Check if we got HTTP 200, otherwise exit
    if (http_client.responseCode == 200) {
        dockerhub_response = new JsonSlurper().parseText(http_client.inputStream.getText('UTF-8'))
    } else {
        println("HTTP response error")
        System.exit(0)
    }

    // Prepare a List to collect the tag names into
    def image_tag_list = []

    // Iterate the HashMap of all Tags and grab only their "names" into our List
    dockerhub_response.results.each { tag_metadata ->
        image_tag_list.add(tag_metadata.name)
    }

    // The returned value MUST be a Groovy type of List or a related type (inherited from List)
    // It is necessary for the Active Choice plugin to display results in a combo-box
    return image_tag_list.sort()
} catch (Exception e) {
    // handle exceptions like timeout, connection errors, etc.
    println(e)
}
The problem has been resolved, thank you everyone for your help. Here is the working script with basic authentication added:
// Import the JsonSlurper class to parse the registry API response
import groovy.json.JsonSlurper

// Set the URL we want to read from; this example uses the form-be repository on the private registry.
docker_image_tags_url = "https://registry.adx.vn/v2/form-be/tags/list"

try {
    // Set requirements for the HTTP GET request, you can add Content-Type headers and so on...
    def http_client = new URL(docker_image_tags_url).openConnection() as HttpURLConnection
    http_client.setRequestMethod('GET')

    // Add basic authentication for the private registry
    String userCredentials = "your_user:your_passwd"
    String basicAuth = "Basic " + new String(Base64.getEncoder().encode(userCredentials.getBytes()))
    http_client.setRequestProperty("Authorization", basicAuth)

    // Run the HTTP request
    http_client.connect()

    // Prepare a variable where we save parsed JSON as a HashMap, it's good for our use case, as we just need the 'name' of each tag.
    def dockerhub_response = [:]

    // Check if we got HTTP 200, otherwise exit
    if (http_client.responseCode == 200) {
        dockerhub_response = new JsonSlurper().parseText(http_client.inputStream.getText('UTF-8'))
    } else {
        println("HTTP response error")
        System.exit(0)
    }

    // Prepare a List to collect the tag names into
    def image_tag_list = []

    // Iterate the list of all tags and collect each tag name into our List
    dockerhub_response.tags.each { tag_metadata ->
        image_tag_list.add(tag_metadata)
    }

    // The returned value MUST be a Groovy type of List or a related type (inherited from List)
    // It is necessary for the Active Choice plugin to display results in a combo-box
    return image_tag_list.sort()
} catch (Exception e) {
    // handle exceptions like timeout, connection errors, etc.
    println(e)
}
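Hardcoding the username and password in the script is fragile. Since Active Choices scripts run on the controller, one alternative (a sketch, assuming the Credentials plugin and a username/password credential with the hypothetical ID 'registry-creds') is to look the secret up and build the header from it; these lines would replace the hardcoded userCredentials/basicAuth lines above:

import com.cloudbees.plugins.credentials.CredentialsProvider
import com.cloudbees.plugins.credentials.common.StandardUsernamePasswordCredentials
import jenkins.model.Jenkins

// Look the credential up by its ID instead of embedding the secret in the script.
def creds = CredentialsProvider.lookupCredentials(
        StandardUsernamePasswordCredentials.class, Jenkins.get(), null, null
    ).find { it.id == 'registry-creds' }

String userCredentials = "${creds.username}:${creds.password.plainText}"
String basicAuth = "Basic " + Base64.getEncoder().encodeToString(userCredentials.getBytes('UTF-8'))
http_client.setRequestProperty("Authorization", basicAuth)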

How to show timestamps in short format in Jenkins Blue Ocean?

Using Timestamper plugin 1.11.2 with globally enabled timestamps, using the default format, I get the following console output:
00:00:41.097 Some Message
In Blue Ocean the output shows like:
[2020-04-01T00:00:41.097Z] Some Message
How can I make it so that Blue Ocean uses the short timestamp format? The long format is somewhat unreadable and clutters the details view of the steps.
I've looked at the Pipeline Options too, but there is only the timestamps option which doesn't have a parameter to specify the format.
Note: This question isn't a dupe; the other question asks about time-zone differences only.
Edit:
⚠️ Unfortunately this workaround doesn't work in the context of node, see JENKINS-59575. Looks like I have to finally get my hands dirty with plugin development, to do stuff like that in a supported way.
Anyway, I won't delete this answer, as the code may still be useful in other scenarios.
Original answer:
As a workaround, I have created a custom ConsoleLogFilter. It can be applied as a pipeline option, a stage option or at the steps level. If you have the timestamp plugin installed, you should disable the global timestamp option to prevent duplicate timestamps.
Typically you would define the low-level code in a shared library. Here is a sample that can be copy-pasted right into the pipeline script editor (you might have to disable Groovy sandbox):
import hudson.console.LineTransformationOutputStream
import hudson.console.ConsoleLogFilter
import java.nio.charset.Charset
import java.nio.charset.StandardCharsets

pipeline {
    agent any
    /*
    options {
        // Enable timestamps for the whole pipeline, using default format
        //withContext( myTimestamps() )

        // Enable timestamps for the whole pipeline, using custom format
        //withContext( myTimestamps( dateFormat: 'HH:mm:ss', prefix: '', suffix: ' - ' ) )
    }
    */
    stages {
        stage('A') {
            options {
                // Enable timestamps for this stage only
                withContext( myTimestamps() )
            }
            steps {
                echo 'Hello World'
            }
        }
        stage('B') {
            steps {
                echo 'Hello World'
                // Enable timestamps for some steps only
                withMyTimestamps( dateFormat: 'HH:mm:ss') {
                    echo 'Hello World'
                }
            }
        }
    }
}

//----- Code below should be moved into a shared library -----

// For use as option at pipeline or stage level, e. g.: withContext( myTimestamps() )
def myTimestamps( Map args = [:] ) {
    return new MyTimestampedLogFilter( args )
}

// For use as block wrapper at steps level
void withMyTimestamps( Map args = [:], Closure block ) {
    withContext( new MyTimestampedLogFilter( args ), block )
}

class MyTimestampedLogFilter extends ConsoleLogFilter {
    String dateFormat
    String prefix
    String suffix

    MyTimestampedLogFilter( Map args = [:] ) {
        this.dateFormat = args.dateFormat ?: 'YY-MM-dd HH:mm:ss'
        this.prefix = args.prefix ?: '['
        this.suffix = args.suffix ?: '] '
    }

    @NonCPS
    OutputStream decorateLogger( AbstractBuild build, OutputStream logger )
        throws IOException, InterruptedException {
        return new MyTimestampedOutputStream( logger, StandardCharsets.UTF_8, this.dateFormat, this.prefix, this.suffix )
    }
}

class MyTimestampedOutputStream extends LineTransformationOutputStream {
    OutputStream logger
    Charset charset
    String dateFormat
    String prefix
    String suffix

    MyTimestampedOutputStream( OutputStream logger, Charset charset, String dateFormat, String prefix, String suffix ) {
        this.logger = logger
        this.charset = charset
        this.dateFormat = dateFormat
        this.prefix = prefix
        this.suffix = suffix
    }

    @NonCPS
    void close() throws IOException {
        super.close();
        logger.close();
    }

    @NonCPS
    void eol( byte[] bytes, int len ) throws IOException {
        def lineIn = charset.decode( java.nio.ByteBuffer.wrap( bytes, 0, len ) ).toString()
        def dateFormatted = new Date().format( this.dateFormat )
        def lineOut = "${this.prefix}${dateFormatted}${this.suffix}${lineIn}\n"
        logger.write( lineOut.getBytes( charset ) )
    }
}
Example output for stage "B":
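(The screenshot isn't reproduced here; based on the wrapper configuration above, the console log for stage "B" would look roughly like this, with the actual time varying:)

Hello World
[14:32:05] Hello World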
Credits:
I got the idea from this answer.

How to list culprit users when a Jenkins build fails (sending via Slack)

How can I list all culprit users related to the broken builds between the last successful build and the current one?
Besides that, how can I compile all this information and then send it via Slack?
The script below shows how to configure the post-processing step to send culprit users from the history of broken builds.
#!groovy
pipeline {
    agent { label 'pipeline-maven' }

    stages {
        // ... your build stages here ...
    }

    post {
        failure {
            script {
                def userDetailsService = load("get-users-details.groovy")
                env.slack_msg = userDetailsService.getFailedBuildHistory()
            }
            slackSend baseUrl: 'https://xxx.slack.com/services/hooks/jenkins-ci/',
                      channel: '#xxxx',
                      color: 'bad',
                      token: 'aIPJis6V4P9VOpTFhUtCQRRL',
                      message: "Broken build ${currentBuild.fullDisplayName}\n${slack_msg}"
        }
    }
}
The script below (get-users-details.groovy) is responsible for enumerating all culprits based on the history of previously broken builds.
import jenkins.model.Jenkins

String getFailedBuildHistory() {
    def message = ""
    // Iterate over previous broken builds to find culprits
    def fullName = "pipeline-test"
    def jobData = Jenkins.instance.getItemByFullName(fullName)
    def lastStableBuild = jobData.getLastStableBuild()
    def lastBuildNumber = jobData.getLastBuild().getNumber() - 1 // We subtract the current executing build from the list

    if (lastStableBuild != null && lastStableBuild.getNumber() != lastBuildNumber) {
        def culpritsSet = new HashSet()
        message += "Responsibles:\n"

        // From the oldest to the newest broken build since the last successful build, find the culprits to notify them.
        // The list order represents who is most responsible for fixing the build.
        for (int buildId = lastStableBuild.getNumber() + 1; buildId <= lastBuildNumber; buildId++) {
            def lastBuildDetails = Jenkins.getInstance().getItemByFullName(fullName).getBuildByNumber(buildId)
            if (lastBuildDetails != null) {
                lastBuildDetails.getCulpritIds().each({ culprit ->
                    if (!culpritsSet.contains(culprit)) {
                        message += " ${culprit} (build ${lastBuildDetails.getNumber()})\n"
                        culpritsSet.add(culprit)
                    }
                })
            }
        }
    }

    // Complement the message with information from the current executing build
    if (currentBuild.getCurrentResult() != 'SUCCESS') {
        def culprits = currentBuild.changeSets.collectMany({ it.toList().collect({ it.author }) }).unique()
        if (culprits.isEmpty()) {
            // If there is no change log, use the build executor user
            def name = currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause').userName
            message += " ${name} (current build ${currentBuild.getId()})"
        } else {
            // If there is a change log, use the committer user
            culprits.each({ culprit ->
                message += " ${culprit} (current build ${currentBuild.getId()})"
            })
        }
    }
    return message
}

return [
    getFailedBuildHistory: this.&getFailedBuildHistory
]
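One thing to watch: the job name is hardcoded as "pipeline-test" above. A small tweak (a sketch, assuming the script is loaded into a running pipeline so that env is available) derives it from the build instead:

// Hypothetical tweak: resolve the job from the running build rather than hardcoding its name.
def fullName = env.JOB_NAME   // e.g. "my-folder/pipeline-test" for jobs inside folders or multibranch projects
def jobData = Jenkins.get().getItemByFullName(fullName)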
