How can I get the latest Jenkins version via API call? - jenkins

I'm trying to write a script that creates a Docker image using a Jenkins image as base, that is, the first line of my Dockerfile is...
FROM jenkins/jenkins:2.249.3
However, I want to be smart and write a script that fetches the latest stable Jenkins version and seds it into my Dockerfile, like this:
Dockerfile:
FROM jenkins/jenkins:JENKINS_LATEST_STABLE_VER
$ export JENKINS_LATEST_STABLE_VER=`some_api_call`
$ sed -i "s/JENKINS_LATEST_STABLE_VER/$JENKINS_LATEST_STABLE_VER/g" Dockerfile
$ docker build -t docker_url/jenkins:$JENKINS_LATEST_STABLE_VER .
$ docker push docker_url/jenkins:$JENKINS_LATEST_STABLE_VER
I'm aware of jenkins/jenkins:lts, but I NEED the actual version number, e.g., 2.249.3. What is "some_api_call"?

I saw this post and tried the script below. However, it returns Alpine images, but I need CentOS ones. For the life of me I can't figure out the URL for the CentOS images. I essentially want to get this list.
// Import the JsonSlurper class to parse Dockerhub API response
import groovy.json.JsonSlurper
// Set the URL we want to read from, it is MySQL from official Library for this example, limited to 20 results only.
// docker_image_tags_url = "https://hub.docker.com/v2/repositories/library/mysql/tags/?page_size=20"
docker_image_tags_url = "https://hub.docker.com/v2/repositories/library/jenkins/tags?page_size=30"
try {
    // Set requirements for the HTTP GET request, you can add Content-Type headers and so on...
    def http_client = new URL(docker_image_tags_url).openConnection() as HttpURLConnection
    http_client.setRequestMethod('GET')
    // Run the HTTP request
    http_client.connect()
    // Prepare a variable where we save parsed JSON as a HashMap, it's good for our use case, as we just need the 'name' of each tag.
    def dockerhub_response = [:]
    // Check if we got HTTP 200, otherwise exit
    if (http_client.responseCode == 200) {
        dockerhub_response = new JsonSlurper().parseText(http_client.inputStream.getText('UTF-8'))
    } else {
        println("HTTP response error")
        System.exit(0)
    }
    // Prepare a List to collect the tag names into
    def image_tag_list = []
    // Iterate the HashMap of all Tags and grab only their "names" into our List
    dockerhub_response.results.each { tag_metadata ->
        image_tag_list.add(tag_metadata.name)
    }
    // The returned value MUST be a Groovy type of List or a related type (inherited from List)
    // It is necessary for the Active Choice plugin to display results in a combo-box
    return image_tag_list.sort()
} catch (Exception e) {
    // Handle exceptions like timeouts, connection errors, etc.
    println(e)
}
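One possible "some_api_call", sticking with the Docker Hub tags API the script above already uses: query the jenkins/jenkins repository (rather than the older library/jenkins one) and filter the tag names for the CentOS variants. A minimal Groovy sketch, assuming the CentOS images are tagged like 2.249.3-centos7 and that page_size=100 covers the most recent releases (check the repository's tag list for the exact suffix your base image uses):
import groovy.json.JsonSlurper

// Tag listing for the jenkins/jenkins repository on Docker Hub
def url = "https://hub.docker.com/v2/repositories/jenkins/jenkins/tags/?page_size=100"
def response = new JsonSlurper().parseText(new URL(url).getText('UTF-8'))

// Keep only plain "version-centos7" tags, e.g. 2.249.3-centos7 (the suffix is an assumption)
def centosTags = response.results*.name.findAll { it ==~ /\d+\.\d+(\.\d+)?-centos7/ }

// Sort by numeric version (zero-padded so 2.263.1 ranks above 2.9) and take the newest;
// .last() throws if nothing matched, so check centosTags before relying on this in a real script
def latest = centosTags.sort { tag ->
    tag.tokenize('-')[0].tokenize('.').collect { it.padLeft(4, '0') }.join('.')
}.last()

println latest                      // e.g. "2.249.3-centos7"
println latest.tokenize('-')[0]     // just the version number, for the sed into the Dockerfile
The printed version number can then be used for the JENKINS_LATEST_STABLE_VER substitution shown in the question.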

Related

How can I stop a remote build in Jenkins if a parameter in the URL is not valid?

I run remote builds in Jenkins as follows:
JENKINS_URL/job/JOBNAME/build?token=TOKEN
If I add extra parameters on the query string as follows:
JENKINS_URL/job/JOBNAME/build?token=TOKEN&User=test#test.com&Key=Wxfder$324
As the first step in the build I want to extract these values, i.e. token, User, and Key, do some validation, and if they are not valid, stop the job.
Is there a Build Step I can use? How can I do this?
One way to do this is by appending the data you need to the build cause. Refer to the following example.
The URL
Note the content assigned to the cause= parameter
http://localhost:8080/job/Scripted/build?token=12345678&cause=User:test#test.com,Key:Wxfder$324
The Pipeline
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    def cause = currentBuild.getBuildCauses()[0]
                    def note = cause.getString("note")
                    echo "${note}"
                }
            }
        }
    }
}
The above will give you the following output.
[Pipeline] echo
User:test#test.com,Key:Wxfder$324
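To do the validation and stop the job, you can split that note and call error(). A minimal sketch, assuming the User:...,Key:... format from the URL above (the checks themselves are placeholders):
script {
    def cause = currentBuild.getBuildCauses()[0]
    def note = cause.getString("note")   // e.g. "User:test#test.com,Key:Wxfder$324"

    // Parse "User:...,Key:..." into a map (format assumed from the example URL)
    def values = note.tokenize(',').collectEntries { entry ->
        def (k, v) = entry.tokenize(':')
        [(k): v]
    }

    // Replace these placeholder checks with your real validation
    if (!values['User'] || !values['Key']) {
        error("Invalid User/Key in build cause - stopping the job")
    }
}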

How to get the current Jenkins pipeline StepContext

I have a step in a pipeline that pulls objects from the context and uses them. However, I need to access those objects outside of that step to feed them into different steps, and the second step doesn't expose them.
stage() {
    steps {
        script {
            def status = waitForQualityGate()
            // Use the taskId
        }
    }
}
The waitForQualityGate() call only returns a boolean, so I can't access it there.
I could instead manually initialize the step, like so:
script {
    def qualityGate = new WaitForQualityGateStep()
    def taskId = qualityGate.getTaskId()
}
but the taskId is null. If I try to run the start methods manually on the step:
script {
    def qualityGate = new WaitForQualityGateStep()
    qualityGate.start().start()
    def taskId = qualityGate.getTaskId()
}
It fails with the message:
java.lang.IllegalStateException: you must either pass in a StepContext to the StepExecution constructor, or have the StepExecution be created automatically
The WaitForQualityGateStep has the info I need, but I can't initialize it without having a StepContext (which is an Abstract class). How can I get one from the pipeline?
You can define the variable before the pipeline and in the step just set its value. This way the variable is visible across the pipeline.
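A minimal sketch of that approach, assuming the SonarQube plugin's waitForQualityGate() whose return value exposes a status field:
// Declared outside the pipeline block, so it is visible to every stage
def qualityGate

pipeline {
    agent any
    stages {
        stage('Quality Gate') {
            steps {
                script {
                    qualityGate = waitForQualityGate()   // set in one stage...
                }
            }
        }
        stage('Report') {
            steps {
                echo "Quality gate status: ${qualityGate.status}"   // ...read in another
            }
        }
    }
}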
I still have no idea how to manually get a step context to manually execute a step, but in case anyone else finds this while trying to get information out of the Sonar plugin, this is how I got the task ID that I needed.
def output = sh(script: "mvn sonar:sonar", returnStdout: true)
echo output // The capture prevents printing to console
def taskUri = output.find(~'/api/ce/task\\?id=[\\w-]*')
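If what you need from the plugin is the analysis result rather than just the ID, that taskUri can be polled against SonarQube's Compute Engine API (api/ce/task). A rough sketch, assuming env.SONAR_HOST_URL and env.SONAR_TOKEN are set and that the response carries the usual task.status field:
// Poll the SonarQube background task until it leaves the queue
def taskStatus = ''
timeout(time: 5, unit: 'MINUTES') {
    waitUntil {
        def body = sh(
            script: "curl -s -u ${env.SONAR_TOKEN}: ${env.SONAR_HOST_URL}${taskUri}",
            returnStdout: true
        ).trim()
        taskStatus = new groovy.json.JsonSlurper().parseText(body).task.status
        return !(taskStatus in ['PENDING', 'IN_PROGRESS'])
    }
}
echo "SonarQube task finished with status: ${taskStatus}"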

Jenkins: How To Run the Same Job As Many Times As There Are Parameters Defined?

My requirement: I have written a bash script which monitors telnet connectivity for several IPs and ports. I use a CSV containing the input data; the script reads each row in the CSV and checks whether the IP can be reached via telnet.
However, I have a requirement to Jenkinize it, and I am wondering if there is a way I can define my parameter in the Jenkins job with different combinations of values,
say for example:
PARAM_KEY : VAL_1
PARAM_KEY : VAL_2
PARAM_KEY : VAL_3
and so on, so that I can use PARAM_KEY in the script and the Jenkins job gets executed once for each parameter value defined, i.e. 3 times in the above case.
Can anyone guide me on this requirement?
If you mean to run one job and iterate over the IPs inside it, you can parse the CSV file inside a pipeline or pass it as a parameter (and then split it):
// example of pipeline code
node('slave80') {
    csvString = "1.1.1.1,2.2.2.2,3.3.3.3" // can be sent as a parameter
    def ips = csvString.split(',')
    ips.each { ip ->
        sh """
        ./bash_script ${ip}
        """
    }
}
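If the CSV lives in the workspace instead of being passed as a string, a variant of the same idea (the ip_list.csv file name and the one "ip,port" pair per line layout are assumptions) could read it with readFile:
node('slave80') {
    // Assumed layout: one "ip,port" pair per line, e.g. "1.1.1.1,23"
    def rows = readFile('ip_list.csv').readLines().findAll { it.trim() }
    rows.each { row ->
        def (ip, port) = row.tokenize(',')
        sh "./bash_script ${ip} ${port}"
    }
}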

Jenkins pipeline capture output of jenkins steps

How can I archive (as a file) the console output of a Jenkins pipeline step such as docker.build(someTag)?
Background:
I use jenkins pipelines to build a whole bunch of microservices.
I want to extract all the relevant information from the Jenkins console into archived files, so that devs don't have to look at the console, which confuses them. This works fine for sh steps, where I can redirect stdout and stderr, but how can I do something similar for a Jenkins pipeline step?
As a workaround we use the following LogRecorder class:
class LogRecorder implements Serializable {
    def logStart
    def logEnd
    def currentBuild
    def steps

    LogRecorder(currentBuild) {
        this.currentBuild = currentBuild
    }

    void start() {
        logStart = currentBuild.getRawBuild().getLog(2000)
    }

    String stop() {
        logEnd = currentBuild.getRawBuild().getLog(2000)
        getLog()
    }

    String getLog() {
        def logDiff = logEnd - logStart
        return logDiff.join('\n')
    }
}
Depending on your needs, you may want to adjust the number of log lines in the getLog(2000) calls.
Possible Usage:
LogRecorder logRecorder = new LogRecorder(currentBuild)
logRecorder.start()
docker.build(someTag)
testResult.stdOut = logRecorder.stop()
Please be aware that, most probably due to caching issues, the very last line(s) of the log are sometimes missing. A short sleep might help here, but so far that has not been required.
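Since the original question asks for the output as an archived file, a small follow-on sketch (the docker-build.log file name is just an arbitrary choice) could write the captured text to the workspace and archive it:
// Capture the docker.build() output and keep it as a build artifact
LogRecorder logRecorder = new LogRecorder(currentBuild)
logRecorder.start()
docker.build(someTag)
def buildLog = logRecorder.stop()

writeFile file: 'docker-build.log', text: buildLog
archiveArtifacts artifacts: 'docker-build.log'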
Here's what I'm using to capture the sha256 of the built docker image
docker.build(someTag)
def dockerSha256 = sh(returnStdout: true, script: "docker image inspect $someTag | jq .[0].Id").trim()
I'm using 'jq' to parse the JSON response.
Or the Groovy way:
import groovy.json.JsonSlurper
def json = sh(returnStdout: true, script: "docker image inspect $someTag").trim()
def obj = new JsonSlurper().parseText(json)
println "raw json: " + obj
println "groovy docker sha256: " + obj[0].Id

include downstream/child Jenkins job's console output into triggering job's console output

2 Jenkins jobs: A and B.
A triggers B as a blocking build step ("Block until the triggered projects finish their builds"). Is there a way to include B's console output in A's console output?
Motivation: when using Jenkins in the browser, A's console output contains a link to B's console output, which is fine. But when using Jenkins via command-line tools (jenkins-cli) there's no quick and easy way to see B's console output.
Any ideas?
Interesting. I'd try something like this.
From http://jenkinsurl/job/jobname/lastBuild/api/
Accessing Progressive Console Output
You can retrieve in-progress console output by making repeated GET requests with a start parameter. You basically send a GET request to .../logText/progressiveText (or .../logText/progressiveHtml if you want HTML that can be put into a <pre> tag). The start parameter controls the byte offset of where you start.
The response will contain a chunk of the console output, as well as an X-Text-Size header that represents the byte offset (of the raw log file). This is the number you want to use as the start parameter for the next call.
If the response also contains the X-More-Data: true header, the server is indicating that the build is in progress, and you need to repeat the request after some delay. The Jenkins UI waits 5 seconds before making the next call. When this header is not present, you know that you've retrieved all the data and the build is complete.
So you can trigger a downstream job, but don't "block until downstream completes". Instead, add an extra step (execute shell, probably) and write a script that reads the console output of the other job as indicated above and displays it in the console output of the current job. You will have to detect when the child job has finished by checking whether the X-More-Data: true header is still present, as detailed above.
I know this is an old question, but I had to do this myself recently, and I figure it may help someone else looking to do the same. Here's a Groovy script that reads a given job's progressiveText URL. The code is written so that it should be plug and play; just make sure to set jenkinsBase and jobName first. The approach is no different from what has already been mentioned.
Here's a short set of instructions on how to use this: (1) Configure the downstream job so that anonymous users have Read and ViewStatus rights. (2) In the upstream job, create a "Trigger/call builds on other projects" step that will call the downstream job. (3) Do not check "Block until the triggered projects finish their builds". (4) Right after that step, create an "Execute Groovy script" step and paste the following code:
def jenkinsBase = // Set to Jenkins base URL here
def jobName = // Set to Jenkins job name
def jobNumber = 'lastBuild' // Tail the last build
def address = null
def response = null
def start = 0 // Start at offset 0
def cont = true // This flag tracks the X-More-Data header value
try {
    while (cont) { // Loop while X-More-Data is "true"
        address = "${jenkinsBase}/job/${jobName}/${jobNumber}/logText/progressiveText?start=${start}"
        def urlInfo = address.toURL()
        response = urlInfo.openConnection()
        if (response.getResponseCode() != 200) {
            // Throw an exception to get out of the loop if the response is anything but 200
            throw new Exception("Unable to connect to " + address)
        }
        if (start != response.getHeaderField('X-Text-Size')) {
            // Print new content only if the starting offset differs from the X-Text-Size header
            response.getInputStream().getText().eachLine { line ->
                println(line)
            }
        }
        start = response.getHeaderField('X-Text-Size') // New start offset for the next request
        cont = (response.getHeaderField('X-More-Data') == 'true') // Header is absent once the build completes
        sleep(3000) // Wait for 3 seconds before polling again
    }
}
catch (Exception ex) {
    println(ex.getMessage())
}
This script can be further improved by programmatically getting the downstream job number.
There is also a Python version of this approach here.
