Write key/value data through Jenkins API

I already use the Jenkins API for some tasks in my build pipeline. Now there is a task where I want to persist some simple dynamic data, say "50.24", for each build, and then be able to retrieve this data in a different job.
More concretely, I am looking for something on these lines
POST to http://localhost:8080/job/myjob/api/json/store
{"code-coverage":"50.24"}
Then in a different job
GET
http://localhost:8080/job/myjob/api/json?code-coverage
One idea is to use archiveArtifacts to save the value into a file and then read it back via the API. But I am wondering if there is a plugin or a simpler way to write some data for the job.

If you need to send a variable from one build to another:
The parameterized build is the easiest way to do this:
https://wiki.jenkins.io/display/JENKINS/Parameterized+Build
the URL would look like:
http://server/job/myjob/buildWithParameters?PARAMETER=Value
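If you are in a pipeline, the same hand-off can be written with the build step. A minimal sketch, assuming the downstream job defines a PARAMETER string parameter (the job and parameter names are illustrative):
// Upstream pipeline: trigger the downstream job with a computed value
build(
    job: 'myjob',
    parameters: [string(name: 'PARAMETER', value: 'Value')]
)
The downstream build then reads the value as params.PARAMETER.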
If you need to share complex data, you can save some files in your workspace and use them (send the absolute path) from another build.
If you need to re-use a simple variable computed during your build
I would go for an environment variable, updated during your flow:
Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    environment {
        DISABLE_AUTH = 'true'
        DB_ENGINE = 'sqlite'
    }
    stages {
        stage('Build') {
            steps {
                sh 'printenv'
            }
        }
    }
}
All the details there:
https://jenkins.io/doc/pipeline/tour/environment/
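Note that the environment block above sets fixed values. If the variable is computed during the flow, you can assign it from a script block instead; a minimal sketch of two stages you could drop into the pipeline above (the echo command stands in for whatever actually produces your value):
stage('Compute') {
    steps {
        script {
            // Assumption: this shell command stands in for whatever produces your value
            env.CODE_COVERAGE = sh(script: 'echo 50.24', returnStdout: true).trim()
        }
    }
}
stage('Use') {
    steps {
        echo "Coverage was ${env.CODE_COVERAGE}"
    }
}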
If you need to re-use complex data between two builds
There are two cases here, depending on whether your builds run in the same workspace or not.
In the same workspace, it's totally fine to write your data to a text file that is re-used later by another job.
The archiveArtifacts step is convenient if your use case is about extracting test results from logs and re-using them later. Otherwise you will have to write the process yourself.
If your second job uses another workspace, you will need to provide the absolute path to your child job so that it can copy and process the file.
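As a rough sketch of the file-based hand-off between two jobs, assuming the Copy Artifact plugin is installed (job and file names are illustrative):
// In the producing job: persist the value as a build artifact
writeFile(file: 'coverage.txt', text: '50.24')
archiveArtifacts(artifacts: 'coverage.txt')

// In the consuming job: fetch the artifact and read it back
copyArtifacts(projectName: 'myjob', selector: lastSuccessful(), filter: 'coverage.txt')
def coverage = readFile('coverage.txt').trim()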

Related

Jenkins: Is there a way I can load a file as a Jenkins build parameter?

I currently have a git repo which has a text file. I want to load its contents as a build parameter for a Jenkins job.
One way would be to manually copy the contents of this file into a Jenkins multi-line string parameter, but since the content is in git already, I want to keep them coupled.
Not sure if this is even possible using Jenkins?
I am using Jenkins Job DSL to generate the job.
EDIT: You can find several different ways of achieving this in the following answer: Jenkins dynamic declarative pipeline parameters.
I think you can achieve it the following way (scripted pipeline).
node {
    stage("read file") {
        sh('echo -n "some text" > afile.txt')
        def fileContent = readFile('afile.txt')
        properties([
            parameters([
                string(name: 'FILE_CONTENT', defaultValue: fileContent)
            ])
        ])
    }
    stage("Display properties") {
        echo("${params.FILE_CONTENT}")
    }
}
The first time you execute it there will be no parameter choice. The second time, you'll have the option to build with parameters, and the content will be the content of your file.
The bad thing with this approach is that it's always one execution behind: when you start the build on a commit where you changed the content of your file, it will prefill the parameter with the content of the file as of the last execution of the build.
The only way I know around this is to split your pipeline into two pipelines. The first one reads the content of the file and then triggers the second one, passing the file content as a build parameter via the build step.
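A rough sketch of that split; the second job's name and the parameter name are illustrative:
// First pipeline: read the file and hand its content to the second job
node {
    checkout scm
    def fileContent = readFile('afile.txt')
    build(
        job: 'second-job',
        parameters: [string(name: 'FILE_CONTENT', value: fileContent)]
    )
}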
If you find a better way let us know.
Why don't you have Jenkins pull the repo as part of the job, and then parse the contents of the parameters (in, say, JSON, from a file within the repo) and continue executing with those parameters?

Trigger Multibranch Job from another

I have a job in Jenkins and I need to trigger another one when it ends (if it ends successfully).
The second job is a multibranch job, so I want to know if there's any way, when triggering it, to pass the branch I want. For example, if I start the first job on the develop branch, I need it to trigger the second one for the develop branch as well.
Is there any way to achieve this?
Just think of the multibranch job as a folder containing the real jobs, named after the available branches.
Using Pipeline Job
When using the pipeline build step you'll have to use something like:
build(job: 'JOB_NAME/BRANCH_NAME'). Of course you may use a variable to specify the branch name.
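For example, from a pipeline that itself runs in a multibranch job, a minimal sketch (BRANCH_NAME is set automatically in multibranch builds; the downstream job name is illustrative):
// Trigger the branch of the downstream multibranch job that matches ours
build(job: "second-multibranch-job/${env.BRANCH_NAME}")
Note that branch names containing slashes have to be URL-encoded in the job path (e.g. feature%2Fmy-branch).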
Using Freestyle Job
When triggering from a Freestyle job you most probably have to
Use the parameterized trigger plugin as the plain old downstream build plugin still has issues triggering pipeline jobs (at least the version we're using)
As job name use the same pattern as described above: JOB_NAME/BRANCH_NAME
It should be possible to use a job parameter to specify the branch name here, though I didn't give it a try.
Yes, you can call a downstream job by adding the post-build step "Trigger/Call builds on other projects" (you may need to install the Parameterized Trigger Plugin). In its Parameters section you define variables for the downstream job, associated with variables from the current job.
The parameters, e.g. multibranch_PARAM1 and multibranch_PARAM2, must also be configured in the downstream job.
Sometimes you want to call one or more subordinate multibranch jobs and have them build all of their branches, not just one. A script can retrieve the branch names and build them.
Because the script calls the Jenkins API, it should be in a shared library to avoid sandbox restrictions. The script should clear non-serializable references before calling the build step.
Shared library script jenkins-lib/vars/mbhelper.groovy:
def callMultibranchJob(String name) {
    def item = jenkins.model.Jenkins.get().getItemByFullName(name)
    def jobNames = item.allJobs.collect { it.fullName }
    item = null // CPS -- remove reference to non-serializable object
    for (jobName in jobNames) {
        build job: jobName
    }
}
Pipeline:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    library 'jenkins-lib'
                    mbhelper.callMultibranchJob 'multibranch-job-one'
                    mbhelper.callMultibranchJob 'multibranch-job-two'
                }
            }
        }
    }
}

Jenkins - Display text from a file in a parametrized build depending on previous choice

Is there any way to display informative text from a file located in the workspace, on a parameterized build step, depending on a previous condition (like using the Active Choices plugin)?
activeChoiceReactiveParam('branch') {
    description('Select the branch you are going to use')
    choiceType('SINGLE_SELECT')
    groovyScript {
        script('["develop", "master"]')
        fallbackScript('"Error. No branch to select."')
    }
    filterable(true)
}
You can pass text and parameters from one job to another using a properties file and inject it as environment variables into your desired job. The EnvInject plugin lets you inject that properties file in the desired job.
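For the pipeline equivalent of that hand-off, a rough sketch using readProperties from the Pipeline Utility Steps plugin (file and key names are illustrative):
// Producing side: write the values to a properties file and archive it
writeFile(file: 'build.properties', text: 'BRANCH_INFO=develop')
archiveArtifacts(artifacts: 'build.properties')

// Consuming side: load the file and use the values
def props = readProperties(file: 'build.properties')
echo "Branch info: ${props['BRANCH_INFO']}"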

Jenkins Job DSL Plugin: How to Modify Parameters on other jobs

I want to create a job in Jenkins which modifies an existing parameter on another job.
I'm using the Job DSL Plugin. The code I'm using is:
job('jobname') {
    using('jobname')
    parameters {
        choiceParam('PARAMETER1', ['newValue1', 'newValue2'], '')
    }
}
However, this only adds another parameter with the same name in the other job.
As an alternative, I'm trying to delete all parameters and start from scratch, but I haven't found a way to do that using Job DSL (not even with the Configure block).
Another alternative would be to define the other job completely from scratch, but that would make the job too complicated, especially if I want to apply this change to many jobs at a time.
Is there a way to edit or delete lines in the config.xml file using the Job DSL plugin?

Jenkins: What is a good way to store a variable between two job runs?

I have a time-triggered job which needs to retrieve certain values stored in a previous run of this job.
Is there a way to store values between job runs in the Jenkins environment?
E.g., I can write something like the following in a shell script action:
XXX=`cat /hardcoded/path/xxx`
#job itself
echo NEW_XXX > /hardcoded/path/xxx
But is there a more reliable approach?
A few options:
Store the data in the workspace. If the data isn't critical (i.e. it's OK to nuke it when the workspace is nuked) that should be fine. I only use this to cache expensive-to-compute data such as prebuilt library dependencies.
Store the data in some fixed location in the filesystem. You'll make Jenkins less self-contained and thus make migrations and backups more complex - but probably not by much, especially if you store the data in some custom user-subdirectory of Jenkins. Parallel builds will also be tricky, and distributed builds likely impossible. Jenkins has a userContent subdirectory you could use for this - that way the file is at least part of the Jenkins install and thus more easily migrated or backed up. I do this for the (rather large) code coverage trend files for my builds.
Store the data on a different machine (e.g. a database). This is more complicated to set up, but you're less dependent on the local machine's details, and it's probably easier to get distributed and parallel builds working. I've done this to maintain a live changelog.
Store the data as a build artifact. This means looking at a previous build's artifacts. It's safe and repeatable, and because URIs are used to access such artifacts, it is OK for distributed builds too. However, you need to deal with failed builds (should you look back several versions? start from scratch?) and you'll be storing many copies, which is just fine if it's 1KB but less fine if it's 1GB. Another downside is that you'll probably need to open up Jenkins' security settings quite far to allow anonymous access to artifacts (since you're just downloading from a URI). See the sketch after this list.
The appropriate solution will depend on your situation.
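If you're on Pipeline, a rough sketch of the artifact-based option using the Copy Artifact plugin (an extra install; 'xxx' matches the shell example in the question):
// Fetch xxx from this job's last successful build, if there is one
copyArtifacts(
    projectName: env.JOB_NAME,
    selector: lastSuccessful(),
    filter: 'xxx',
    optional: true // don't fail on the very first run
)
def oldValue = fileExists('xxx') ? readFile('xxx').trim() : ''
// ... job itself computes a new value ...
def newValue = 'NEW_XXX' // stand-in for the computed value
writeFile(file: 'xxx', text: newValue)
archiveArtifacts(artifacts: 'xxx')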
I would pass the variable from the first job to the second as a parameter in a parameterized build. See this question for more info on how to trigger a parameterized build from another build.
If you are using Pipelines and your variable is of a simple type, you can use a parameter to store it between runs of the same job.
Using the properties step, you can configure parameters and their default values from within the pipeline. Once configured you can read them at the start of each run and save them (as default value) at the end. In the declarative pipeline it could look something like this:
pipeline {
    agent none
    options {
        skipDefaultCheckout true
    }
    stages {
        stage('Read Variable') {
            steps {
                script {
                    try {
                        variable = params.YOUR_VARIABLE
                    }
                    catch (Exception e) {
                        echo("Could not read variable from parameters, assuming this is the first run of the pipeline. Exception: ${e}")
                        variable = ""
                    }
                }
            }
        }
        stage('Save Variable for next run') {
            steps {
                script {
                    properties([
                        parameters([
                            string(defaultValue: "${variable}", description: 'Variable description', name: 'YOUR_VARIABLE', trim: true)
                        ])
                    ])
                }
            }
        }
    }
}
