Jenkins get/set data external to job

Does Jenkins have any way to set global properties from a job? We have many needs for this, but specifically: we have a number of slaves across Unix and Windows, with various different permission setups, so it's not easy to have a shared file system. We also have various levels of maturity that we promote through. For instance, we want to promote some build number to UAT, and then promote whatever number is in UAT to training, and so on. So in the "release to UAT" job we really want to store some record of which build number was released, and read that from the "release to training" job. At the moment we are hacking it by restricting both jobs to run on the same slave and writing the number to a file, which is very much not ideal.

I may not have totally understood your question, but you can do a lot with the built-in Groovy scripting facilities in Jenkins, including reading parameters from other jobs and rewriting or initializing the parameters of the current job. You can use parameters like this to record information that can be retrieved on demand by other jobs.
For instance, you can find the build number of the last successful build of a certain project:
import hudson.model.*

def hif = Hudson.instance
// most recent successful build of the named project
def a = hif.getItems(hudson.model.Project).find { it.displayName.toUpperCase() == 'MY_PROJECTNAME' }.
        getBuilds().findAll { it.result == Result.SUCCESS }.first()
out.println a.number                                        // build number
out.println a.buildVariableResolver.resolve('someVariable') // a parameter the build was called with
(You could filter on any other criteria in the findAll at this point.)
If you want to save information to a parameter that can later be read by another build step or another job, first create the parameter in the job configuration, then write to it in code (e.g. from a system Groovy script build step) like so:
import hudson.model.*

def hif = Hudson.instance

def buildMap = build.getBuildVariables()
buildMap['MySpecialVar'] = 'SomeValue'
setBuildParameters(buildMap)

// replace (or create) the build's ParametersAction so the updated values
// are visible to later build steps and to other jobs inspecting this build
def setBuildParameters(map) {
    def npl = new ArrayList<StringParameterValue>()
    for (e in map) {
        npl.add(new StringParameterValue(e.key.toString(), e.value.toString()))
    }
    def newPa = null
    def oldPa = build.getAction(ParametersAction.class)
    if (oldPa != null) {
        build.actions.remove(oldPa)
        newPa = oldPa.createUpdated(npl)
    } else {
        newPa = new ParametersAction(npl)
    }
    build.actions.add(newPa)
}
Combining these techniques you could, for instance:
Save a bunch of information as 'output parameters' in job one.
Find the most recent successful instance of job one and read its parameters.
If necessary, save those parameters to job two's parameter list so they are accessible from other build steps (see the sketch below).
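As a rough system Groovy sketch of that combination, reusing the setBuildParameters helper above (the job and parameter names here are placeholders, not part of the original setup):
import hudson.model.*

// find the most recent successful build of the upstream job
def upstream = Hudson.instance.getItems(hudson.model.Project).
        find { it.displayName.toUpperCase() == 'RELEASE_TO_UAT' }.
        getBuilds().findAll { it.result == Result.SUCCESS }.first()

// read the 'output parameter' that build recorded
def releasedBuild = upstream.buildVariableResolver.resolve('RELEASED_BUILD_NUMBER')

// copy it into the current build's parameters so later steps can use it
def buildMap = build.getBuildVariables()
buildMap['RELEASED_BUILD_NUMBER'] = releasedBuild
setBuildParameters(buildMap)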
OR
If you are happy to use files, then you may be able to use artifact archiving: write the value to a file and archive it as a post-build action. The file will be saved on the master, and you can use the 'Copy artifacts from another project' build step (from the Copy Artifact plugin) in the second job to retrieve the file. You can use parameter filters and the techniques above to pick the right build.
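If the jobs are Pipelines rather than freestyle jobs, roughly the same idea can be sketched with writeFile/archiveArtifacts and the Copy Artifact plugin's copyArtifacts step (the job name and file name below are placeholders, and the plugin is assumed to be installed):
// in the 'release to uat' pipeline: record the released build number as an artifact
node {
    writeFile file: 'released-build.txt', text: "${env.BUILD_NUMBER}"
    archiveArtifacts artifacts: 'released-build.txt'
}

// in the 'release to training' pipeline: fetch it from the last successful UAT release
node {
    copyArtifacts projectName: 'release-to-uat', selector: lastSuccessful(), filter: 'released-build.txt'
    def releasedBuild = readFile('released-build.txt').trim()
    echo "Promoting build ${releasedBuild} to training"
}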

Setting an environment variable permanently is entirely dependent on the underlying Operating System.
For example, on Windows the SetX command can be used. Note, however, that SetX only takes effect for processes created afterwards that inherit the global configuration. So if you run SetX and then run another job, that job will not notice the change. But if you run SetX and then restart the Jenkins process (from which all child jobs inherit their variables), subsequent jobs will see the change.
Not sure how to set permanent variables in Linux, but a quick Google search returns this answer: https://unix.stackexchange.com/questions/117467/how-to-permanently-set-environmental-variables

Related

Skip Jenkins Stage if its last execution was successful

I am looking for a way to speed up our build pipeline.
The biggest impact would be to do certain things only if they have not already been done.
So basically, I have already parameterized some optional stages, which works fine, but there are some things I'd like to skip if a previous execution was successful.
I have searched the docs, especially the section on when, but there was nothing that did the job.
So the question is: Is there something I can do in a declarative pipeline to always skip a stage, except when
It has never been run successfully before
The last time it ran was not successful (e.g. if I allow its execution to be forced by passing a parameter)
I thought about using the build number (e.g. only run it on the first execution of the pipeline), but this doesn't cut it for two reasons:
I'm using milestones to prevent multiple executions of the pipe for a given branch/PR
If the first run fails, it would never be tried again
Oh, and I also thought about putting all of the logic in post -> regression and forcing the stage to fail on the first pipeline run, but this doesn't seem to be a good idea either.
One option, although not ideal at large scale, is to store the different states as global environment variables (Manage Jenkins -> Configure System -> Global properties -> Environment variables).
These variables are available to all jobs and can store values in a 'server wide' scope.
You can then update or set them in code using the following Groovy method:
@NonCPS
def updateGlobalEnvVariable(String name, String value) {
    def globalNodeProperties = Jenkins.getInstance().getGlobalNodeProperties()
    def envVarsNodePropertyList = globalNodeProperties.getAll(hudson.slaves.EnvironmentVariablesNodeProperty.class)
    if (envVarsNodePropertyList == null || envVarsNodePropertyList.size() == 0) {
        // no global 'Environment variables' block exists yet, so create one
        def envVarsNodePropertyClass = this.class.classLoader.loadClass('hudson.slaves.EnvironmentVariablesNodeProperty')
        globalNodeProperties.add(envVarsNodePropertyClass.newInstance())
        // getAll returns a copy, so re-read the list after adding the property
        envVarsNodePropertyList = globalNodeProperties.getAll(hudson.slaves.EnvironmentVariablesNodeProperty.class)
    }
    envVarsNodePropertyList.get(0).getEnvVars().put(name, value)
}
This function creates the parameter if it does not exist, or updates its value if it already exists. It is best placed in a Shared Library, from which it will be available to all pipelines.
A nice advantage of this technique is that you can always 'reset' the different stages from the configuration page. However, if you need to store the state of many stages it can clutter the configuration page with a bit too much information.
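As a rough sketch of how this could drive the 'skip unless needed' logic from the question (assuming updateGlobalEnvVariable is loaded from a shared library and the relevant script approvals are in place; the variable, stage and parameter names are made up for the example):
stage('Expensive Stage') {
    when {
        // run only if it has never succeeded before, or if a re-run is explicitly forced
        expression { env.EXPENSIVE_STAGE_DONE != 'true' || params.FORCE_EXPENSIVE_STAGE }
    }
    steps {
        echo 'doing the expensive work only once'   // the work you only want to do once
        script {
            // remember the success server-wide so later runs skip this stage
            updateGlobalEnvVariable('EXPENSIVE_STAGE_DONE', 'true')
        }
    }
}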
Maybe you could use a custom shared workspace, create/store some state file there (e.g. lastexecfailed.state) for your next executions, and then try to locate that file in the shared workspace at the beginning of each execution.

Jenkins Pipeline - I need a global configuration object, bad idea to treat a file in my workspace root as such?

I'm writing a somewhat complex global pipeline library. The library really just orchestrates a complex build, with a whole bunch of steps exposed as vars/* and a single src/com/myorg/pipeline/utils.groovy class that handles all common pieces of functionality. Each Jenkinsfile defines all 'build' specific config and passes it to a vars/myBuildFlavor.groovy step that then calls all steps required for that flavor of the build. The vars/myBuildFlavor.groovy step also reads a server config file that contains all config that is global to each Jenkins instance.
This setup works incredibly well. It allows users to either piece together their own builds from the steps I've exposed in the global library, or just set all build properties in their Jenkinsfile and call an existing flavor of a build that I've exposed as a step. What I'm struggling with is how to access configuration values from both the 'build' and the 'server' configuration, plus I have some properties from steps early in the build that I want to save and use later in the build. What is incredibly annoying is that I have to pass the entire context of the script around with 'this', or have extremely long method signatures to handle the juggling of all of these values.
What I'm thinking may be a good idea is to write a file in the workspace root that contains all build and server config values, plus any properties that I need later on in the build. Has anyone had to deal with this previously? Any major issues with my approach? Better ideas?
I haven't tried this, but you make me want to make sure this works. If you don't beat me to it, I'll give it a shot, so please report back...
The things in vars are created as singletons. So I think you should be able to do something like this:
// vars/customConfig.groovy
class customConfig implements Serializable {
    private String url
    private Map allTheThings

    def setUrl(myUrl) {
        url = myUrl
    }

    def getUrl() {
        url
    }

    def setAllTheThings(Map configMap) {
        allTheThings = configMap
    }

    def getAllTheThings() {
        return allTheThings
    }

    def coolMethod(myVar) {
        echo "This method does something cool with ${myVar} and with ${url}"
    }
}
Then access these things like:
customConfig.url = 'https://www.google.com'
echo "${customConfig.url}"
customConfig.coolMethod "FOOBAR"
customConfig.allTheThings = [:]   // initialise the map before adding entries
customConfig.allTheThings.configItem1 = "BAZ"
customConfig.allTheThings.configItem2 = 12345
echo "${customConfig.allTheThings.configItem2} is an Int"
Since it is a "global var" or a singleton, I think you can use it everywhere and the values are all shared.
Let me know if this does what I think it will do.
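For example, a minimal Jenkinsfile sketch (assuming the shared library containing vars/customConfig.groovy is already loaded) where one stage writes the config and a later stage reads it without passing anything around:
node {
    stage('Configure') {
        customConfig.url = 'https://www.google.com'
        customConfig.allTheThings = [configItem1: 'BAZ', configItem2: 12345]
    }
    stage('Use') {
        // the singleton keeps its state between stages, so no juggling of 'this'
        echo "Deploying against ${customConfig.url} with ${customConfig.allTheThings.configItem2}"
    }
}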

Can a workflow step access environment variables provided by an EnvironmentContributingAction?

A custom plugin we wrote for an older version of Jenkins uses an EnvironmentContributingAction to provide environment variables to the execution so they could be used in future build steps and passed as parameters to downstream jobs.
While attempting to convert our build to workflow, I'm having trouble accessing these variables:
node {
    // this step queries an API and puts the results in
    // environment variables called FE1|BE1_INTERNAL_ADDRESS
    step([$class: 'SomeClass', parameter: foo])

    // this ends up echoing 'null and null'
    echo "${env.FE1_INTERNAL_ADDRESS} and ${env.BE1_INTERNAL_ADDRESS}"
}
Is there a way to access the environment variable that was injected? Do I have to convert this functionality to a build wrapper instead?
EnvironmentContributingAction is currently limited to AbstractBuilds, which WorkflowRuns are not, so pending JENKINS-29537, which I just filed, your plugin would need to be modified somehow. Options include:
Have the builder add a plain Action instead, then register an EnvironmentContributor whose buildEnvironmentFor(Run, …) checks for its presence using Run.getAction(Class).
Switch to a SimpleBuildWrapper which defines the environment variables within a scope, then invoke it from Workflow using the wrap step.
Depend on workflow-step-api and define a custom Workflow Step with comparable functionality but directly returning a List<String> or whatever makes sense in your context. (code sample)
Now that PR-2975 is merged, you can use the new interface:
void buildEnvVars(@Nonnull Run<?, ?> run, @Nonnull EnvVars env, @CheckForNull Node node)
It will be used by the old type of builds as well.

Jenkins Plugin: create a new job programmatically

How to create a new Jenkins job within a plugin?
I have a Jenkins plugin that listens to a message queue and, when a message arrives, fires a new event to create a new job (or start a run).
I'm looking for something like:
Job myJob = new Job(...);
I know I can use the REST API or the CLI, but since I'm inside a plugin I'd prefer a Java-internal solution.
Use Job DSL Plugin.
From the plugin page:
Jenkins is a wonderful system for managing builds, and people love using its UI to configure jobs. Unfortunately, as the number of jobs grows, maintaining them becomes tedious, and the paradigm of using a UI falls apart. Additionally, the common pattern in this situation is to copy jobs to create new ones, these "children" have a habit of diverging from their original "template" and consequently it becomes difficult to maintain consistency between these jobs.
The Jenkins job-dsl-plugin attempts to solve this problem by allowing jobs to be defined with the absolute minimum necessary in a programmatic form, with the help of templates that are synced with the generated jobs. The goal is for your project to be able to define all the jobs they want to be related to their project, declaring their intent for the jobs, leaving the common stuff up to a template that were defined earlier or hidden behind the DSL.
You can create a new hudson/jenkins job by simply doing:
FreeStyleProject proj = Hudson.getInstance().createProject(FreeStyleProject.class, NAMEOFJOB);
If you want to be able to handle updates (and you already have the config.xml):
import hudson.model.AbstractItem
import javax.xml.transform.stream.StreamSource
import jenkins.model.Jenkins

final jenkins = Jenkins.getInstance()
final itemName = 'name-of-job-to-be-created-or-updated'
final configXml = new FileInputStream('/path/to/config.xml')

final item = jenkins.getItemByFullName(itemName, AbstractItem.class)
if (item != null) {
    item.updateByXml(new StreamSource(configXml))
} else {
    jenkins.createProjectFromXML(itemName, configXml)
}
Make sure you have the core .jar file before doing this, though.

Jenkins: What is a good way to store a variable between two job runs?

I have a time-triggered job which needs to retrieve certain values stored in a previous run of this job.
Is there a way to store values between job runs in the Jenkins environment?
E.g., I can write something like the following in a shell script action:
XXX=`cat /hardcoded/path/xxx`
#job itself
echo NEW_XXX > /hardcoded/path/xxx
But is there a more reliable approach?
A few options:
Store the data in the workspace. If the data isn't critical (i.e. it's OK to nuke it when the workspace is nuked) that should be fine. I only use this to cache expensive-to-compute data such as prebuilt library dependencies.
Store the data in some fixed location in the filesystem. You'll make Jenkins less self-contained and thus make migrations and backups more complex, but probably not by much, especially if you store the data in some custom subdirectory under Jenkins. Parallel builds will also be tricky, and distributed builds likely impossible. Jenkins has a userContent subdirectory you could use for this; that way the file is at least part of the Jenkins install and thus more easily migrated or backed up. I do this for the (rather large) code coverage trend files for my builds.
Store the data on a different machine (e.g. a database). This is more complicated to set up, but you're less dependent on the local machine's details, and it's probably easier to get distributed and parallel builds working. I've done this to maintain a live changelog.
Store the data as a build artifact. This means looking at a previous build's artifacts. It's safe and repeatable, and because URIs are used to access such artifacts, it is OK for distributed builds too. However, you need to deal with failed builds (should you look back several versions? start from scratch?) and you'll be storing many copies, which is fine if it's 1 KB but less fine if it's 1 GB. Another downside is that you'll probably need to open up Jenkins's security settings quite far to allow anonymous access to artifacts (since you're just downloading from a URI). A Pipeline sketch of this approach follows below.
The appropriate solution will depend on your situation.
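For the build-artifact option, here is a rough Pipeline sketch (it assumes the Copy Artifact plugin is installed; the file name and stored value are just examples) of reading the value stored by the previous successful run and storing a new one for the next run:
node {
    // fetch the state file archived by the last successful run of this job, if there is one
    copyArtifacts projectName: env.JOB_NAME, selector: lastSuccessful(), filter: 'state.txt', optional: true
    def previous = fileExists('state.txt') ? readFile('state.txt').trim() : ''
    echo "Value stored by the previous run: '${previous}'"

    // ... the job itself ...

    // store the new value for the next run
    writeFile file: 'state.txt', text: 'NEW_XXX'
    archiveArtifacts artifacts: 'state.txt'
}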
I would pass the variable from the first job to the second as a parameter in a parameterized build. See this question for more info on how to trigger a parameterized build from another build.
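If the jobs involved are Pipelines, the trigger itself is a one-liner with the built-in build step (the downstream job name and parameter name here are placeholders):
build job: 'release-to-training', parameters: [
    string(name: 'RELEASED_BUILD_NUMBER', value: "${env.BUILD_NUMBER}")
]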
If you are using Pipelines and your variable is of a simple type, you can use a parameter to store it between runs of the same job.
Using the properties step, you can configure parameters and their default values from within the pipeline. Once configured, you can read them at the start of each run and save them (as the default value) at the end. In a declarative pipeline it could look something like this:
pipeline {
    agent none
    options {
        skipDefaultCheckout true
    }
    stages {
        stage('Read Variable') {
            steps {
                script {
                    try {
                        variable = params.YOUR_VARIABLE
                    }
                    catch (Exception e) {
                        echo("Could not read variable from parameters, assuming this is the first run of the pipeline. Exception: ${e}")
                        variable = ""
                    }
                }
            }
        }
        stage('Save Variable for next run') {
            steps {
                script {
                    properties([
                        parameters([
                            string(defaultValue: "${variable}", description: 'Variable description', name: 'YOUR_VARIABLE', trim: true)
                        ])
                    ])
                }
            }
        }
    }
}
