So my google-fu has failed me and the Jenkins O'Reilly book isn't helping either.
I have a Jenkins setup with a master and 20-odd nodes. I make heavy use of custom environment variables on the nodes, as many of them perform similar tasks, only on slightly different platforms.
I'm now in a position where I have a job that runs on the master (by necessity), which needs to know certain node properties for any given node, including some of the environment variables that I've set up.
Is there any way to reference these? My alternative seems to be to have hundreds of environment variables in the master in the form node1_var1, node2_var1, node1_var2, node2_var2 etc., which just seems messy. The master clearly has knowledge of the variables, as that's where the configuration for them is done, but I just can't find a way to specify them in a job.
Any help (or ridicule and pointing out of obvious answers) much appreciated...
Here's a simple Groovy script that prints the list of environment variables for each slave:
for (slave in jenkins.model.Jenkins.instance.slaves) {
    println(slave.name + ": ")
    def props = slave.nodeProperties.getAll(hudson.slaves.EnvironmentVariablesNodeProperty.class)
    for (prop in props) {
        for (envvar in prop.envVars) {
            println envvar.key + " -> " + envvar.value
        }
    }
}
Warning: I am not an experienced Groovy programmer, so I may not be using the appropriate idioms of Groovy.
You can run this from the Jenkins script console in order to experiment. You can also run a "System Groovy Script" as a build step. Both of the above require the Groovy plugin. If you don't use Groovy in your job, you could use this script to write a properties file that you load in the part of your build that does the real work.
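To illustrate that last suggestion, here is a minimal sketch (the node name, file name, and the assumption that the job runs on the master with a local workspace are mine, not from the original post) that dumps one node's variables to a properties file from a "System Groovy Script" build step:
def node = jenkins.model.Jenkins.instance.getNode('node1') // assumed node name
def props = node.nodeProperties.getAll(hudson.slaves.EnvironmentVariablesNodeProperty.class)
// write key=value pairs into the workspace so a later build step can load them
new File(build.workspace.toString(), 'node1.properties').withWriter { w ->
    for (prop in props) {
        for (envvar in prop.envVars) {
            w.writeLine("${envvar.key}=${envvar.value}")
        }
    }
}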
Related
I have a Jenkins Freestyle project with multiple Active Choice Parameters; the Groovy scripts use some common constant values (also used in later build steps).
Example:
def constant1 = 'bla'
def constant2 = 'blub'
def constant3 = 'umpf'
def val1 = doSomething("$constant1", "$constant2")
return ["$val1", "$constant3"]
Right now these constant values are hard-coded and duplicated in the Groovy scripts.
What is the best way of defining such constants?
Remarks:
I thought about moving the Groovy code into a Scriptler "library" (not tried yet, no experience with it)
there will be multiple projects, mostly identical, except some constant values will be different
there are other users and other projects on the same Jenkins instance (ideally these shall not "see" my constants)
Ideally the configuration of the constants can be done in the Jenkins GUI
Experimented with
"inject env vars to build process" and "properties content"
"prepare env for the run" and "properties content"
From a GUI-presentation perspective that would be ideal, but it does not seem to apply to the dynamic parameters phase yet (?)
I have seen solutions reading a props file (like this) but ideally I would like to configure the values inside Jenkins.
System env vars are not an option.
I'm attempting to write a global function script that uses groovy.sql.Sql.
When adding the annotation @GrabConfig(systemClassLoader=true), I get an exception when using the global function in a Jenkinsfile.
Here is the exception:
hudson.remoting.ProxyException: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
General error during conversion: No suitable ClassLoader found for grab
Here is my code:
@GrabResolver(name='nexus', root='http://internal.repo.com')
@GrabConfig(systemClassLoader=true)
@Grab('com.microsoft.sqlserver:sqljdbc4:4.0')
import groovy.sql.Sql
import com.microsoft.sqlserver.jdbc.SQLServerDriver

def call(name) {
    echo "Hello world, ${name}"
    Sql.newInstance("jdbc:sqlserver://ipaddress/dbname", "username", "password", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    // sql.execute "select count(*) from TableName"
}
Ensure that the "Use groovy sandbox" checkbox is unticked (it's below the pipeline script text box).
As explained here, Pipeline "scripts" are not simple Groovy scripts, they are heavily transformed before running, some parts on master, some parts on slaves, with their state (variable values) serialized and passed to the next step. As such, not every Groovy feature is supported.
I'm not sure about #Grab support. It is discussed in JENKINS-26192 (which is declared as resolved, so maybe it works now).
Extract from a very interesting comment:
If you need to perform some complex or expensive tasks with unrestricted Groovy physically running on a slave, it may be simplest and most effective to simply write that code in a *.groovy file in your workspace (for example, in an SCM checkout) and then use tool and sh/bat to run Groovy as an external process; or even put this stuff into a Gradle script, Groovy Maven plugin execution, etc. The workflow script itself should be limited to simple and extremely lightweight logical operations focused on orchestrating the overall flow of control and interacting with other Jenkins features (slave allocation, user input, and the like).
In short, if you can move that custom part that needs SQL to an external script and execute that in a separate process (called from your Pipeline script), that should work. But doing this in the Pipeline script itself is more complicated.
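As a minimal sketch of that approach (the tool name 'groovy' and the script path are assumptions, not from the original question), the Pipeline part shrinks to something like:
node {
    checkout scm // the SQL-heavy code lives in the repo, e.g. scripts/db-check.groovy
    // assumes a Groovy installation named 'groovy' is configured in Global Tool Configuration
    def groovyHome = tool 'groovy'
    sh "${groovyHome}/bin/groovy scripts/db-check.groovy"
}
The @Grab annotations then live in db-check.groovy, where they run in a plain Groovy process with a normal class loader.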
We do a lot of Jenkins System Groovy Scripts to check our Jenkins configuration for things such as someone allowing anonymous access when they shouldn't. But there are times when we want to flag a job to be ignored in these self-audits.
My thought was to set an environment variable via the EnvInject plugin. But I can't see how to get these values from a System Groovy Script.
Does anyone know how to do this? Alternatives to this method would also be helpful.
You can obtain the environment variables from a System Groovy Script using the following:
def myVar = build.getEnvironment(listener).get('JOB_NAME')
println "JOB_NAME = " + myVar
Does Jenkins have any way to set global properties from a job? We have many needs for this, but specifically: we have a number of slaves across Unix and Windows, with various different permission setups, so it's not easy to have a shared file system. We have various levels of maturity that we promote through; for instance, we want to promote some build number to UAT, and then promote whatever number is in UAT to training, and so on. So really, in the "release to UAT" job we want to store some idea of which build number was released, and read that from the "release to training" job. At the moment we are hacking it by restricting both jobs to run on the same slave and writing the number to a file, which is very much not ideal.
I may not have totally understood your question, but you can perform a lot of work with the built-in Groovy scripting functionality in Jenkins, including reading parameters from other jobs and rewriting or initializing the parameters in the current job. You can use parameters like this to record information that can be retrieved on demand by other jobs.
For instance you can find the build number of the last successful build of a certain project:
import hudson.model.*

def hif = Hudson.instance
def a = hif.getItems(hudson.model.Project).
        find { it.displayName.toUpperCase() == 'MY_PROJECTNAME' }.
        getBuilds().
        findAll { it.result == Result.SUCCESS }.
        first()
out.println a.number // build number
out.println a.buildVariableResolver.resolve('someVariable') // some parameter used to call a
(you could include any other criteria at this point)
If you want to save information to a parameter that can later be read by another build step or another job, then you first create the parameter in the job config, then write to it in code like so:
import hudson.model.*

def hif = Hudson.instance
def buildMap = build.getBuildVariables()
buildMap['MySpecialVar'] = 'SomeValue'
setBuildParameters(buildMap)

def setBuildParameters(map) {
    def npl = new ArrayList<StringParameterValue>()
    for (e in map) {
        npl.add(new StringParameterValue(e.key.toString(), e.value.toString()))
    }
    def newPa = null
    def oldPa = build.getAction(ParametersAction.class)
    if (oldPa != null) {
        build.actions.remove(oldPa)
        newPa = oldPa.createUpdated(npl)
    } else {
        newPa = new ParametersAction(npl)
    }
    build.actions.add(newPa)
}
Combining these techniques you could for instance:
Save a bunch of information as 'output parameters' in job one
Find the most recent successful instance of job one and read its parameters
If necessary save those parameters to job2's parameter list so they are accessible from other build steps.
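For instance, gluing the two snippets above together (JOB_ONE and PROMOTED_BUILD are placeholder names) to copy the last successful build number of one job into the current build's parameters:
import hudson.model.*

def hif = Hudson.instance
def lastGood = hif.getItems(hudson.model.Project).
        find { it.displayName.toUpperCase() == 'JOB_ONE' }.
        getBuilds().
        findAll { it.result == Result.SUCCESS }.
        first()
def buildMap = build.getBuildVariables()
buildMap['PROMOTED_BUILD'] = lastGood.number.toString()
setBuildParameters(buildMap) // the helper defined above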
OR
If you are happy to use files then you may be able to use the archive plugin, where you would write to a file and then archive it as a post-build action. The file would be saved to the master, and you could use the 'Copy artifacts from another project' option in the second build to retrieve the file. You can use parameter filters and the techniques above to pick the right build.
Setting an environment variable permanently is entirely dependent on the underlying operating system.
For example, on Windows the SetX command can be used (e.g. setx MY_FLAG 1). Note, however, that SetX only takes effect for processes created afterwards that inherit the global configuration. So if you run SetX and then run another job, the job will not notice the change. But if you run SetX and then restart the Jenkins process (from which all child jobs inherit their variables), the other job will notice the change.
Not sure how to set permanent variables in Linux, but a quick Google search returns this answer: https://unix.stackexchange.com/questions/117467/how-to-permanently-set-environmental-variables
I have a time-triggered job which needs to retrieve certain values stored in a previous run of this job.
Is there a way to store values between job runs in the Jenkins environment?
E.g., I can write something like the following in a shell script action:
XXX=`cat /hardcoded/path/xxx`
#job itself
echo NEW_XXX > /hardcoded/path/xxx
But is there a more reliable approach?
A few options:
Store the data in the workspace. If the data isn't critical (i.e. it's OK to nuke it when the workspace is nuked), that should be fine. I only use this to cache expensive-to-compute data such as prebuilt library dependencies.
Store the data in some fixed location in the filesystem. You'll make Jenkins less self-contained and thus make migrations and backups more complex, but probably not by much, especially if you store the data in some custom subdirectory of the Jenkins installation. Parallel builds will also be tricky, and distributed builds likely impossible. Jenkins has a userContent subdirectory you could use for this; that way the file is at least part of the Jenkins install and thus more easily migrated or backed up. I do this for the (rather large) code coverage trend files for my builds.
Store the data on a different machine (e.g. in a database). This is more complicated to set up, but you're less dependent on the local machine's details, and it's probably easier to get distributed and parallel builds working. I've done this to maintain a live changelog.
Store the data as a build artifact. This means looking at previous builds' artifacts. It's safe and repeatable, and because URIs are used to access such artifacts, OK for distributed builds too. However, you need to deal with failed builds (should you look back several versions? start from scratch?) and you'll be storing many copies, which is just fine if it's 1KB but less fine if it's 1GB. Another downside is that you'll probably need to open up Jenkins' security settings quite far to allow anonymous access to artifacts (since you're just downloading from a URI). A sketch of this option follows below.
The appropriate solution will depend on your situation.
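To illustrate the build-artifact option in Pipeline terms (the steps come from the built-in archiver and the Copy Artifact plugin; the file name and fallback value are assumptions):
node {
    try {
        // pull state.txt from this job's own last successful run
        copyArtifacts projectName: env.JOB_NAME, selector: lastSuccessful()
    } catch (err) {
        echo 'No previous artifact found, starting from scratch'
        writeFile file: 'state.txt', text: '0'
    }
    def xxx = readFile('state.txt').trim()
    // ... the job itself, computing a new value ...
    writeFile file: 'state.txt', text: "NEW_${xxx}"
    archiveArtifacts artifacts: 'state.txt'
}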
I would pass the variable from the first job to the second as a parameter in a parameterized build. See this question for more info on how to trigger a parameterized build from another build.
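For Pipeline jobs the built-in build step can do this directly; a minimal sketch (job and parameter names are made up):
// in the first job's Pipeline: trigger the second job with the value
build job: 'release-to-training',
      parameters: [string(name: 'PROMOTED_BUILD', value: env.BUILD_NUMBER)]
The second job then reads it as params.PROMOTED_BUILD.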
If you are using Pipelines and your variable is of a simple type, you can use a parameter to store it between runs of the same job.
Using the properties step, you can configure parameters and their default values from within the pipeline. Once configured, you can read them at the start of each run and save them (as the default value) at the end. In a declarative pipeline it could look something like this:
pipeline {
    agent none
    options {
        skipDefaultCheckout true
    }
    stages {
        stage('Read Variable') {
            steps {
                script {
                    try {
                        variable = params.YOUR_VARIABLE
                    }
                    catch (Exception e) {
                        echo("Could not read variable from parameters, assuming this is the first run of the pipeline. Exception: ${e}")
                        variable = ""
                    }
                }
            }
        }
        stage('Save Variable for next run') {
            steps {
                script {
                    properties([
                        parameters([
                            string(defaultValue: "${variable}", description: 'Variable description', name: 'YOUR_VARIABLE', trim: true)
                        ])
                    ])
                }
            }
        }
    }
}