I would like to define a deployment (CD) template that is capable of wrapping build (CI) jobs of various types. I would like the CD template to be completely ignorant of how the downstream CI job is built.
So far my best solution is to define an auxiliary template which is an attribute of the CD template. This auxiliary template includes a reference to the CI job (via the 'select item' type) and a text attribute 'branch'. I found that for the most part this is enough info to trigger the CI job, but is there a better way to do this?
Ideally I would like to be able to define the CI job's parameters directly in the CD job's configuration, without needing the auxiliary template as a middleman.
EDIT:
Here's an example of how I'm calling a downstream build job in a different template:
def parameterList = []
parameterList.add(new hudson.model.StringParameterValue('stashRepo', stash))
parameterList.add(new hudson.model.StringParameterValue('branch', branch))
parameterList.add(new hudson.model.StringParameterValue('configDir', configDir))
parameterList.add(new hudson.model.StringParameterValue('jdkVersion', jdkVersion))
build job: configBuildJob.buildJob.getFullName(), parameters: parameterList
The problem with this is that the template assumes the downstream build jobs have these exact parameters. It would be nice to be able to set them in the template's configuration instead, so that my code doesn't have to make assumptions about the downstream jobs.
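One way to avoid hard-coding the parameter names would be to let the template configuration supply the name/value pairs and build the parameter list generically. This is only a sketch: the `extraParams` map attribute is a hypothetical addition to the template, not something the plugin provides out of the box.

```groovy
// Hypothetical: 'extraParams' is a map attribute defined on the template,
// e.g. [stashRepo: stash, branch: branch], filled in by the job configuration.
def parameterList = []
extraParams.each { name, value ->
    parameterList.add(new hudson.model.StringParameterValue(name, value.toString()))
}
build job: configBuildJob.buildJob.getFullName(), parameters: parameterList
```

With this shape, the template only knows "forward whatever pairs I was given" and stays ignorant of the downstream job's actual parameter set.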
Related
In Jenkins declarative pipeline we can define build parameters like
pipeline {
…
parameters {
string(name: 'PARAMETER', defaultValue: 'INITIAL_DEFAULT')
choice(name: 'CHOICE', choices: ['THIS', 'THAT'])
}
…
}
However, the job's parameter definitions are only updated when the job runs again after the build-parameters dialog has already been shown. That is, when I change INITIAL_DEFAULT to something else, the next build will still default to INITIAL_DEFAULT, and only the one after that will use the new value.
The same problem applies to the choices, and there it is even more serious: a string default can easily be overwritten when starting the build, but if a new choice option isn't in the list yet, it cannot be selected at all.
So is there a way to define functions or expressions that will be executed before the parameter dialog to calculate current values (from files, variable in global settings or any other suitable external configuration)?
I remember using some plugins for this in the past with free-style jobs, but searching the plugin repository I can't find any that would mention how to use it with pipelines.
I don't care too much that the same problem applies to adding and removing parameters, because that occurs rarely. But we have some parameters where the default changes often and we need the next nightly to pick up the updated value.
It turns out the Extended Choice Parameter plugin does work with pipelines, and the configuration can be generated by the Directive Generator. It looks something like
extendedChoice(
name: 'PARAMETER',
type: 'PT_TEXTBOX',
defaultPropertyFile: '/var/lib/jenkins/something.properties',
defaultPropertyKey: 'parameter'
)
(there are many more options available in the generator)
Groovy script to get global environment variables can be had from this other answer.
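For reference, that script is roughly the following (an untested sketch; `PARAMETER_DEFAULT` is a made-up variable name standing in for whatever you configured under Global properties):

```groovy
import jenkins.model.Jenkins
import hudson.slaves.EnvironmentVariablesNodeProperty

// Read a value from Manage Jenkins -> Configure System -> Global properties
// -> Environment variables.
def props = Jenkins.instance.getGlobalNodeProperties()
        .getAll(EnvironmentVariablesNodeProperty.class)
def value = props ? props.first().getEnvVars().get('PARAMETER_DEFAULT') : null
```

Combined with a property file as in the extendedChoice example above, this lets the defaults live in external configuration that the next nightly picks up without a warm-up build.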
I have a parameterized project with the variable VAR1.
I'm using the Xray for JIRA plugin for Jenkins. There you can fill in four parameters:
JIRA Instance
Issues
Filter
File Path
I'm new to Jenkins, but what I have learned so far is that you can't fill these fields with environment variables. Something like
Issues: ${VAR1} - doesn't work.
So I thought I could do this with a pipeline. When I click on Pipeline Syntax and choose step: General Build Step, I can choose Xray: Cucumber Features Export Task. Then I fill in the fields with my environment variable and click Generate Pipeline Script. The output is as follows:
step <object of type com.xpandit.plugins.xrayjenkins.task.XrayExportBuilder>
That doesn't work. What am I doing wrong?
Everything you're doing is OK, but what you want is not supported by Jenkins, pipeline or not: the build parameters are loaded before the pipeline runs, and therefore before ${VAR1} is defined.
You can try to overcome this by defining the 'Issues' value as a pipeline internal value instead of a parameter and base it on the ${VAR1} value.
If it must be a parameter, use 2 jobs: one defines the value of 'Issues' based on the ${VAR1} and passes it to the other job, which receives 'Issues' as a fixed value.
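The first option might look roughly like this sketch. Note that the `$class` step field names (`serverInstance`, `issues`, `filePath`) are guesses based on the plugin's UI labels, not confirmed API; check what your plugin version actually accepts.

```groovy
pipeline {
    agent any
    parameters {
        string(name: 'VAR1', defaultValue: '', description: 'Issue keys')
    }
    stages {
        stage('Export features') {
            steps {
                script {
                    // Derive 'Issues' inside the pipeline instead of as a
                    // plugin form field, so it can be based on ${VAR1}.
                    def issues = params.VAR1
                    // Field names below are assumptions from the UI labels.
                    step([$class: 'XrayExportBuilder',
                          serverInstance: 'my-jira-instance',
                          issues: issues,
                          filePath: 'features/'])
                }
            }
        }
    }
}
```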
I want to create a job in Jenkins which modifies an existing parameter on another job.
I'm using the Job DSL Plugin. The code I'm using is:
job('jobname'){
using('jobname')
parameters {
choiceParam('PARAMETER1',['newValue1', 'newValue2'],'')
}
}
However, this only adds another parameter with the same name in the other job.
I'm trying the alternative of deleting all parameters and starting from scratch, but I haven't found a way to do that using Job DSL (not even with the Configure block).
Another alternative would be to define the other job completely from scratch, but that would make the job too complicated, especially if I want to apply this change to many jobs at a time.
Is there a way to edit or delete lines in the config.xml file using the Job DSL plugin?
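One Configure-block approach (an untested sketch) is to strip the existing ParametersDefinitionProperty node from the job's config.xml before the new parameter declarations are applied, so the re-declared parameter replaces the old one instead of piling up next to it:

```groovy
job('jobname') {
    using('jobname')
    configure { project ->
        // Remove any existing parameter definitions from config.xml
        // so the 'parameters' block below starts from a clean slate.
        def props = project / 'properties'
        props.get('hudson.model.ParametersDefinitionProperty').each {
            props.remove(it)
        }
    }
    parameters {
        choiceParam('PARAMETER1', ['newValue1', 'newValue2'], '')
    }
}
```

Whether the configure block runs before or after the `parameters` block is applied depends on the Job DSL version, so verify the resulting config.xml on a throwaway job first.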
I've got a Jenkins job hierarchy looking something like this:
Collection/
ParentJob
Children/
Foo
Bar
Baz
ParentJob has a few different configurations and the jobs in Children/ need to be built for each of those configurations. These configurations are basically just checking out different branches and building those branches. Additionally, part of each configuration of ParentJob has to be completed before the child jobs can run.
How can I trigger all the jobs in Children after the necessary parts of each ParentJob configuration are finished?
My first thought was to just put a build 'Children/*' step in ParentJob's pipeline, but it seems Jenkins does not support wildcards in that context.
I could explicitly list all the jobs, but that would be tedious (there's several dozen child jobs) and prone to breakage as child jobs may be added or removed.
Ideally a solution would allow me to just set up a new child job without touching anything else and have it automatically triggered the next time ParentJob runs.
You could get a list of the child jobs and use a for loop to build them. I haven't tested this, but I see no reason why it would not work.
I structure my job folders in a similar fashion to take advantage of naming conventions and for role-based security.
import jenkins.model.*
// Collect the full names of all jobs under the Collection/Children folder
def childJobNames = Jenkins.instance.getJobNames().findAll { it.contains("Collection/Children") }
for (def childJobName : childJobNames)
{
build job: childJobName, parameters: [
string(name: 'Computer', value: manager.ComputerName)
],
wait: false
}
http://javadoc.jenkins.io/index.html?hudson/model/Hudson.html
You need to use Jenkins Workflow or Pipeline; then you can run a stage, then some stages in parallel, then a sequential set of stages, and so on. This StackOverflow question and answer seems to be a good reference.
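Combining both answers, the parent pipeline could fan out to the children in parallel once its own configuration stage is done. This is a sketch; the hard-coded job names are illustrative and would be replaced by the discovery loop above:

```groovy
stage('Build children') {
    steps {
        script {
            // Fan out to the child jobs in parallel once the parent's
            // per-configuration work is complete. Names are illustrative.
            def branches = [:]
            ['Collection/Children/Foo',
             'Collection/Children/Bar',
             'Collection/Children/Baz'].each { name ->
                branches[name] = { build job: name, wait: false }
            }
            parallel branches
        }
    }
}
```

Because the job list is built at run time, adding a new child job under the folder requires no change to the parent pipeline.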
I have a pipeline template that has an attribute called "testJobs". The idea is to allow the implementing job to perform its build and then trigger a list of downstream jobs to perform validation.
My biggest problem with this is that I cannot figure out how to pass parameters along from the calling pipeline.
For example, my pipeline has a build parameter which defines the git branch to pull. When iterating through the post-build "testJobs" I want to pass that branch along as a parameter, if the downstream job defines such a parameter.
Is there a way to define the parameters using variables within the auxiliary template?
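In pipeline code, the forwarding itself would look something like this sketch. It assumes `testJobs` yields job references (as `configBuildJob` does earlier in this document) and that the calling pipeline declares a `branch` parameter; downstream jobs that don't define `branch` simply ignore it:

```groovy
// Forward the calling build's branch to each downstream test job.
// 'testJobs' and the 'branch' parameter name come from the question;
// 'params.branch' assumes the calling pipeline declares that parameter.
for (testJob in testJobs) {
    build job: testJob.getFullName(),
          parameters: [new hudson.model.StringParameterValue('branch', params.branch)]
}
```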