Start jenkins job immediately after creation by seed job, with parameters? - jenkins

Start jenkins job immediately after creation by seed job
I can start a job from within the Job DSL like this:
queue('my-job')
But how do I start a job with arguments or parameters? I want to pass that job some arguments somehow.

Afaik, you can't.
But what you can do is create it from a pipeline (using the jobDsl step) and then run it. Something more or less like...
pipeline {
    agent any
    stages {
        stage('jobs creation') {
            steps {
                jobDsl targets: 'my_job.dsl',
                       additionalParameters: [REQUESTED_JOB_NAME: "my_job's_name"]
                build job: "my_job's_name",
                      parameters: [booleanParam(name: 'DRY_RUN', value: true)]
            }
        }
    }
}
With a barebones 'my_job.dsl'...
pipelineJob(REQUESTED_JOB_NAME) {
    definition {
        // blah...
    }
}
NOTE: As you can see, I explicitly set the name of the job from the calling pipeline (the REQUESTED_JOB_NAME variable), because otherwise I don't know how to make the Job DSL code return the name of the job it creates back to the calling pipeline.
I use this "trick" to avoid the "job params go one run behind" problem: I use the DRY_RUN parameter of the job (a hidden parameter, in fact) to run a "do-nothing" build, as its name implies, so that by the time others need to use the job for "real stuff" its parameters section has already been properly parsed.

Related

Jenkins: Parameters disappear from pipeline job after running the job

I've been trying to construct multiple jobs from a list and everything seems to be working as expected. But as soon as I execute the first build (which works correctly), the parameters in the job disappear. This is how I've constructed the pipelineJob for the project.
import javaposse.jobdsl.dsl.DslFactory
def repositories = [
    [
        id         : 'jenkins-test',
        name       : 'jenkins-test',
        displayName: 'Jenkins Test',
        repo       : 'ssh://<JENKINS_BASE_URL>/<PROJECT_SLUG>/jenkins-test.git'
    ]
]
DslFactory dslFactory = this as DslFactory
repositories.each { repository ->
    pipelineJob(repository.name) {
        parameters {
            stringParam("BRANCH", "master", "")
        }
        logRotator {
            numToKeep(30)
        }
        authenticationToken('<TOKEN_MATCHES_WITH_THE_BITBUCKET_POST_RECEIVE_HOOK>')
        displayName(repository.displayName)
        description("Builds deploy pipelines for ${repository.displayName}")
        definition {
            cpsScm {
                scm {
                    git {
                        branch('${BRANCH}')
                        remote {
                            url(repository.repo)
                            credentials('<CREDENTIAL_NAME>')
                        }
                        extensions {
                            localBranch('${BRANCH}')
                            wipeOutWorkspace()
                            cloneOptions {
                                noTags(false)
                            }
                        }
                    }
                }
                scriptPath('Jenkinsfile')
            }
        }
    }
}
After running the above script, all the required jobs are created successfully. But once I build any job, the parameters disappear.
After that, when I run the seed job again, the job starts showing the parameters again. I'm having a hard time figuring out where the problem is.
I've tried many things but nothing works. Would appreciate any help. Thanks.
This comment helped me figure out a similar issue with my .groovy file:
I called the parameters property twice (once at the start of the node, and then again trying to set other parameters in an if block), so the latter overwrote the initial parameters.
BTW, as per the comments in the linked ticket, this is an issue with both scripted and declarative pipelines.
Fixed by providing all job parameters in each parameters call - for the case with ifs.
Though I don't see repeated calls in the code you've provided, please check the full Groovy files for your jobs and add all parameters to every parameters {} block.
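For illustration, a minimal scripted-pipeline sketch of the pitfall described above (the parameter names and the SOME_CONDITION check are made up):
// Pitfall: the second properties/parameters call replaces the first,
// so after this build the job only keeps DEPLOY_ENV and loses BRANCH.
node {
    properties([parameters([string(name: 'BRANCH', defaultValue: 'master')])])
    if (env.SOME_CONDITION == 'true') {
        properties([parameters([string(name: 'DEPLOY_ENV', defaultValue: 'staging')])])
    }
}

// Fix: declare every parameter in each parameters call.
node {
    properties([parameters([
        string(name: 'BRANCH', defaultValue: 'master'),
        string(name: 'DEPLOY_ENV', defaultValue: 'staging')
    ])])
}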

Jenkins Parameterized Project For Multiple Parameters

I have a Jenkins Pipeline that is triggered for different projects. However, the only difference between all the pipelines is just the name.
So I have added a parameter ${project} to the Jenkins job parameters and assigned it the name of the project as its value.
We have a number of projects and I am trying to find a better way to achieve this.
I am wondering how I can make the pipeline run with different parameter values for all the projects without actually creating different projects in Jenkins.
I am pasting some screenshots so you can understand what exactly I want to achieve.
As mentioned here, this is a radioserver project, with a pipeline that has ${project} in it.
How can I give multiple values to that ${project} from a single Jenkins job?
If you have any doubts, please message me or add a comment.
You can see the 2 projects I have created; they have all the same contents, but the parameterized value is different. I am wondering how I can give different values to that parameter.
As you can see, the 2 images have their default values as radioserver and nrcuup. How can I combine them and make them run seamlessly?
I hope this will help. Let me know if any changes are required in the answer.
You can use conditions in Jenkins: based on the value of ${PROJECT}, you can then execute a particular stage.
Here is a simple example of a pipeline where I have given choices to select the value of the parameter PROJECT, i.e. test1, test2 and test3.
So, whenever you select test1, the Jenkins job will execute the stages that are based on test1.
Sample pipeline code
pipeline {
    agent any
    parameters {
        choice(
            choices: ['test1', 'test2', 'test3'],
            description: 'PROJECT NAME',
            name: 'PROJECT')
    }
    stages {
        stage('PROJECT 1 RUN') {
            when {
                expression { params.PROJECT == 'test1' }
            }
            steps {
                echo "Hello, test1"
            }
        }
        stage('PROJECT 2 RUN') {
            when {
                expression { params.PROJECT == 'test2' }
            }
            steps {
                echo "Hello, test2"
            }
        }
    }
}
Output: (screenshots of one run with test1 selected and one with test2 selected)
Updated Answer
Yes, it is possible to trigger the job periodically with a specific parameter value using the Jenkins Parameterized Scheduler plugin.
After you save the project with some parameters (like the pipeline code above), go back to Configure and, under Build Triggers, you will see the option Build periodically with parameters.
Example:
Here I will run the job with PROJECT=test1 every even minute and PROJECT=test2 every odd minute. So, below is the configuration:
*/2 * * * * %PROJECT=test1
1-59/2 * * * * %PROJECT=test2
Please change the crontab values according to your needs.
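If you prefer to keep the schedule in the Jenkinsfile instead of the job UI, the same plugin also provides, as far as I know, a parameterizedCron trigger for declarative pipelines; a rough sketch:
pipeline {
    agent any
    parameters {
        choice(choices: ['test1', 'test2', 'test3'], description: 'PROJECT NAME', name: 'PROJECT')
    }
    triggers {
        // Same schedule as above, expressed in the pipeline itself
        parameterizedCron('''
            */2 * * * * %PROJECT=test1
            1-59/2 * * * * %PROJECT=test2
        ''')
    }
    stages {
        stage('Run') {
            steps {
                echo "Running for ${params.PROJECT}"
            }
        }
    }
}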

Jenkins pipeline - how to load a Jenkinsfile without first calling node()?

I have a somewhat unique setup where I need to be able to dynamically load Jenkinsfiles that live outside of the src I'm building. The Jenkinsfiles themselves usually call node() and then some build steps. This causes multiple executors to be consumed unnecessarily, because I need to have already called node() in order to use the load step to run a Jenkinsfile, or to execute the Groovy if I read the Jenkinsfile as a string and execute it.
What I have in the job UI today:
@Library(value='myGlobalLib@head', changelog=false) _
node {
    load "${JENKINSFILES_ROOT}/${PROJECT_NAME}/Jenkinsfile"
}
The Jenkinsfile that's loaded usually also calls node(). For example:
node('agent-type-foo') {
    someBuildFlavor {
        buildProperty = "some value unique to this build"
        someConfig = ["VALUE1", "VALUE2", "VALUE3"]
        runTestTarget = true
    }
}
This causes 2 executors to be consumed during the pipeline run. Ideally, I would load the Jenkinsfiles without first calling node(), but whenever I try, I get an error message stating:
"Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node"
Is there any way to load a Jenkinsfile or execute Groovy without first having the hudson.FilePath context? I can't seem to find anything in the docs. I'm at the point where I'm going to preprocess the Jenkinsfiles to remove their initial call to node(), call node() with the value the Jenkinsfile was using, and then load the rest of the file, but that's somewhat too brittle for me to be happy with.
When using the load step, Jenkins evaluates the file. You can wrap your Jenkinsfile's logic into a function (named run() in my example) so that it is loaded but not run automatically.
def run() {
    node('agent-type-foo') {
        someBuildFlavor {
            buildProperty = "some value unique to this build"
            someConfig = ["VALUE1", "VALUE2", "VALUE3"]
            runTestTarget = true
        }
    }
}
// This return statement at the end of the Jenkinsfile is important
return this
Call it from your job script like this:
def jenkinsfile
node {
    jenkinsfile = load "${JENKINSFILES_ROOT}/${PROJECT_NAME}/Jenkinsfile"
}
jenkinsfile.run()
This way there are no nested node blocks, because the first one is closed before the run() function is called.

Select job as parameter in Jenkins (Declarative) Pipeline

I would like to set a parameter in Jenkins Declarative Pipeline enabling the user to select one of the jobs defined on Jenkins. Something like:
parameters {
    choice(choices: getJenkinsJobs())
}
How can this be achieved?
Background info: I would like to implement a generic manual promotion job with the Pipeline, where the user would select a build number and the job name and the job would get promoted.
I dislike the idea of using the input step as it prevents the job from completing and I can't get e.g. the junit reports on tests.
You can iterate over all existing hudson.model.Job instances and get their names. The following should work:
@NonCPS
def getJenkinsJobs() {
    Jenkins.instance.getAllItems(hudson.model.Job)*.fullName.join('\n')
}
pipeline {
    agent any
    parameters {
        choice(choices: getJenkinsJobs(), name: 'JOB')
    }
    //...
}
Use the Extended Choice Parameter plugin (http://wiki.jenkins-ci.org/display/JENKINS/Extended+Choice+Parameter+plugin) and use a basic Groovy script as the input.
Refer to the URL below for how to list the builds/jobs:
https://themettlemonkey.wordpress.com/2013/01/29/jenkins-build-number-drop-down/
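For reference, a rough, untested sketch of the kind of Groovy script you could feed the Extended Choice Parameter to populate the dropdown with job names (it simply returns a comma-separated list):
// Groovy script for an Extended Choice Parameter: list all job names.
import jenkins.model.Jenkins
import hudson.model.Job

return Jenkins.instance.getAllItems(Job).collect { it.fullName }.join(',')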

How can I parameterize Jenkinsfile jobs

I have Jenkins Pipeline jobs where the only difference between the jobs is a parameter: a single "name" value. I could even use the multibranch job name (though that's not what it passes as JOB_NAME, which is the BRANCH name; sadly none of the environment variables look suitable without parsing). It would be great if I could set this outside of the Jenkinsfile, since then I could reuse the same Jenkinsfile for all the various jobs.
Add this to your Jenkinsfile:
properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])
Then, once the build has run once, you will see the "Build with Parameters" button in the job UI.
There you can input the parameter value you want.
In the pipeline script you can reference it with params.myParam.
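For example, a minimal sketch (the stage and echo are just illustrative):
properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])

node {
    stage('Build') {
        // Value entered under "Build with Parameters" (or the default on the first run)
        echo "myParam is: ${params.myParam}"
    }
}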
Basically you need to create a Jenkins shared library, for example named myCoolLib, and have a full declarative pipeline in one file under vars; let's say you call the file myFancyPipeline.groovy.
I wanted to write my own examples, but actually I see the docs are quite nice, so I'll copy from there. First the myFancyPipeline.groovy:
def call(int buildNumber) {
    if (buildNumber % 2 == 0) {
        pipeline {
            agent any
            stages {
                stage('Even Stage') {
                    steps {
                        echo "The build number is even"
                    }
                }
            }
        }
    } else {
        pipeline {
            agent any
            stages {
                stage('Odd Stage') {
                    steps {
                        echo "The build number is odd"
                    }
                }
            }
        }
    }
}
and then a Jenkinsfile that uses it (now just 2 lines):
@Library('myCoolLib') _
myFancyPipeline(currentBuild.getNumber())
Obviously the parameter here is of type int, but it can be any number of parameters of any type.
I use this approach and have one Groovy script that takes 3 parameters (2 Strings and an int) and 15-20 Jenkinsfiles that use that script via the shared library, and it's perfect. The motivation is of course one of the most basic rules in programming (not an exact quote, but it goes something like): if you have the "same code" in 2 different places, something is not right.
There is an option "This project is parameterized" in your pipeline job configuration. Enter a variable name and, if you wish, a default value. In the pipeline, access this variable with env.variable_name.
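A quick sketch of how that reads in a pipeline (VARIABLE_NAME stands for whatever name you defined in the job configuration):
pipeline {
    agent any
    stages {
        stage('Show parameter') {
            steps {
                // Job-level parameters are exposed both as params.* and as environment variables
                echo "Value from the job configuration: ${env.VARIABLE_NAME}"
            }
        }
    }
}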
