Executing multiple Jenkins jobs

Can someone please let me know how I can trigger multiple Jenkins jobs? We have 5-6 jobs that we need to trigger manually after some technical upgrades. We currently have to navigate to each of these jobs manually and click 'Build'. Is there any way I can create a new job or shell script that will trigger all of these jobs with a single click/run?

The simplest approach is to create a new pipeline that triggers the other pipelines.
You can also run the stages in parallel for faster execution if ordering is not an issue.
You can find a detailed answer and approach here.

Yes, you can do it both ways you mentioned. Creating a new job that runs all these other jobs as downstream jobs is the fastest and easiest.
To do this, create a new job, and if you are using a declarative pipeline, specify the jobs to be triggered:
stage('trigger-multiple-jobs') {
    parallel {
        stage('first job') {
            steps {
                build([
                    job       : 'JobName',
                    wait      : false,
                    parameters: [
                        string(name: 'PARAM_1', value: "${PARAM_1}")
                    ]
                ])
            }
        }
        stage('second job') {
            steps {
                build([
                    job : 'JobName2',
                    wait: false
                ])
            }
        }
    }
}
You can alternatively create a freestyle job, navigate down to the Trigger downstream jobs section, and select the jobs you want to be triggered from the drop-down.
If you want to use a shell script, you can trigger the jobs using API calls. This is described well in this answer.
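As a rough sketch of that API approach (the Jenkins URL, job names, and the 'trigger-user' credentials id below are assumptions to replace with your own; the curl calls could just as well live in a standalone shell script):
// Minimal sketch: trigger other jobs through the Jenkins REST API.
// Authenticating with a user + API token also avoids CSRF crumb handling.
node {
    stage('Trigger via REST API') {
        withCredentials([usernamePassword(credentialsId: 'trigger-user',
                                          usernameVariable: 'JUSER',
                                          passwordVariable: 'JTOKEN')]) {
            sh '''
                # queue a plain build
                curl -fsS -X POST -u "$JUSER:$JTOKEN" "https://jenkins.example.com/job/JobName/build"
                # queue a parameterized build
                curl -fsS -X POST -u "$JUSER:$JTOKEN" "https://jenkins.example.com/job/JobName2/buildWithParameters?PARAM_1=somevalue"
            '''
        }
    }
}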

Start Jenkins job immediately after creation by seed job, with parameters?

I can start a job from within the job dsl like this:
queue('my-job')
But how do I start a job with arguments or parameters? I want to pass that job some arguments somehow.
AFAIK, you can't.
But what you can do is create it from a pipeline (jobDsl step) and then run it. Something more or less like...
pipeline {
    agent any  // declarative pipelines require an agent section
    stages {
        stage('jobs creation') {
            steps {
                jobDsl targets: 'my_job.dsl',
                       additionalParameters: [REQUESTED_JOB_NAME: "my_job's_name"]

                build job: "my_job's_name",
                      parameters: [booleanParam(name: 'DRY_RUN', value: true)]
            }
        }
    }
}
With a barebones 'my_job.dsl'...
pipelineJob(REQUESTED_JOB_NAME) {
    definition {
        // blah...
    }
}
NOTE: As you can see, I explicitly set the name of the job from the calling pipeline (the REQUESTED_JOB_NAME variable), because otherwise I don't know how to make the Job DSL code return the name of the job it creates to the calling pipeline.
I use this "trick" to avoid the "job params go one run behind" problem. I use the job's DRY_RUN param (a hidden param, in fact) to run a "do-nothing" build, as its name implies, so that by the time others need the job for "real stuff" its params section has already been properly parsed.
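For illustration, the barebones my_job.dsl above could declare the DRY_RUN parameter at creation time so it already exists for the very first (dry) build; a sketch, using booleanParam as a stand-in for the hidden parameter type (which needs the Hidden Parameter plugin):
// Sketch of my_job.dsl with the parameter declared up front.
// REQUESTED_JOB_NAME comes from additionalParameters in the calling pipeline.
pipelineJob(REQUESTED_JOB_NAME) {
    parameters {
        // declared at creation time, so the first run already knows about it
        booleanParam('DRY_RUN', false, 'When true, run a do-nothing build')
    }
    definition {
        cps {
            script('''
                // actual pipeline code here...
            ''')
        }
    }
}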

How to integrate Jobs in Jenkins using Pipeline

I have four jobs: A, B, C, D
A: Build
B: Test
C: Sonar Analysis
D: Deploy
My scenario:
1. I need to create a pipeline
A -> B -> C
2. I need to create another pipeline
A -> B -> D
My issue is:
1. If I select "Trigger Parameterized builds on other projects" and add Job B under Job A, I can't use Job A for my second scenario.
How can I use Job A in both pipelines without one affecting the other?
If I understand the question correctly, you are looking to create two pipelines. In the pipelines, you can define which jobs to build using stages.
For your requirement, you need to create two pipelines and define stages according to your needs. Trigger Parameterized builds on other projects is not a suitable option for you.
stage('Build A') {
    build job: 'A', parameters: <Give_your_own_parameters>
}
stage('Build B') {
    build job: 'B', parameters: <Give_your_own_parameters>
}
stage('Build C') {
    build job: 'C', parameters: <Give_your_own_parameters>
}
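The second pipeline is identical except that the final stage triggers job D instead of C:
stage('Build A') {
    build job: 'A', parameters: <Give_your_own_parameters>
}
stage('Build B') {
    build job: 'B', parameters: <Give_your_own_parameters>
}
stage('Build D') {
    build job: 'D', parameters: <Give_your_own_parameters>
}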
You can also get the syntax from Pipeline Syntax in the Pipeline Section of the pipeline you are building.
You can ease the process by creating duplicates of projects A and B. A project can easily be duplicated via New Item > Copy details from Project A.
This is not easily feasible, for a simple reason: you are not using Jenkins jobs the way they were meant to be used.
The concept of a job in Jenkins is that a job is a sequence of actions. A job does not have a single responsibility such as just building, just testing, or just deploying. In your case, you should have 3 "actions" in one job, and 3 "actions" in another job.
The freestyle job approach
The common approach would be something like:
Build job
- Action 1: build
- Action 2: test
- Action 3: run Sonar analysis
Deploy job
- Action 1: build
- Action 2: test
- Action 3: deploy
Unless I'm missing something here, you don't want to separate these 3/4 actions into 4 separate jobs, as this would be highly inefficient. For example, the test phase and Sonar analysis should probably run just after the code build, so you want to share the same workspace to be able to test the code you built.
The pipeline approach
Another - preferred - approach would be to use actual Jenkins pipelines, i.e. Groovy scripts that allow you to define your steps as functions and then reuse them in both your "Build job" and your "Deploy job".
As an example, you could have a functions.groovy containing your build/test functions:
functions.groovy
def build() {
    // Build code here...
}

def test() {
    // Test code here...
}

// Return this script object so that 'load' exposes the functions above
return this
build-job.groovy
node {
    def functions = load 'functions.groovy'
    stage('Build') {
        functions.build()
    }
    stage('Test') {
        functions.test()
    }
    stage('Sonar Analysis') {
        // Sonar analysis code...
    }
}
deploy-job.groovy
node {
    def functions = load 'functions.groovy'
    stage('Build') {
        functions.build()
    }
    stage('Test') {
        functions.test()
    }
    stage('Deploy') {
        // Deploy code...
    }
}

How to limit Jenkins concurrent multibranch pipeline builds?

I am looking at limiting the number of concurrent builds to a specific number in Jenkins, leveraging the multibranch pipeline workflow, but I haven't found any good way to do this in the docs or on Google.
Some docs say this can be accomplished using concurrency in the stage step of a Jenkinsfile, but I've also read elsewhere that that is a deprecated way of doing it.
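(For reference, the deprecated syntax in question was the concurrency argument of the old scripted stage step; a sketch from memory, so treat the exact form as an assumption:)
// old scripted syntax, now deprecated:
// allow at most one concurrent build inside this stage
stage name: 'Deploy', concurrency: 1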
It looks like there was something released fairly recently for limiting concurrency via Job Properties, but I couldn't find documentation for it and I'm having trouble following the code. The only thing I found was a PR that shows the following:
properties([concurrentBuilds(false)])
But I am having trouble getting it to work.
Does anybody know of, or have, a good example of how to limit the number of concurrent builds for a given multibranch project? Maybe a Jenkinsfile snippet that shows how to cap the number of concurrent multibranch builds?
Found what I was looking for. You can limit concurrent builds using the following block in your Jenkinsfile.
node {
    // This limits build concurrency to 1 per branch
    properties([disableConcurrentBuilds()])
    // do stuff
    ...
}
The same can be achieved with a declarative syntax:
pipeline {
    options {
        disableConcurrentBuilds()
    }
}
Limiting concurrent builds or stages is possible with the Lockable Resources Plugin (GitHub). I always use this mechanism to ensure that no publishing/release step is executed at the same time, while normal stages can be built concurrently.
echo 'Starting'
lock('my-resource-name') {
    echo 'Do something here that requires unique access to the resource'
    // any other build will wait until the one locking the resource leaves this block
}
echo 'Finish'
As @VadminKotov indicated, it is possible to disable concurrent builds using Jenkins declarative pipelines as well:
pipeline {
    agent any
    options { disableConcurrentBuilds() }
    stages {
        stage('Build') {
            steps {
                echo 'Hello Jenkins Declarative Pipeline'
            }
        }
    }
}
disableConcurrentBuilds
Disallow concurrent executions of the Pipeline. Can be useful for preventing simultaneous accesses to shared resources, etc. For example: options { disableConcurrentBuilds() }
Thanks @Jazzschmidt - I was looking to lock all stages easily, and this works for me (source):
pipeline {
    agent any
    options {
        lock('shared_resource_lock')
    }
    ...
    ...
}
I got a solution for multibranch locking too, with the Lockable Resources plugin and shared libraries. Here it is:
Jenkinsfile:
@Library('my_pipeline_lib@master') _

myLockablePipeline()
myLockablePipeline.groovy:
def call(Map config) {
    def jobIdentifier = env.JOB_NAME.tokenize('/') as String[]
    def projectName = jobIdentifier[0]
    def repoName = jobIdentifier[1]
    def branchName = jobIdentifier[2]
    // now you can use any part of the jobIdentifier to lock or limit the concurrent builds
    // here I choose to lock any concurrent build for PRs, but you can choose all branches
    if (branchName.startsWith("PR-")) {
        lock(projectName + "/" + repoName) {
            yourTruePipelineFromYourSharedLib(config)
        }
    } else {
        // other branches can freely build concurrently
        yourTruePipelineFromYourSharedLib(config)
    }
}
To lock for all branches, just do this in myLockablePipeline.groovy:
def call(Map config) {
    def jobIdentifier = env.JOB_NAME.tokenize('/') as String[]
    def projectName = jobIdentifier[0]
    def repoName = jobIdentifier[1]
    def branchName = jobIdentifier[2]
    lock(projectName + "/" + repoName) {
        yourTruePipelineFromYourSharedLib(config)
    }
}

Run Parts of a Pipeline as Separate Job

We're considering using the Jenkins Pipeline plugin for a rather complex project consisting of several deliveries that need to be built using different tools (on different machines) before being merged. Still, it seems easy enough to do a complete build with a single Jenkinsfile, and I like the automatic discovery of git branches that comes with Pipeline.
However, at this point, we have jobs for each of the deliveries and use a build-flow based "meta" job to orchestrate the individual jobs. The nice thing about this is that it also allows starting just one individual job if only small changes were made, just to see whether this delivery still compiles.
To emulate this, some ideas came to mind:
Use different Jenkinsfiles for the deliveries and load them in the top-level Jenkinsfile; it seems that the Multibranch Pipeline job does not allow configuring the Jenkinsfile to use yet (https://issues.jenkins-ci.org/browse/JENKINS-35415), however, so creating the jobs for the individual deliveries is still open.
Provide a configuration option for the "top-level" job and have ifs for all deliveries in the Jenkinsfile, to be able to select which should be built. This would mix different build types in one pipeline, though, and, at the very least, mess up the estimation of the build time.
Are those viable options, or is there a better one?
What you could do is write a pipeline script that has "if"-guards around the single stages, like this:
stage "s1"
if (theStage in ["s1","all"]) {
sleep 2
}
stage "s2"
if (theStage in ["s2", "all"]) {
sleep 2
}
stage "s3"
if (theStage in ["s3", "all"]) {
sleep 2
}
Then you can make a "main" job that uses this script and runs all stages at once by setting the parameter "theStage" to "all". This job will collect the statistics when all stages are run at once and give you useful estimated times.
Furthermore, you can make a "partial run" job that uses this script and is parametrized with the stage you want to run. The estimation will not be very useful there, though.
Note that I put the stage itself into the main script and only the execution code into the conditional, as suggested by Martin Ba. This makes sure that the visualization of the job is more reliable.
As an expansion of the previous answer, I would propose something like this:
def stageIf(String name, Closure body) {
    if (params.firstStage <= name && params.lastStage >= name) {
        stage(name, body)
    } else {
        stage(name) {
            echo "Stage skipped: $name"
        }
    }
}

node('linux') {
    properties([
        parameters([
            // Note: choice parameters take no defaultValue;
            // the first entry in 'choices' is the default.
            choice(
                name: 'firstStage',
                choices: '1.Build\n' +
                         '2.Docker\n' +
                         '3.Deploy',
                description: 'First stage to start'
            ),
            choice(
                name: 'lastStage',
                choices: '3.Deploy\n' +
                         '2.Docker\n' +
                         '1.Build',
                description: 'Last stage to start'
            )
        ])
    ])

    stageIf('1.Build') {
        // ...
    }
    stageIf('3.Deploy') {
        // ...
    }
}
Not as perfect as I would wish, but at least it's working.

How can I delete a job using the Job DSL plugin (script) in Jenkins?

I am very new to Jenkins and the Job DSL plugin. After a little research, I found out how to create a job using the DSL, and now I am trying to delete a job using the DSL.
I know how to disable a job using the following code:
// create new job
// freeStyleJob("MyJob1", closure = null);
job("MyJob1") {
    disabled(true)
}
It is working perfectly fine. But I couldn't find any method to delete another job in Jenkins.
Please help!
Thanks!
To delete a job, you have to set the "Action for removed jobs" option to "Delete" in the "Process Job DSLs" build step configuration. Then remove the job from your script and run the seed job.
Each instance of the Job DSL plugin tracks which jobs (and views) it creates. When it runs again, you can configure what it does with jobs (and views) that were present the previous time this instance ran but are not present this time.
Let's assume you have two files you use to create jobs.
seed_jobdsl.groovy:
job('seed_all') {
    steps {
        dsl {
            external('*_jobdsl.groovy')
            // default behavior
            // removeAction('IGNORE')
        }
    }
}
test_jobdsl.groovy:
job('test_stuff') {
    steps {
        shell('echo "I live!"')
    }
}
This will leave jobs created by seed_all unchanged, even if they are not present in the list of jobs created the next time the seed is run.
To get jobs to be deleted, change your seed job code:
seed_jobdsl.groovy:
job('seed_all') {
    steps {
        dsl {
            external('*_jobdsl.groovy')
            removeAction('DELETE')
        }
    }
}
Now run the seed_all job to apply your change (seed_all overwrites its own configuration when run). Then make the following change:
test_jobdsl.groovy:
job('test_other') {
    steps {
        shell('echo "The job is dead, long live the new job!"')
    }
}
Run seed_all again. You will notice that test_stuff is deleted and test_other is created. If you then remove test_jobdsl.groovy and run seed_all, test_other will be deleted as well.
