What are seed jobs in Jenkins and how do they work?

What are seed jobs in Jenkins and how do they work?
Can we create a new job from a seed job without using GitHub?

That depends on context. Jenkins itself does not provide "seed jobs".
There are plugins that allow creating jobs from other jobs, such as the excellent Job DSL plugin. With it, you can create jobs in which a Groovy script creates a larger number of jobs for you.
The Job DSL plugin refers to those jobs as "seed jobs" (but they are regular freestyle or pipeline jobs). The Job DSL plugin does not require a GitHub connection.

The seed job is a normal Jenkins job that runs the Job DSL script; in turn, the script contains instructions that create additional jobs. In short, the seed job is a job that creates more jobs. In this step, you will construct a Job DSL script and incorporate it into a seed job. The Job DSL script that you’ll define will create a single freestyle job that prints a 'Hello World!' message in the job’s console output.
A Job DSL script consists of API methods provided by the Job DSL plugin; you can use these API methods to configure different aspects of a job, such as its type (freestyle versus pipeline jobs), build triggers, build parameters, post-build actions, and so on. You can find all supported methods on the API reference site.
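For example, a minimal Job DSL script producing that freestyle job could look like this (the job name hello-world is illustrative):
job('hello-world') {
    description('Prints a greeting to the console output')
    steps {
        shell('echo "Hello World!"')
    }
}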
The jobs we use for creating new jobs are called seed jobs; a seed job generates new jobs from Job DSL scripts (using the Job DSL plugin).
Here, we disable the "Enable script security for Job DSL scripts" feature:
Jenkins Dashboard → Manage Jenkins → Configure Global Security
Way to create a seed job:
Write Job DSL scripts for generating the new jobs.
Job1.groovy
job("Job1") {
    description("First job")
    authenticationToken('secret')
    label('dynamic')
    scm {
        github('Asad/jenkins_jobDSL1', 'master')
    }
    triggers {
        gitHubPushTrigger()
    }
    steps {
        shell('''
            echo "test"
        ''')
    }
}
buildPipelineView('project-A') {
    title('Project A CI Pipeline')
    displayedBuilds(5)
    selectedJob('Job1')
    showPipelineParameters()
    refreshFrequency(60)
}
Create Job2.groovy and the others in the same way.
For Jenkins Job DSL documentation, see:
https://jenkinsci.github.io/job-dsl-plugin/

Think about a job: what is it, actually?
It is just a Java object inside the Jenkins JVM.
How do you generate such a job/build?
Configure it in the Jenkins UI → REST API call to the Jenkins URL → the Jenkins service receives your call on the relevant endpoint → the relevant code/method is called and the new job is generated.
How does a seed job do it?
Configure the seed job in the Jenkins UI only once → run the seed job → its code runs against the internal Jenkins methods and skips the whole manual process described above.
Now, when your code can talk directly to Jenkins code, things are much easier: just update your code in the relevant repo and you are done.

Related

How to run a job from a Jenkins Pipeline on the same executor (declarative syntax)

I want to use the Jenkins "PRQA" plugin, which does not seem to have an option to use it from a pipeline. The plugin would run static code analysis and publish the results.
In my case, it requires some preparations that are already done in a pipeline job. Because of that, I want to include the job in that pipeline, on the same executor, with the data prepared by the pipeline, as a kind of inlined job step.
I have tried to create a job for the PRQA plugin step and execute it with the build step from the pipeline. But this tries to start the job on a new executor (and stalls because I have only one executor).
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Prepare'
            }
        }
        stage('SCA') {
            steps {
                // Run this without using a new executor, with the environment that exists now
                build 'PRQA_Job'
            }
        }
    }
}
What is the correct way to run the job on the same executor, with the current working directory?
With build 'PRQA_Job' it is not possible to run the second job on the same executor (1 job = 1 executor), since the main job just waits for the triggered job to finish. But you can run another job on the same agent with more than one executor to reach the workspace of the main job.
For a test, specify the same agent label in both jobs: agent { label 'agent_name_here' }
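A minimal sketch of that setup, assuming declarative pipelines and a placeholder label build-node on an agent with at least two executors:
pipeline {
    // Pin this job to the shared agent; configure PRQA_Job with the same label
    // so both builds run on one node and can reach each other's workspaces.
    agent { label 'build-node' }
    stages {
        stage('SCA') {
            steps {
                build job: 'PRQA_Job', wait: true
            }
        }
    }
}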
If you want to use the functionality of a plugin which has no native pipeline support, you could try the "step: General Build step" feature for Jenkins Pipelines. You can use the Pipeline Syntax wizard linked in the job configuration window to generate the needed pipeline description.
If the plugin does not show up in the "step: General Build step" part of Jenkins, you can use a separate job. To copy all the needed files/data into this second job, you will need to use the Archive Artifact/Copy Artifact functionality of Jenkins to save files from your pipeline build.
For more information on how to use Archive Artifact/Copy Artifact, see https://plugins.jenkins.io/copyartifact/ and
https://www.jenkins.io/doc/pipeline/tour/tests-and-artifacts/
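A hedged sketch of that artifact hand-off (job and path names are placeholders; archiveArtifacts is a core pipeline step, while copyArtifacts comes from the Copy Artifact plugin):
// In the pipeline job: save the prepared files as build artifacts
archiveArtifacts artifacts: 'prepared/**'

// In the second job: fetch those artifacts back into the workspace
copyArtifacts projectName: 'my-pipeline-job', selector: lastSuccessful()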

How to pass env vars to a MultibranchPipelineJob created by Jenkins Job DSL?

I am creating a MultibranchPipelineJob with Jenkins Job DSL. I want to pass some environment variables to the job, but I can't figure out how to do that from the documentation.
Multibranch pipeline jobs no longer support parameters.
You can use the Folder Properties Plugin to set your environment variables, which can then be accessed by all the jobs within that folder: https://plugins.jenkins.io/folder-properties/
However, the multibranch pipeline job has a lot of performance issues, so we moved away from multibranch pipeline jobs. We wrote a DSL job that acts as a multibranch pipeline job: it scans through the git branches and creates simple pipeline jobs as needed.
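A hedged Job DSL sketch of that branch-scanning approach (the repository URL, the job name pattern, and the hardcoded branch list are placeholders; in practice you would query your SCM for the branches):
def branches = ['main', 'develop'] // placeholder; fetch the real list from git
branches.each { branchName ->
    pipelineJob("my-app-${branchName}") {
        definition {
            cpsScm {
                scm {
                    git {
                        remote { url('https://example.com/my-app.git') }
                        branch(branchName)
                    }
                }
                scriptPath('Jenkinsfile')
            }
        }
    }
}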
You pass them as parameters like this:
parameters {
    stringParam("MyVariable1", "my-value1")
    stringParam("MyVariable2", "${myDynamicValue2}")
}
Then consume them in the job via parameters or environment (both work equally), like this:
echo "my vars are ${params.MyVariable1} or ${env.MyVariable2}"

How can I get the joblist in jenkins after I execute the jobdsl

How can I get the job list in Jenkins after I execute the Job DSL?
Jenkins Job DSL is nice for managing Jenkins jobs. When you execute the Job DSL, Jenkins generates the expected jobs. Even more, if a job already exists, you can choose to skip or overwrite it.
Now I want to trigger a build directly after a job is newly generated.
See this sample console output from a Jenkins build:
Processing DSL script demoJob.groovy
Added items:
    GeneratedJob{name='simpliest-job-ever'}
Existing items:
    GeneratedJob{name='existing-job'}
How can I get the job name simpliest-job-ever in Jenkins? And in this case, I don't want to build existing-job.
Scanning the console log could be an option, but it is not elegant enough.
You can trigger a build from the DSL script using the queue method (docs).
job('simpliest-job-ever') {
    // ...
}
queue('simpliest-job-ever')

Initializing Jenkins 2.0 with pipeline in init.groovy.d script

For automation, I would like to initialize a Jenkins 2.0 instance with a pipeline job. I want to create a Groovy script that is copied to the /usr/share/jenkins/ref/init.groovy.d/ folder on startup. The script should create a Jenkins 2.0 Pipeline job for processing a Jenkinsfile from SCM.
I cannot find the relevant Javadoc for the 2.0 pipeline classes or examples of how to do this.
Previously, using Job DSL to create a pipeline, I used a Groovy script to create a FreeStyleProject with an ExecuteDslScripts builder. That job would then be the Job DSL seed job.
One option is to use an init script to create a Job DSL seed job to create a Jenkins 2.0 pipeline. It just seems unnecessarily complex.
I am experimenting in this repo: https://github.com/martinmosegaard/vigilant-sniffle
With Job DSL 1.47 (to be released soon) you can use the Job DSL API directly from the init script without the need to create a seed job.
import javaposse.jobdsl.dsl.DslScriptLoader
import javaposse.jobdsl.plugin.JenkinsJobManagement
def jobDslScript = new File('jobs.groovy')
def workspace = new File('.')
def jobManagement = new JenkinsJobManagement(System.out, [:], workspace)
new DslScriptLoader(jobManagement).runScript(jobDslScript.text)
See PR #837 for details.
If you only need to create one simple pipeline job, you can use the Jenkins API. But that really only works well when creating one simple job; for a complex setup you need some abstraction like Job DSL.
Start here: http://javadoc.jenkins-ci.org/jenkins/model/Jenkins.html#createProject(java.lang.Class,%20java.lang.String).
Example:
import jenkins.model.Jenkins
import org.jenkinsci.plugins.workflow.job.WorkflowJob
WorkflowJob job = Jenkins.instance.createProject(WorkflowJob, 'my-pipeline')
Then you need to populate the job, e.g. setting a flow definition.
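For example, a minimal sketch that populates the job created above with an inline flow definition (the script text is illustrative; the second CpsFlowDefinition argument enables the Groovy sandbox):
import org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition

// give the job a trivial inline pipeline script
job.definition = new CpsFlowDefinition("node { echo 'Hello World!' }", true)
job.save()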
Or you can wait for the System Config DSL Plugin to be ready. But it has not been released yet and I'm not sure if it can create jobs.

How do I make a Jenkins job start after multiple simultaneous upstream jobs succeed?

In order to get the fastest feedback possible, we occasionally want Jenkins jobs to run in parallel. Jenkins has the ability to start multiple downstream jobs (or 'fork' the pipeline) when a job finishes. However, Jenkins doesn't seem to have any way of making a downstream job start only when all branches of that fork succeed (or 'joining' the fork back together).
Jenkins has a "Build after other projects are built" option, but I interpret that as "start this job when any upstream job finishes" (not "start this job when all upstream jobs succeed").
Here is a visualization of what I'm talking about. Does anyone know if a plugin exists to do what I'm after?
Edit:
When I originally posted this question in 2012, Jason's answer (the Join and Promoted Build plugins) was the best, and the solution I went with.
However, dnozay's answer (The Build Flow plugin) was made popular a year or so after this question, which is a much better answer. For what it's worth, if people ask me this question today, I now recommend that instead.
Pipeline plugin
You can use the Pipeline Plugin (formerly workflow-plugin).
It comes with many examples, and you can follow this tutorial.
e.g.
// build
stage 'build'
...
// deploy
stage 'deploy'
...
// run tests in parallel
stage 'test'
parallel 'functional': {
    ...
}, 'performance': {
    ...
}
// promote artifacts
stage 'promote'
...
Build flow plugin
You can also use the Build Flow Plugin. It is simply awesome - but it is deprecated (development frozen).
Setting up the jobs
Create jobs for:
build
deploy
performance tests
functional tests
promotion
Setting up the upstream
In the upstream job (here, build), create a unique artifact, e.g.:
echo ${BUILD_TAG} > build.tag
Archive the build.tag artifact.
Record fingerprints to track file usage; if any job copies the same build.tag file and records fingerprints, you will be able to track the parent.
Configure the job to get promoted when the promotion job is successful.
Setting up the downstream jobs
I use two parameters, PARENT_JOB_NAME and PARENT_BUILD_NUMBER.
Copy the artifacts from the upstream build job using the Copy Artifact Plugin:
Project name = ${PARENT_JOB_NAME}
Which build = ${PARENT_BUILD_NUMBER}
Artifacts to copy = build.tag
Record fingerprints; that's crucial.
Setting up the downstream promotion job
Do the same as above, to establish the upstream-downstream relationship.
It does not need any build step. You can perform additional post-build actions like "hey QA, it's your turn".
Create a build flow job
// start with the build
parent = build("build")
parent_job_name = parent.environment["JOB_NAME"]
parent_build_number = parent.environment["BUILD_NUMBER"]

// then deploy
build("deploy")

// then your qualifying tests
parallel(
    { build("functional tests",
            PARENT_BUILD_NUMBER: parent_build_number,
            PARENT_JOB_NAME: parent_job_name) },
    { build("performance tests",
            PARENT_BUILD_NUMBER: parent_build_number,
            PARENT_JOB_NAME: parent_job_name) }
)

// if nothing failed till now...
build("promotion",
      PARENT_BUILD_NUMBER: parent_build_number,
      PARENT_JOB_NAME: parent_job_name)

// knock yourself out...
build("more expensive QA tests",
      PARENT_BUILD_NUMBER: parent_build_number,
      PARENT_JOB_NAME: parent_job_name)
good luck.
There are two solutions that I have used for this scenario in the past:
Use the Join Plugin on your "deploy" job and specify "promote" as the targeted job. You would have to specify "Functional Tests" and "Performance Tests" as the joined jobs and start them in some fashion post-build. The Parameterized Trigger Plugin is good for this.
Use the Promoted Builds Plugin on your "deploy" job, specify a promotion that triggers when the downstream jobs are completed, and specify the Functional and Performance test jobs. As part of the promotion action, trigger the "promote" job. You still have to start the two test jobs from "deploy".
There is a CRITICAL aspect to both of these solutions: fingerprints must be correctly used. Here is what I found:
The "build" job must ORIGINATE a new fingerprinted file. In other words, it has to fingerprint some file that Jenkins thinks was originated by the initial job. Double check the "See Fingerprints" link of the job to verify this.
All downstream linked jobs (in this case, "deploy", "Functional Tests" and "Performance tests") need to obtain and fingerprint this same file. The Copy Artifacts plugin is great for this sort of thing.
Keep in mind that some plugins allow you to change the order of fingerprinting and downstream job starting; in this case, the fingerprinting MUST occur before a downstream job fingerprints the same file to ensure the ORIGIN of the fingerprint is properly set.
The Multijob plugin works beautifully for that scenario. It also comes in handy if you want a single "parent" job to kick off multiple "child" jobs but still be able to execute each of the children manually, by themselves. This works by creating "phases", to which you add 1 to n jobs. The build only continues when the entire phase is done, so if a phase has multiple jobs, they all must complete before the rest are executed. Naturally, it is configurable whether the build continues if there is a failure within the phase.
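A hedged Job DSL sketch of the phase idea (assuming the Multijob plugin and its Job DSL support are installed; the phaseJob method and the job names are assumptions/placeholders):
multiJob('parent') {
    steps {
        // every job in a phase must finish before the next phase starts
        phase('tests') {
            phaseJob('functional-tests')
            phaseJob('performance-tests')
        }
        phase('promote') {
            phaseJob('promotion')
        }
    }
}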
Jenkins recently announced first class support for workflow.
I believe the Workflow Plugin is now called the Pipeline Plugin and is the (current) preferred solution to the original question, inspired by the Build Flow Plugin. There is also a Getting Started Tutorial in GitHub.
The answers by jason and dnozay are good enough. But in case someone is looking for an easy way, just use the JobFanIn plugin.
This diamond dependency build pipeline could be configured with the DepBuilder plugin. DepBuilder uses its own domain-specific language, which in this case would look like:
_BUILD {
    // define the maximum duration of the build (4 hours)
    maxDuration: 04:00
}
// define the build order of the existing Jenkins jobs
Build -> Deploy
Deploy -> "Functional Tests" -> Promote
Deploy -> "Performance Tests" -> Promote
After building the project, the build visualization will be shown on the project dashboard page.
If any of the upstream jobs didn't succeed, the build will be automatically aborted. Abort behavior can be tweaked on a per-job basis; for more info, see the DepBuilder documentation.
