Groovy script to get Jenkins pipeline's script path

I want to run a script in the Jenkins Script Console to retrieve the scriptPath parameter of all jobs/pipelines configured in Jenkins. I found a way to get the names of the pipelines, but I want the scriptPath parameter of each pipeline.
Any leads?

All pipeline jobs are instances of org.jenkinsci.plugins.workflow.job.WorkflowJob and can be found using the Jenkins.instance.getAllItems method.
Once found, each job holds a FlowDefinition, which is accessible via the getDefinition() method. There are two types of definition for pipelines:
CpsFlowDefinition - for pipelines which define an inline script (not SCM), the script is accessible via the getScript() method.
CpsScmFlowDefinition - for pipelines which define an SCM script, the script is accessible via the getScriptPath() method.
So to achieve what you want you can go over relevant jobs and extract the relevant attribute:
def pipelineJobs = Jenkins.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob)
def scmJobs = pipelineJobs.findAll {
    it.definition instanceof org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition
}
scmJobs.each {
    println "Pipeline Name: ${it.name}"
    println "SCM Script Path: ${it.definition.scriptPath}"
}
If all your jobs are SCM pipelines, you can use the following one-liner:
Jenkins.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob)*.definition.scriptPath
For a single specific job you can use:
Jenkins.instance.getItemByFullName("<PIPELINE_NAME>").definition.scriptPath // or just script for inline definition
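If your instance mixes inline and SCM pipelines, a combined Script Console sketch (assuming the two definition types above are the only ones in play) could look like this:
import org.jenkinsci.plugins.workflow.job.WorkflowJob
import org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition
import org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition

Jenkins.instance.getAllItems(WorkflowJob).each { job ->
    def definition = job.definition
    if (definition instanceof CpsScmFlowDefinition) {
        // SCM pipeline: the Jenkinsfile path in the repository
        println "${job.fullName}: scriptPath=${definition.scriptPath}"
    } else if (definition instanceof CpsFlowDefinition) {
        // inline pipeline: the script itself, no path exists
        println "${job.fullName}: inline script (${definition.script.size()} chars)"
    }
}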

Related

How to pass environment variable to Jenkins Remote API when submitting job

I have a declarative pipeline job (this is not a multi-branch pipeline job using a Jenkinsfile) without parameters, but some stages are conditional based on the value of an environment variable:
stage('deploy-release') {
    when {
        environment name: 'GIT_BRANCH', value: 'master'
    }
    steps {
        sh "mvn deploy:deploy-file -B -DpomFile=pom.xml -Dfile=target/example.jar -DrepositoryId=maven-releases -Durl=${NEXUS_URL}/repository/maven-releases/"
    }
}
I want to trigger the job from external system but I need to pass correct value of given environment variable. Is there some way how to do that via Jenkins Remote API?
To pass a value for a given environment variable, you need to define a build parameter with the exact same name as the environment variable by selecting "This build is parameterized" in the job configuration. In a pipeline, string parameters are automatically exposed as environment variables of the same name.
You can refer to the Parameterized Build documentation.
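A minimal sketch, assuming the job is named my-job and you have a user API token: declare the parameter in the pipeline, then trigger it through the buildWithParameters endpoint of the Remote API.
// Jenkinsfile: declare a parameter matching the environment variable name
pipeline {
    agent any
    parameters {
        string(name: 'GIT_BRANCH', defaultValue: 'master', description: 'Branch to deploy')
    }
    stages {
        stage('deploy-release') {
            when {
                environment name: 'GIT_BRANCH', value: 'master'
            }
            steps {
                echo "Deploying ${env.GIT_BRANCH}"
            }
        }
    }
}
// Trigger remotely (string parameters become environment variables):
// curl -X POST "https://<jenkins-host>/job/my-job/buildWithParameters?GIT_BRANCH=master" --user <user>:<api-token>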

Using variables created inside another job's build in a Jenkinsfile

I need to read some variables created inside another job. It's easier to explain with pseudo code:
my job:
{
    build job: "create cluster" // this job will create some vars (cluster_name)
    // use this var from my job
    echo "${cluster_name}"
}
Ideally this would be with declarative pipelines, but I can always use a script {} block.
First, in your create cluster job, you need to put that variable into an environment variable. You can do it this way:
//create cluster Jenkinsfile
env.CLUSTER_NAME = cluster_name
Then in your upstream job you can read that variable from the result of the build step:
def result = build job: 'create cluster'
echo result.buildVariables.CLUSTER_NAME
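In a declarative pipeline the same pattern needs a script block; a minimal sketch (the job name create cluster comes from the question, the stage name is illustrative):
pipeline {
    agent any
    stages {
        stage('get-cluster-name') {
            steps {
                script {
                    // wait: true (the default) blocks until the downstream job
                    // finishes, so buildVariables is populated
                    def result = build job: 'create cluster', wait: true
                    env.CLUSTER_NAME = result.buildVariables.CLUSTER_NAME
                }
                echo "Cluster name: ${env.CLUSTER_NAME}"
            }
        }
    }
}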

How to pass env vars to a MultibranchPipelineJob created by Jenkins Job DSL?

I am creating a MultibranchPipelineJob with Jenkins Job DSL. I want to pass some environment variables to the job, but I can't figure out how to do that from the documentation.
Multibranch Pipeline jobs no longer support parameters.
You can use the Folder Properties Plugin to set your environment variables, which can then be accessed by all jobs within that folder: https://plugins.jenkins.io/folder-properties/
However, multibranch pipeline jobs caused us a lot of performance issues, so we moved away from them. We wrote a Job DSL job that acts as a multibranch pipeline job: it scans the git branches and creates simple pipeline jobs as needed.
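If you go the Folder Properties route, the plugin also provides a withFolderProperties pipeline step; a sketch of a Jenkinsfile in a job inside the folder (the property name MY_VAR is a placeholder you would define in the folder configuration):
node {
    withFolderProperties {
        // folder properties are exposed as environment variables inside this block
        echo "MY_VAR is ${env.MY_VAR}"
    }
}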
You pass them as parameters like this:
parameters {
    stringParam("MyVariable1", "my-value1")
    stringParam("MyVariable2", "${myDynamicValue2}")
}
Then consume them in the job via params or environment variables (both work) like this:
echo "my vars are ${params.MyVariable1} or ${env.MyVariable2}"

Multi-branch configuration with externally-defined Jenkinsfile

I have an open-source project, that resides in GitHub and is built using a build farm, controlled by Jenkins.
I want to build it branch-wise using a pipeline, but I don't want to store Jenkinsfile inside the code. Is there a way to accomplish this?
I have encountered the same issue as you. While the idea of having the build process as part of the code is good, a Jenkinsfile can include information that is not intrinsic to the project build itself, but rather specific to the build environment instance, which may change.
The way I accomplished this is:
1. Encapsulate the core build process in a single script (build.py or build.sh). This may call specific build tools like Make, CMake, Ant, etc.
2. Tell Jenkins via the Jenkinsfile to call a function defined in a single global library.
3. Define the global Jenkins build function to call the build script (e.g. build.py) with appropriate environment settings, for example using custom tools and setting up the PATH.
So for step 2, create a Jenkinsfile in your project containing just the line
build_PROJECTNAME()
where PROJECTNAME is based on the name of your project.
Then use the Pipeline Shared Groovy Libraries Plugin and, in the shared library repository, create a Groovy script called vars/build_PROJECTNAME.groovy containing the code that sets up the environment and calls the project build script (e.g. build.py):
def call() {
    node('linux') {
        stage("checkout") {
            checkout scm
        }
        stage("build") {
            withEnv([
                "PATH+CMAKE=${tool 'CMake'}/bin",
                "PATH+PYTHON=${tool 'Python-3'}",
                "PATH+NINJA=${tool 'Ninja'}",
            ]) {
                sh 'python build.py'
            }
        }
    }
}
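Note that for the one-line Jenkinsfile to resolve build_PROJECTNAME(), the shared library must either be configured as a Global Pipeline Library with "Load implicitly" enabled, or be imported explicitly; the library name my-shared-lib below is an assumption:
// Jenkinsfile -- explicit import variant
@Library('my-shared-lib') _
build_PROJECTNAME()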
First of all, why do you not want a Jenkinsfile in your code? The pipeline is just as much part of the code as would be your build file.
Other than that, you can load Groovy files to be evaluated as a pipeline script. You can do this from a different location with the "Pipeline script from SCM" option and then check out the actual code, but this will force you to manually take care of the branch builds.
Another option would be to have a very basic Jenkinsfile that merely checkouts an external pipeline.
You would get something like this:
node {
    deleteDir()
    git env.flowScm
    def flow = load 'pipeline.groovy'
    stash includes: '**', name: 'flowFiles'
    stage 'Checkout'
    checkout scm // short hand for checking out the "from SCM" repository
    flow.runFlow()
}
Here the pipeline.groovy file, which contains the actual pipeline, would look like this:
def runFlow() {
    // your pipeline code
}
// Has to end with 'return this;' in order to be used as a library
return this;

What are seed jobs in Jenkins and how do they work?

What are seed jobs in Jenkins and how do they work?
Can we create a new job from a seed job without using GitHub?
That depends on context. Jenkins itself does not provide "seed jobs".
There are plugins that allow creating jobs from other jobs, like the excellent Job DSL plugin. With it, you can create jobs where a Groovy script creates a larger number of jobs for you.
The Job DSL plugin refers to those jobs as "seed jobs" (but they are regular freestyle or pipeline jobs). The Job DSL plugin does not require a GitHub connection.
The seed job is a normal Jenkins job that runs a Job DSL script; in turn, the script contains instructions that create additional jobs. In short, the seed job is a job that creates more jobs. For example, a Job DSL script might create a single freestyle job that prints a 'Hello World!' message in the job's console output.
A Job DSL script consists of API methods provided by the Job DSL plugin; you can use these API methods to configure different aspects of a job, such as its type (freestyle versus pipeline jobs), build triggers, build parameters, post-build actions, and so on. You can find all supported methods on the API reference site.
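As an illustration, a minimal Job DSL script for such a 'Hello World!' freestyle job might look like this (the job name is arbitrary):
// creates one freestyle job when run by the seed job
job('hello-world-job') {
    description('Created by the seed job')
    steps {
        shell('echo "Hello World!"')
    }
}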
The jobs used for creating new jobs are called seed jobs; a seed job generates new jobs from Job DSL scripts (using the Job DSL plugin).
If your scripts are blocked by the sandbox, you can disable this feature ("Enable script security for Job DSL scripts") under:
Jenkins Dashboard → Manage Jenkins → Configure Global Security
One way to create a seed job:
Write Job DSL scripts that generate the new jobs, for example:
Job1.groovy
job("Job1") {
    description("First job")
    authenticationToken('secret')
    label('dynamic')
    scm {
        github('Asad/jenkins_jobDSL1', 'master')
    }
    triggers {
        gitHubPushTrigger()
    }
    steps {
        shell('''
            echo "test"
        ''')
    }
}
buildPipelineView('project-A') {
    title('Project A CI Pipeline')
    displayedBuilds(5)
    selectedJob('Job1')
    showPipelineParameters()
    refreshFrequency(60)
}
Create Job2.groovy and the others in the same way.
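The seed job itself can then be a pipeline that runs those scripts via the jobDsl step provided by the Job DSL plugin; a sketch, assuming the scripts live in a jobs/ folder of the checked-out repo:
pipeline {
    agent any
    stages {
        stage('generate-jobs') {
            steps {
                checkout scm
                // run every Job DSL script in the repo's jobs/ folder
                jobDsl targets: 'jobs/*.groovy'
            }
        }
    }
}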
For Jenkins Job DSL documentation, see https://jenkinsci.github.io/job-dsl-plugin/
Think about what a job actually is: under the hood, it is just a Java object inside the running Jenkins instance.
How do you normally create such a job?
You configure it in the Jenkins UI -> a REST call goes to the Jenkins URL -> Jenkins receives the call on the relevant endpoint -> the relevant code/method runs and generates the new job.
How does a seed job do it?
You configure the seed job in the Jenkins UI only once -> running the seed job executes its code directly against the internal Jenkins methods, skipping the whole manual process described above.
Once your code can talk directly to Jenkins code, things are much easier: just update the code in the relevant repo and you are done.
