How to load another groovy script in the same Jenkins node?

When loading a pipeline script from another pipeline script, the two pipelines don't get executed on the same node: the first one is executed on my master node and the second gets executed on a slave node.
I'm using Jenkins pipelines with the Pipeline Script from SCM option for a lot of jobs in this way:
Each of my jobs defines its corresponding Git repo URL with the Poll SCM option so that the repository gets automatically polled when a change is made to my code (basic job usage).
Each of my jobs defines a simple Jenkinsfile at the root of its repository, and the pipeline script inside does basically nothing but load a more generic pipeline.
E.g.:
node {
    // --- Load the generic pipeline ---
    checkout scm: [$class: 'GitSCM', branches: [[name: '*/master']], extensions: [], submoduleCfg: [], userRemoteConfigs: [[url: 'http://github/owner/pipeline-repo.git']]]
    load 'common-pipeline.groovy'
}()
My common-pipeline.groovy pipeline does the actual stuff such as building, releasing or deploying artifacts, e.g.:
{ ->
    node() {
        def functions = load 'common/functions.groovy'
        functions.build()
        functions.release()
        functions.deploy()
    }
}
Now I don't want to force the node for each job by putting node("master") or node("remote") in both pipelines, because I really don't want to handle that manually. However, once the first pipeline runs on a specific node (master, slave1, slave2 or slave3), I'd like the second/loaded pipeline to be executed on the same node, because otherwise my actual Git repository code is not available from the other node's workspace...
Is there any way I can specify that I want my second pipeline to be executed on the same node as the first, or maybe pass an argument when using the load step?

How about stashing the workspace after the checkout and before the script load?
e.g.
stash includes: '**', name: "source"
and then unstash it in another node(){} section:
e.g.
unstash "source"
That way it will be available in the other node.
Don't forget to clean up the workspace though.
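Putting the two fragments together, a minimal sketch (the repo URL is taken from the example above, and it assumes the loaded closure is invoked right after load):
// Jenkinsfile
node {
    checkout scm: [$class: 'GitSCM', branches: [[name: '*/master']], userRemoteConfigs: [[url: 'http://github/owner/pipeline-repo.git']]]
    stash includes: '**', name: 'source'
    load('common-pipeline.groovy')()
}

// common-pipeline.groovy
{ ->
    node() {
        unstash 'source' // restore the sources on whatever node this closure landed on
        def functions = load 'common/functions.groovy'
        functions.build()
        functions.release()
        functions.deploy()
        deleteDir() // clean up the workspace when done
    }
}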
Or how about creating a common function that contains the logic for the checkout too (maybe passing in branches as a param)? Then you can discard the node(){} in the Jenkinsfile and just use node(){} entries in your shared groovy script.
e.g. Jenkinsfile:
def common = load 'common-pipeline.groovy'
common.createWorkflow("*/master")
common-pipeline.groovy:
def createWorkflow(branches) {
    node() {
        def functions = load 'common/functions.groovy'
        functions.checkout(branches)
        functions.build()
        functions.release()
        functions.deploy()
    }
}
// Needed so the Jenkinsfile can call createWorkflow() on the object returned by load
return this

You can make a parameterized job for the second pipeline and use the parameter to control the node when triggering the second job.
The second pipeline would look like this:
node(runhereParam) {
}
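On the triggering side, a rough sketch of how the first pipeline could pass its own node along (the downstream job name 'second-pipeline' is hypothetical; the parameter is assumed to be a String parameter named runhereParam):
node {
    // ... checkout and other work on whatever node was allocated ...
    // Hand the allocated node's name to the parameterized second job
    build job: 'second-pipeline',
          parameters: [string(name: 'runhereParam', value: env.NODE_NAME)]
}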

Related

Jenkins checkout into a user defined directory

My project has 3 submodules in GitLab, which are all needed to build my project. I want to create independent pipelines in Jenkins to monitor and pull when a merge request is open.
If I create individual pipelines, Jenkins will create a new folder with the name of the pipeline project like so: "jenkins_home/workspace/submodule1", "jenkins_home/workspace/submodule2", "jenkins_home/workspace/submodule3".
Is it possible to specify the directory where I want to checkout each submodule? As in, checkout all into "jenkins_home/workspace/common_folder", where common_folder will contain submodule1, submodule2 and submodule3.
P.S. I tried bat 'cd common_folder', but the cd command just hangs and never executes.
Also tried dir('subdir') {}, which just creates a new directory inside the submodule pipeline directory: "jenkins_home/workspace/submodule1/subdir/code_from_git".
#!/usr/bin/env groovy
pipeline {
    agent { label 'master' }
    environment {
        gbuild = 'true'
        DB_ENGINE = 'sqlite'
    }
    options {
        skipDefaultCheckout()
    }
    stages {
        stage('Checkout') {
            steps {
                script {
                    checkout([
                        // HERE, need to checkout into a custom folder and not the workspace
                        $class: 'GitSCM',
                        branches: scm.branches,
                        extensions: scm.extensions + [
                            [$class: 'GitLFSPull'],
                            [$class: 'CleanCheckout']
                        ],
                        userRemoteConfigs: scm.userRemoteConfigs
                    ])
                }
            }
        }
    }
}
I believe using dir is the correct approach, or you can create separate pipelines.
Jenkins works on a master/slave configuration: the pipeline you create makes a folder of the same name in the workspace on the master server, which is then created on the slave server when you run the pipeline for the first time; once the pipeline runs and checks out the code on the slave server, it is then pushed to your master server.
I hope this explains the working principle.
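As a concrete illustration of the dir approach (staying inside the job's own workspace; the folder names are just placeholders), here is a short sketch; the GitSCM RelativeTargetDirectory extension is an alternative way to achieve the same thing:
stage('Checkout') {
    steps {
        // Check the repo out into a subfolder of this job's workspace
        dir('common_folder/submodule1') {
            checkout scm
        }
        // Alternative: let GitSCM place the clone in the subfolder itself
        // checkout([$class: 'GitSCM',
        //           branches: scm.branches,
        //           extensions: scm.extensions + [[$class: 'RelativeTargetDirectory', relativeTargetDir: 'common_folder/submodule1']],
        //           userRemoteConfigs: scm.userRemoteConfigs])
    }
}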
A possible workaround for projects with subprojects, where you want to track each subproject for any merge requests and need all the subprojects to build, is to use an independent pipeline.
Additional comment: as there is no admin access on the server PC, my ability to execute some simple commands may be limited, so this solution might not be right for you.
As my cmd commands in the pipeline were not executing and were keeping the whole pipeline from running, and I was not able to change the location of the project from the workspace to a desired location, I created 2 extra pipelines.
The first pipeline listens to webhooks from GitLab and pulls the branch in the merge request (it also verifies whether it is a merge request; if so it takes the branch being merged, if not it takes the master branch):
stage('Checkout') {
    steps {
        script {
            if (env.gitlabActionType == 'Merge') {
                checkout([
                    $class: 'GitSCM',
                    branches: [[name: "${env.gitlabSourceBranch}"]]
                ])
            }
            else {
                checkout([
                    $class: 'GitSCM',
                    branches: [[name: 'master']]
                ])
            }
        }
    }
}
The second pipeline copies the checked-out files into the desired location. For this step I made a freestyle project, where I execute a Windows batch command to xcopy CheckedoutDir DesiredDestination.
The second pipeline has a Build Trigger to build after the first pipeline is built stable. It also has a 'Trigger/call builds on other projects' action to trigger the main pipeline that does the building and unit testing.

Jenkins: Building multiple repos with different branches

I have multiple repos with their own Jenkinsfiles, and when I am working on one repo I need to build the others so I have an end-to-end app deployed for feature development. As the app runs on AWS with the containers deployed into EKS, my preference is to be able to build and run on AWS.
There is an order to the building: the infrastructure needs to be deployed first, before the backend services (there are 3) and the UI.
Ideally I can choose which branches from the 5 repos are deployed, and when a change occurs on any branch that is deployed as part of the ephemeral environment, the pipeline will trigger.
So far what I am thinking is to have a Jenkinsfile in each repo and create a 6th repo, which will have just a YAML file and a Jenkinsfile of its own. The pipeline job for this repo would take data from the YAML file about which branches to use and trigger the other pipelines, passing the branch to each; it would be the only repo with an actual pipeline job.
Has anyone tried this? I'm not sure if it's possible to have a pipeline watch multiple different repos and branches and act as an orchestrator, kicking off other pipelines.
There might be a much easier way to do this, I have read a lot of posts and articles but none seem to achieve what I want.
One approach can be writing a single Jenkinsfile by combining all the stages from each repo into that single Jenkinsfile:
stages {
    stage('Infra Setup') {
        steps {
            // The below will clone your repo and will be checked out to master branch by default.
            git credentialsId: 'jenkins_git_cred', url: '<your_git_url_for_clone>'
            sh "git checkout branchname"
            // Your steps
        }
    }
    stage('Backend1') {
        steps {
            // If you want to checkout to a specific branch by default instead of master then use the below in your pipeline stage.
            checkout([$class: 'GitSCM', branches: [[name: '*/branchname']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'jenkins_git_cred', url: 'your_git_url_for_clone']]])
        }
    }
    stage('backend_n') {
        steps {
            // One or more steps need to be included within the steps block.
        }
    }
    stage('UI') {
        steps {
            // One or more steps need to be included within the steps block.
        }
    }
}
You can generate syntax using the Jenkins Pipeline Syntax generator:
https://your-jenkins-url.com/pipeline-syntax/
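If you prefer the orchestrator approach described in the question (a 6th repo holding a YAML file that lists the branches), a rough sketch could look like the following; it assumes the Pipeline Utility Steps plugin for readYaml, and the job names, YAML keys and parameter name are hypothetical:
node {
    checkout scm                              // the orchestrator repo containing branches.yaml
    def cfg = readYaml file: 'branches.yaml'  // e.g. infra: main, backend1: feature/x, ...

    stage('Infra') {
        build job: 'infra-pipeline',
              parameters: [string(name: 'BRANCH', value: cfg.infra)]
    }
    stage('Backends') {
        // The backend services are triggered only after the infrastructure job succeeded
        ['backend1', 'backend2', 'backend3'].each { svc ->
            build job: "${svc}-pipeline",
                  parameters: [string(name: 'BRANCH', value: cfg[svc])]
        }
    }
    stage('UI') {
        build job: 'ui-pipeline',
              parameters: [string(name: 'BRANCH', value: cfg.ui)]
    }
}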

What does the pollSCM trigger refer to in this Jenkinsfile?

Consider the following setup using Jenkins 2.176.1:
A new pipeline project named Foobar
Poll SCM as (only) build trigger, with: H/5 * * * * ... under the assumption that this refers to the SCM configured in the next step
Pipeline script from SCM with SCM Git and a working Git repository URL
Uncheck Lightweight checkout because of JENKINS-42971 and JENKINS-48431 (I am using build variables in the real project and Jenkinsfile; also this may affect how pollSCM works, so I include this step here)
Said repository contains a simple Jenkinsfile
The Jenkinsfile looks approximately like this:
#!groovy
pipeline {
    agent any
    triggers { pollSCM 'H/5 * * * *' }
    stages {
        stage('Source checkout') {
            steps {
                checkout(
                    [
                        $class: 'GitSCM',
                        branches: [],
                        browser: [],
                        doGenerateSubmoduleConfigurations: false,
                        extensions: [],
                        submoduleCfg: [],
                        userRemoteConfigs: [
                            [
                                url: 'git://server/project.git'
                            ]
                        ]
                    ]
                )
                stash 'source'
            }
        }
        stage('OS-specific binaries') {
            parallel {
                stage('Linux') {
                    agent { label 'gcc && linux' }
                    steps {
                        unstash 'source'
                        echo 'Pretending to do a build here'
                    }
                }
                stage('Windows') {
                    agent { label 'windows' }
                    steps {
                        unstash 'source'
                        echo 'Pretending to do a build here'
                    }
                }
            }
        }
    }
}
My understanding so far was that:
a change to the Jenkinsfile (not the whole repo) triggers the pipeline on any registered agent (or as configured in the pipeline project).
said agent (which is random) uses the pollSCM trigger in the Jenkinsfile to trigger the pipeline stages.
But where does the pollSCM trigger poll (what SCM repo)? And if it's a random agent then how can it reasonably detect changes across poll runs?
then the stages are being executed on the agents as allocated ...
Now I am confused about what refers to what. So here are my questions (all interrelated, which is why I keep them together in one question):
The pipeline project polls the SCM just for the Jenkinsfile or for any changes? The repository in my case is the same (for Jenkinsfile and source files to build binaries from).
If the (project-level) polling triggers on any change rather than only changes to the Jenkinsfile: does the pollSCM trigger in the Jenkinsfile somehow automagically refer to the checkout step?
Then what would happen if I had multiple checkout steps with differing settings?
What determines what repository (and what contents inside of that) gets polled?
... or is this akin to the checkout scm shorthand and pollSCM actually refers to the SCM configured in the pipeline project and so I can shorten the checkout() to checkout scm in the steps?
Unfortunately the user handbook didn't answer any of those questions and pollSCM has a total of four occurrences on a single page within the entire handbook.
I'll take a crack at this one:
The pipeline project polls the SCM just for the Jenkinsfile or for any changes? The repository in my case is the same (for Jenkinsfile and source files to build binaries from).
The pipeline project will poll the repo for ANY file changes, not just the Jenkinsfile. A Jenkinsfile in the source repo is common practice.
If the (project-level) polling triggers on any change rather than only changes to the Jenkinsfile: does the pollSCM trigger in the Jenkinsfile somehow automagically refer to the checkout step?
Your pipeline will be executed when a change to the repo is seen, and the steps are run in the order that they appear in your Jenkinsfile.
Then what would happen if I had multiple checkout steps with differing settings?
If you defined multiple repos with the checkout step (using multiple checkout SCM calls) then the main pipeline project repo would be polled for any changes and the repos you define in the pipeline would be checked out regardless of whether they changed or not.
What determines what repository (and what contents inside of that) gets polled? ... or is this akin to the checkout scm shorthand and pollSCM actually refers to the SCM configured in the pipeline project and so I can shorten the checkout() to checkout scm in the steps?
pollSCM refers to the pipeline project's repo. The entire repo is cloned unless the project is otherwise configured (shallow clone, lightweight checkout, etc.).
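And since the repository in the explicit checkout step is the same one configured in the pipeline project, the step can typically be shortened to the checkout scm shorthand; a minimal sketch of the first stage under that assumption:
stage('Source checkout') {
    steps {
        // 'scm' refers to the SCM configured in the pipeline project itself,
        // so this checks out the same repo/branch that was polled
        checkout scm
        stash 'source'
    }
}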
The trigger defined as pollSCM polls the source control management (SCM) at the repository and branch in which this Jenkinsfile itself (and other code) is located.
For Pipelines which are integrated with a source such as GitHub or BitBucket, triggers may not be necessary, as webhook-based integration will likely already be present. The triggers currently available are cron, pollSCM and upstream.
It also works for a multibranch pipeline as a trigger to execute the pipeline.
When Jenkins polls the SCM, exactly this repository and branch, and detects a change (i.e. a new commit), then this Pipeline (defined in the Jenkinsfile) is executed.
Usually the following SCM step, checkout, will then be executed, so that the specified project(s) can be built, tested and deployed.
See also:
SCM Poll in jenkins multibranch pipeline
ShellHacks (2020): Jenkins: Scan Multibranch Pipeline Without Build

Trigger on only one of multiple polled SCMs in a Jenkinsfile

My Jenkinsfile has two SCM checkouts, primary, and secondary. I only want to have the build triggered when commits are made in primary. I've set the poll argument in the obvious way, but it does not seem to be honored; the build gets triggered when commits are made to either repository.
node {
    stage("checkout") {
        checkout scm: [$class: "MercurialSCM", source: "/var/jenkins_home/hg/primary", subdir: "hg/primary", clean: true], poll: true
        checkout scm: [$class: "MercurialSCM", source: "/var/jenkins_home/hg/secondary", subdir: "hg/secondary", clean: true], poll: false
    }
    stage("do something") {
        echo 'Hello World'
        sh 'sleep 30s'
        echo 'Done'
    }
}
I could not figure out how to do this from within Jenkinsfile only.
To solve this problem for myself, I ended up creating a separate Jenkins job (Free Style project, not pipeline) which was setup to Poll SCM on the primary repo. This job does nothing except in the Post-build Actions it triggers my actual Jenkins Pipeline job which loads the Jenkinsfile like you showed.
The trigger passes a Predefined parameter set to the change ID that the poller found. In my case it was Git so I set change=${GIT_COMMIT}.
In my Pipeline job, I created a String parameter called change.
In my Jenkinsfile, I used env.change in the checkout line to check out the specific commit.
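A rough sketch of that last part, assuming a Git checkout and a String parameter named change (the repository URL is a hypothetical placeholder):
node {
    stage('checkout') {
        // Check out exactly the commit that the polling job detected
        checkout scm: [$class: 'GitSCM',
                       branches: [[name: env.change]],
                       userRemoteConfigs: [[url: 'http://server/primary.git']]],
                 poll: false
    }
}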

Jenkins Pipeline and Promotions

When having a build job with an implemented promotion cycle, i.e. Dev -> QA -> Performance -> Production.
What will be the correct way to migrate this cycle into a pipeline? It looks rather clean/structured to call each of the above-mentioned jobs. Yet, how can I query the build ID (to be able to call the deployment job)? Or have I totally misunderstood the pipeline concept?
You might consider multiple solutions:
Trigger each job sequentially
Just call each job sequentially using the build step:
node() {
    stage "Dev"
    build job: 'Dev'

    stage "QA"
    build job: 'QA'

    // Your other promotion cycles...
}
It is easy to use and will probably already be compliant with your current setup, but I'm not a big fan of this solution because the actual output of your pipeline stages (Dev, QA, etc.) will really live in the dedicated jobs (Dev job, QA job) instead of being directly inside your pipeline. Your pipeline will be an empty shell just calling other jobs...
Call pipeline functions instead of jobs
Define a pipeline function for each of your promotion cycles (preferably in an external file) and then call each function sequentially. Example:
node {
    git 'http://urlToYourGit/projectContainingYourFunctions'
    cycles = load 'promotions-cycles.groovy'

    stage "Dev"
    cycles.dev()

    stage "QA"
    cycles.qa()

    // Your other promotion cycles calls...
}
The biggest advantage is that your promotion cycles code is committed in your Git repository and that all your stages' output is actually part of your pipeline output, which is great for easy debugging.
Plus you can easily apply conditions based on the success/failure of your functions (e.g. if your QA stage fails you don't want to go any further).
Note that both solutions should allow you to launch your promotion cycles in parallel if needed and to pass parameters to either your jobs or functions.
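Regarding querying the build ID mentioned in the question: the build step returns a handle to the downstream run, so you can read its number (and result) and pass it on to a deployment job. A minimal sketch, assuming jobs named 'Dev' and 'Deploy' and a parameter name chosen here only for illustration:
node {
    stage "Dev"
    def devRun = build job: 'Dev'
    echo "Dev build number: ${devRun.number}, result: ${devRun.result}"

    stage "Deploy"
    build job: 'Deploy',
          parameters: [string(name: 'DEV_BUILD_ID', value: "${devRun.number}")]
}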
It is better to call each build in separate pipeline stages. Something like this:
stage "Dev"
node{
build job: 'Dev', parameters:
[
[$class: 'StringParameterValue', name: 'param', value: "param"],
];
}
stage "QA"
node{
build job: 'QA'
}
etc...
To cycle this process you can use the retry option or an endless loop in Groovy.
