Pass configuration into a Jenkins Pipeline

I'm trying to find a way to pass a configuration for a Multibranch pipeline job into the jenkinsfile when it's executing.
My goal is to configure something like the following:
Branch : Server
"master" : "prodServer"
"develop" : "devServer"
"release/*", "hotfix/*" : "stagingServer"
"feature/Thing-I-Want-To-Change-Regularly" : "testingServer"
where I can then write a Jenkinsfile like this:
pipeline {
    agent any
    stages {
        stage('Example Build') {
            steps {
                echo 'Hello World'
            }
        }
        stage('Example Deploy') {
            when {
                // branch is in config branches
            }
            steps {
                // deploy to server
            }
        }
    }
}
I'm having trouble finding a way to achieve this. The EnvInject Plugin seems to be the solution for non-Pipeline projects, but it currently has security issues and only partial Pipeline support.

If you want to deploy to different servers depending on the branch, in Multibranch Pipelines you can use:
when { branch 'master' } (declarative)
or
${env.BRANCH_NAME} (scripted)
to find out which branch you are on, and then add logic to deploy to the corresponding server.
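For example, a minimal sketch of that idea (the server names and the deploy command are placeholders, not something defined in the question):
pipeline {
    agent any
    stages {
        stage('Deploy to prod') {
            when { branch 'master' }
            steps {
                echo 'Deploying to prodServer'
                // e.g. sh './deploy.sh prodServer'
            }
        }
        stage('Deploy to dev') {
            when { branch 'develop' }
            steps {
                echo 'Deploying to devServer'
                // e.g. sh './deploy.sh devServer'
            }
        }
    }
}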

Going to post my current best approach, using a global config value, and hope something better comes along.
In Manage Jenkins -> Configure System -> Global Properties you can define global Environment Variables which can be accessed from Jenkins jobs. A MY_BRANCH variable defined there can then be accessed from a pipeline:
when { branch env.MY_BRANCH }
It can even hold a regex and be used like this:
when { expression { env.BRANCH_NAME ==~ env.MY_BRANCH } }
However, this has the disadvantage that the Environment Variables are shared between every Jenkins job, not just across all branches of a single job. So careful naming will be necessary.
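Putting the pieces together, a hedged sketch of the regex variant, assuming MY_BRANCH is defined under Global Properties and holds a branch name or regular expression:
pipeline {
    agent any
    stages {
        stage('Deploy') {
            // deploy only when the current branch matches the globally configured pattern
            when { expression { env.BRANCH_NAME ==~ env.MY_BRANCH } }
            steps {
                echo "Deploying because ${env.BRANCH_NAME} matches ${env.MY_BRANCH}"
            }
        }
    }
}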

Related

Is it Possible to Run Jenkinsfile from Jenkinsfile

We are currently developing a centralized control system for our CI/CD projects. There are many projects with many branches, so we are using Multibranch Pipelines (this forces us to use the Jenkinsfile from the project branches, so we can't provide a custom Jenkinsfile the way Pipeline projects can). We want to control everything from one git repo where, for every project, there would be Kubernetes YAMLs, a Dockerfile, and a Jenkinsfile. When a developer presses the build button, the Jenkinsfile from their project repo is supposed to run our Jenkinsfile. Is it possible to do this?
E.g. :
pipeline {
    agent any
    stages {
        stage('Retrieve Jenkinsfile From Repo') { // RETRIEVE JENKINSFILE FROM REPO
            steps {
                git branch: "master",
                    credentialsId: 'gitlab_credentials',
                    url: "jenkinsfile_repo"
                script {
                    // RUN JENKINSFILE FROM THE REPO
                }
            }
        }
    }
}
The main reason we are doing this is that the Jenkinsfile contains sensitive content, such as production database connections. We don't want to store the Jenkinsfile in the developers' repos. You can also suggest a better way to achieve this besides using only one repo.
EDIT: https://plugins.jenkins.io/remote-file/
This plugin solved all my problems, so I couldn't try the suggestions in the answers below.
As an option, you can use the pipeline build step.
pipeline {
    agent any
    stages {
        stage('build another job') {
            steps {
                build 'second_job_name_here'
            }
        }
    }
}
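If the downstream job needs to know which project or branch triggered it, the build step can also pass parameters. A small sketch (the job name and parameter name below are made up for illustration):
build job: 'central-deploy-job',
    parameters: [string(name: 'SOURCE_BRANCH', value: env.BRANCH_NAME)]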
Try the load step:
script {
    // rename Jenkinsfile to .groovy
    sh 'mv Jenkinsfile Jenkinsfile.groovy'
    // RUN JENKINSFILE FROM THE REPO
    load 'Jenkinsfile.groovy'
}

Jenkins declarative pipeline with Docker/Dockerfile agent from SCM

With Jenkins using the Declarative Pipeline syntax, how do I get the Dockerfile (Dockerfile.ci in this example) from the SCM (Git), since the agent block is executed before all the stages?
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.ci'
        }
    }
    stages {
        stage('Checkout') {
            steps {
                git(
                    url: 'https://www.github.com/...',
                    credentialsId: 'CREDENTIALS',
                    branch: "develop"
                )
            }
        }
        [...]
    }
}
In all the examples I've seen, the Dockerfile seems to already be present in the workspace.
You could try declaring an agent for each stage separately: for the checkout stage you could use some default agent, and the Docker agent for the others.
pipeline {
    agent none
    stages {
        stage('Checkout') {
            agent any
            steps {
                git(
                    url: 'https://www.github.com/...',
                    credentialsId: 'CREDENTIALS',
                    branch: "develop"
                )
            }
        }
        stage('Build') {
            agent {
                dockerfile {
                    filename 'Dockerfile.ci'
                }
            }
            steps {
                [...]
            }
        }
        [...]
    }
}
If you're using a multi-branch pipeline it automatically checks out your SCM before evaluating the agent. So in that case you can specify the agent from a file in the SCM.
The answer is in the Jenkins documentation on the Dockerfile parameter:
In order to use this option, the Jenkinsfile must be loaded from
either a Multibranch Pipeline or a Pipeline from SCM.
Just scroll down to the Dockerfile section, and it's documented there.
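A minimal sketch of what that looks like, assuming the Jenkinsfile and Dockerfile.ci live in the same repository and the job is a Multibranch Pipeline or "Pipeline script from SCM" job (the build command is a placeholder):
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.ci'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'make build' // placeholder build command
            }
        }
    }
}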
The obvious problem with this approach is that it impairs pipeline development. Now, instead of testing code in the pipeline script field on the server, it must be committed to the source repository for each testable change. Note also that the Jenkinsfile checkout cannot be sparse or lightweight, as that will only pick up the script and not any accompanying Dockerfile to be built.
I can think of a couple of ways to work around this.
Develop against agents in nodes with the reuseNode true directive (sketched below). Then, when the code is stable, the separate agent blocks can be combined at the top of the Jenkinsfile, which must then be loaded from the SCM.
Develop using the dir() solution that specifies the exact workspace directory, or alternatively use one of the other examples in this solution.
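A sketch of the first workaround: per-stage Docker agents with reuseNode true, so the Dockerfile checked out on the outer node is visible when the stage's image is built (the URL, credentials ID, and build command are placeholders carried over from the question):
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://www.github.com/...', credentialsId: 'CREDENTIALS', branch: 'develop'
            }
        }
        stage('Build') {
            agent {
                dockerfile {
                    filename 'Dockerfile.ci'
                    reuseNode true // reuse the workspace of the node that did the checkout
                }
            }
            steps {
                sh 'make build' // placeholder build command
            }
        }
    }
}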

How to get branch name in jenkins shared library

I'm trying to write my first Jenkins shared library and I'm struggling with something basic - getting the branch name.
I could do:
sh(returnStdout: true, script: 'git rev-parse --abbrev-ref HEAD').trim()
However, that requires a checkout. Would it be possible to get the branch name (for both multibranch and freestyle pipeline projects)? I know I'll be using git, but I would like to avoid doing a checkout until it is necessary.
The GIT_BRANCH environment variable should give you what you want. It won't work in Pipeline jobs until Jenkins 2.60 and an upgraded Pipeline Model Definition plugin.
If you are using a pipeline job, you can either capture the object returned from the scm checkout, or reference the environment variable:
pipeline {
    // ...
    stages {
        stage('Setup') {
            steps {
                script {
                    // capture scm variables
                    def scmVars = checkout scm
                    String branch = scmVars.GIT_BRANCH
                    // or use the environment variable
                    branch = env.GIT_BRANCH
                }
            }
        }
        // ...
    }
}
Environment variable reference.
I ended up using this:
env.CHANGE_BRANCH ?: env.GIT_BRANCH ?: scm.branches[0]?.name?.split('/')[1] ?: 'UNKNOWN'
However, this requires me to approve several things in the In-process Script Approval page.

In jenkins declarative pipeline, how can I set environment variable based on method?

In a Jenkins declarative pipeline, how can I set the value of an environment variable based on a custom Groovy/PowerShell method? For instance, if I have a declarative pipeline as follows, can I use a shared library method to set this value?
Essentially I am trying to use a multibranch Declarative Pipeline Jenkins job which has a deploy stage, but I need to ensure that develop branches are deployed to DEV and release branches are deployed to STG, using the same pipeline. My thought was to create an environment variable that is set by a custom method (perhaps in Groovy in a shared library), and that method would simply look at the current value of env.BRANCH and have a little logic to set the value of the target deploy environment. Here is an example of what I envision:
pipeline {
    environment {
        DEPLOY_ENV = mapBranchToDeployEnvironment(${BRANCH})
    }
And then in my deploy stage I would use this value in two powershell invocations
bat "powershell .\\Deploy-Service -Environment ${DEPLOY_ENV}"
bat "powershell .\\Deploy-ServiceProxy -Environment ${DEPLOY_ENV}"
Otherwise, how are people currently solving the problem of using the same pipeline to deploy to different environments while using the variables across many other function invocations? What is the recommended approach from Jenkins for mapping the branch name that triggered the build to the environment (if any) it should be deployed to?
Based on my understanding, Declarative Pipeline allows a pipeline to be "multibranch", so if the job deploys as well, it needs to map each branch to a deploy environment. How else would a multibranch pipeline deploy to multiple environments when all the global Jenkins pipeline environment variables have the same value for every job/branch execution?
In the above scenario, the pipeline variable 'DEPLOY_ENV' is derived from other environment variables that are set by the job and are typically available at the stage level, but here we are looking to set the value globally so that we can use it across stages.
Update: My issue was that I didn't realize how simple it was. I thought I had to pass a stage or script object into a Groovy shared library function, when in fact it's as simple as creating a shared library and directly referencing the environment variables in the method. Easy. Thank you.
I had exactly the same problem, and indeed it is possible to use a shared library method. But there is another, simpler solution if you do not have a shared library set up yet: define a Groovy method before the pipeline statement and then use it inside your pipeline, like this:
def getEnvFromBranch(branch) {
    if (branch == 'master') {
        return 'production'
    } else {
        return 'staging'
    }
}

pipeline {
    agent any
    environment {
        targetedEnv = getEnvFromBranch(env.BRANCH_NAME)
    }
    stages {
        stage('Build') {
            steps {
                echo "Building in ${env.targetedEnv}"
            }
        }
    }
}
You can do exactly what you're suggesting. You should create a Jenkins shared library with a var (a new DSL method). These can be called to assign to a pipeline-wide environment variable. You had it basically correct. Here's a Jenkinsfile fragment that assigns to an environment variable:
environment {
    DEPLOY_ENV = mapBranchToDeployEnvironment()
}
You don't need to pass the branch to the mapBranchToDeployEnvironment DSL method, since you can access the branch inside that method. Sample contents of vars/mapBranchToDeployEnvironment.groovy in the shared library look like this:
def call() {
    echo "branch is: ${env.BRANCH_NAME}"
    if (env.BRANCH_NAME == 'master') {
        return 'prod'
    } else {
        return 'staging'
    }
}
You probably shouldn't expect this to be a five minute task, but you'll get it. Good luck!
stage('Prepare env variables') {
    steps {
        script {
            if (env.BRANCH_NAME == 'master') {
                echo 'Copying project-stg.env file...'
                sh 'cp /opt/project-stg.env .env'
            } else {
                echo 'Copying project-dev.env file...'
                sh 'cp /opt/project-dev.env .env'
            }
        }
    }
}

Jenkins Job DSL: How to read Pipeline DSL script from file?

I want to generate my pipeline plugin based jobs via Job DSL, which is contained in a git repository that is checked out by Jenkins.
However, I think it is not very nice to have the pipeline scripts as quoted Strings inside of the Job DSL script. So I want to read them into a string and pass that to the script() function:
definition {
    cps {
        sandbox()
        script( new File('Pipeline.groovy').text )
    }
}
Where do I have to put Pipeline.groovy for this to work? I tried putting it right next to my DSL script, and also in the resources/ folder of my DSL sources. But Jenkins always throws a "file not found".
Have you tried readFileFromWorkspace()? It should be able to find the files you check out from git.
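A minimal sketch, assuming Pipeline.groovy has been checked out into the seed job's workspace (the job name is arbitrary):
pipelineJob('my-pipeline') {
    definition {
        cps {
            sandbox()
            script(readFileFromWorkspace('Pipeline.groovy'))
        }
    }
}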
Ref the Job DSL pipelineJob: https://jenkinsci.github.io/job-dsl-plugin/#path/pipelineJob, and hack away at it on http://job-dsl.herokuapp.com/ to see the generated config.
The Job DSL below creates a pipeline job, which pulls the actual job from a Jenkinsfile:
pipelineJob('DSL_Pipeline') {
def repo = 'https://github.com/path/to/your/repo.git'
triggers {
scm('H/5 * * * *')
}
description("Pipeline for $repo")
definition {
cpsScm {
scm {
git {
remote { url(repo) }
branches('master', '**/feature*')
scriptPath('misc/Jenkinsfile.v2')
extensions { } // required as otherwise it may try to tag the repo, which you may not want
}
// the single line below also works, but it
// only covers the 'master' branch and may not give you
// enough control.
// git(repo, 'master', { node -> node / 'extensions' << '' } )
}
}
}
}
You/your Jenkins admins may want to separate Jenkins job creation from the actual job definition. That seems sensible to me ... it's not a bad idea to centralize the scripts to create the Jenkins jobs in a single Jenkins DSL repo.
