How to specify a custom path for a remote node on Jenkins?

I have a Jenkins server. It runs from D:\Jenkins.
In my Jenkinsfile, I have the following:
pipeline {
    agent {
        node {
            label 'windows-node'
            customWorkspace "${JENKINS_HOME}\\${env.BRANCH_NAME}"
        }
    }
    // ...
}
This was working fine and used D:\jenkins\feature\testbranch (for example).
I have now set up a new node, which has only one disk, C:\.
The remote node has its Remote root directory configured as C:/ws, so I was expecting my output folder to be C:/ws/feature/testbranch.
But it seems it tries to access D:\jenkins\feature\testbranch on the remote node. How can I use the node-specific root folder in a Jenkinsfile?

That variable appears to be specific to the master. You probably need an environment variable that points to a location that is consistent across your slaves (like WKSPC_LOC): set it in the Windows environment of each slave and use something like customWorkspace "${WKSPC_LOC}\\${env.BRANCH_NAME}". Note that you still need to set this up on every slave. In general for CI, having different things treated as the same creates problems, so slaves should be consistent. As of now, I don't think there is a way to have different workspace locations for the master and the slaves. On Linux you could have used symlinks, but I can't think of a solution for Windows except what I mentioned before, and I still doubt it will work.
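For example, a minimal sketch of that suggestion, assuming WKSPC_LOC is defined in the environment of every node (it is not a built-in Jenkins variable):
pipeline {
    agent {
        node {
            label 'windows-node'
            // WKSPC_LOC must be set per node, e.g. C:\ws on the new node, D:\Jenkins on the old one
            customWorkspace "${env.WKSPC_LOC}\\${env.BRANCH_NAME}"
        }
    }
    // ...
}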

Related

Jenkins: Access job/plugin configuration values inside pipeline

I am trying to access the values set on a job's configuration page from within my pipeline. These values are not made available as params, nor are they injected as envvars.
Setup
Jenkins, v2.263.1
GitLab Branch Source plugin, v1.5.3
Multibranch pipeline job which is pointed at a GitLab repo
Remote Jenkinsfile Provider, v1.13
Problem
Ordinarily, one would have a Jenkinsfile in the root of the repo, and therefore the scm would be associated with the repo we want to check out and build. However, in my case the code I want to build is in a different repo from the Jenkinsfile (hence the Remote Jenkinsfile Provider plugin).
This means that I need to check out the code I wish to build as an explicit step in the pipeline, and to do that I need to know the repo. This repo is, however, already defined in the job config.
The Branch Source plugin does export things like the branch name or merge request number/branch/target into appropriate envvars, but NOT the actual repo.
As this is a multibranch pipeline, I cannot use something like EnvInject either (multibranch jobs do not provide the option to 'Prepare an environment for the run' as other jobs do).
Goal
I would like to be able to access the server, owner and project fields set on the job config page. Ultimately, I could even manage with just the project's SSH/HTTP address.
Is there some clever way of accessing a job's config from within the pipeline?
Thanks for any suggestions!
The GitLab Branch Source plugin gives you a lot more information (see its documentation) than the normal branch source plugin: there are environment variables for the project, such as GITLAB_PROJECT_GIT_SSH_URL / GITLAB_PROJECT_GIT_HTTP_URL for the git source, and many more. So far I have not seen one for the server, but that would be parseable out of the URLs.
With this information, it should be fairly easy to check out the repository and build it.
As became clear during the process, the pipeline also needs to be triggered manually, and this is normally possible with parameters (I am not sure about the Remote Jenkinsfile Provider plugin). I assume your Jenkinsfile is a Groovy script, which opens up a lot of possibilities: you can define variables and use some logic to determine whether the environment variable or the parameter is used.
pipeline {
    agent any
    parameters {
        string(name: 'projectUrl', defaultValue: "")
    }
    stages {
        stage('Prepare') {
            steps {
                script {
                    // prefer the env variable set by the plugin, fall back to the parameter
                    def projectUrl = env.GITLAB_PROJECT_GIT_SSH_URL ?: params.projectUrl
                    // do the checkout with projectUrl
                }
            }
        }
    }
}
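The checkout placeholder in the script block could then be filled in with the standard git step from the Git plugin, for example (a sketch; 'gitlab-ssh' is a hypothetical credentials ID):
git url: projectUrl,
    branch: env.BRANCH_NAME ?: 'main',   // BRANCH_NAME is set by multibranch jobs
    credentialsId: 'gitlab-ssh'          // hypothetical SSH credentials ID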
The only critical thing you have to take into account is that the multibranch pipeline has to run once for each branch or MR, so that the variables are detected. Afterwards you can easily trigger it manually by providing your values.
This allows you to utilize webhooks for automatic actions, and also to trigger the build manually whenever you like.
Sidenote: if you use a centralized Jenkinsfile to reduce duplication, you might also want to check out shared libraries for Jenkins.
For completeness, here is a list of all current environment variables added by the Jenkins GitLab Branch Source plugin version 1.5.3 (and only for push events, but they are pretty similar for the other event types):
GITLAB_OBJECT_KIND
GITLAB_AFTER
GITLAB_BEFORE
GITLAB_REF
GITLAB_CHECKOUT_SHA
GITLAB_USER_ID
GITLAB_USER_NAME
GITLAB_USER_EMAIL
GITLAB_PROJECT_ID
GITLAB_PROJECT_ID_2
GITLAB_PROJECT_NAME
GITLAB_PROJECT_DESCRIPTION
GITLAB_PROJECT_WEB_URL
GITLAB_PROJECT_AVATAR_URL
GITLAB_PROJECT_GIT_SSH_URL
GITLAB_PROJECT_GIT_HTTP_URL
GITLAB_PROJECT_NAMESPACE
GITLAB_PROJECT_VISIBILITY_LEVEL
GITLAB_PROJECT_PATH_NAMESPACE
GITLAB_PROJECT_CI_CONFIG_PATH
GITLAB_PROJECT_DEFAULT_BRANCH
GITLAB_PROJECT_HOMEPAGE
GITLAB_PROJECT_URL
GITLAB_PROJECT_SSH_URL
GITLAB_PROJECT_HTTP_URL
GITLAB_REPO_NAME
GITLAB_REPO_URL
GITLAB_REPO_DESCRIPTION
GITLAB_REPO_HOMEPAGE
GITLAB_REPO_GIT_SSH_URL
GITLAB_REPO_GIT_HTTP_URL
GITLAB_REPO_VISIBILITY_LEVEL
GITLAB_COMMIT_COUNT
GITLAB_COMMIT_ID_#
GITLAB_COMMIT_MESSAGE_#
GITLAB_COMMIT_TIMESTAMP_#
GITLAB_COMMIT_URL_#
GITLAB_COMMIT_AUTHOR_AVATAR_URL_#
GITLAB_COMMIT_AUTHOR_CREATED_AT_#
GITLAB_COMMIT_AUTHOR_EMAIL_#
GITLAB_COMMIT_AUTHOR_ID_#
GITLAB_COMMIT_AUTHOR_NAME_#
GITLAB_COMMIT_AUTHOR_STATE_#
GITLAB_COMMIT_AUTHOR_USERNAME_#
GITLAB_COMMIT_AUTHOR_WEB_URL_#
GITLAB_COMMIT_ADDED_#
GITLAB_COMMIT_MODIFIED_#
GITLAB_COMMIT_REMOVED_#
GITLAB_REQUEST_URL
GITLAB_REQUEST_STRING
GITLAB_REQUEST_TOKEN
GITLAB_REFS_HEAD

How to use multiple labels to select a node in a Jenkins Pipeline script?

Intro:
We are currently running a Jenkins master with multiple slave nodes, each tagged with a single label (e.g., linux, windows, ...)
In our scripted-pipeline scripts (which are defined in a shared library), we currently use snippets like the following:
node ("linux") {
// do something on a linux node
}
or
node ("windows") {
// do something on a windows node
}
Yet, as our testing environment grows, we now have multiple different Linux environments, some of which have certain capabilities and some of which do not (e.g., some may be able to run service X and some may not).
I would now like to label my slaves with multiple labels indicating their capabilities, for example:
Slave 1: linux, serviceX, serviceY
Slave 2: linux, serviceX, serviceZ
If I now need a Linux slave that is able to run service X, I want to do the following (according to this):
node ("linux" && "serviceX") {
// do something on a linux node that is able to run service X
}
Yet, this fails.
Sometimes a Windows slave even gets selected, which is not what I want to achieve.
Question: How can I define multiple labels (AND-combined) based on which a node gets selected in a Jenkins scripted pipeline script?
The && needs to be part of the string, not the logical Groovy operator.

jenkins & labels getting : pending—master is offline while trying to execute on non-master nodes

I have a Jenkins instance where I am not able to use labels; builds are triggered but get stuck at "pending—master is offline". I have disabled the master (number of executors: 0) as I do not wish to use it.
Instead I would expect the build to go to the next available node with the label mentioned in the pipeline.
node("mylabel"){
echo " jenkins pipeline for mylabel nodes"
}
This works in a clean install of Jenkins, so I can only assume this is a configuration/compatibility issue on my master instance.
Could it be a permission issue?
More info about my master instance:
I have used the nodeLabel plugin in the past (with freestyle jobs) and have since removed it (and removed all the extra instructions from my jobs via the management view once it was gone).
I am using the Role-based Authorization Strategy plugin and have defined roles for each project in Jenkins.
Note that I am behind a firewall (no internet access during execution), using Jenkins 2.73.2.
EDIT 1:
Another syntax; same issue observed.
pipeline {
    agent {
        label "mylabel"
    }
    stages {
        stage('Echo') {
            steps {
                echo "jenkins pipeline for mylabel node"
            }
        }
    }
}
I found that the issue occurred because I was not able to bypass the master node using the above pipeline. I understand that, before the label selection happens, a default node needs to be available to run instructions.

How to get Jenkins node configurations from groovy

I would like to use a Groovy script in a job to read all the node configurations, as viewed in the node configuration pages in the Jenkins GUI. I know that it is possible to use the REST API to fetch node configurations, but I would like to know how to do this using the available library methods, e.g. from a system Groovy script.
I envision something like this pseudo-code to print the Host as it is configured in the "Launch method" setting, if launch method is set to SSH:
jenkins.getNodes().each { node ->
    println(node.getLaunchMethod().getHost())
}
The answers to Accessing Jenkins global property in Groovy seem like they might be relevant.
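A minimal sketch of what that might look like as a system Groovy script, assuming the SSH Build Agents plugin provides the launch method (hudson.plugins.sshslaves.SSHLauncher):
import jenkins.model.Jenkins
import hudson.plugins.sshslaves.SSHLauncher

// iterate over all nodes configured on the controller
Jenkins.instance.nodes.each { node ->
    def launcher = node.launcher
    // only SSH-launched nodes expose a host in their launch method
    if (launcher instanceof SSHLauncher) {
        println("${node.nodeName}: ${launcher.host}")
    }
}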

Best way to configure jenkins job running on different slaves

I want to run a Jenkins job on 4 different slaves (Windows, Linux, Solaris, Mac). Instead of creating 4 different jobs, I want to have a single job. I can use a node parameter to execute on different slaves. My job runs a script which uses the slave's Jenkins workspace and a few other scripts. The script is in a different folder on each slave, and the other required scripts are in different folders as well. So for now I have created 4 different jobs, one for each slave, and hard-coded the Jenkins workspace and the other required script paths.
Is there any way to put all the paths in some JSON-like structure and pick the right ones depending on the slave, so that I only need 1 job?
Please suggest. Thanks in advance!
My idea is to use, e.g., an "Execute system Groovy script" build step to get the slave name, then use if statements to assign the proper path and create a parameter visible in the environment variables:
import hudson.model.Computer
import hudson.model.StringParameterValue
import hudson.model.ParametersAction

// get the slave name
def slaveName = Computer.currentComputer().getNode().name
def path

// choose the path for this slave
if (slaveName.equals("slave01")) {
    path = "C:"
}
if (slaveName.equals("slave02")) {
    path = "/root"
}
if (slaveName.equals("slave03")) {
    path = "D:"
}

// pass the path to the build as an environment variable
build.addAction(new ParametersAction(new StringParameterValue('path', path)))
Then you can use the variable path in a command:
echo %path%
Or use the Conditional BuildStep plugin to set up separate steps for each operating system and control when each step should be executed.
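If the job can be written as a pipeline instead, the JSON-like structure the question asks for can simply be a Groovy map keyed by node name. A sketch, with hypothetical node names and paths:
// scripted pipeline: look up the per-slave path from a map
def pathsByNode = [
    slave01: 'C:\\scripts',
    slave02: '/root/scripts',
    slave03: 'D:\\scripts',
]

node('mylabel') {
    def scriptPath = pathsByNode[env.NODE_NAME]
    echo "Using script path: ${scriptPath}"
}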
Jenkins is designed to check out files from a version control system (Subversion, Git, whatever) and run tasks. Instead of trying to manage separate files on separate slaves, you should put your scripts in some form of version control and let Jenkins check out the files into the workspace as part of its build process.
