We know there's an environment variable named JENKINS_HOME, so we can use it anywhere as $JENKINS_HOME.
But now, when running projects on slave nodes, I need to use the Jenkins home directory on the slave node (the field named "remote FS root" when defining a slave node) as a variable. And I found that $JENKINS_HOME is always the Jenkins home directory on the master node, even when I'm running projects on a slave node.
Can anyone help? Thank you!
Old question, but it seems it was never answered.
The variable is not directly exposed in the job's environment, but you can figure it out ...
By default (i.e., you did not set a custom workspace),
WORKSPACE=<RemoteFS>/<JOB_NAME>
and JOB_NAME includes folders if you use them.
e.g.: WORKSPACE=/path/to/remoteFSroot/myfolder/jobname
and JOB_NAME=myfolder/jobname
You can manipulate from there.
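For instance, in a scripted pipeline you can strip the trailing job name off WORKSPACE with a shell step. A minimal sketch, assuming the default layout described above (no custom workspace); the slave label BUILD is a placeholder:

node('BUILD') {
    // ${WORKSPACE%/$JOB_NAME} drops the trailing "/<JOB_NAME>" suffix,
    // leaving the remote FS root of the slave the build landed on.
    // Single quotes keep Groovy from interpolating; the shell expands it.
    sh 'echo "Remote FS root: ${WORKSPACE%/$JOB_NAME}"'
}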
But there should be no need to store the data inside the remote FS root, and it's probably even a bad idea, since you should be able to delete all the workspaces without impacting your data. Just store it in another directory (or even on an NFS share mounted across all your slaves) and reference it with an absolute path.
If you see a JENKINS_HOME environment variable, it is just a side effect of the script you use to start the Jenkins master. You cannot trust it to always be available.
Maybe you could explain why you think you need to know the slave's home directory?
Using the Jenkins pipeline plugin, I retrieve the remote root directory of the slave node where the job is running with
run.getEnvironment(listener).get("BASE")
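If you just need the configured value rather than the runtime environment, the same information is available from the node definition itself. A minimal script-console sketch; the agent name my-agent is a placeholder, and the node is assumed to be a static agent (hudson.slaves.Slave):

import jenkins.model.Jenkins

// remoteFS is the "Remote root directory" entered in the node configuration.
def agent = Jenkins.instance.getNode('my-agent')
println agent.remoteFS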
I have inherited a Jenkins setup that appears to have only the built-in node (it's called master in the node list).
The Jenkinsfile calls a Python script at /var/lib/jenkins/scripts/helper.py, and builds are failing after calling it. Using println 'cat /var/lib/jenkins/scripts/helper.py'.execute().text I've been able to get the contents of the file, and I can see how it's failing.
What is the best way to access/edit this file? Can I SSH into the node to update it so I can fix it? Is this master node an actual server somewhere, like an EC2 instance?
I want to execute some operations on a Jenkins slave machine but need to use data present in the Jenkins master's home directory. Can I access the Jenkins master's home directory from the slave machine?
Please let me know of any leads here!
You cannot access it directly, but you can copy folders or files from the master to a slave and vice versa.
Plugin link
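If the plugin route doesn't fit, a scripted pipeline can also move files between nodes with the built-in stash/unstash steps; note this is a different technique from the plugin above. A hedged sketch, where the node labels and the userContent/data path are placeholders:

node('master') {
    // On the master, copy the files out of $JENKINS_HOME into the workspace
    // so they can be stashed (stash only sees workspace-relative paths).
    sh 'cp -r "$JENKINS_HOME/userContent/data" .'
    stash name: 'master-data', includes: 'data/**'
}
node('linux') {
    // On the slave, unstash into this node's workspace.
    unstash 'master-data'
}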
First of all, I think one of the Jenkins/CloudBees recommendations is to avoid running builds on the master.
Secondly, if you have n slaves with the same label (let's say BUILD), you just need to configure your job with this label; it will then run on any of the slaves carrying that label (Jenkins will pick the least loaded one), and the checkout will be made on that slave.
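In declarative pipeline syntax that looks like the sketch below; the BUILD label matches the example above, and the job is assumed to be configured from SCM:

pipeline {
    agent { label 'BUILD' }  // any slave carrying the BUILD label can be picked
    stages {
        stage('Checkout') {
            steps {
                checkout scm  // the checkout happens on the chosen slave
            }
        }
    }
}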
I have around 100 Linux servers that need to be added to a Jenkins master. The situation here is that I need to add them via Copy Existing Node, and the Jenkins master should not be shut down or restarted.
I don't want to do this manually a hundred times. Is there an automated way to handle such a request? Thank you in advance.
You could script this (self-automate). The Jenkins agent configuration files are located in the nodes subdirectory of the Jenkins home directory. You'd create a subdirectory for each node and, inside it, put a config.xml file with that node's configuration. I recommend that you shut down your Jenkins server while doing this; we've observed Jenkins deleting things when this is done while it is running. Use an existing agent's config.xml file as a template. Assuming all of your servers are configured the same, you need only update the name and host tags, which can be automated using sed.
Update, with zero-downtime options:
CloudBees has a support article on creating a node using the REST API. If you'd prefer to use the Jenkins CLI, here's an example shell script. Neither of these approaches requires restarting Jenkins.
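A third zero-downtime option is to create the nodes from the Jenkins script console. A hedged sketch, assuming agents are launched over SSH (the SSH Build Agents plugin); the host names, remote FS root, and credentials ID are placeholders:

import jenkins.model.Jenkins
import hudson.slaves.DumbSlave
import hudson.plugins.sshslaves.SSHLauncher

// Create one agent per host; Jenkins picks up new nodes without a restart.
['server01', 'server02'].each { host ->
    def launcher = new SSHLauncher(host, 22, 'my-ssh-credentials-id')
    Jenkins.instance.addNode(new DumbSlave(host, '/home/jenkins/agent', launcher))
}

Generating the host list from your server inventory then covers all 100 machines in one pass.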
I'm using a remote agent/slave to build my project in Jenkins via SSH.
Although the correct PATH environment variable is available when SSH'ing into it with the same user, it's not available when Jenkins uses the agent for building.
With the pipeline DSL, I was able to add it to my environment at runtime:
environment {
PATH = "/usr/local/bin:$PATH"
}
But I want this location in the PATH variable at all times, without this per-pipeline configuration. Any pointers on how to configure this for my agent/slave, whether in the Jenkins node configuration or on the machine itself?
Just for anybody who is having the same issue:
When adding a new node in Jenkins, the master caches the environment variables of that node, but it doesn't update them afterwards, to avoid breaking the configuration.
If you update the environment variables on the node itself, those changes won't be available to builds from the Jenkins master. You have to either re-add the node or add the environment variables in the node's configuration.
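For the second option, the node's environment variables can also be set programmatically. A hedged script-console sketch; the agent name is a placeholder, and PATH+LOCAL uses Jenkins' convention that a PATH+<NAME> entry is prepended to PATH:

import jenkins.model.Jenkins
import hudson.slaves.EnvironmentVariablesNodeProperty

def node = Jenkins.instance.getNode('my-agent')
// PATH+LOCAL=/usr/local/bin prepends /usr/local/bin to PATH for builds on this node.
node.nodeProperties.add(new EnvironmentVariablesNodeProperty(
    new EnvironmentVariablesNodeProperty.Entry('PATH+LOCAL', '/usr/local/bin')))
Jenkins.instance.save()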
I have one Jenkins master node and 2 Jenkins slave nodes. All my job builds happen on the slave nodes. When I configured my slaves, I set the remote root directory to /data/home/jenkins/jenkins-slave. I also set the custom workspace option to DEVELOP_BRANCH on the configuration page of the respective job.
However, at the start of the job, I get the following log line:
Building remotely on linux in workspace /data/home/jenkins/jenkins-slave/workspace/DEVELOP_BRANCH
I want my builds to start in this location instead:
/data/home/jenkins/jenkins-slave/DEVELOP_BRANCH
Why does the extra workspace directory come into the picture, and how do I remove it? I do not have access to the Jenkins master node, so if there is a workaround that matches my requirements, that would be awesome.
Note: by node, I mean a Linux computer running a Red Hat distribution.
In the project configuration, under Advanced Project Options, you can check Use custom workspace and put a path there.
If you put an absolute path, it will be used without any extra workspace/ directory (at least that's the behavior I see on a Windows server).
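If the job is ever converted to a pipeline, the equivalent is the customWorkspace option on the agent declaration. A sketch using the paths from the question; the stage content is a placeholder:

pipeline {
    agent {
        node {
            label 'linux'
            // An absolute path here is used as-is, with no extra workspace/ segment.
            customWorkspace '/data/home/jenkins/jenkins-slave/DEVELOP_BRANCH'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'pwd'  // prints /data/home/jenkins/jenkins-slave/DEVELOP_BRANCH
            }
        }
    }
}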