How to access the filesystem of the built-in node on Jenkins - jenkins

I have inherited a Jenkins setup that appears to have only the built-in node (it's called master in the node list).
The Jenkinsfile calls a Python script at /var/lib/jenkins/scripts/helper.py, and builds are failing after calling it. Using println 'cat /var/lib/jenkins/scripts/helper.py'.execute().text I've been able to get the contents of the file, and I can see how it's failing.
What is the best way to access/edit this file? Can I SSH into the node to update it so I can fix it? Is this master node an actual server somewhere, like an EC2 instance?

Related

Add multiple nodes to Jenkins master

I have around 100 Linux servers that need to be added to a Jenkins master. The situation here is that I need to add them by copying an existing node, and the Jenkins master must not be shut down or restarted.
I don't want to do it manually a hundred times. Is there any way to automate this? Thank you in advance.
You could script this (self-automate). The Jenkins agent configuration files are located in the nodes subdirectory of the Jenkins home directory. You'd create a subdirectory for each node and put a config.xml file with that node's configuration inside it. I recommend that you shut down your Jenkins server while doing this; we've observed Jenkins deleting things when this is done while it is running. Use an existing agent's config.xml file as a template. Assuming all of your servers are configured the same, you need only update the name and host tags, which can be automated using sed.
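The sed-based approach above can also be sketched in Python. A hedged sketch, assuming an SSH-launcher-style config.xml that contains <name> and <host> tags; the template content and host list are placeholders you'd replace with your own:

```python
# Bulk-create agent definitions by cloning a template config.xml into
# $JENKINS_HOME/nodes/<name>/config.xml (do this while Jenkins is stopped).
import os
import re

def create_node_dirs(jenkins_home, template_xml, hosts):
    """Write nodes/<host>/config.xml for each host, rewriting the first
    <name> and <host> tags to the hostname. Returns the created dirs."""
    created = []
    for host in hosts:
        xml = re.sub(r"<name>.*?</name>", f"<name>{host}</name>", template_xml, count=1)
        xml = re.sub(r"<host>.*?</host>", f"<host>{host}</host>", xml, count=1)
        node_dir = os.path.join(jenkins_home, "nodes", host)
        os.makedirs(node_dir, exist_ok=True)
        with open(os.path.join(node_dir, "config.xml"), "w", encoding="utf-8") as f:
            f.write(xml)
        created.append(node_dir)
    return created
```

With a list of 100 hostnames this replaces the manual Copy Existing Node clicks, but Jenkins only picks up the new directories on startup, which is why the restart-free REST/CLI route below exists.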
Update with zero-downtime:
CloudBees has a support article on creating a node using the REST API. If you'd prefer to use the Jenkins CLI, here's an example shell script. Neither of these approaches requires restarting Jenkins.
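As a hedged sketch of the REST approach: the /computer/doCreateItem endpoint and the hudson.slaves.DumbSlave type are standard Jenkins, but the field values and launcher choice below are illustrative assumptions you'd adapt, and the actual POST still needs authentication and a CSRF crumb:

```python
# Build the URL and form body for creating an agent via Jenkins' REST API.
# Sending the POST (with credentials and crumb) is left out; this only
# shows the payload shape.
import json
from urllib.parse import urlencode

def node_create_request(jenkins_url, name, remote_fs, num_executors=1, labels=""):
    """Return (url, form_body) for a POST to /computer/doCreateItem."""
    node_json = {
        "name": name,
        "nodeDescription": "",
        "numExecutors": num_executors,
        "remoteFS": remote_fs,
        "labelString": labels,
        "mode": "NORMAL",
        "type": "hudson.slaves.DumbSlave",
        "retentionStrategy": {"stapler-class": "hudson.slaves.RetentionStrategy$Always"},
        "launcher": {"stapler-class": "hudson.slaves.JNLPLauncher"},  # assumed launcher
    }
    body = urlencode({"name": name, "type": "hudson.slaves.DumbSlave",
                      "json": json.dumps(node_json)})
    return f"{jenkins_url}/computer/doCreateItem", body
```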

Running a build on Jenkins Slave

I have done the following
Created a slave node
In the Labels field, added Test
Saved the node configuration
Created a new job
Selected the option Restrict where this project can run
In the Label Expression field, added Test
Saved the job
When I build the job, I get the error:
java.io.FileNotFoundException: C:\Users\Administrator\Test\src\test\java\test\data\Project Suites.xlsx (The system cannot find the path specified)
Not sure what's wrong. The folder does not exist on the slave machine but exists on the master machine.
But if I run it using the master, it works fine.
Hmm, I don't understand the problem: you said it yourself, the file does not exist on the slave machine, and you're running the job on the slave. So of course it's not going to find the file.
Just move the file to the slave machine and run the job on the slave.

Jenkins - How do I run a batch command on the master in a job that runs on a slave

I'm trying to run a simple Windows batch command (say, copy) on the master inside a job that's set to run on a particular slave.
What I'm trying to accomplish with this is to copy the build log that gets saved on the master to a shared drive that's accessible from the master. Please advise.
You are going to have to make the Jenkins filesystem visible on the client independently of Jenkins. Since you have a Windows client, you are probably going to have to set up sharing from the Jenkins master using Samba or something of the sort.
What I do instead when I need assets from the master: I use curl or wget to download the assets to the clients. You can use the FSTrigger plugin to start builds when the file changes on the Jenkins master. Once curl or wget has run, your asset is in the %WORKSPACE% directory and you can proceed.
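A minimal sketch of that download step in Python rather than curl, assuming the asset is published under the master's userContent directory (files in $JENKINS_HOME/userContent are served at <jenkins_url>/userContent/); the file name here is hypothetical:

```python
# Download an asset served by the Jenkins master into the job's workspace.
import os
import urllib.request

def fetch_asset(jenkins_url, relative_path, workspace):
    """Fetch <jenkins_url>/userContent/<relative_path> into workspace
    and return the local path of the downloaded copy."""
    url = f"{jenkins_url}/userContent/{relative_path}"
    dest = os.path.join(workspace, os.path.basename(relative_path))
    urllib.request.urlretrieve(url, dest)  # needs a reachable master (or a file:// URL)
    return dest
```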
I would recommend handling the log-file copying (and maybe further tasks) as a dedicated job (let's call it "SaveLog"). SaveLog should be tied to run only on the master.
You should then configure SaveLog to be triggered after completion of your primary job.
The log file is already available on the master, even if you do not save any artifacts.
Should you need further files from the slave workspace, you should save them as artifacts. SaveLog (on the master) can then still decide whether to do anything useful with those artifacts.
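A minimal sketch of what SaveLog could do, assuming the standard on-disk location of a build's console log under the master's JENKINS_HOME (jobs/<job>/builds/<number>/log); the share directory and job/build values are placeholders taken from the triggering build's parameters:

```python
# Copy the upstream build's console log from the master's disk to a share.
# Run this from a job pinned to the master, after the primary job completes.
import os
import shutil

def copy_build_log(jenkins_home, job_name, build_number, share_dir):
    """Copy jobs/<job>/builds/<n>/log to <share_dir>/<job>-<n>.log."""
    src = os.path.join(jenkins_home, "jobs", job_name, "builds",
                       str(build_number), "log")
    os.makedirs(share_dir, exist_ok=True)
    dest = os.path.join(share_dir, f"{job_name}-{build_number}.log")
    shutil.copyfile(src, dest)
    return dest
```

Because SaveLog runs after the primary job has finished, the copied log is complete, unlike a copy attempted from inside the running build.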

Jenkins - Copy build log from master to a shared drive

Can someone direct me here? I have a simple job configured in Jenkins in a Windows environment (master and all slaves running on Windows), and the job is supposed to run on a particular slave. When you build the job, the build log (log.log) gets stored in "%JENKINS_HOME%\jobs\\builds\%BUILD_NUMBER%\" on the master.
I do have a Jenkins workspace (which is required when you add a slave node) set on the slave for this job, where nothing gets stored when the job runs.
With this scenario, I would like to copy the build log (the log.log file that's available on the master) to a shared drive. Please advise me on the way to get this done. I have tried a few plugins ("Copy To Slave", "Copy Artifact", and "ArtifactDeployer"), but I could not get them working to meet what I need.
Use a second build action with the execute batch option. Put the copy command there to copy the log to another location.
The following command kind-of works:
curl ${BUILD_URL}consoleFull -o ${TargetDir}/Log.txt
where
TargetDir="${WORKSPACE}/Directory/target"
BUILD_URL and WORKSPACE are set by Jenkins. Unfortunately, Jenkins doesn't copy the whole log; I've tried consoleText and gotten the same result: partial log files. :-( (Likely cause: a curl run as a build step fetches the console log while the build is still writing it; fetching from a downstream job after completion should retrieve the full log.)

Is there an environment variable for the slave node home in Jenkins?

We know there's an environment variable named JENKINS_HOME, so we can use it anywhere as $JENKINS_HOME.
But now, when running projects on slave nodes, I need to use the home directory of Jenkins on the slave node (named "remote FS root" when defining a slave node) as a variable. And I found that $JENKINS_HOME is always the home directory of Jenkins on the master node, even when I'm running projects on a slave node.
Can anyone help? Thank you!
Old question, but it seems it was never answered.
The variable is not directly exposed in the job's environment, but you can figure it out...
By default (i.e., you did not set a custom workspace),
WORKSPACE=<RemoteFS>/<JOB_NAME>
and JOB_NAME includes folders if you use them,
e.g.: WORKSPACE=/path/to/remoteFSroot/myfolder/jobname
and JOB_NAME=myfolder/jobname
You can manipulate from there.
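That manipulation can be expressed as a small helper; a sketch assuming the default <RemoteFS>/<JOB_NAME> workspace layout described above (it raises if a custom workspace breaks the assumption):

```python
def remote_fs_root(workspace, job_name):
    """Derive the node's remote FS root by stripping JOB_NAME (folders
    included) from the end of WORKSPACE."""
    workspace = workspace.rstrip("/")
    suffix = "/" + job_name
    if not workspace.endswith(suffix):
        raise ValueError("WORKSPACE does not end with JOB_NAME; "
                         "a custom workspace may be configured")
    return workspace[:-len(suffix)]
```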
But there should be no need to store the data inside the remote FS root, and it is probably even a bad idea, since you should be able to delete all the workspaces without impacting your data. Just store it in another directory (or even on an NFS share mounted across all your slaves) and reference it with a complete path.
If you see a JENKINS_HOME environment variable, it is just a side effect of the script you used to start the Jenkins master. You cannot trust it to always be available.
Maybe you could explain why you think you need to know the slave home directory?
Using the Jenkins Pipeline plugin, I retrieve the remote root directory of the slave node where the job is running with
run.getEnvironment(listener).get("BASE")
