How to control the workspace Jenkins starts the build in?

I have one Jenkins master node and two Jenkins slave nodes. All my job builds happen on the slave nodes. When I configured my slaves, I set the Remote root directory to /data/home/jenkins/jenkins-slave. I also set the custom workspace option to DEVELOP_BRANCH on the configuration page of the respective job.
However, at the start of the job, I get the following log line:
Building remotely on linux in workspace /data/home/jenkins/jenkins-slave/workspace/DEVELOP_BRANCH
I want to start my builds in this location.
/data/home/jenkins/jenkins-slave/DEVELOP_BRANCH
Why does the extra workspace directory come into the picture? How do I remove it? I do not have access to the Jenkins master node, so if there is a workaround that matches my requirements, that would be awesome.
Note: by node, I mean a computer running a Red Hat Linux distribution.

In the project configuration, under Advanced Project Options, you can check Use custom workspace and put a path there.
If you put an absolute path, it will be used as-is, without any extra workspace/ directory (at least that's the behavior I see on a Windows server).
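If the job is a Pipeline rather than a freestyle project, the same idea can be expressed with the customWorkspace option of the agent. A minimal declarative sketch, assuming the agent label linux and the absolute path from the question:

pipeline {
    agent {
        node {
            label 'linux'
            // Absolute path, so no extra workspace/ directory is added (hypothetical location)
            customWorkspace '/data/home/jenkins/jenkins-slave/DEVELOP_BRANCH'
        }
    }
    stages {
        stage('Build') {
            steps {
                echo "Building in ${env.WORKSPACE}"
            }
        }
    }
}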

Related

Is the Jenkins workspace on the master or the worker?

Who does the actual cloning of the project, the master or the agent node? If it is the master, how does the agent node actually execute the job? If it is the agent node, how can we view the workspace in the browser?
When people ask "where is the workspace?", the answer is usually a path, but I am more interested in where that path lives: on the master or on the agent node? Or maybe both?
Edit1
Aligned terminology with the Jenkins glossary (https://jenkins.io/doc/book/glossary/) to avoid confusion.
In a Jenkins setup, all machines are considered nodes. The master node connects to one or more agent nodes. Executors can run on both the master and the agent nodes.
In my scenario, no executors run on the master; they run only on the agent nodes.
The answer is: it depends!
First of all, although it is not good practice IMO, some installations let the master act as a worker and run jobs. In that case, the workspace will be on the master.
If you configured the master not to accept jobs, there are still occasions when a workspace can be created on the master. A good example is a "pipeline script from SCM" job: the master creates a workspace for the job, clones the target repo, reads the pipeline, and then starts the needed steps on whatever slave is targeted, creating a workspace there to run the actions themselves. If the pipeline targets multiple slaves, there will be a workspace on each of them.
In simple situations (e.g. a Maven or freestyle job), the workspace will only be on the targeted slave.
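A quick way to see where the workspace ends up is to print it from a pipeline. A minimal scripted sketch, assuming a hypothetical agent label linux-agent:

node('linux-agent') {
    // env.WORKSPACE resolves to a directory on this agent, not on the master
    echo "Workspace for this run: ${env.WORKSPACE}"
}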
I needed to dig a bit deeper to understand this.
I ran a brand new instance of Jenkins and I attached a single agent node. I used SSH and I set the remote (agent) root directory to: /home/igorski/jenkins
As soon as I attached the node the remoting folder and remoting.jar showed up in that root directory.
I ran a basic Gradle Java pipeline job (Jenkinsfile in the project).
The workspace showed up on the slave. Not on the master.
From the Jenkins GUI I can access the workspace and see its contents.
The moment I kill the agent machine, I can no longer view the workspace in Jenkins.
My guess is that the remoting.jar somehow does a live sync.
I also ran a freestyle project and can confirm the same: as soon as the agent is killed, I can no longer open the workspace and I get an error stack trace:
hudson.remoting.Channel$CallSiteStackTrace: Remote call to JenkoOne
This was much more obvious with the pipeline job, though. There you get a link to the agent that you need to click in order to see the contents; as soon as the agent is gone, the link is disabled, and you know exactly which agent the workspace is on. With freestyle jobs, you just get a Workspace link, with no indication of which agent it is on or whether that agent is reachable at the moment.
So, both Zeitounator and fabian were correct.

Add multiple nodes to Jenkins master

I have around 100 Linux servers that need to be added to a Jenkins master. The constraint is that I need to add them via Copy Existing Node, and the Jenkins master must not be shut down or restarted.
I don't want to do this manually a hundred times. Is there an automated way to handle such a request? Thank you in advance.
You could script this (self-automate). The Jenkins agent configuration files live in the nodes subdirectory of the Jenkins home directory. You'd create a subdirectory for each node and place a config.xml file with that node's configuration inside it. I recommend shutting down your Jenkins server while doing this; we've observed Jenkins deleting things when files are changed while it is running. Use an existing agent's config.xml file as a template. Assuming all of your servers are configured the same, you only need to update the name and host tags, which can be automated with sed.
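Instead of sed, here is a rough Groovy sketch of the same substitution (run it with Jenkins stopped); the JENKINS_HOME path, the template agent name, and the host list are assumptions to adapt:

// Stamps out one nodes/<name>/config.xml per server from an existing agent's config.
def jenkinsHome = new File('/var/lib/jenkins')                              // assumed JENKINS_HOME
def template    = new File(jenkinsHome, 'nodes/existing-agent/config.xml').text
def hosts       = ['server-001.example.com', 'server-002.example.com']      // your 100 servers

hosts.each { host ->
    def name = "agent-${host.tokenize('.')[0]}"
    def dir  = new File(jenkinsHome, "nodes/${name}")
    dir.mkdirs()
    def config = template
        .replaceFirst('<name>.*</name>', "<name>${name}</name>")
        .replaceFirst('<host>.*</host>', "<host>${host}</host>")
    new File(dir, 'config.xml').text = config
}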
Update, with zero downtime:
CloudBees has a support article on creating a node using the REST API. If you'd prefer to use the Jenkins CLI, here's an example shell script. Neither of these approaches requires restarting Jenkins.
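If you have access to the Script Console, another zero-downtime option is to create the agents through the Jenkins API directly. A rough sketch, assuming the SSH Build Agents plugin, a hypothetical credentials ID, and a placeholder host list:

import hudson.plugins.sshslaves.SSHLauncher
import hudson.slaves.DumbSlave
import jenkins.model.Jenkins

def hosts = ['server-001.example.com', 'server-002.example.com']   // your 100 servers

hosts.each { host ->
    // SSH launcher using an existing Jenkins credentials entry (assumed ID)
    def launcher = new SSHLauncher(host, 22, 'jenkins-ssh-key')
    def agent = new DumbSlave("agent-${host}", '/data/home/jenkins/jenkins-slave', launcher)
    agent.setNumExecutors(2)
    agent.setLabelString('linux')
    Jenkins.instance.addNode(agent)   // takes effect immediately, no restart required
}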

Inconsistent Jenkins workspace path on slave machines

We have some jobs set up which share a workspace. The workflow for the various branches is:
Build a big honking C++ project called foo.
Execute several downstream tests, each of which uses the workspace of foo.
We accomplish this by assigning the Use custom workspace field of the downstream jobs to the build workspace.
Recently, we took one branch and assigned it to be built on a Jenkins slave machine rather than on the master. I was surprised to find that on the master, the foo repository was cloned to $JENKINS_JOBS_PATH/FOO/workspace/foo_repo, while on the slave it was cloned to $JENKINS_JOBS_PATH/FOO/foo_repo.
Is this by design, or have we somehow configured master and slave inconsistently?
Older versions of Jenkins put the workspace under the ${JENKINS_HOME}/jobs/JOB/workspace directories, and after upgrading, that pattern stays with the Jenkins instance. Newer versions put workspaces under ${JENKINS_HOME}/workspace/. I suspect the slaves don't need to follow the old pattern (especially if the slave is newer), so the directories may not be consistent across machines.
You can change the location of the workspaces on the master under Manage Jenkins -> Configure System -> Advanced.
I think the safe way to handle this: if you are going to use a custom workspace, use it for all of your jobs, including the first one that builds the big honking C++ project.
If you did this all in a pipeline, you could run everything in a single job and have more control over where all the files are. You would also have the option of stash and unstash, although if the files are huge, stash may not be the way to go.
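For example, a rough scripted-pipeline sketch of that single-job idea; the labels builder and tester, the build command, and the build/ output directory are all assumptions:

node('builder') {
    checkout scm
    sh 'make foo'                                    // build the big C++ project
    stash name: 'foo-build', includes: 'build/**'    // can be slow for very large outputs
}
node('tester') {
    unstash 'foo-build'                              // restores build/ into this node's workspace
    sh './run-downstream-tests.sh'
}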
You can omit the Use custom workspace option for each job and instead change the master and/or slave workspace paths, then use the path
%WORKSPACE%/../foo_repo
or (equivalently)
./../foo_repo
In that case,
%WORKSPACE% = [master or slave node workspace]/[job name]
and
%WORKSPACE%/../ = [master or slave node workspace]

Running a build on Jenkins Slave

I have done the following:
Created a slave node
In the Labels field, added Test
Saved the node configuration
Created a new job
Selected the option Restrict where this project can run
In the Label Expression field, added Test
Saved the job
When I build the job, I get the error:
java.io.FileNotFoundException: C:\Users\Administrator\Test\src\test\java\test\data\Project Suites.xlsx (The system cannot find the path specified)
Not sure what's wrong. The folder does not exist on the slave machine but does exist on the master machine.
But if I run the job on the master, it works fine.
Hmm, I don't understand the problem: you said it yourself, the file does not exist on the slave machine, and you're running the job on the slave, so of course it's not going to find the file.
Just move the file to the slave machine and run the job on the slave.

Jenkins's home directory

My Jenkins is installed in the default location, /var/lib/jenkins. Every time it builds, it changes the root directory of my workspace (/home/john/p4 on my local machine) to /var/lib/jenkins/..., which shouldn't happen, should it?
How do I specify the root directory of my client (workspace) so that the build won't change its location? Should I change $JENKINS_HOME? If so, that's effectively the same as reinstalling Jenkins in the location I want, because $JENKINS_HOME is supposed to be the root directory for all Jenkins files and builds.
What should the correct behavior of Jenkins and the P4 client be? Also, does it have anything to do with the user who starts the builds in Jenkins? Does the Jenkins user have anything to do with the Linux user who installed Jenkins?
The Jenkins p4-plugin requires its own Perforce workspace, and it WILL set the Perforce workspace root to match the Jenkins workspace root.
Let Jenkins create a new Perforce workspace (use a name that does not already exist; I generally prefix it with jenkins-). If you want the name to be dynamic, use something like:
jenkins-${NODE_NAME}-${JOB_NAME}
...as ${NODE_NAME} and ${JOB_NAME} will expand.
Next, define a view mapping (or streams path) to specify which files you want from Perforce and how they should appear in the workspace, e.g.:
View:
//depot/myProj/main/... //jenkins-${NODE_NAME}-${JOB_NAME}/...
As for the user that connects to Perforce, that is defined in the Perforce credentials, but the files synced to the Jenkins master (or slave, if you have a build farm) will use the UID/GID of the Jenkins service.
You can find the documentation for the p4-plugin here.

Resources