We're running tests and producing build files on a Jenkins master and a Jenkins slave for extra parallelisation, because our RCPTT tests take ages.
Our problem is that the job's Show workspace link in Jenkins only shows the workspace on the master, so we have no way to get the builds except by copying files manually over ssh.
We don't want duplication, since different patches run on either the master or the slave, and we want to be able to get the files from both master and slave nodes.
You can use the "Copy To Slave Plugin" to copy any files from master to slave.
If you use the Jenkins Pipeline plugin (https://jenkins.io/doc/book/pipeline/),
then you can use stash / unstash:
https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#code-stash-code-stash-some-files-to-be-used-later-in-the-build
Here is an example: https://www.cloudbees.com/blog/parallelism-and-distributed-builds-jenkins
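For illustration, a minimal scripted-pipeline sketch of that approach; the node labels, build script, and include pattern below are placeholders, not values from your setup. Archiving the collected files also makes them downloadable from the build page regardless of which node produced them:
parallel master: {
    node('master') {
        // Hypothetical build step; assumed to produce files under build/
        sh './build.sh'
        stash name: 'from-master', includes: 'build/**'
    }
}, slave: {
    node('rcptt-slave') {
        sh './build.sh'
        stash name: 'from-slave', includes: 'build/**'
    }
}
// Collect both stashes on one node and archive them with the build,
// so the files are reachable from the Jenkins UI for both nodes
node('master') {
    dir('collected/master') { unstash 'from-master' }
    dir('collected/slave') { unstash 'from-slave' }
    archiveArtifacts artifacts: 'collected/**'
}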
I am building a Shared Library (I'm not a CI engineer) for a Jenkins pipeline. As part of it, we are thinking of putting configuration in a YAML file and having the Jenkinsfile pipeline script read from that YAML file.
So we are planning to commit the Jenkinsfile and the YAML file in one git repository (let's say repo A), and the job is going to run on a slave machine using another git repository (let's say repo B). The Jenkinsfile will be executed from the master after it clones repo A. The YAML file is also in the master workspace. On the slave, repo B will be cloned and a build will take place as defined in the Jenkinsfile. But the question I have is: how do I read the YAML file from the Jenkinsfile script without having to clone repo A on the slave, i.e., how do I reference a file present on the master and not on the slave from the Jenkinsfile? This question arises because whatever file I try to open is opened from the slave and not from the master.
Thanks in advance.
EDIT:
Something I forgot to mention: we actually thought of using stash on the master and unstash on the slave. But the problem is that we only do the job configuration; the environment is provided to us by another team, which also provides it to many other teams. Because of that, the master rarely has free executors, so even when we run the job, it hangs waiting for the master to become free. Is there any other way to load the YAML file while the pipeline script is being loaded in the master's memory?
I think the most straightforward way is to use a stash. Stashes are temporary packages that can be created on one node and copied to any other node.
There is some overhead on the master for package creation, so stashes are not recommended for really big files, but they are ideally suited to your use case of transferring a small configuration file.
node('master') {
    // Create temporary stash package on master
    stash name: 'MyConfiguration', includes: 'SomeFile.yaml'
}
node('MySlave') {
    // Copy to and extract the stash package on slave
    unstash 'MyConfiguration'
}
This expects 'SomeFile.yaml' to be in the workspace on the master and will also extract it to the workspace on the slave. In case you want a subdirectory of WORKSPACE, simply wrap the stash and/or unstash steps in a dir step.
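For example, to extract into a subdirectory (the directory name 'config' here is just a placeholder):
node('MySlave') {
    // unstash extracts relative to the current directory, so wrap it in dir
    dir('config') {
        unstash 'MyConfiguration'
    }
}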
I am trying to run a pipeline job that gets its pipeline file from TFS, but the mapping of the workspace and the checkout is done on the master instead of the slave.
I have a Jenkins master installed on a Linux machine, and I connected a Windows machine to it as a slave. I created a pipeline job with the 'Pipeline script from SCM' option selected for TFS.
How can I make the windows slave run that pipeline job?
The master can't run that job because it is running on Linux, and it fails when it tries to map a workspace to TFS in order to download the pipeline script and run it.
Even if I create another pipeline job and hard-code a script that runs my original pipeline job, like this:
node('WIN_SLAVE') {
    build job: 'My_Pipeline'
}
It doesn't work.
And I can see in the output that the initial script (above) is in fact running on my Windows slave, but when it builds the job 'My_Pipeline' it still tries to map a workspace to the Jenkins master at its Linux machine path /var/jenkins/... and it fails.
If the initial pipeline script ran on the Windows slave, why is the other pipeline script not running on the same node? Why does it try to check out the pipeline file from TFS to the Jenkins master again?
How can I make the Windows slave check out the pipeline file and run it?
Here are some things to check...
Make sure you disabled the original job, or that you are completely redefining it to run on the slave, because you indicated you set up "another job" for the slave. It appears that this other job is just triggering the previous job rather than defining its own specifications. When the job is run on the slave, it is just running whatever settings are in that original job.
Also, if you have the box checked to build when a change is pushed to TFS, your original job could still be trying to run every time a change is made in TFS.
Verify the slave's Remote root directory is set properly in the slave configuration under Manage Jenkins -> Manage Nodes.
Since this slave job is triggering the other job you originally created on the master, it will build on the master, as expected.
Instead of referencing the My_Pipeline job, change the My_Pipeline job itself to run on the slave. If you are using a declarative pipeline for the original job, change that original job to run on the slave within its own settings. You can do it similarly to how you have indicated above: just define the node in the original job, as sketched below.
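For example, a minimal declarative sketch (the stage contents are placeholders; WIN_SLAVE is the slave label from your script above):
pipeline {
    // Run the whole pipeline, including the checkout for the stages, on the Windows slave
    agent { label 'WIN_SLAVE' }
    stages {
        stage('Build') {
            steps {
                // Placeholder build step; this runs on the Windows slave
                bat 'echo building on %COMPUTERNAME%'
            }
        }
    }
}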
If the original job is a freestyle project, there is a checkbox titled Restrict where this project can be run. Check it and include the name of the slave in the Label Expression. When you run the job, it will then be restricted to the slave.
Lastly, posting the My_Pipeline job definition would be helpful.
We have some jobs set up which share a workspace. The workflow for the various branches is:
Build a big honking C++ project called foo.
Execute several downstream tests, each of which uses the workspace of foo.
We accomplish this by assigning the Use custom workspace field of the downstream jobs to the build workspace.
Recently, we took one branch and assigned it to be built on a Jenkins slave machine rather than on the master. I was surprised to find that on the master, the foo repository was cloned to $JENKINS_JOBS_PATH/FOO/workspace/foo_repo, while on the slave the repository was cloned to $JENKINS_JOBS_PATH/FOO/foo_repo.
Is this by design, or have we somehow configured master and slave inconsistently?
Older versions of Jenkins put the workspace under the ${JENKINS_HOME}/jobs/JOB/workspace directories. After upgrading, this pattern stays with the Jenkins instance. New versions put the workspaces in ${JENKINS_HOME}/workspace/. I suspect the slaves don't need to follow the old pattern (especially if it is a newer slave), so the directories may not be consistent across machines.
You can change the location of the workspaces on the master in Jenkins -> Configure Jenkins -> Advanced.
I think the safe way to handle this: if you are going to use a custom workspace, use it for all of your jobs, including the first one that builds the big honking C++ project.
If you did this all in a pipeline, you could run everything in a single job and have more control over where all the files are, and you would have the option of stash and unstash; but if the files are huge, stash may not be the way to go.
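As a rough sketch of that single-job approach (the node label, workspace path, and commands below are hypothetical):
node('build-node') {
    // Pin everything to one fixed workspace so every step sees the same paths
    ws('/opt/build/foo_workspace') {
        stage('Build') {
            // Build the big C++ project once
            sh 'make foo'
        }
        stage('Test') {
            // Downstream tests reuse the same checkout and build outputs
            sh './run_tests.sh'
        }
    }
}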
You can omit the 'Use custom workspace' option for each job, change the master and/or slave workspace paths instead, and use the path
%WORKSPACE%/../foo_repo
or (equivalently)
./../foo_repo
In that case
%WORKSPACE% = [master or slave node workspace]/[job name]
and
%WORKSPACE%/../ = [master or slave node workspace]
I have just added a slave to my Jenkins build - with the idea that I can now deploy artefacts to either my dev server or my test server.
However, I've now hit a problem.
When I deploy a job on the master, the job build directory is
$JENKINS_HOME/localmoduledirectory (as defined in the build job)
However, when I deploy my job via the slave, the build directory is different, which breaks my jobs. The build directory is
$JENKINS_HOME/workspace/build job title/localmoduledirectory
I know I can change the workspace root directory location for the master under Configure Settings / Advanced, so I can change it to $JENKINS_HOME/workspace, but I want to stop the slave using the build job title in the path.
The end result I'm after is to have Jenkins building / deploying from the same location on both servers, i.e. /opt/jenkins/workspace/localmoduledirectory.
Any ideas?
OK, after lots of head scratching...
I managed to discover that the Maven plugin has a custom workspace option hidden under Advanced, so I configured all jobs with a custom workspace of /opt/jenkins.
I'd like to use the PostBuildScript plugin to deploy the artifacts from a Matrix job that runs on several slaves.
The slaves are archiving the artifacts, but it's unclear how to access them from the PostBuildScript. How can I get the matrix node artifacts into the master workspace where the PostBuildScript job is running?
There is a plugin called Copy To Slave Plugin.
https://wiki.jenkins-ci.org/display/JENKINS/Copy+To+Slave+Plugin
This can copy artifacts from master to slave or vice versa, so you can use it to get your work done. It has a feature called "Copy files back to master node", which will copy the files back to the master's workspace. So you don't need the PostBuildScript plugin to copy artifacts; this way is simpler.