Is it possible to run a Jenkins job on a slave, use an excel file created as output from the first job and run the next job on Master? - jenkins

I am trying to run a Jenkins job on a slave. An Excel file is created as a result of this first job.
I want to run a second parameterized job on the master after the first job completes, depending on a value from the Excel file.
I have tried the following options till now:
1. Using the Join Plugin. This doesn't work because the second job is parameterized and needs to take an input from the Excel file; there is no option to provide choices or read the parameter from a file.
2. Pipeline on the master. When I create a pipeline on the master and trigger the first job on the slave, the slave job waits for a free slot (since the pipeline job is already occupying one) while the pipeline in turn waits for the slave job to run. So it results in a deadlock.

Pipeline (scripted, not declarative) sounds like the way to go.
Something like:
node('MySlaveLabel') {
    // ... do your build steps here, producing myExcelFile.xls ...
    stash includes: 'myExcelFile.xls', name: 'myExcelFile'
}
node('MyMasterLabel') {
    unstash 'myExcelFile'
    // ... examine your Excel file here ...
    // ... add conditional statements ...
}
As long as the node blocks are not nested, you will need only 1 executor on the slave and 1 on the master.
If for some reason you actually need the jobs to call each other:
Use the build 'anotherProject' syntax.
Make sure you have enough executors on the slave.
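If you do go the build-step route, a parameterized call looks roughly like this; note that 'anotherProject' and the 'EXCEL_VALUE' parameter are placeholder names, and the downstream job must declare a matching parameter definition:
```groovy
// Scripted pipeline: trigger a parameterized downstream job.
// The value would in practice be derived from the Excel file.
def valueFromExcel = 'some-value'
build job: 'anotherProject', parameters: [
    string(name: 'EXCEL_VALUE', value: valueFromExcel)
]
```
By default the build step blocks until the downstream job finishes, which is why enough executors must be available.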

Related

Build Jenkins job with Build periodically option with different argument

I have a Jenkins job which accepts only BranchName(bitbucket) as an input parameter.
I have scheduled this Job to run every morning 7 AM with Build periodically option H 7 * * *.
On triggering automatically, it takes default input parameter as development.
Now my requirement is that I need to pass few other branch names as input parameter automatically.
One option I tried is a downstream job with another branch name, but that only covers one branch and is not a clean solution.
Is there an easy way I can achieve this?
Job Configuration
If you only want to run a known set of branches you can either:
Create an upstream job for every branch that triggers the build with different parameters.
Create one upstream job that maintains a list of branches, loops over that list, and executes the jobs in parallel.
If you need to get the list of branches dynamically, I would assume that running sh script: 'git branch', returnStdout: true and some Groovy string splitting is the easiest way to achieve that.
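A sketch of that dynamic variant, assuming the repository is already checked out in the workspace; 'BranchBuildJob' and its 'BRANCH' parameter are hypothetical names, and the branch-list parsing is an assumption about your git layout:
```groovy
node {
    // List remote branches; adjust the parsing to your repository.
    def raw = sh(script: "git branch -r | grep -v HEAD | sed 's| *origin/||'",
                 returnStdout: true).trim()
    def builds = [:]
    for (b in raw.split('\n')) {
        def branch = b.trim()
        builds[branch] = {
            build job: 'BranchBuildJob',
                  parameters: [string(name: 'BRANCH', value: branch)]
        }
    }
    // Run one downstream build per branch, in parallel.
    parallel builds
}
```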

Running one particular job in parallel on Jenkins

I have multiple jobs set up on Jenkins. There is only one executor on the master, and there are many slaves. I want to create a job that runs on a separate job queue or a separate executor, concurrently. How can I achieve this in the simplest way? Can it be done without modifying the slaves?
I also want every new build of this job to run in parallel, without disturbing or interfering with the rest of the jobs.
As I understand it, you need to configure a particular job to run on a slave node.
To achieve that, first add a node to the Jenkins master; there you can assign a label to that slave node. Then, in the pipeline script of your job, reference the label as follows:
pipeline {
    agent {
        node {
            label '<YOUR SLAVE NODE LABEL>'
        }
    }
}
In the latter part of the question, you asked about running parallel jobs. Please refer to the parallel job documentation. If you need to add sequential stages, refer to the sequential stage documentation. If you want to create a dynamic parallel job, this link would help you.
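For reference, a minimal declarative sketch of parallel stages pinned to labeled nodes; the 'slave-a' and 'slave-b' labels are placeholders:
```groovy
pipeline {
    agent none
    stages {
        stage('Parallel work') {
            parallel {
                stage('On slave A') {
                    agent { label 'slave-a' }   // placeholder label
                    steps { echo 'running on slave A' }
                }
                stage('On slave B') {
                    agent { label 'slave-b' }   // placeholder label
                    steps { echo 'running on slave B' }
                }
            }
        }
    }
}
```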

How to split load between master and slave

I have setup a slave.
For a job that executes a shell script, I configured it to run on either the slave or the master.
If I launch 2 instances of the same job, I observe that the job is run only by the master, and the 2nd instance waits for the 1st one to finish before it is also run by the master.
I expect the master and the slave to work simultaneously.
Why is the slave always idle?
Is there a way to prioritize one slave?
UPDATE: In my use case, a job runs destructive tests against a database, so for reliability it is not good to have more than one instance of the same job on a node. Each node has a copy of the database.
First, go to the job configuration page and check "Execute concurrent builds if necessary". This will allow multiple instances of your job to execute at the same time.
Next, go to the configuration pages of your build nodes (via the link "Build Executor Status" on the main page) and set "# of executors" to 1 for each one (both master and slave). This will prevent one build node from running multiple jobs at the same time.
The result should be that if you launch 2 instances of the same job, one will execute on the master and one will execute on the slave.
The solution with a jenkins pipeline script:
node('master') {
    parallel(
        'masterbuild': {
            node('master') {
                mybuild()
            }
        },
        'slavebuild': {
            node('slave') {
                mybuild()
            }
        }
    )
}

def mybuild() {
    sh 'echo build on `hostname`'
}
This is an improvement on Wim's answer:
Go to the job configuration page and check "Execute concurrent builds if necessary". This will allow multiple instances of your job to execute at the same time.
Next, use the Throttle Concurrent Builds Plug-in.
In this way, only one execution per node is allowed, and the load is balanced between different nodes.
In this way, a node doesn't lose the ability to run several unrelated jobs simultaneously.
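For pipeline jobs the plugin also exposes a throttle step; a sketch, assuming the category has already been defined globally (the category and script names here are placeholders):
```groovy
// Requires the Throttle Concurrent Builds plugin; 'db-tests' must be
// declared as a throttle category in the global Jenkins configuration.
throttle(['db-tests']) {
    node {
        sh './run-destructive-db-tests.sh'   // placeholder script
    }
}
```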

How can I analyze console output data from a triggered jenkins job?

I have 2 jobs 'job1' and 'job2'. I will be triggering 'job2' from 'job1'. I need to get the console output of the 'job1' build which triggered 'job2' and want to process it in 'job2'.
The difficult part is to find out the build number of the upstream job (job1) that triggered the downstream job (job2). Once you have it, you can access the console log, e.g. via ${BUILD_URL}consoleOutput, as pointed out by ivoruJavaBoy.
How can a downstream build determine by what job and build number it has been triggered? This is surprisingly difficult in Jenkins:
Solution A ("explicit" approach): use the Parameterized Trigger Plugin to "manually" pass a build number parameter from job1 to job2. Apart from the administrative overhead, there is one bigger drawback with that approach: job1 and job2 will no longer run asynchronously; there will be exactly one run of job2 for each run of job1. If job2 is slower than job1, then you need to plan your resources to avoid builds of job2 piling up in the queue.
So, some solution "B" will be better, where you extract the implicit upstream information from Jenkins' internal data. You can use Groovy for that; a simpler approach may be to use the Job Exporter Plugin in job2. That plugin will create a "properties" (text) file in the workspace of a job2 build. That file contains two entries that contain exactly the information that you're looking for:
build.upstream.number Number of upstream job that triggered this job. Only filled if the build was triggered by an upstream project.
build.upstream.project Upstream project that triggered this job.
Read that file, then use the information to read the console log via the URL above.
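The Groovy route alluded to above can look roughly like this in a pipeline job; the build-cause API and script-approval requirements vary by Jenkins version, so treat this as a sketch:
```groovy
node {
    // Ask Jenkins for the upstream cause of this build, if any.
    def causes = currentBuild.getBuildCauses('hudson.model.Cause$UpstreamCause')
    if (causes) {
        def upstreamProject = causes[0].upstreamProject
        def upstreamBuild   = causes[0].upstreamBuild
        echo "Triggered by ${upstreamProject} #${upstreamBuild}"
        // Fetch the upstream console log; URL layout follows Jenkins defaults.
        sh "wget -O upstream-console.log " +
           "${env.JENKINS_URL}job/${upstreamProject}/${upstreamBuild}/consoleText"
    }
}
```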
You can use the Post Build Task plugin, then you can get the console output with a wget command:
wget -O console-output.log ${BUILD_URL}consoleOutput

Is it possible to run part of Job on master and the other part on slave?

I'm new to Jenkins. I have a requirement where I need to run part of a job on the Master node and the rest on a slave node.
I tried searching on forums but couldn't find anything related to that. Is it possible to do this?
If not, I'll have to break it into two separate jobs.
EDIT
Basically I have a job that checks out source code from svn, then compiles and builds jar files. After that, it builds a Wise Installer for this application. I'd like to do the source code checkout and compilation on the master (Linux) and delegate the Wise Installer setup to a Windows slave.
It's definitely easier to do this with two separate jobs; you can make the master job trigger the slave job (or vice versa).
If you publish the files that need to be bundled into the installer as build artifacts from the master build, you can pull them onto the slave via a Jenkins URL and create the installer. Use the "Archive artifacts" post build step in the master build to do this.
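If both halves are Jenkins jobs, the Copy Artifact plugin offers a pipeline step for pulling those archived files into the downstream build; a sketch, where 'master-build-job' and the filter pattern are placeholder names:
```groovy
// Requires the Copy Artifact plugin.
node('windows') {
    copyArtifacts projectName: 'master-build-job',
                  selector: lastSuccessful(),
                  filter: 'build/*.jar'
    // ... build the Wise Installer from the copied jars ...
}
```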
The Pipeline Plugin allows you to write jobs that run on multiple slave nodes. You don't even have to go create other separate jobs in Jenkins -- just write another node statement in the Pipeline script and that block will just run on an assigned node. You can specify labels if you want to restrict the type of node it runs on.
For example, this Pipeline script will execute parts of it on two different nodes:
node('linux') {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    sh "make"
    step([$class: 'ArtifactArchiver', artifacts: 'build/program', fingerprint: true])
}
node('windows && amd64') {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    sh "mytest.exe"
}
Some more information at the Pipeline plugin tutorial. (Note that it was previously called the Workflow Plugin.)
You can use the Multijob plugin, which adds the idea of a build phase: a build step that runs other jobs in parallel. You can still use the regular freestyle build and post-build options as well.
