I have a master (Linux) and a Windows slave set up, and I would like to build a single job on both the master and the slave. The "Restrict where this project can be run" option lets me bind the job to a particular slave, but is it possible to bind one job to the master as well as the slave? And how would one configure the "Build Step", since Windows requires a Windows batch command while Linux requires a shell command? Even if the job did run on both the master and the slave, wouldn't it fail at some point, since both build steps (batch and shell) would be executed on each machine?
Well, in Jenkins you can create groups of machines (master or slaves). To do this:
click on the machine name on the first page of Jenkins
enter the node configuration menu
then, you can enter some labels in the Labels field. Let's add a multi_platform label, for example
go back to the first page of Jenkins
do it for each machine on which you need to run the job
go back to the first page of Jenkins
click on the job you want to run on multiple nodes
go to the configuration menu
check Restrict where this project can be run and put multi_platform in it.
Then, your build will be able to run on any node carrying the multi_platform label.
For the second part, the multi-platform build step, you can use Ant builds or Python builds (with the Python plugin), since both work on Linux and Windows.
EDIT: If you need to build on both (or more) platforms, you should use a Matrix (multi-configuration) job. You will be able to create a single job and force it to run on every slave you need.
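If the job is a Pipeline rather than a freestyle multi-configuration job, a declarative matrix gives the same effect. Below is a minimal sketch, assuming the Linux and Windows machines carry the labels linux and windows, and using placeholder build scripts build.sh and build.bat:
pipeline {
    agent none
    stages {
        stage('Build on all platforms') {
            matrix {
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'linux', 'windows'
                    }
                }
                // each matrix cell runs on a node carrying the matching label
                agent { label "${PLATFORM}" }
                stages {
                    stage('Build') {
                        steps {
                            script {
                                if (isUnix()) {
                                    sh './build.sh'   // placeholder Linux build script
                                } else {
                                    bat 'build.bat'   // placeholder Windows build script
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
This also addresses the original concern about batch versus shell steps: isUnix() picks the right step type on whatever node the cell lands on.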
This is how you should do it:
import groovy.json.JsonSlurperClassic

def requestNodes() {
    def response = httpRequest url: '<your-master-url>/computer/api/json', authentication: '<configured-authentication>'
    println("Status: " + response.status)
    return new JsonSlurperClassic().parseText(response.content)
}

def Executor(node_name) {
    return {
        stage("Clean ${node_name}") {
            node(node_name) {
                //agent {node node_name}
                echo "ON NODE: ${node_name}."
            }
        }
    }
}

def makeAgentMaintainer() {
    def nodes = requestNodes()
    def agent_list = []
    for (e in nodes['computer']) {
        echo e.displayName
        if (!e.offline) {
            if (e.displayName != "master") {
                agent_list.add(e.displayName)
            }
        }
    }
    def CleanAgentsMap = agent_list.collectEntries {
        ["${it}" : Executor(it)]
    }
    return CleanAgentsMap
}

node {
    parallel makeAgentMaintainer()
}
You will need the HTTP Request plugin, and you will have to approve a few script signatures (In-process Script Approval).
In the Executor function you can define the commands that you want to execute on every agent.
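For example, a variant of Executor that actually runs a cleanup command on each agent could look like the sketch below; the rm/rmdir commands are only placeholders for whatever you need to run:
def Executor(node_name) {
    return {
        stage("Clean ${node_name}") {
            node(node_name) {
                if (isUnix()) {
                    sh 'rm -rf build'        // placeholder shell command for Linux agents
                } else {
                    bat 'rmdir /s /q build'  // placeholder batch command for Windows agents
                }
            }
        }
    }
}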
I have a simple Jenkins pipeline script like this:
pipeline {
    agent {
        label 'agent2'
    }
    stages {
        stage('test') {
            steps {
                build job: 'doSomething'
            }
        }
    }
}
When running the job, it starts correctly on the node "agent2", but it runs as 'jenkins' (the OS shell user of the master server where Jenkins is installed) instead of the OS SSH shell user of the node.
The node has its own credentials assigned to it, but they are not used.
When I run the job "doSomething" on its own and set "Restrict where this project can be run" to "node1", everything is fine. The job is then run by the correct user.
This behaviour can be easily recreated:
1. create a new OS user (pw: testrpm)
sudo useradd -m testrpm
sudo passwd testrpm
2. create a new node and use these settings:
3. create a freestyle job (called 'doSomething') with a shell step that runs 'whoami'
doSomething Job XML
4. create a pipeline job and paste this code into the pipeline script
pipeline {
    agent {
        label 'agent2'
    }
    stages {
        stage('test') {
            steps {
                build job: 'doSomething'
            }
        }
    }
}
test-pipeline Job XML
5. run the pipeline and check the console output of the 'doSomething' job. The output of 'whoami' is not 'testrpm' but 'jenkins'.
Can somebody explain this behaviour and tell me where the error is?
If I change the "Usage" setting of the built-in node to "Only build jobs with label expressions matching this node", then it works as expected and the "doSomething" job is run as the user testrpm.
I want to be able to run our Jenkins build stages either on an AWS node or on one of our PCs that is connected to hardware and can also test the systems. We have some PCs set up with a recognised agent, and for the AWS node I want to run the steps in a Docker container.
Based on this, I would like to decide whether to use the Docker agent or the custom agent from parameters passed in by the user at build time. My Jenkinsfile looks like this:
#!/usr/bin/env groovy
@Library('customLib')
import groovy.transform.Field

pipeline {
    agent none
    parameters {
        choice(name: 'NODE', choices: ['PC1', 'PC2', 'dockerNode', 'auto'], description: 'Select which node to run on? Select auto for auto assignment.')
    }
    stages {
        stage('Initialise build environment') {
            agent {
                docker {
                    image 'docker-image'
                    label 'docker-agent'
                    registryUrl 'url'
                    registryCredentialsId 'dockerRegistry'
                    args '--entrypoint=\'\''
                    alwaysPull true
                }
            }
            steps {
                script {
                    if (params.NODE != 'auto' && params.NODE != 'dockerNode') {
                        agentLabel = params.NODE + ' && ' + 'custom_string' // this will lead to running the stages on our PCs
                    } else {
                        agentLabel = 'docker-agent'
                    }
                    env.NODE = params.NODE
                }
            }
        } // initialise build environment
        stage('Waiting for available node') {
            agent { label agentLabel } // if auto or dockerNode is chosen, I want to run in the docker container defined above; otherwise, use the PC
            post { cleanup { cleanWs() } }
            stages {
                stage('Import and Build project') {
                    steps {
                        script {
                            ...
                        }
                    }
                } // Build project
                ...
            }
        } // Waiting for available node
    }
}
Assuming the PC option works fine when I don't add the Docker option, and the Docker option also works fine on its own, how do I add a label to my custom Docker agent and use it conditionally as above? Running this Jenkinsfile, I get the message 'There are no nodes with the label ‘docker-agent’', which seems to say that it can't find the label I defined.
Edit: I worked around this problem by adding two stages, each with a different agent and guarded by a when block that decides whether the stage runs (see the sketch after the pros and cons below).
Pros: I'm able to choose where the steps are run.
Cons:
The inner build and archive stages are identical, so they are simply repeated for each outer stage. (This may become a pro if I want to run a separate HW test stage only on one of the agents.)
Both outer stages always run, which means the Docker container is always pulled from the registry even when it is not used (i.e. when we choose to run the build on a PC).
Stages do not seem to lend themselves easily to being encapsulated into functions.
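For reference, a minimal sketch of that two-stage workaround; the stage names and build steps are placeholders, and the Docker details are taken from the Jenkinsfile above:
pipeline {
    agent none
    parameters {
        choice(name: 'NODE', choices: ['PC1', 'PC2', 'dockerNode', 'auto'], description: 'Select which node to run on? Select auto for auto assignment.')
    }
    stages {
        stage('Build on PC') {
            when { expression { params.NODE != 'auto' && params.NODE != 'dockerNode' } }
            agent { label "${params.NODE} && custom_string" }
            steps {
                echo 'placeholder: build and archive on the selected PC'
            }
        }
        stage('Build in Docker') {
            when { expression { params.NODE == 'auto' || params.NODE == 'dockerNode' } }
            agent {
                docker {
                    image 'docker-image'
                    label 'docker-agent'
                    registryUrl 'url'
                    registryCredentialsId 'dockerRegistry'
                    alwaysPull true
                }
            }
            steps {
                echo 'placeholder: build and archive inside the container'
            }
        }
    }
}
Adding beforeAgent true inside each when block would evaluate the condition before an agent is allocated, which should avoid pulling the Docker image when that stage is skipped and so address the second con.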
I am new to Jenkins pipeline Groovy scripting.
I have a job parameter named 'HOSTNAMES' which takes some hostname values, separated by commas, as input. I need to execute some scripts on all these hosts in parallel. How can I achieve this with a Jenkins Groovy script? Please guide me.
Assuming all these nodes (hostnames) are connected to the Jenkins server and the HOSTNAMES parameter is a list of the node names (as they appear in the Jenkins server), you can use the parallel step to achieve what you want.
You will have to transform each node name into a map entry representing a parallel execution branch and then run them using the parallel keyword.
Something like:
def hosts = HOSTNAMES.split(',')
def executions = hosts.collectEntries { host ->
    ["Running on ${host}" : {
        node(host) {
            // The code to run on each node
            stage("Execution") {
                echo "example code on ${host}"
                ...
            }
        }
    }]
}
parallel executions
You can also run it in one line if you want:
parallel hosts.collectEntries { ...
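For completeness, the condensed form might look like this (same logic as above, just inlined; the echo is still only a placeholder):
parallel HOSTNAMES.split(',').collectEntries { host ->
    ["Running on ${host}" : {
        node(host) {
            stage("Execution") {
                echo "example code on ${host}" // placeholder for the real script
            }
        }
    }]
}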
I have two Jenkins workflow jobs that start the same job with different parameters, namely the branch they build. The latter job builds the project on several platforms. The "head" job, that is, the workflow job, may start on different machines. Also, there are two Linux machines in the setup.
And sometimes it so happens that one of them (say, master) starts on one of the Linux machines, and the other one starts on the other. Both of them have to build a target on a Linux machine, and since both machines are busy, both jobs stall.
With usual jobs, one can limit where they can run; however, I couldn't find how to limit where a workflow job can run. Obviously, it should be done in the Groovy script, but exactly how escapes me.
Is there a solution to that?
Here's a Jenkinsfile that does it globally (this tells Jenkins the entire pipeline must be run on a slave with these three labels):
pipeline {
    agent { label 'docker && git && rbenv' }
    stages {
        stage('commit_stage') {
            steps {
                echo 'building stuff'
            }
        }
    }
}
You can also select a certain slave or certain capabilities via the node step for any stage or part of a stage:
pipeline {
    agent { label 'docker && git && rbenv' }
    stages {
        stage('commit_stage') {
            steps {
                // this overrides the top-level agent requirements
                node('linux_with_zsh') {
                    echo 'building stuff'
                }
            }
        }
    }
}
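Since the question is about workflow (scripted) jobs, the scripted equivalent is simply to wrap the work in a node step with a label expression. A minimal sketch, assuming the Linux machines carry a linux label:
node('linux') {
    stage('commit_stage') {
        // everything in this block runs on an executor of a node matching the label
        echo 'building stuff'
    }
}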
I have set up a slave.
For a job that executes a shell script, I configured it to run on either the slave or the master.
If I launch 2 instances of the same job, I observe that the job is run only by the master; the 2nd instance waits for the 1st one to finish and is then also run by the master.
I expect the master and the slave to work simultaneously.
Why is the slave always idle?
Is there a way to prioritize one slave?
UPDATE: In my use case, a job uses the database for destructive tests, so for reliability it is not good to have more than one instance of the same job on a node. Each node has its own copy of the database.
First, go to the job configuration page and check "Execute concurrent builds if necessary". This will allow multiple instances of your job to execute at the same time.
Next, go to the configuration pages of your build nodes (via the "Build Executor Status" link on the main page) and set "# of executors" to 1 for each one (both master and slave). This will prevent one build node from running multiple jobs at the same time.
The result should be that if you launch 2 instances of the same job, one will execute on the master and one will execute on the slave.
The solution with a Jenkins pipeline script:
node("master") {
parallel (
"masterbuild" : {
node ("master") {
mybuild()
}
},
"slavebuild" : {
node ("slave") {
mybuild()
}
}
)
}
def mybuild() {
sh 'echo build on `hostname`'
}
This is an improvement on Wim's answer:
Go to the job configuration page and check "Execute concurrent builds if necessary". This will allow multiple instances of your job to execute at the same time.
Next, use the Throttle Concurrent Builds plugin.
This way, only one execution per node is allowed, and the load is balanced between the different nodes.
At the same time, a node doesn't lose the ability to run several unrelated jobs simultaneously.
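If the job is a pipeline rather than a freestyle job, the same plugin also provides a throttle step. A rough sketch, assuming the Throttle Concurrent Builds plugin is installed and a throttle category (called 'one-per-node' here) has been defined in the global configuration with a per-node limit of 1:
throttle(['one-per-node']) {
    node {
        // placeholder build step; the category limits concurrent runs per node
        sh 'echo build on `hostname`'
    }
}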