I have two Jenkins jobs that run on separate computers. On computer 1 I read a properties file and use it for environment variables. But I need the same file on PC 2, and it only exists on the first one. When the first Jenkins job finishes it starts the second one, and it can pass a parameters file via the Parameterized Trigger Plugin, but then I would have to create a separate parameter for every entry, and I have a lot of them and don't want to do that. Is there a simple solution for this issue?
Forget Jenkins 1 and the Parameterized Trigger Plugin. Using Jenkins 2 pipelines, here's an example of what you need:
node ("pc1") {
stage "step1"
stash name: "app", includes: "properties_dir/*"
}
node ("pc2") {
stage "step2"
dir("dir_to_unstash") {
unstash "app"
}
}
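If pc2 then needs those values as environment variables, here is a minimal sketch of one way the unstashed file could be consumed. It assumes the Pipeline Utility Steps plugin (for readProperties) and a hypothetical file name app.properties inside properties_dir:

node("pc2") {
    stage "step2"
    dir("dir_to_unstash") {
        unstash "app"
        // readProperties comes from the Pipeline Utility Steps plugin
        def props = readProperties file: "properties_dir/app.properties"  // hypothetical file name
        // expose each key=value pair from the file as an environment variable
        withEnv(props.collect { k, v -> k + '=' + v }) {
            sh 'env | sort'
        }
    }
}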
I have a Jenkins job config that uses the "Build whenever the specified event is seen" trigger (provided by the CloudBees Notification API plugin) and specifies a JMESPath query (e.g. ref=='refs/heads/master') and runs a pipeline script. I want to access other properties in the trigger event (e.g. repository.full_name) from within the pipeline script. How can I do this?
Found the answer. The data I was looking for is in the com.cloudbees.jenkins.plugins.pipeline.events.EventTriggerCause instance of the build causes. For example, the following code finds all the commits:
def newCommits = currentBuild.rawBuild.getCauses().findAll {
    it instanceof com.cloudbees.jenkins.plugins.pipeline.events.EventTriggerCause
}.collect {
    it.getEvent().commits
}
So, most of the questions and answers I've found on this subject are for people who want to use the SAME workspace for different runs. (Which baffles me, but then I require a clean slate each time I start a job. Leftover stuff will only break things.)
My issue is the EXACT opposite - I MUST have a separate workspace for each run (or I need to know how to create files with the same name in different runs that stay with that run only, and which are easily reachable from bash scripts started by the pipeline!)
So, my question is - how do I either force Jenkins to NOT use the same workspace for two concurrently-running jobs on different hosts, OR what variable can I use in the 'custom workspace' field to accomplish this?
After I responded to the question by @Joerg S I realized that the thing Joerg S says CAN'T happen is EXACTLY what I'm observing! Jenkins is using the SAME workspace for 2 different, concurrent jobs on 2 different hosts. Is this a Jenkins pipeline bug?
See below for a bewildering amount of information.
Given the way I have to go onto and off of nodes during the run, I've found that I can start 2 different builds on different hosts of the same job, and they SHARE the workspace dir! Since each job has shell scripts which are busy writing files into that directory, this is extremely bad.
In Custom workspace in jenkins we are told to use custom workspace, and I'm set up just like that
In Jenkins: how to run builds in unique directories we are told to use ${BUILD_NUMBER} in the above custom workspace field, so what I tried was:
${JENKINS_HOME}/workspace/${ITEM_FULLNAME}/${BUILD_NUMBER}
All that happens to me when I use that is that the workspace name is, you guessed it, "${BUILD_NUMBER}" (and I even got a "${BUILD_NUMBER}@2" just for good measure!)
I tried ${BUILD_ID}; same thing (it uses the text literally and does not substitute the number).
I have the 'allow concurrent builds' turned on.
I'm using pipelines exclusively.
All jobs here, as part of normal execution, cause the slave, non-master host to reboot into an OS that does not have the capability to run slave.jar (indeed, it has no network access at all), so I cannot run the entire pipeline on that host.
All jobs use the following construct somewhere inside them:
tests=Arrays.asList(tests.split("\\r?\n"))
shellerror=231
for( line in tests){
So let's call an example job 'foo' that loops through a list, as above, that I want to run on 2 different hosts. The pipeline for that job starts running on master (since the above for (line in tests) is REQUIRED to run on a node!). Then it goes back and forth between master and slave, often multiple times.
If I start this job on host A and host B at about the same time, they will BOTH use the workspace ${JENKINS_HOME}/workspace/${JOB_NAME}, or in my case /var/lib/jenkins/jenkins/workspace/job
Since they write different data to files with the same name in that directory, I'm clearly totally broken immediately.
So, how do I force Jenkins to use a unique workspace for EVERY SINGLE JOB?
Or, what???
Other things: pipeline build step version 2.5.1, Jenkins 2.46.2
I've been trying to get the workspace statement ('ws') to work, but that doesn't quite work as I expected either - some files are in the workspace I explicitly name, and some are still in the 'built-in' workspace (workspace/).
I was asked to provide code. The 'standard' pipeline I use is about 26K bytes, comprising about 590 lines. So, I'm going to GREATLY reduce it. That being said:
node("master") { // 1
..... lots of stuff....
} // this matches the "node('master')" above
node(HOST) {
echo "on $HOST, check what os"
if (isUnix())
...some more stuff...
} // end of 'node(HOST)' above
if (isok == 0 ) {
node("master") {
echo "----------------- Running on MASTER 19 $shellerror waiting on boot out of windows ------------"
sleep 120
echo "----------------- Leaving MASTER ------------"
}
}
... lots 'o code ...
node(HOST) {
... etc
} // matches the latest 'node HOST' above
node("master") { // 120
.... code ...
for( line in tests) {
...code...
}
}
... and on and on and on, switching back and forth from one to the other
FWIW, when I tried to make the above use 'ws' so that I could make certain the ws name was unique, I simply added a 'ws wsname' block directly under (almost) every 'node' opening so it was
node(name) { ws (wsname) { ..stuff that was in node block before... } }
But then I've got two directories to worry about checking - both the 'default' workspace/jobname dir AND the new wsname one.
Try using the customWorkspace node common option:
pipeline {
agent {
node {
label 'node(s)-defined-label'
customWorkspace "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
}
}
stages {
// Your pipeline logic here
}
}
customWorkspace
A string. Run the Pipeline or individual stage this agent is applied to within this custom workspace, rather than the default. It can be either a relative path, in which case the custom workspace will be under the workspace root on the node, or an absolute path.
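For example, a relative-path variant (just a sketch; the label and the builds/${BUILD_NUMBER} value are illustrative):

agent {
    node {
        label 'linux'                             // illustrative label
        // relative path: resolved under the node's workspace root
        customWorkspace "builds/${BUILD_NUMBER}"
    }
}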
Edit
Since this doesn't work for your complex pipeline, maybe try this silly solution:
def WORKSPACE = "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
node(HOST) {
    sh(script: "mkdir -p ${WORKSPACE}")
    // note: each sh step starts its own shell, so the cd and the actual work
    // have to happen in the same sh call
    sh(script: "cd ${WORKSPACE} && echo 'do stuff here'")
}
or if dir() is accessible:
def WORKSPACE = "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
node(HOST) {
    sh(script: "mkdir -p ${WORKSPACE}")
    dir(WORKSPACE) {
        // Do stuff here
    }
}
customWorkspace didn't work for me.
What worked:
stages {
    stage("SCM (For commit trigger)") {
        steps {
            ws('custom-workspace') { // Because we don't want to switch from the pipeline checkout
                // Generated from http://lstool01:8080/job/Permanent%20Build/pipeline-syntax/
                checkout(xxx)
            }
        }
    }
}
'${SOMEVAR}' will not get substituted, but "${SOMEVAR}" will; that is how Groovy strings are handled (single quotes are literal, double quotes interpolate). See the Groovy documentation on string handling.
So if you have
ws("/some/path/somewhere/${BUILD_ID}") {
    // something
}
on a node in your pipeline Jenkinsfile, it should do the trick in this regard.
The problem with @2 workspaces can occur when you allow concurrent builds of the project; I had the exact same problem with a custom ws() getting an @2 suffix. Simply disallow concurrent builds or work around that.
I am using a Jenkins pipeline script and when all nodes are offline, the builds keep on queuing up. How do I stop Jenkins from adding jobs to the queue while all slaves are offline?
pipeline {
    triggers {
        pollSCM('H/3 * * * 1-5')
    }
}
Is your agent's availability configured to 'Keep this agent online as much as possible' ?
One way to tackle this situation is to run the script below on the master node and build your pipeline(s) only if at least one of the nodes is online. You can pass the online node name to your downstream job as a parameter.
// collect the display names of all agents that are currently online
def axis = []
for (slave in jenkins.model.Jenkins.instance.getNodes()) {
    if (slave.toComputer().isOnline()) {
        axis += slave.getDisplayName()
    }
}
return axis
Above script source: Jenkins: skip if node is offline
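To act on that result, here is a hedged sketch of gating a downstream trigger on the same check (the job name 'my-downstream-job' and the parameter 'TARGET_NODE' are hypothetical):

def online = jenkins.model.Jenkins.instance.nodes
        .findAll { it.toComputer()?.isOnline() }
        .collect { it.displayName }
if (online) {
    // hypothetical downstream job and parameter name
    build job: 'my-downstream-job',
          parameters: [string(name: 'TARGET_NODE', value: online.first())]
} else {
    echo 'All agents are offline; skipping the downstream build.'
}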
Other links that may help are:
Monitor and restart your slave nodes - https://wiki.jenkins.io/display/JENKINS/Monitor+and+Restart+Offline+Slaves
I found this script handy in some situations:
https://github.com/jenkinsci/jenkins-scripts/blob/master/scriptler/clearBuildQueue.groovy
I'm not into pipeline jobs, but for regular freestyle jobs, this kind of queueing will only happen if your builds are parameterized. Separate builds are needed then to ensure that the project runs separately for each and every parameter value (it does not matter whether the value is actually different).
So, removing build parameters in your project might solve the problem.
I have a unique issue. I have one build system that needs one set of params to start a job and another Jenkins server that needs to use the same Jenkinsfile but have a different set of options to select from. Think of it as a Jenkins server in region 1 and another in region 2 that need different sets of options.
I have this:
pipeline {
    agent any
    environment {
        ENV_NAMES = readFile "/etc/env_list"
    }
    parameters {
        // choices are newline separated
        choice(choices: "${env.ENV_NAMES}", description: 'What Environment?', name: 'env')
    }
}
Right now the choice is coming up as null, but I'd love a way to source an outside file to import those choices.
I have config management that can place a file with data in it.
So server one would get something like:
env1
env2
env3
And server2 would get
prod1
prod2
prod3
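A hedged sketch of the kind of thing that might work: reading the file with plain Groovy before the pipeline block, so the value already exists when the parameters block is evaluated (this assumes /etc/env_list is readable on the Jenkins master and will likely need script approval):

// Assumption: /etc/env_list lives on the Jenkins master and is newline separated.
def envNames = new File('/etc/env_list').text.trim()

pipeline {
    agent any
    parameters {
        // choices are newline separated
        choice(choices: envNames, description: 'What Environment?', name: 'env')
    }
    stages {
        stage('show') {
            steps {
                echo "Selected environment: ${params.env}"
            }
        }
    }
}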
TL;DR: I want to be able to run a job simultaneously on multiple nodes in a Jenkins pipeline (for example, build application x on dev, test & staging nodes based on AWS).
I have a large group of nodes with the same label. I would like to be able to run a job in Jenkins that executes on all of the nodes with the same label as well as doing so simultaneously.
I saw a suggestion to use the matrix configuration option in Jenkins, but I can only think of one axis (the label group). When I try and run the job, it seems like it only executes once instead of 300 times (1 for each of the nodes in that label group).
What should my other axis be? Or...is there some plugin to do this? I had tried the NodeLabel Parameter Plugin, and choosing "run on all available online nodes", but it does not seem to run the jobs simultaneously.
Install
Parameterized Trigger Plugin
NodeLabel Parameter Plugin
For the job you want to run, enable Execute concurrent builds if necessary
Create another job besides the job you want to run on all slaves and configure it
Build > Add build step > Trigger/call builds on other projects
Add ParameterFactories > All Nodes for Label Factory > Label: the label of the nodes
The matrix build will work; use "Slaves" as the axis and expand the "Individual nodes" list to select all of your nodes.
Note that you will need to update the selection every time you add or remove a slave.
For a more maintainable solution, you could use the Job DSL plugin to set up a seed job that has the template for the build, then loops over each slave and creates a new job with the build label set to the name of the slave.
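A rough sketch of that seed-job idea in Job DSL syntax (the slave names and the shell step are illustrative assumptions):

// Job DSL seed script: generate one freestyle job per slave, pinned to that slave
def slaves = ['slave-a', 'slave-b', 'slave-c']   // illustrative; could come from the Jenkins API
slaves.each { slaveName ->
    job("build-on-${slaveName}") {
        label(slaveName)          // run the generated job only on this slave
        steps {
            shell('make build')   // placeholder for the real build template
        }
    }
}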
There are two plugins that you need: the Parameterized Trigger Plugin, to be able to trigger other jobs as a build step of your main job, and the NodeLabel Parameter Plugin (read the BuildParameterFactory section for a description of what you need) to specify the label.
The best and easiest way to accomplish this is using the Elastic Axis plugin.
1. Install the plugin.
2. Create a Multi Configuration job (install that job type if not present).
3. In the job configuration you will find a new axis type called Elastic Axis. Add the label there to get the job to run on multiple slaves.
Taking a few of the above answers and adjusting them for the Jenkins 2.0 series.
You can now launch a job on all nodes.
// The script triggers PayloadJob on every node.
// It uses Node and Label Parameter plugin to pass the job name to the payload job.
// The code will require approval of several Jenkins classes in the Script Security mode
def branches = [:]
def names = nodeNames()
for (int i = 0; i < names.size(); ++i) {
    def nodeName = names[i];
    // Into each branch we put the pipeline code we want to execute
    branches["node_" + nodeName] = {
        node(nodeName) {
            echo "Triggering on " + nodeName
            build job: 'PayloadJob', parameters: [
                new org.jvnet.jenkins.plugins.nodelabelparameter.NodeParameterValue
                        ("TARGET_NODE", "description", nodeName)
            ]
        }
    }
}

// Now we trigger all branches
parallel branches
// This method collects a list of Node names from the current Jenkins instance
@NonCPS
def nodeNames() {
    return jenkins.model.Jenkins.instance.nodes.collect { node -> node.name }
}
Taken from the code
https://jenkins.io/doc/pipeline/examples/#trigger-job-on-all-nodes
Rundeck might be a tool better suited to your needs. It can be set up to run several jobs in parallel and has a plugin for Jenkins: http://rundeck.org/
Rundeck is designed to integrate with larger systems. We generate the resource file from our configuration management database. Very easy to do; see the documentation: http://rundeck.org/docs/administration/node-resource-sources.html.
Additionally plugins available for amazon and/or systems like puppet and chef: http://rundeck.org/plugins
I was looking for a way to run docker system prune on all nodes (with the label docker). I ended up with a pretty simple scripted pipeline, which AFAIK only needs the pipeline plugin to work:
#!/usr/bin/env groovy
def nodes = [:]
nodesByLabel('docker').each {
    nodes[it] = { ->
        node(it) {
            stage("docker-prune#${it}") {
                sh('docker system prune -af --filter "until=1440h"')
            }
        }
    }
}
parallel nodes
Note: Requires Pipeline Utility Steps
What this does: it looks for all nodes with the label docker, then iterates over them and creates an associative array nodes with one step per found node (to be precise, each step cleans up all Docker resources older than 60 days). parallel nodes then runs these steps in parallel (on all found nodes simultaneously).
Hope that this will help someone.
Got it - no need for any special plugin!
I've created a parent job that triggers/calls another build, and when I call it I pass it the label that I want the child job to run on.
So basically the parent job only triggers the job I need, and the child job runs as many times as there are slaves with that label (in my case 4 times).
Enable This project is parameterized, add a parameter of type Label, enter an arbitrary name for the label parameter, and select a default value such as a label covering a number of nodes or a conjunction (&&) of such labels. Enable Run on all nodes matching the label, keep Run regardless of result, and keep Node eligibility at All nodes.
Solution: you can run the same build in parallel across multiple Jenkins nodes with very little code.
This can be useful for building the same project in different environments (for example: building Node applications on test, dev and staging environments).
Example:
pipeline {
    agent { docker { image 'node:14-alpine' } }
    stages {
        stage('build') {
            steps {
                parallelTasks()
            }
        }
    }
}
def parallelTasks() {
    def labels = ['test', 'dev', 'staging'] // labels for Jenkins node types we will build on
    def builders = [:]
    for (x in labels) {
        def label = x
        builders[label] = {
            node(label) {
                sh """#!/bin/bash -le
                echo "build app on ${label} node"
                cd /home/app
                npm run build
                """
            }
        }
    }
    parallel builders
}