In a deploy scenario I need to create and run a Jenkins task on a list of hosts, i.e. create something like a parametrized task (where the IP address is a parameter) or a task using the Multijob Plugin with a HOST axis, but run only 2 of them in parallel across the hosts.
One option could be to run Ansible with the list of hosts, but I'd like to see a status for each host separately and be able to relaunch the Jenkins job if needed.
The main option is to use the Job DSL Plugin or the Pipeline Plugin, but here I need help understanding which classes/methods of the DSL Groovy code should be used to achieve this.
Can anyone help with it?
Assume that the hosts have been configured as Jenkins slaves already, and that they are provided in the pipeline job parameter HOSTS as a whitespace-separated list. The following example should get you started:
def host_pairs = HOSTS.split().collate(2)
for (pair in host_pairs) {
    def branches = [:]
    for (h in pair) {
        def host = h // fresh variable per iteration; the loop variable h is reused
        branches[host] = {
            stage(host) {
                node(host) {
                    // do the actual job here, e.g.
                    // execute a shell script
                    sh "echo hello world"
                }
            }
        }
    }
    parallel branches
}
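Since the question also asks which Job DSL classes/methods to use: below is a minimal seed-job sketch (the job and file names are illustrative, not from the question) that creates this pipeline job and exposes the HOSTS parameter the script above expects:
def createDeployJob() {
    // Hypothetical seed-job DSL: 'deploy-to-hosts' and 'deploy.groovy' are illustrative names.
    pipelineJob('deploy-to-hosts') {
        parameters {
            // whitespace-separated list consumed by HOSTS.split() above
            stringParam('HOSTS', '10.0.0.1 10.0.0.2 10.0.0.3', 'Target hosts')
        }
        definition {
            cps {
                // deploy.groovy holds the pipeline script shown above
                script(readFileFromWorkspace('deploy.groovy'))
                sandbox(true)
            }
        }
    }
}
createDeployJob()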
A combination of a Matrix project and the Throttle Concurrent Builds Plugin is also possible.
All you need is to set up a single user-defined axis (e.g. "targetHost") with all IP addresses as values, and to configure the desired throttling under "Throttle Concurrent Builds" (note that you have to enable the "Execute concurrent builds if necessary" option to tell Jenkins to allow concurrent execution).
The axis value is available to every child build in the corresponding environment variable (e.g. targetHost).
Below is an example config.xml with a simple ping-and-wait build step:
<?xml version='1.0' encoding='UTF-8'?>
<matrix-project plugin="matrix-project@1.7.1">
  <actions/>
  <description></description>
  <keepDependencies>false</keepDependencies>
  <properties>
    <hudson.plugins.throttleconcurrents.ThrottleJobProperty plugin="throttle-concurrents@1.9.0">
      <maxConcurrentPerNode>2</maxConcurrentPerNode>
      <maxConcurrentTotal>2</maxConcurrentTotal>
      <categories class="java.util.concurrent.CopyOnWriteArrayList"/>
      <throttleEnabled>true</throttleEnabled>
      <throttleOption>project</throttleOption>
      <limitOneJobWithMatchingParams>false</limitOneJobWithMatchingParams>
      <matrixOptions>
        <throttleMatrixBuilds>true</throttleMatrixBuilds>
        <throttleMatrixConfigurations>true</throttleMatrixConfigurations>
      </matrixOptions>
      <paramsToUseForLimit></paramsToUseForLimit>
    </hudson.plugins.throttleconcurrents.ThrottleJobProperty>
  </properties>
  <scm class="hudson.scm.NullSCM"/>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers/>
  <concurrentBuild>true</concurrentBuild>
  <axes>
    <hudson.matrix.TextAxis>
      <name>targetHost</name>
      <values>
        <string>127.0.0.1</string>
        <string>127.0.0.2</string>
        <string>127.0.0.3</string>
        <string>127.0.0.4</string>
        <string>127.0.0.5</string>
      </values>
    </hudson.matrix.TextAxis>
  </axes>
  <builders>
    <hudson.tasks.Shell>
      <command>sleep 7
ping -c 7 $targetHost
sleep 7</command>
    </hudson.tasks.Shell>
  </builders>
  <publishers/>
  <buildWrappers/>
  <executionStrategy class="hudson.matrix.DefaultMatrixExecutionStrategyImpl">
    <runSequentially>false</runSequentially>
  </executionStrategy>
</matrix-project>
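If you would rather generate this job with the Job DSL plugin than maintain raw XML, a rough equivalent could look like the sketch below (assuming the Matrix Project and Throttle Concurrent Builds plugins are installed; the job name is illustrative):
// Sketch of the same matrix + throttling setup in Job DSL.
matrixJob('deploy-matrix') {
    concurrentBuild()
    throttleConcurrentBuilds {
        maxPerNode(2)
        maxTotal(2)
    }
    axes {
        // one child build per targetHost value
        text('targetHost', '127.0.0.1', '127.0.0.2', '127.0.0.3', '127.0.0.4', '127.0.0.5')
    }
    steps {
        shell('''sleep 7
ping -c 7 $targetHost
sleep 7''')
    }
}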
Good luck!
I created a JUnit Jenkins test case where an in-memory Jenkins instance is launched (as we use @Rule JenkinsRule). The code of the test case is available here.
The test case creates a FreeStyleProject (= seed job) which uses a maven.groovy file as its Groovy DSL script.
But when the test case is executed, the following message is reported during the job build execution. The message reports the consequence of the import/parsing of the mavenJob.groovy file, as the job expects that a new job will be created.
Legacy code started this job. No cause information is available
Running as SYSTEM
Building in workspace /var/folders/t2/jwchtqkn5y76hrfrws7dqtqm0000gn/T/j h5344303144116520886/workspace/test0
Processing provided DSL script
ERROR: java.io.IOException: Unable to read /var/folders/t2/jwchtqkn5y76hrfrws7dqtqm0000gn/T/j h5344303144116520886/jobs/mvn-spring-boot-rest-http/config.xml
Finished: FAILURE
And of course no stack trace of the error appears on stdout or stderr.
How can I investigate the problem and fix it?
Remarks:
If I take the config.xml file and import it into a separate Jenkins instance, the job succeeds.
The generated config.xml file looks good (vs. the same config.xml file created using the UI):
<?xml version='1.1' encoding='UTF-8'?>
<project>
  <keepDependencies>false</keepDependencies>
  <properties/>
  <scm class="hudson.scm.NullSCM"/>
  <canRoam>false</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers/>
  <concurrentBuild>false</concurrentBuild>
  <builders>
    <javaposse.jobdsl.plugin.ExecuteDslScripts>
      <scriptText>mavenJob('mvn-spring-boot-rest-http') {
    description 'A Maven Job compiling the project Spring Boot Rest HTTP Example'
    parameters {
        gitParameter {
            name 'SELECTED_TAG'
            description 'The Git tag to checkout'
            type 'PT_TAG'
            defaultValue '2.3.4-2'
            branch ''
            branchFilter 'origin/(.*)'
            quickFilterEnabled false
            selectedValue 'DEFAULT'
            sortMode 'DESCENDING_SMART'
            tagFilter '*'
            useRepository '.*rest-http-example.git'
            listSize '10'
        }
    }
    scm {
        git {
            remote {
                url 'https://github.com/snowdrop/rest-http-example.git'
                // branch('$SELECTED_TAG')
                branch('2.3.4-2')
            }
        }
    }
    rootPOM 'pom.xml'
    goals 'clean install'
}</scriptText>
      <usingScriptText>true</usingScriptText>
      <sandbox>false</sandbox>
      <ignoreExisting>false</ignoreExisting>
      <ignoreMissingFiles>false</ignoreMissingFiles>
      <failOnMissingPlugin>false</failOnMissingPlugin>
      <failOnSeedCollision>false</failOnSeedCollision>
      <unstableOnDeprecation>false</unstableOnDeprecation>
      <removedJobAction>IGNORE</removedJobAction>
      <removedViewAction>IGNORE</removedViewAction>
      <removedConfigFilesAction>IGNORE</removedConfigFilesAction>
      <lookupStrategy>JENKINS_ROOT</lookupStrategy>
    </javaposse.jobdsl.plugin.ExecuteDslScripts>
  </builders>
  <publishers/>
  <buildWrappers/>
</project>
Many thanks in advance for your help.
I created a thread discussion here too: https://groups.google.com/g/jenkinsci-users/c/mRSwARFapyA
Charles
The problem was related to many missing dependencies needed to run the test case.
I upgraded the build.gradle file and now it works:
https://github.com/ch007m/jenkins-job-dsl/blob/jenkins-2.271/build.gradle#L53-L72
BTW, the error message reported was not correlated at all to the root cause or to how to fix the problem; that should be improved within the code ;-)
I am creating a Jenkins master/slave cluster and I am having trouble finding a way to have new slaves auto register themselves with the master.
My current setup is: I run some Terraform scripts that create the master and 5 slaves. Then I have to log in to the master node, go to Manage Jenkins -> Manage Nodes -> New Node, and manually create the number of nodes I want.
Then I RDP into my slaves and run the command java -jar agent.jar -jnlpUrl http://yourserver:port/computer/agent-name/slave-agent.jnlp. This works perfectly fine, but I would like a way to auto scale up/down the number of agents without having to manually log into the slaves every time I create a new one.
Is there a plugin or some documentation I'm missing about how to dynamically self register nodes?
NOTE: This only applies to Windows nodes. I am using the Kubernetes plugin to auto scale up/down Linux nodes, but Kubernetes does not have stable Windows node support so I can't use that. I have to support classic .NET applications (not .NET Core), so I have to build on Windows nodes.
Here's a bash script I use on Linux; it could be adapted fairly easily for Windows.
#!/bin/bash
set -xe
MASTER_URL=$1
MASTER_USERNAME=$2
MASTER_PASSWORD=$3
NODE_NAME=$4
NUM_EXECUTORS=$5
# Download CLI jar from the master
curl ${MASTER_URL}/jnlpJars/jenkins-cli.jar -o ~/jenkins-cli.jar
# Create node according to parameters passed in
cat <<EOF | java -jar ~/jenkins-cli.jar -auth "${MASTER_USERNAME}:${MASTER_PASSWORD}" -s "${MASTER_URL}" create-node "${NODE_NAME}" | true
<slave>
  <name>${NODE_NAME}</name>
  <description></description>
  <remoteFS>/home/jenkins/agent</remoteFS>
  <numExecutors>${NUM_EXECUTORS}</numExecutors>
  <mode>NORMAL</mode>
  <retentionStrategy class="hudson.slaves.RetentionStrategy\$Always"/>
  <launcher class="hudson.slaves.JNLPLauncher">
    <workDirSettings>
      <disabled>false</disabled>
      <internalDir>remoting</internalDir>
      <failIfWorkDirIsMissing>false</failIfWorkDirIsMissing>
    </workDirSettings>
  </launcher>
  <label></label>
  <nodeProperties/>
  <userId>${USER}</userId>
</slave>
EOF
# Creating the node will fail if it already exists, hence the | true to
# suppress the error. This probably should check whether the node exists
# first, but it should be possible to see any startup errors if the node
# doesn't attach as expected.
# Run jnlp launcher
java -jar /usr/share/jenkins/slave.jar -jnlpUrl ${MASTER_URL}/computer/${NODE_NAME}/slave-agent.jnlp -jnlpCredentials "${MASTER_USERNAME}:${MASTER_PASSWORD}"
This is somewhat similar to the agent launchers included in the Docker slave images, but before it runs JNLP it uses the Jenkins CLI to create the node on Jenkins. Some of the parameters would obviously need adapting for Windows.
EDIT: to get that XML, the easiest way is to create a node the way you want it in the web UI, then use jenkins-cli to retrieve it.
I've been testing this on AWS:
https://plugins.jenkins.io/swarm
Although you cannot use broadcast on AWS, you can add to the java command and specify the hostname or URL of the LB of the Jenkins master.
I have not checked how this works on Windows yet, but I will be doing that soon and will let you know how it goes.
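For example, the client launch could look something like this (a sketch; the flags are standard swarm-client options, and the host and agent names are illustrative):
java -jar swarm-client.jar -master http://jenkins-lb.example.internal:8080 -username swarm-user -password <api-token> -name windows-agent-01 -labels windows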
I am using the Swarm Plugin (https://plugins.jenkins.io/swarm) to connect my Windows clients to Jenkins.
At start-up I let my VMs run this PowerShell script:
function startJenkinsSlave()
{
    [CmdletBinding(SupportsShouldProcess=$true)]
    param (
        [parameter(Position = 1, Mandatory = $true)]
        [string]$jkMasterUrl,
        [parameter(Position = 2, Mandatory = $true)]
        [string]$jkSlaveName,
        [parameter(Position = 3, Mandatory = $true)]
        [string]$jkSlaveUser,
        [parameter(Position = 4, Mandatory = $true)]
        [string]$jkSlaveSecret
    )
    Write-Host "--- start jenkins swarm slave ---"
    Write-Host "download new version of swarm-client.jar"
    $jkSwarmJarUrl="$jkMasterUrl/swarm/swarm-client.jar"
    $jkJarFilePath="C:\Program Files\Jenkins\swarm-client-$($jkSlaveName).jar"
    $javaExePath="C:\ProgramData\Oracle\Java\javapath\java.exe"
    Try {
        # check that the jar file is writable (i.e. no other client is using it)
        [io.file]::OpenWrite($jkJarFilePath).close()
        Get-ItemProperty -Path $jkJarFilePath -ErrorAction SilentlyContinue
        $client = new-object System.Net.WebClient
        $client.DownloadFile($jkSwarmJarUrl, $jkJarFilePath)
        Write-Host "the latest version of swarm-client.jar has been downloaded"
        Get-ItemProperty -Path $jkJarFilePath
    }
    Catch {
        Write-Warning "Unable to write to output file $jkJarFilePath"
    }
    Write-Host "Jenkins slave will start:"
    & $javaExePath '-Dfile.encoding=UTF8' -jar $jkJarFilePath -deleteExistingClients -master $jkMasterUrl -username $jkSlaveUser -password $jkSlaveSecret -labels "W10-swarm $jkSlaveName"
}
$jkSlaveUser='JenkinsUserForSwarm'
# Use an access token, not a password!
$jkSlaveSecret='1d1a700e0a0981ef74f23efa9a6c90d39d'
$jkMasterUrl='http://jenkins.onmyhost.local:8080'
$vmName=(Get-ItemProperty -Path 'Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters' -name 'VirtualMachineName').VirtualMachineName
Write-Host "Starting Jenkins Swarm"
while($true)
{
    try
    {
        startJenkinsSlave -jkMasterUrl $jkMasterUrl -jkSlaveName $vmName -jkSlaveUser $jkSlaveUser -jkSlaveSecret $jkSlaveSecret
    }
    finally
    {
        # Ctrl-C will stop the script.
        Write-Host "Jenkins Slave service ended. Restart in 120 seconds"
        Start-Sleep -Seconds 120
    }
}
This script runs stably and does the job for Windows slaves, which is useful when you need UI interactions.
You may also use environment variables for the different parameters (Jenkins master, user, token, ...).
For Unix slaves there may be better solutions with k8s and SSH.
Regards, Éric.
We use Jenkins Job DSL for our CI setup. Since we are using a special command only available in the traditional Jenkinsfile syntax, we need to use a pipeline job.
Inside the pipeline job we check out our project from Git. We are using the pipeline job for multiple projects, so we want to inject the Git URL into the pipeline script.
This is a short version of our script generating the pipeline job:
def createPipelineJob(def jobName, def gitUrl) {
    pipelineJob(jobName) {
        environmentVariables(GIT_URL: gitUrl)
        definition {
            cps {
                script('''
                    node {
                        sh 'env | sort'
                    }
                ''')
                sandbox(true)
            }
        }
    }
}
This creates the following XML config:
<flow-definition>
  <actions/>
  <description/>
  <keepDependencies>false</keepDependencies>
  <properties>
    <EnvInjectJobProperty>
      <info>
        <propertiesContent>GIT_URL=my-git.url</propertiesContent>
        <loadFilesFromMaster>false</loadFilesFromMaster>
      </info>
      <on>true</on>
      <keepJenkinsSystemVariables>true</keepJenkinsSystemVariables>
      <keepBuildVariables>true</keepBuildVariables>
      <overrideBuildParameters>false</overrideBuildParameters>
      <contributors/>
    </EnvInjectJobProperty>
  </properties>
  <triggers/>
  <definition class="org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition">
    <script>node { sh 'env | sort' }</script>
    <sandbox>true</sandbox>
  </definition>
</flow-definition>
But if I run this, the GIT_URL environment variable is not listed (other environment variables are). If I instead create the pipeline job manually with this setup, the GIT_URL environment variable is printed just fine. Creating the job manually produces pretty much the same XML configuration:
<flow-definition plugin="workflow-job@2.15">
  <actions>
    <io.jenkins.blueocean.service.embedded.BlueOceanUrlAction plugin="blueocean-rest-impl@1.3.1">
      <blueOceanUrlObject class="io.jenkins.blueocean.service.embedded.BlueOceanUrlObjectImpl">
        <mappedUrl>blue/organizations/jenkins/test-jobname</mappedUrl>
        <modelObject class="flow-definition" reference="../../../.."/>
      </blueOceanUrlObject>
    </io.jenkins.blueocean.service.embedded.BlueOceanUrlAction>
  </actions>
  <description/>
  <keepDependencies>false</keepDependencies>
  <properties>
    <com.sonyericsson.rebuild.RebuildSettings plugin="rebuild@1.27">
      <autoRebuild>false</autoRebuild>
      <rebuildDisabled>false</rebuildDisabled>
    </com.sonyericsson.rebuild.RebuildSettings>
    <EnvInjectJobProperty plugin="envinject@2.1.5">
      <info>
        <propertiesContent>GIT_URL=my-git.url</propertiesContent>
        <secureGroovyScript plugin="script-security@1.35">
          <script/>
          <sandbox>false</sandbox>
        </secureGroovyScript>
        <loadFilesFromMaster>false</loadFilesFromMaster>
      </info>
      <on>true</on>
      <keepJenkinsSystemVariables>true</keepJenkinsSystemVariables>
      <keepBuildVariables>true</keepBuildVariables>
      <overrideBuildParameters>false</overrideBuildParameters>
    </EnvInjectJobProperty>
    <org.jenkinsci.plugins.workflow.job.properties.PipelineTriggersJobProperty>
      <triggers/>
    </org.jenkinsci.plugins.workflow.job.properties.PipelineTriggersJobProperty>
  </properties>
  <definition class="org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition" plugin="workflow-cps@2.41">
    <script>node { sh 'env | sort' }</script>
    <sandbox>true</sandbox>
  </definition>
  <triggers/>
  <disabled>false</disabled>
</flow-definition>
We are pretty lost because we are new to Jenkins, and this problem has been holding us up for days now.
Edit:
The job is generated on the Jenkins master node but executed on a slave node.
Jenkins 2.37.3
Environment Injector Plugin 2.1.5
Pipeline 2.5
This is more of a comment than an answer, but I modified and tested your DSL code and it works fine.
I created a DSL job using the script:
def createPipelineJob(def jobName, def gitUrl) {
    pipelineJob(jobName) {
        environmentVariables(GIT_URL: gitUrl)
        definition {
            cps {
                script('''
                    node {
                        sh "echo $GIT_URL"
                    }
                ''')
                sandbox(true)
            }
        }
    }
}
createPipelineJob('new-job-2','my-git.url')
The resulting pipeline job has the same XML as the one you posted (minus the shell script), and building the pipeline job prints the value of GIT_URL.
[new-job-1] Running shell script
+ echo my-git.url
my-git.url
My recommendation:
If the short version you posted (or maybe mine) doesn't work, I would try to see if upgrading Jenkins or the plugins makes any difference.
If the short version you posted, or mine, does work, maybe post the full version; perhaps there's an error there.
As it turns out, the Environment Injector Plugin was not installed successfully, and therefore the script did not run properly. So all I had to do was restart Jenkins and everything worked just fine. Special thanks to Javier Garcés for assuring me that my script was indeed correct.
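As an aside, one way to sidestep the Environment Injector Plugin entirely is to interpolate the value into the generated pipeline script at seed time, e.g. with a double-quoted GString instead of the single-quoted literal (a sketch based on the function from the question; this only fits if GIT_URL does not need to remain a real runtime environment variable):
def createPipelineJob(def jobName, def gitUrl) {
    pipelineJob(jobName) {
        definition {
            cps {
                // gitUrl is expanded when the seed job runs, so the
                // pipeline no longer depends on injected variables
                script("""
                    node {
                        sh 'echo ${gitUrl}'
                    }
                """)
                sandbox(true)
            }
        }
    }
}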
I'm converting some Jenkins jobs to DSL scripts.
Some of them use GitHub for SCM, and as this is supported by the DSL it is easy enough to configure. However, after over 100 job conversions, for the first time I need to specify a Git executable (all jobs so far have used the default), and there doesn't seem to be a way to do this. The job XML shows this:
<scm class="hudson.plugins.git.GitSCM" plugin="git@2.4.4">
  <configVersion>2</configVersion>
  <userRemoteConfigs>...</userRemoteConfigs>
  <branches>...</branches>
  <doGenerateSubmoduleConfigurations>false</doGenerateSubmoduleConfigurations>
  <gitTool>Ubuntu Git</gitTool>
  <submoduleCfg class="list"/>
  <extensions>
    <hudson.plugins.git.extensions.impl.SparseCheckoutPaths>
      <sparseCheckoutPaths>
        <hudson.plugins.git.extensions.impl.SparseCheckoutPath>
          <path>octane.pricing/octane.trader/server/work/mif_interface/cfg</path>
        </hudson.plugins.git.extensions.impl.SparseCheckoutPath>
      </sparseCheckoutPaths>
    </hudson.plugins.git.extensions.impl.SparseCheckoutPaths>
  </extensions>
</scm>
I can do all of this using the DSL apart from <gitTool>Ubuntu Git</gitTool>.
This isn't mentioned in the DSL documentation, so I presume it isn't supported, and I tried using the configure block instead (bearing in mind I'm still learning exactly how to use it). I tried a few things; the one I most expected to work was:
configure { project ->
    project << 'hudson.plugins.git.GitSCM' {
        paramDefs << 'gitTool' {
            string('Ubuntu Git')
        }
    }
}
But no dice - the XML still shows the "default" option.
I'm surprised this can't be specified directly in the DSL but can anyone see what I am doing wrong with that configure block?
The best option is to use the nested configure block of the Git SCM context:
job('example') {
    scm {
        git {
            remote {
                github('owner/repo')
            }
            configure { scmNode ->
                scmNode / gitTool('changeme')
            }
        }
    }
}
The / operator selects the named child node, creating it if necessary, so this works whether or not a gitTool element already exists. See configure in the Job DSL API Viewer and more info about the Configure Block in the Job DSL wiki.
I'm using the MultiJob plugin and have a job (Job-A) that triggers Job-B several times.
My requirement is to copy some artifacts (XML files) from each build.
The difficulty is that using the Copy Artifact Plugin with the "last successful build" option will only take the last build of Job-B, while I need to copy from all the builds that were triggered by the same build of Job-A.
The flow looks like:
Job-A starts and triggers:
Job-A -->
Job-B build #1
Job-B build #2
Job-B build #3
** copy artifacts of all last 3 builds, not just #3 **
Note: Job-B could be executed on different slaves in the same run (I set the slave to run on dynamically by setting a parameter on the upstream Job-A).
When all builds are completed, I want Job-A to copy artifacts from builds #1, #2 and #3, and not just from the last build.
How can I do this?
Here is a more generic Groovy script; it uses the Groovy plugin and the Copy Artifact plugin; see the instructions in the code comments.
It simply copies artifacts from all downstream jobs into the upstream job's workspace.
If you call the same job several times, you could use the build number in the copyArtifact 'target' parameter to keep the artifacts separate.
// This script copies artifacts from downstream jobs into the upstream job's workspace.
//
// To use, add an "Execute system groovy script" build step into the upstream job
// after the invocation of other projects/jobs, and specify
// "/var/lib/jenkins/groovy/copyArtifactsFromDownstream.groovy" as the script.
import hudson.plugins.copyartifact.*
import hudson.model.AbstractBuild
import hudson.Launcher
import hudson.model.BuildListener
import hudson.FilePath

for (subBuild in build.builders) {
    println(subBuild.jobName + " => " + subBuild.buildNumber)
    copyTriggeredResults(subBuild.jobName, Integer.toString(subBuild.buildNumber))
}

// Inspired by http://kevinormbrek.blogspot.com/2013/11/using-copy-artifact-plugin-in-system.html
def copyTriggeredResults(projName, buildNumber) {
    def selector = new SpecificBuildSelector(buildNumber)
    // CopyArtifact(String projectName, String parameters, BuildSelector selector,
    //              String filter, String target, boolean flatten, boolean optional)
    def copyArtifact = new CopyArtifact(projName, "", selector, "**", null, false, true)
    // use reflection because a direct call invokes the deprecated method
    // perform(Build<?, ?> build, Launcher launcher, BuildListener listener)
    def perform = copyArtifact.class.getMethod("perform", AbstractBuild, Launcher, BuildListener)
    perform.invoke(copyArtifact, build, launcher, listener)
}
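For example, following the earlier tip about keeping artifacts separate per build, you could replace the null target with a per-build directory (a sketch based on the constructor signature quoted in the comments; the directory layout is illustrative):
// Copy into a per-build subdirectory instead of the workspace root.
def copyArtifact = new CopyArtifact(projName, "", selector, "**",
        "artifacts/${projName}/${buildNumber}", false, true)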
I suggest the following approach:
Use an Execute system Groovy script build step from the Groovy Plugin to run the following script:
import hudson.model.*

// get the upstream job
def jobName = build.getEnvironment(listener).get('JOB_NAME')
def job = Hudson.instance.getJob(jobName)
def upstreamJob = job.upstreamProjects.iterator().next()

// prepare build numbers
def n1 = upstreamJob.lastBuild.number
def n2 = n1 - 1
def n3 = n1 - 2

// set parameters
def pa = new ParametersAction([
    new StringParameterValue("UP_BUILD_NUMBER1", n1.toString()),
    new StringParameterValue("UP_BUILD_NUMBER2", n2.toString()),
    new StringParameterValue("UP_BUILD_NUMBER3", n3.toString())
])
Thread.currentThread().executable.addAction(pa)
This script will create three environment variables which correspond to the three last build numbers of the upstream job.
Add three Copy artifacts from upstream project build steps to copy artifacts from the last three builds of the upstream project (use the environment variables from the script above to set the build number).
Run the build and check the build log; you should see something like this:
Copied 2 artifacts from "A" build number 4
Copied 2 artifacts from "A" build number 3
Copied 1 artifact from "A" build number 2
Note: the script may need to be adjusted to catch unusual cases like "the upstream project has only two builds", "the current job doesn't have an upstream job", "the current job has more than one upstream job", etc.
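For instance, a couple of illustrative guards that could be added before computing the build numbers:
// Illustrative guards for the edge cases mentioned above.
if (job.upstreamProjects.size() != 1) {
    throw new IllegalStateException("Expected exactly one upstream project, found ${job.upstreamProjects.size()}")
}
if (upstreamJob.lastBuild == null || upstreamJob.lastBuild.number < 3) {
    throw new IllegalStateException("Upstream project has fewer than three builds")
}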
You can use the following example from an "Execute Shell" build step.
Please note it can be run only on the Jenkins master machine, and the job calling this step must also be the one that triggered the MultiJob.
#--------------------------------------
# Copy Artifacts from MultiJob Project
#--------------------------------------
PROJECT_NAME="MY_MULTI_JOB"
ARTIFACT_PATH="archive/target"
TARGET_DIRECTORY="target"
mkdir -p $TARGET_DIRECTORY
runCount="TRIGGERED_BUILD_RUN_COUNT_${PROJECT_NAME}"
for ((i=1; i<=${!runCount}; i++))
do
    buildNumber="${PROJECT_NAME}_${i}_BUILD_NUMBER"
    cp $JENKINS_HOME/jobs/$PROJECT_NAME/builds/${!buildNumber}/$ARTIFACT_PATH/* $TARGET_DIRECTORY
done
#--------------------------------------