I've recently moved to the Pipeline plugin in Jenkins. I've successfully used freestyle jobs for my project before, but now I'd like to try something new.
My project builds for Windows and Linux, in release and in debug mode, and uses a parameter called device to configure some C preprocessor macros: the globally #defined frame width and frame height differ depending on the device value.
Here is my Jenkinsfile:
def device_config(device) {
def device_config = "";
switch(device) {
case ~/^dev_[Aa]$/:
device_config="""-DGLOBAL_FRAME_WIDTH=640\
-DGLOBAL_FRAME_HEIGHT=480"""
break;
case ~/^dev_[Ss]$/:
device_config="""-DGLOBAL_FRAME_WIDTH=320\
-DGLOBAL_FRAME_HEIGHT=240"""
break;
default:
echo "warning: Unknown device \"$device\" using default config from CMake"
break;
}
return device_config
}
pipeline {
agent {
label 'project_device_linux'
}
environment {
device='dev_A'
}
stages {
stage('Configure') {
steps {
script {
dc = device_config("${env.device}")
}
dir('build') {
sh """cmake .. -DOpenCV_DIR="/usr/local" -DCMAKE_BUILD_TYPE="Debug"\
-DCHECK_MEMORY_LEAKS=ON\
-DENABLE_DEVELOPER_MODE=OFF\
-DUNIT_TEST_RAW_PATH=${env.tmpfs_path}\
$dc"""
}
}
}
stage('Build') {
steps {
dir('build') {
sh "make -j 16"
}
}
}
stage('Test'){
steps {
dir('build') {
sh "make check"
}
}
}
}
}
Now I'd like to repeat all those stages for another device, dev_S, for the "Release" build type, and for Windows as well. There are also some minor differences depending on the parameters: for example, "Release" builds should publish the compiled binaries and skip the memory-leak check. Also, if I understand correctly, the Windows agent does not understand the sh build step and uses bat for that purpose.
How can I do this without copy-pasting code, and in parallel on two nodes, one running Linux and the other running Windows?
Obviously there should be several nested loops, but it is not clear to me what to emit on each loop iteration.
I forgot to mention: I'd like to run everything from the GitLab trigger on push events.
UPDATE
Currently I end up with something like the following
#!/usr/bin/env groovy
def device_config(device) {
def result = "";
switch(device) {
case ~/^dev_[Aa]$/:
result ="""-DFRAME_WIDTH=640\
-DFRAME_HEIGHT=480"""
break;
case ~/^dev_[Ss]$/:
result ="""-DFRAME_WIDTH=320\
-DFRAME_HEIGHT=240"""
break;
default:
echo "warning: Unknown device \"$device\" using default config from CMake"
break;
}
return result;
}
oses = ['linux', 'windows']
devices = ['dev_A', 'dev_S']
build_types = ['Debug', 'Release']
node {
stage('Checkout') {
checkout_steps = [:]
for (os in oses) {
for (device in devices) {
for (build_type in build_types) {
def label = "co-${os}-${device}-${build_type}"
def node_label = "project && ${os}"
checkout_steps[label] = {
node(node_label) {
checkout scm
}
}
}
}
}
parallel checkout_steps
}
stage('Configure') {
config_steps = [:]
for (os in oses) {
for (device in devices) {
for (build_type in build_types) {
def label = "configure-${os}-${device}-${build_type}"
def node_label = "project && ${os}"
def dc = device_config("${device}")
cmake_parameters = """-DCMAKE_BUILD_TYPE="${build_type}"\
-DCHECK_MEMORY_LEAKS=ON\
$dc"""
if(os == 'linux') {
config_steps[label] = {
node(node_label) {
dir('build') {
sh """cmake .. -DOpenCV_DIR=/usr/local ${cmake_parameters}"""
}
}
}
} else {
config_steps[label] = {
node(node_label) {
dir('build') {
bat """cmake .. -G"Ninja" -DOpenCV_DIR=G:/opencv_2_4_11/build ${cmake_parameters}"""
}
}
}
}
}
}
}
parallel config_steps
}
}
What I don't like is that some node-specific settings, like paths, are set in the Jenkinsfile. I hope to figure out how to set them in the node settings in Jenkins instead.
I also see in the logs that only the Release + dev_S configuration is ever applied: there is some kind of closure and late-binding problem. Searching reveals that this is a known and already-fixed issue, so I still have to figure out how to deal with the closures.
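A minimal sketch of the direction I'm heading (assuming node-specific paths such as OpenCV_DIR can be moved into each agent's "Environment variables" node property): declaring everything a generated closure uses as a def local inside the loop body should avoid the late binding, since my cmake_parameters above lacks def and is therefore shared by all closures.
stage('Configure') {
    def config_steps = [:]
    for (os in oses) {
        for (device in devices) {
            for (build_type in build_types) {
                // def locals: each generated closure captures its own copies
                def the_os = os
                def the_build_type = build_type
                def label = "configure-${the_os}-${device}-${the_build_type}"
                def node_label = "project && ${the_os}"
                def dc = device_config(device)
                def cmake_parameters = "-DCMAKE_BUILD_TYPE=${the_build_type} " +
                    "-DCHECK_MEMORY_LEAKS=${the_build_type == 'Debug' ? 'ON' : 'OFF'} " +
                    dc
                config_steps[label] = {
                    node(node_label) {
                        dir('build') {
                            // OPENCV_DIR is assumed to be defined per agent under
                            // Node -> Configure -> Environment variables
                            if (the_os == 'linux') {
                                sh "cmake .. -DOpenCV_DIR=\"\$OPENCV_DIR\" ${cmake_parameters}"
                            } else {
                                bat "cmake .. -G Ninja -DOpenCV_DIR=\"%OPENCV_DIR%\" ${cmake_parameters}"
                            }
                        }
                    }
                }
            }
        }
    }
    parallel config_steps
}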
Related
My pipeline compiles on a Windows and a Linux machine in parallel.
Since I use the parallel directive, the log rotator no longer works and I can't figure out what is wrong.
All artefacts keep getting stored.
Here is a sample of my Jenkinsfile:
properties([gitLabConnection('numagit'),
buildDiscarder(
logRotator(
numToKeepStr:'1', artifactNumToKeepStr:'1'
)
)
]
)
parallel (
'linux' : {
node('linux64') {
stage('Checkout sources') {
echo 'Checkout..'
checkout(scm)
}
gitlabBuilds(builds:[
"Compiling linux64"
] ) {
try {
stage('Compiling linux64') {
gitlabCommitStatus("Compiling linux64") {
sh('rm -rf build64')
sh('mkdir -p build64')
dir('build64')
{
......
}
archiveArtifacts(artifacts: 'build64/TARGET/numalliance/MAJ/data.tgz', fingerprint: true)
archiveArtifacts(artifacts: 'build64/TARGET/numalliance/MAJ/machine.sh', fingerprint: true)
}
}
} catch (e) {
currentBuild.result = "FAILURE" // make sure other exceptions are recorded as failure too
// on error, archive the CMake output
archiveArtifacts("build64/CMakeFiles/CMakeOutput.log")
}
}
cleanWs()
}
},
'windows' : {
// Windows build node
node('win32') {
def revision = ""
stage('Checkout sources') {
echo 'Checkout..'
checkout(scm)
}
try {
stage('Compilation windows') {
gitlabCommitStatus("Compilation windows") {
echo 'Building win32 version'
....
}
}
stage('Packaging for win32') {
gitlabCommitStatus('Packaging for win32') {
....
dir('win32/TARGET/numalliance/MACHINE'){
...
archiveArtifacts(artifacts: '*.exe', fingerprint: true)
}
}
}
} catch (e) {
currentBuild.result = "FAILURE" // make sure other exceptions are recorded as failure too
}
cleanWs()
}
}
)
The properties function changes the configuration of the job. It is the same as if you were to go to the job's configuration page and change the log rotator settings manually. It seems you want to keep 5 development builds and one of everything else, but what actually happens is that the job will keep 5 of anything when a develop build ran last, and keep 1 when something else ran last.
The easiest fix would be to use separate jobs (with the same pipeline code). In that case you would have one job that only keeps the last build and another (the develop job) which keeps the last 5.
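A rough sketch of that setup (the job names are assumptions about your naming scheme): both jobs point at the same Jenkinsfile, and the retention is derived from whichever job is running, so each job keeps its own setting.
// e.g. jobs "myproject-develop" and "myproject-release" share this Jenkinsfile
def keep = env.JOB_NAME?.contains('develop') ? '5' : '1'
properties([gitLabConnection('numagit'),
            buildDiscarder(logRotator(numToKeepStr: keep,
                                      artifactNumToKeepStr: keep))])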
I am new to Jenkins and I am trying to build a declarative pipeline following the tutorial.
On the page: https://jenkins.io/doc/book/pipeline/syntax/#matrix-cell-directives
there is an example of how to build a pipeline with a matrix, which I tried.
Unfortunately I get the following error:
WorkflowScript: 32: Unknown stage section "matrix". Starting with version 0.5, steps in a stage must be in a ‘steps’ block. @ line 32, column 5.
stage ('Deploy NB') {
^
WorkflowScript: 32: Expected one of "steps", "stages", or "parallel" for stage "Deploy NB" @ line 32, column 5.
stage ('Deploy NB') {
The functions from the lib are certainly not the problem; they are used in several other Jenkinsfiles that run without issues.
My pipeline in the Jenkinsfile looks like this:
pipeline {
agent {
node {
label ""
// Location of the output files
customWorkspace "/home/wf/builds/${env.JOB_NAME}"
}
}
environment {
// mail addresses that gets notifications about failures, success etc., - comma delimited
MAIL_NOTIFY = "mustbeanonymous"
// Server admin (not necessary for wildfly)
ADMIN_USER = " "
ADMIN_PWD = " "
// home directory
HOME_DIR = "/home/wf"
// Product name
PRODUCT_NAME = "MYPRD"
}
options {
disableConcurrentBuilds()
durabilityHint("PERFORMANCE_OPTIMIZED")
}
stages {
stage ('Deploy NB') {
matrix {
axes {
axis {
name 'ENVIRONMENT'
values 'NB', 'TEST1'
}
axis {
name 'DATABASE'
values 'ORA', 'ORA_INIT', 'DB2', 'DB2_INIT'
}
}
environment {
// Server scripts installation path
SERVER_PATH = "${HOME_DIR}/WildFly16_${PRODUCT_NAME}_${ENVIRONMENT}_${DATABASE}"
// EAR to deploy on server
DEPLOY_EAR = "${PRODUCT_NAME}_WF_${DATABASE}.ear"
}
stages {
/* BUILD */
stage('Init tools') {
steps {
script {
def lib = load "${workspace}/build/Jenkinsfile.lib"
lib.initTools()
}
}
}
stage('Copy Deployment') {
steps {
script {
def lib = load "${workspace}/build/Jenkinsfile.lib"
lib.copyDeployment()
}
}
}
/* DEPLOY */
stage('Install EAR') {
steps {
script {
def lib = load "${workspace}/build/Jenkinsfile.lib"
lib.installEARDeploy()
}
}
}
}
}
}
}
/* POST PROCESSING */
post {
success {
script {
def lib = load "${workspace}/build/Jenkinsfile.lib"
lib.onSuccess()
}
}
failure {
script {
def lib = load "${workspace}/build/Jenkinsfile.lib"
lib.onFailure()
}
}
unstable {
script {
def lib = load "${workspace}/build/Jenkinsfile.lib"
lib.onUnstable()
}
}
always {
script {
def lib = load "${workspace}/build/Jenkinsfile.lib"
lib.onAlways()
}
}
}
}
What I am trying to achieve is that the pipeline runs the stages for every ENVIRONMENT and DATABASE combination (each cell). But where did I make a mistake?
I use Jenkins 2.198.
Update: The solution was to upgrade the plugin to version 1.5.0 or above. See the accepted answer for more information.
Which version of the Declarative Pipeline plugin are you using?
The matrix section was only added in version 1.5.0 of the Declarative Pipeline plugin (pipeline-model-definition).
See https://github.com/jenkinsci/pipeline-model-definition-plugin/releases
To verify the version, search for pipeline-model-definition on jenkins.yourcompany.com/pluginManager/api/xml?depth=1
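You can also check it from Manage Jenkins -> Script Console; a small sketch (assuming you have script-console access):
// Prints the installed version of the Declarative Pipeline plugin, or null if it is missing
println(Jenkins.instance.pluginManager.getPlugin('pipeline-model-definition')?.getVersion())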
I am trying to configure different pipelines in Jenkins 2. My problem is that all my pipelines need the same workspace path (configured with customWorkspace in my configuration script).
Now I have to prevent more than one pipeline from running at the same time.
My search always leads me back to the same pages, which unfortunately do not help me :-(
Has anyone already solved the same problem and can give me a hint?
Thank you very much
def locked = false;
pipeline {
agent any
stages {
stage('check workspace lock status') {
steps {
script {
locked = fileExists file: '.lock'
if(locked == false) {
touch file: '.lock'
}
}
}
}
stage('build') {
when {
beforeAgent true
expression { locked == false }
}
steps {
// do something you want
}
}
}
post {
    always {
        script {
            // only remove the lock file if this build created it
            if (!locked) {
                sh 'rm -f .lock'
            }
        }
    }
}
}
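If the Lockable Resources plugin is available (an assumption), a more robust variant is to let Jenkins manage the lock instead of a marker file; it avoids the race between the fileExists check and the touch, and all pipelines that share the workspace simply request the same resource name (the name below is made up):
pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                // only one build across all jobs holding this resource name
                // runs the block at a time; the others queue until it is free
                lock(resource: 'shared-custom-workspace') {
                    echo 'build steps that touch the shared workspace go here'
                }
            }
        }
    }
}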
I'm trying to build a dynamic group of steps to run in parallel. The following example is what I came up with (and found examples at https://devops.stackexchange.com/questions/3073/how-to-properly-achieve-dynamic-parallel-action-with-a-declarative-pipeline). But I'm having trouble getting it to use the expected variables: the result always seems to use the variables from the last iteration of the loop.
In the following example the echo output is always bdir2 for both tests:
pipeline {
agent any
stages {
stage('Test') {
steps {
script {
def tests = [:]
def files
files = ['adir1/adir2/adir3','bdir1/bdir2/bdir3']
files.each { f ->
rolePath = new File(f).getParentFile()
roleName = rolePath.toString().split('/')[1]
tests[roleName] = {
echo roleName
}
}
parallel tests
}
}
}
}
}
I'm expecting one of the tests to output adir2 and another to be bdir2. What am I missing here?
Just move the tests[...] assignment a little higher, so that the computation happens inside each branch's closure, and it will work:
pipeline {
agent any
stages {
stage('Test') {
steps {
script {
def tests = [:]
def files
files = ['adir1/adir2/adir3','bdir1/bdir2/bdir3']
files.each { f ->
tests[f] = {
rolePath = new File(f).getParentFile()
roleName = rolePath.toString().split('/')[1]
echo roleName
}
}
parallel tests
}
}
}
}
}
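The reason the original always printed bdir2 is that rolePath and roleName were undeclared binding variables shared by every generated closure. If you prefer to keep the computation outside the branch body, declaring a def local inside the each closure also works; a sketch:
files.each { f ->
    // 'def' makes roleName local to this iteration, so each parallel
    // branch captures its own value instead of a shared binding variable
    def roleName = new File(f).getParentFile().toString().split('/')[1]
    tests[roleName] = {
        echo roleName
    }
}
parallel tests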
I'm trying to create a declarative pipeline which runs a number of jobs (configurable via a parameter) in parallel, but I'm having trouble with the parallel part.
Basically, for some reason the pipeline below generates the error
Nothing to execute within stage "Testing" @ line .., column ..
and I cannot figure out why, or how to solve it.
import groovy.transform.Field
@Field def mayFinish = false
def getJob() {
return {
lock("finiteResource") {
waitUntil {
script {
mayFinish
}
}
}
}
}
def getFinalJob() {
return {
waitUntil {
script {
try {
echo "Start Job"
sleep 3 // Replace with something that might fail.
echo "Finished running"
mayFinish = true
true
} catch (Exception e) {
echo e.toString()
echo "Failed :("
}
}
}
}
}
def getJobs(def NUM_JOBS) {
def jobs = [:]
for (int i = 0; i < (NUM_JOBS as Integer); i++) {
jobs["job{i}"] = getJob()
}
jobs["finalJob"] = getFinalJob()
return jobs
}
pipeline {
agent any
options {
buildDiscarder(logRotator(numToKeepStr:'5'))
}
parameters {
string(
name: "NUM_JOBS",
description: "Set how many jobs to run in parallel"
)
}
stages {
stage('Setup') {
steps {
echo "Setting it up..."
}
}
stage('Testing') {
steps {
parallel getJobs(params.NUM_JOBS)
}
}
}
}
I've seen plenty of examples of doing this with the old scripted pipeline, but not with a declarative one.
Anyone know what I'm doing wrong?
At the moment, it doesn't seem possible to dynamically provide the parallel branches when using a Declarative Pipeline.
Even if you add a prior stage that calls getJobs() in a script block and stores the result in the binding, the same error message is thrown.
In this case you'd have to fall back to using a Scripted Pipeline.
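A rough sketch of that fallback (the parameter default and the branch bodies below are placeholders): in a Scripted Pipeline the branch map can be built at runtime and handed straight to parallel.
properties([
    parameters([string(name: 'NUM_JOBS',
                       defaultValue: '4',   // default value is an assumption
                       description: 'Set how many jobs to run in parallel')])
])
node {
    stage('Setup') {
        echo 'Setting it up...'
    }
    stage('Testing') {
        // build the branch map dynamically; parallel accepts any
        // Map of branch name -> closure in Scripted Pipeline
        def jobs = [:]
        for (int i = 0; i < (params.NUM_JOBS as Integer); i++) {
            def n = i  // local copy so each closure captures its own index
            jobs["job${n}"] = {
                echo "Running job ${n}"   // placeholder branch body
                sleep 3
            }
        }
        parallel jobs
    }
}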