Running multiple steps in sequence in one parallel block in Jenkinsfile - jenkins

I'm trying to optimize my pipeline. I'm using the pipeline to generate and deploy some docs. At the end I clear my document root and write the newly generated docs into it. I'm doing this for several stages in parallel.
o-----o-----o--+--o--+---+--o--+-----o
               |     |   |     |
               +--o--+   +--o--+
               |     |   |     |
               +--o--+   +--o--+
This is the pipeline excerpt for the parallel stages:
stage("clear nfs directory") {
steps {
parallel(
test: {
sh "rm -rf /mnt/nfs/test/docs/$pipelineParams.groupname"
sh "mkdir /mnt/nfs/test/docs/$pipelineParams.groupname"
},
rele: {
sh "rm -rf /mnt/nfs/rele/docs/$pipelineParams.groupname"
sh "mkdir /mnt/nfs/rele/docs/$pipelineParams.groupname"
},
prod: {
sh "rm -rf /mnt/nfs/prod/docs/$pipelineParams.groupname"
sh "mkdir /mnt/nfs/prod/docs/$pipelineParams.groupname"
}
)
}
}
stage("copy generated docs to nfs directory") {
steps {
parallel(
test: {
dir("target/public") {
sh "cp -r * /mnt/nfs/test/docs/$pipelineParams.groupname"
}
},
rele: {
dir("target/public") {
sh "cp -r * /mnt/nfs/rele/docs/$pipelineParams.groupname"
}
},
prod: {
dir("target/public") {
sh "cp -r * /mnt/nfs/prod/docs/$pipelineParams.groupname"
}
}
)
}
}
Since clear and write depend on each other, I would like to refactor the pipeline into a more sequential design (running multiple steps in sequence within fewer parallel branches):
o-----o-----o--+--o---o--+-----o
               |         |
               +--o---o--+
               |         |
               +--o---o--+
I'm not sure how to run multiple steps in the same parallel block... Can anyone give me a hint? Thanks, guys.

Please see the reference below, which will allow you to run multiple steps in the same parallel block.
You would need to use sequential stages, which will give the following output:
o-----o-----o--+--o---o--+-----o
               |         |
               +--o---o--+
               |         |
               +--o---o--+
pipeline {
    agent { label 'master' }
    stages {
        stage('Build and Test') {
            parallel {
                stage("Build and Test Linux") {
                    stages {
                        stage("Build (Linux)") {
                            agent any
                            steps {
                                echo "Inside for loop 1"
                            }
                        }
                        stage("Test (Linux)") {
                            agent any
                            steps {
                                echo "Inside for loop 2"
                            }
                        }
                    }
                }
                stage("Build and Test Windows") {
                    stages {
                        stage("Build (Windows)") {
                            agent any
                            steps {
                                echo "Inside for loop 3"
                            }
                        }
                        stage("Test (Windows)") {
                            agent any
                            steps {
                                echo "Inside for loop 4"
                            }
                        }
                    }
                }
            }
        }
    }
}
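Applied to the original question, a minimal sketch could look like the fragment below (it assumes the same pipelineParams.groupname parameter and NFS mount paths as in the question, and it would sit inside the pipeline's stages block; the rele and prod branches follow the same pattern as the test branch):
stage('Clear and copy docs') {
    parallel {
        stage('test') {
            stages {
                stage('clear nfs directory (test)') {
                    steps {
                        sh "rm -rf /mnt/nfs/test/docs/$pipelineParams.groupname"
                        sh "mkdir /mnt/nfs/test/docs/$pipelineParams.groupname"
                    }
                }
                stage('copy generated docs (test)') {
                    steps {
                        dir("target/public") {
                            sh "cp -r * /mnt/nfs/test/docs/$pipelineParams.groupname"
                        }
                    }
                }
            }
        }
        // stage('rele') { ... } and stage('prod') { ... } are analogous
    }
}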
For more info see:
https://www.jenkins.io/blog/2018/07/02/whats-new-declarative-piepline-13x-sequential-stages/
The link below gives a reference example:
https://issues.jenkins.io/browse/JENKINS-55438

Related

Run multiple Jobs in parallel via Jenkins Declarative pipeline syntax

I want to execute multiple jobs from a single pipeline using declarative syntax, in parallel. Is this possible? I know we can make a declarative parallel pipeline using the "parallel" directive.
pipeline {
    agent any
    parallel {
        stages {
            stage('Test1') {
                steps {
                    sh 'pip install -r requirements.txt'
                }
            }
            stage('Test2') {
                steps {
                    echo 'Stage 2'
                    sh 'behave -f allure_behave.formatter:AllureFormatter -o allure-results features/scenarios/**/*.feature'
                }
            }
            stage('Test3') {
                steps {
                    script {
                        allure([
                            includeProperties: false,
                            jdk: '',
                            properties: [],
                            reportBuildPolicy: 'ALWAYS',
                            results: [[path: 'allure-results']]
                        ])
                    }
                }
            }
        }
    }
}
The image below shows the flow that I want. Any approach on how to do it?
// Pipeline project: SO-69680107-1-parallel-downstream-jobs-matrix
pipeline {
    agent any
    stages {
        stage('Clean Workspace') {
            steps {
                cleanWs()
            }
        }
        stage('Job matrix') {
            matrix {
                axes {
                    axis {
                        name 'job'
                        values 'SO-69680107-2', 'SO-69680107-3', 'SO-69680107-k' // , ...
                    }
                }
                stages {
                    stage('Run job') {
                        steps {
                            build "$job"
                            copyFiles( "$WORKSPACE\\..\\$job", "$WORKSPACE")
                        }
                    } // stage 'Run job'
                }
            } // matrix
        } // stage 'Job matrix'
        stage('List upstream workspace') {
            steps {
                bat "#dir /b \"$WORKSPACE\""
            }
        }
    } // stages
}

def copyFiles( downstreamWorkspace, upstreamWorkspace ) {
    dir("$downstreamWorkspace") {
        bat """
            #set prompt=\$g\$s
            #echo Begin: %time%
            dir /b
            xcopy /f *.* \"$upstreamWorkspace\\\"
            #echo End: %time%
        """
    }
}
Template for downstream projects SO-69680107-2, SO-69680107-3, SO-69680107-k:
// Pipeline project: SO-69680107-X
pipeline {
    agent any
    stages {
        stage('Stage X') {
            steps {
                sh 'set +x; echo "Step X" | tee SO-69680107-X.log; date; sleep 3; date'
            }
        }
    }
}
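If you do not need the matrix, a plain parallel block with one declarative branch per downstream job is another minimal sketch (it assumes the same downstream project names as above and that they take no parameters):
pipeline {
    agent any
    stages {
        stage('Run downstream jobs') {
            parallel {
                stage('SO-69680107-2') {
                    steps {
                        // trigger the downstream job and wait for its result
                        build job: 'SO-69680107-2', wait: true
                    }
                }
                stage('SO-69680107-3') {
                    steps {
                        build job: 'SO-69680107-3', wait: true
                    }
                }
            }
        }
    }
}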

Record warnings from a given stage in a Jenkins pipeline

I'm trying to use the Warnings Next Generation Plugin to record GCC warnings in a Jenkins pipeline.
I have multiple stages inside a matrix section and I'd like to record the warnings which appear in a given stage and, ideally as a bonus, be able to discriminate per axis value (product).
As a minimal example, I wrote the following pipeline:
pipeline {
    agent { label 'master' }
    stages {
        stage('Create workspace') {
            steps {
                deleteDir()
                sh "echo 'main() { }' > build_1_file.c"
                sh "echo 'int main() { }' > build_2_file.c"
            }
        }
        stage('Main stage') {
            matrix {
                axes {
                    axis {
                        name 'PRODUCT'
                        values 'first_product', 'second_product'
                    }
                }
                stages {
                    stage('Build 1') {
                        steps {
                            echo "Build 1 for ${PRODUCT}"
                            sh "if [ ${PRODUCT} = first_product ]; then gcc build_1_file.c; fi"
                        }
                    }
                    stage('Build 2') {
                        steps {
                            echo "Build 2 for ${PRODUCT}"
                            sh "gcc build_2_file.c -Wstrict-prototypes"
                            recordIssues tool: gcc(name: "${PRODUCT} GCC warnings")
                        }
                    }
                }
            }
        }
    }
}
The first issue is that if I put recordIssues in one stage, the warnings which appeared in an earlier stage will also be recorded, when they shouldn't be. For example, the warning detected in build_1_file.c in stage 'Build 1' will be recorded.
Second, both first_product GCC warnings and second_product GCC warnings will show 2 warnings, when only the second should (because of the if [ ${PRODUCT} = first_product ]).
Is there a solution to do what I want?
Well, I think I finally found a solution. The key is to redirect the compilation logs to a file and use that file's path as a pattern for the gcc tool. This gives:
pipeline {
    agent { label 'master' }
    stages {
        stage('Create workspace') {
            steps {
                deleteDir()
                sh "echo 'main() { }' > build_1_file.c"
                sh "echo 'int main() { }' > build_2_file.c"
            }
        }
        stage('Main stage') {
            matrix {
                axes {
                    axis {
                        name 'PRODUCT'
                        values 'first_product', 'second_product'
                    }
                }
                stages {
                    stage('Build 1') {
                        steps {
                            echo "Build 1 for ${PRODUCT}"
                            sh "gcc build_1_file.c"
                        }
                    }
                    stage('Build 2') {
                        steps {
                            echo "Build 2 for ${PRODUCT}"
                            sh "if [ ${PRODUCT} = second_product ]; then gcc build_2_file.c -Wstrict-prototypes |& tee ${PRODUCT}.log; fi"
                            recordIssues tool: gcc(pattern: "${PRODUCT}.log", name: "${PRODUCT} GCC warnings", id: "${PRODUCT} GCC warnings")
                        }
                    }
                }
            }
        }
    }
}
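The same idea can be extended so that every build stage gets its own isolated report. A sketch of the 'Build 1' stage with its own log capture (the log file name and report id below are illustrative assumptions, not part of the original answer):
stage('Build 1') {
    steps {
        echo "Build 1 for ${PRODUCT}"
        // capture only this stage's compiler output in a stage-specific log
        sh "gcc build_1_file.c 2>&1 | tee ${PRODUCT}_build1.log"
        recordIssues tool: gcc(pattern: "${PRODUCT}_build1.log",
                               name: "${PRODUCT} Build 1 GCC warnings",
                               id: "${PRODUCT}-build1-gcc")
    }
}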

Re-execute single job in Jenkins from parallel build

Is there any way to re-run only a single job in a parallel job configuration in Jenkins?
For example: in the given picture there is a Testing stage, and in the Testing stage there are 3 parallel jobs 1, 2 and 3. If job 1 fails, can we re-run only job 1 again, instead of executing the whole Testing stage again?
Jenkinsfile:
pipeline {
    agent {
        label "agent1"
    }
    stages {
        stage('Test') {
            parallel {
                stage('Test1') {
                    steps { sh 'echo Test 1 passed' }
                }
                stage('Test2') {
                    steps {
                        sh 'echo Test2 is passed'
                    }
                }
                stage('Test3') {
                    steps {
                        sh 'echo Test 3 passed'
                    }
                }
            }
        }
    }
}
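One possible mitigation, sketched below rather than a definitive solution, is to wrap a flaky branch's steps in the built-in retry step, so a transient failure in that branch is re-run automatically without restarting the whole Testing stage. The fragment replaces the Test1 stage from the Jenkinsfile above:
stage('Test1') {
    steps {
        // re-run just this branch's steps up to 3 times on failure
        retry(3) {
            sh 'echo Test 1 passed'
        }
    }
}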

Can I specify node using Scripted Pipeline in Jenkins?

I have noticed that the Jenkins pipeline file, the Jenkinsfile, has two syntaxes:
Declarative
Scripted
I have made the Declarative script work to specify the node to run my task on. However, I don't know how to convert my script to Scripted syntax.
My Declarative script:
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'my-label' }
            steps {
                echo 'Building..'
                sh '''
                '''
            }
        }
        stage('Test') {
            agent { label 'my-label' }
            steps {
                echo 'Testing..'
                sh '''
                '''
            }
        }
        stage('Deploy') {
            agent { label 'my-label' }
            steps {
                echo 'Deploying....'
                sh '''
                '''
            }
        }
    }
}
I have tried to use it in this way:
node('my-label') {
    stage 'SCM'
    git xxxx
    stage 'Build'
    sh ''' '''
}
But it seems Jenkins cannot find my node to run on.
How about this simple example?
stage("one") {
node("linux") {
echo "One"
}
}
stage("two") {
node("linux") {
echo "two"
}
}
stage("three") {
node("linux") {
echo "three"
}
}
Or use the version below; this way you are guaranteed to have the stages run on the same node if there are multiple nodes with the same label, and to run uninterrupted by another job.
The example above will release the node after every stage; the example below will hold the node for all three stages.
node("linux") {
stage("one") {
echo "One"
}
stage("two") {
echo "two"
}
stage("three") {
echo "three"
}
}
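If you prefer the per-stage node blocks but still want every stage on the exact same machine, one sketch (assuming the "linux" label matches several nodes) is to remember the node name from the first allocation and reuse it:
def buildNode

stage("one") {
    node("linux") {
        buildNode = env.NODE_NAME   // remember the node actually allocated
        echo "One on ${buildNode}"
    }
}
stage("two") {
    node(buildNode) {               // pin the later stages to that same node
        echo "two"
    }
}
stage("three") {
    node(buildNode) {
        echo "three"
    }
}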

Chained multiple pipelines based on 'post' Jenkins block

I'm a beginner with Jenkins. I have a code pipeline structure like this:
Repo1 -> Repo2 -> Repo3 -> Deploy
I already created such a hierarchy via the GUI, but I want to create it via pipeline as code. I want to create a chain of pipelines where I clone different repos, perform tests on them, and then continue to another repo based on the current pipeline's post result.
This is my Jenkinsfile (pseudocode-like, as it gives me an error when building):
pipeline {
    agent any
    stages {
        stage('Build Repo1') {
            steps {
                sh 'echo "repo1 build!"'
            }
        }
        stage('Test Repo1') {
            steps {
                sh 'echo "repo success!"'
            }
        }
    }
    post {
        success {
            pipeline {
                agent any
                stages {
                    stage('Build Repo2') {
                        steps {
                            sh 'echo "build repo2!"'
                        }
                    }
                    stage('Test Repo2') {
                        steps {
                            sh 'echo "test repo2!"'
                        }
                    }
                }
                post {
                    success {
                        // continue to generate pipeline for repo3
                        echo 'This will always run'
                    }
                    failure {
                        echo 'This will run only if failed'
                    }
                }
            }
        }
        failure {
            echo 'This will run only if failed'
        }
        unstable {
            echo 'This will run only if the run was marked as unstable'
        }
        changed {
            echo 'This will run only if the state of the Pipeline has changed'
            echo 'For example, if the Pipeline was previously failing but is now successful'
        }
    }
}
Please help!
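A nested pipeline block is not valid inside post, but the same chaining can be sketched by giving each repo its own pipeline job and triggering the next one from the post section with the build step. The downstream job name 'repo2-pipeline' below is a hypothetical placeholder:
pipeline {
    agent any
    stages {
        stage('Build Repo1') {
            steps {
                sh 'echo "repo1 build!"'
            }
        }
        stage('Test Repo1') {
            steps {
                sh 'echo "repo1 test!"'
            }
        }
    }
    post {
        success {
            // trigger the downstream pipeline only when this one succeeded;
            // Repo2's own Jenkinsfile can trigger Repo3 the same way
            build job: 'repo2-pipeline', wait: false
        }
        failure {
            echo 'This will run only if failed'
        }
    }
}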
