How to run two Jenkins multi-phase jobs at the same time?

I have two groups of multi-phase jobs, Parallel Test 1 and Parallel Test 2, and I need to execute both groups at the same time.
Does the MultiJob Jenkins plugin have a hack for this? Or are there any alternatives?
Note: I don't want all three jobs in the same MultiJob phase.

Since you can't run those jobs in one MultiJob phase, as an alternative you could use a Jenkins Pipeline job (see the Pipeline docs). Parallel stage execution can be achieved with the declarative pipeline parallel block. A dummy example of how your MultiJob setup could be reproduced with a pipeline:
pipeline {
    agent any
    stages {
        stage('MultiJob like stage') {
            parallel {
                stage('Parallel Test') {
                    steps {
                        echo "Here trigger job: allure_behave. Triggered at time:"
                        sh(script: "date -u")
                        // build(job: "allure_behave")
                    }
                }
                stage('Parallel Test 2') {
                    steps {
                        echo "Here trigger job: allure_behave_new. Triggered at time:"
                        sh(script: "date -u")
                        // build(job: "allure_behave_new")
                        echo "Here trigger job: allure_behave_old. Triggered at time:"
                        sh(script: "date -u")
                        // build(job: "allure_behave_old")
                    }
                }
            }
        }
    }
}
In this case, you have a stage called MultiJob like stage with the sub-stages Parallel Test and Parallel Test 2, just like in your MultiJob. The difference is that both of those sub-stages are executed in parallel.
To trigger other jobs from inside the pipeline job, use the build step:
build(job: "job-name")
Or, if you need to run it with parameters, add the parameters option to build():
build(job: "${JOB_NAME}", parameters: [string(name: 'ENVNAME', value: 'EXAMPLE_STR_PARAM')])
Output:
Running on Jenkins in /var/jenkins_home/workspace/Dummy_pipeline
[Pipeline] {
[Pipeline] stage
[Pipeline] { (MultiJob like stage)
[Pipeline] parallel
[Pipeline] { (Branch: Parallel Test)
[Pipeline] { (Branch: Parallel Test 2)
[Pipeline] stage
[Pipeline] { (Parallel Test)
[Pipeline] stage
[Pipeline] { (Parallel Test 2)
[Pipeline] echo
Here trigger job: allure_behave. Triggered at time:
[Pipeline] sh
[Pipeline] echo
Here trigger job: allure_behave_new. Triggered at time:
[Pipeline] sh
+ date -u
Thu Nov 22 18:48:56 UTC 2018
+ date -u
Thu Nov 22 18:48:56 UTC 2018
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] echo
Here trigger job: allure_behave_old. Triggered at time:
[Pipeline] sh
+ date -u
Thu Nov 22 18:48:56 UTC 2018
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Is this alternative valid for your use case?
Regards

Related

Jenkins pipeline with parallel stages causes "process apparently never started" error

I'm trying to set up a Jenkins pipeline (using the declarative syntax) that runs unit and feature tests on two separate, on-demand AWS EC2 instances. The pipeline works perfectly when run on a single instance and without the parallel stages. As soon as I switch to parallel stages, any shell script fails with this cryptic message:
process apparently never started in
/home/admin/workspace/GSWebRuby_Test#tmp/durable-b0d8c4b4 (running
Jenkins temporarily with
-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true
might make the problem clearer)
I've googled extensively and came across several bug reports about the Durable Task plugin, which appears to be responsible for this message. I'm using the latest version of the plugin, v1.33, and none of the reported problems seem to apply to my case, e.g. failures on unusual architectures or when running Docker containers. I've also downgraded and re-upgraded the plugin (toggling between versions 1.30 and 1.33). Also, to reiterate, sh commands work without issue when I don't use the parallel stages.
I've created a simplified pipeline to debug the problem. Note that the shell commands are also simple, e.g. "env | sort" or "pwd".
pipeline {
    agent none
    environment {
        DB_USER = credentials('db-user')
        DB_PASS = credentials('db-pass')
    }
    stages {
        stage('Setup') {
            failFast false
            parallel {
                stage('foo') {
                    agent {
                        label 'jenkins-slave-ondemand'
                    }
                    steps {
                        echo 'In stage foo'
                        sh 'env|sort'
                    }
                }
                stage('bar') {
                    agent {
                        label 'jenkins-slave-ondemand'
                    }
                    steps {
                        echo 'In stage bar'
                        sh 'pwd'
                    }
                }
            }
        }
    }
}
This is the console output:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] withCredentials
Masking supported pattern matches of $DB_PASS or $DB_USER
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Setup)
[Pipeline] parallel
[Pipeline] { (Branch: foo)
[Pipeline] { (Branch: bar)
[Pipeline] stage
[Pipeline] { (foo)
[Pipeline] stage
[Pipeline] { (bar)
[Pipeline] node
[Pipeline] node
Still waiting to schedule task
All nodes of label ‘jenkins-slave-ondemand’ are offline
Still waiting to schedule task
All nodes of label ‘jenkins-slave-ondemand’ are offline
Running on EC2 (Jenkins AWS EC2) - Jenkins slave (i-0982299c572100c71) in /home/admin/workspace/GSWebRuby_Test
[Pipeline] {
[Pipeline] echo
In stage foo
[Pipeline] sh
Running on EC2 (Jenkins AWS EC2) - Jenkins slave (i-092ecac8e6c257270) in /home/admin/workspace/GSWebRuby_Test
[Pipeline] {
[Pipeline] echo
In stage bar
[Pipeline] sh
process apparently never started in /home/admin/workspace/GSWebRuby_Test#tmp/durable-b0d8c4b4
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch foo
process apparently never started in /home/admin/workspace/GSWebRuby_Test#tmp/durable-b6cfcff9
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch bar
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] End of Pipeline
ERROR: script returned exit code -2
Finished: FAILURE
Am I doing something wrong in the way I've set up the pipeline? Any pointers would be greatly appreciated.
Edit:
After setting -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true in JENKINS_JAVA_OPTIONS, I see this additional output:
In stage bar
[Pipeline] sh
nohup: failed to run command 'sh': No such file or directory
process apparently never started in /home/admin/workspace/GSWebRuby_Test#tmp/durable-099a2e56

Jenkins Pipeline: aborting an input in a stage does not trigger the aborted post action of that stage

My purpose is simple: I just want to run some post actions in a stage when the user clicks the 'Abort' button in that stage's input step. I've read some docs from jenkins.io and found there seems to be an implicit way of doing this using the post directive, so I made the simple tests below.
The first is this:
pipeline {
    agent any
    stages {
        stage('test') {
            input {
                message 'Proceed?'
                ok 'yes'
                submitter 'admin'
            }
            steps {
                echo "helloworld"
            }
            post {
                aborted {
                    echo "stage test has been aborted"
                }
            }
        }
    }
    post {
        aborted {
            echo "pipeline has been aborted"
        }
    }
}
If I click the Abort button, the log output only shows:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (test)
[Pipeline] input
Proceed?
yes or Abort
Aborted by admin
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] echo
pipeline has been aborted
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Rejected by SYSTEM
Finished: ABORTED
which means aborting the input only triggers the pipeline-level post section, not the one within that stage.
Then I tried another one:
pipeline {
    agent any
    stages {
        stage('test') {
            steps {
                sh "sleep 15"
            }
            post {
                aborted {
                    echo "stage test has been aborted"
                }
            }
        }
    }
    post {
        aborted {
            echo "pipeline has been aborted"
        }
    }
}
I abort this job within 15 seconds, and the output shows:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (test)
[Pipeline] sh
+ sleep 15
Sending interrupt signal to process
Aborted by admin
Terminated
script returned exit code 143
Post stage
[Pipeline] echo
stage test has been aborted
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] echo
pipeline has been aborted
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: ABORTED
This means the post aborted action within a stage can be triggered.
How can I do some post actions when aborting an input step within a stage, rather than the whole pipeline?
I think I've figured out the difference between the above two examples and how to solve this problem myself :).
It seems the post actions of a stage are only triggered by the result of the related steps section, not by the input directive above it. So I made a small change and it works: put the input inside a script block within the steps section.
pipeline {
    agent any
    stages {
        stage('test') {
            steps {
                script {
                    input message: 'Proceed?', ok: 'Yes', submitter: 'admin'
                }
                echo "helloworld"
            }
            post {
                aborted {
                    echo "test stage has been aborted"
                }
            }
        }
    }
    post {
        aborted {
            echo "pipeline has been aborted"
        }
    }
}
When I click the Abort button, the output log is:
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/pipeline-demo
[Pipeline] {
[Pipeline] stage
[Pipeline] { (test)
[Pipeline] script
[Pipeline] {
[Pipeline] input
Proceed?
Yes or Abort
[Pipeline] }
[Pipeline] // script
Post stage
[Pipeline] echo
test stage has been aborted
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] echo
pipeline has been aborted
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Rejected by admin
Finished: ABORTED
which satisfies me :). I can now compose a complex pipeline with user confirmation and automatic rollback. Hope it helps you too.
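One possible extension, in case nobody answers the prompt at all: wrapping the input in a timeout step aborts the stage automatically and should trigger the same aborted post action. A sketch of the steps section only; the 30-minute limit is an arbitrary choice:
steps {
    script {
        // if nobody responds within the limit, the input is aborted,
        // the stage result becomes ABORTED, and the aborted post block runs
        timeout(time: 30, unit: 'MINUTES') {
            input message: 'Proceed?', ok: 'Yes', submitter: 'admin'
        }
    }
    echo "helloworld"
}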

Jenkins pipeline not building remotely

I am trying to run a pipeline job from the master on my other slave.
The pipeline is like this:
pipeline {
    agent {
        label "virtual"
    }
    stages {
        stage("test one") {
            steps {
                echo " test test test"
            }
        }
        stage("test two") {
            steps {
                echo " testttttttttt "
            }
        }
    }
}
The syntax produces no errors, but the job doesn't build on my slave server.
However, when I run a freestyle job with "Restrict where this project can be run" set to that label and an "Execute shell" step of echo "test test", it is built on my slave server.
What is wrong with my pipeline? Am I missing something?
After the build:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on virtual in /home/virtual/jenkins/workspace/demoo
[Pipeline] {
[Pipeline] stage
[Pipeline] { (test one)
[Pipeline] echo
test test test
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (test two)
[Pipeline] echo
testttttttttt
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Add the path you want in the Remote root directory field of the node configuration (screenshot omitted).
The build already works the way you set it up; the steps are executed on the slave. If you add something like cloning a repository to your steps, your workspace directory will be created.
Pipeline and freestyle jobs behave differently here: a freestyle job creates the workspace directory as soon as it runs for the first time, while a pipeline job creates the directory only when it actually needs it.
I created a simple Pipeline:
pipeline {
    agent {
        label "linux"
    }
    stages {
        stage("test one") {
            steps {
                sh "echo 'test test test' > text.txt"
            }
        }
    }
}
I converted your echo to an sh command because my slave is a Linux slave. The sh step creates a text.txt file. As soon as I run this job, the directory is created:
[<user>@<server> test-pipeline]$ pwd
/var/lib/jenkins/workspace/test-pipeline
[<user>@<server> test-pipeline]$ ls -l
total 4
-rw-r----- 1 <user> <group> 15 Oct 7 16:49 text.txt

Why is the previous stage being called again while executing the next stage in a Jenkinsfile?

I am observing that the previous stage is being called again while executing the next stage. I'm not sure what is wrong with my Jenkinsfile.
I followed this documentation:
https://jenkins.io/doc/book/pipeline/syntax/#declarative-pipeline
pipeline {
    agent none
    options {
        gitLabConnection('MY_CONNECTION')
    }
    stages {
        stage('scm_checkout') {
            agent {
                label 'win_64'
            }
            steps {
                deleteDir()
                checkout([$class: 'GitSCM', branches: [[name: '*/master']]])
                bat 'python -u repo/Jenkins_Scripts/createZip.py'
            }
        }
        stage('scm_build') {
            agent {
                label 'win_64'
            }
            steps {
                bat 'python -u repo/Jenkins_Scripts/build.py'
            }
        }
    } // end of stages
}
Output
[Pipeline] stage
[Pipeline] { (scm_checkout)
[Pipeline] node
Running on xxxxxx in C:\jennew\workspace\PCQG-Pipeline
[Pipeline] {
[Pipeline] checkout
> git rev-parse --is-inside-work-tree # timeout=10
[Pipeline] withEnv
[Pipeline] {
[Pipeline] deleteDir
[Pipeline] checkout
Cloning the remote Git repository
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (scm_build)
[Pipeline] node
Running on xxxxxx in C:\jennew\workspace\PCQG-Pipeline
[Pipeline] {
[Pipeline] checkout
Cloning the remote Git repository
Likewise, it clones the repository again and again with every stage. I'm not sure where I am making a mistake.
I think what's happening here is slightly confusing. This is a Jenkinsfile, right? So each stage-level agent does a checkout of its own by default; the stages are not being called from each other.
To turn that off, add this options directive (at the pipeline or stage level):
options { skipDefaultCheckout() }
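Applied to the pipeline from the question, it would look something like this (a sketch; with the implicit checkout suppressed, the explicit checkout step in scm_checkout remains the only clone):
pipeline {
    agent none
    options {
        gitLabConnection('MY_CONNECTION')
        // suppress the implicit 'Declarative: Checkout SCM' that
        // otherwise runs on every stage-level agent
        skipDefaultCheckout()
    }
    stages {
        // ... scm_checkout and scm_build stages exactly as before ...
    }
}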

Aborted jenkins pipeline job continues running later stages

I'm on the latest version of Jenkins and the pipeline plugins, and I have a Jenkins declarative pipeline job configured as follows:
pipeline {
    agent {
        label 'default'
    }
    stages {
        stage('Prepare') {
            steps {
                dir('foo') {
                    git ...
                }
            }
        }
        stage('Temp1') {
            steps {
                sh script: 'dotnet run ...'
                echo 'Temp1'
            }
        }
        stage('Temp2') {
            steps {
                echo 'Temp2'
            }
        }
    }
}
If I abort the build during the sh script in the Temp1 stage, my expectation is that neither the rest of the Temp1 stage nor the Temp2 stage would run. However, when I abort it, it stops the sh script, then runs the rest of the Temp1 stage and continues on to run the Temp2 stage as well!
Here is the log:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Running on ...
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Prepare)
[Pipeline] dir
Running in ...
[Pipeline] {
[Pipeline] git
Fetching changes from the remote Git repository
...
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Temp1)
[Pipeline] sh
+ dotnet run ...
Aborted by ...
Sending interrupt signal to process
[Pipeline] echo
Temp1
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Temp2)
[Pipeline] echo
Temp2
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: ABORTED
How do I get the build to actually abort?
This ended up being a dotnet issue, where it returns 0 as the exit code when killed by SIGTERM. https://github.com/dotnet/coreclr/issues/21243 was resolved several days ago but has not been released yet.
As a workaround, you can do something like this:
public static int Main(string[] args)
{
    Environment.ExitCode = 1;
    // Do stuff
    return 0;
}
If Main completes successfully, it will return an exit code of 0. If it receives a SIGTERM, it will return the value in Environment.ExitCode.
