Matrix pipeline stuck waiting to schedule a build - jenkins

I have a matrix pipeline with an agent label that has 4 nodes attached to it. When I try to run my pipeline, it schedules the job on the 4 nodes as I wanted, but the jobs never end up actually running. They remain scheduled and display the message "Still waiting to schedule task Waiting for next available executor".
How can I fix this issue? I have scoured Stack Overflow and have yet to come up with any sort of solution.
Update: I am including the pipeline along with a screenshot of some of the configuration.
pipeline {
    agent none
    stages {
        stage('Tests') {
            matrix {
                agent { label 'UH60V && SCADE' }
                axes {
                    axis {
                        name 'CHOICE'
                        values 'ADC', 'CDU_C', 'DCU', 'DISP_A_REND', 'DISP_PROC', 'EGI', 'FD', 'FLT',
                               'FMS_C', 'IFF', 'liblO', 'libNGC', 'libSC', 'MISCMP_MP', 'MISCMP_GP',
                               'NAV_MGR', 'RADALT', 'SYS', 'SYSIO1553', 'SYSIO429', 'SYSRED', 'TACAN',
                               'VOR_ILS', 'VPA', 'WAAS', 'WCA'
                    }
                }
                stages {
                    stage("Test") {
                        steps {
                            echo "Running ${CHOICE}"
                            build job: "UH60V_SCADE_Suite_Testing", parameters: [
                                string(name: "SCADE_SUITE_TEST_PROJECT", value: CHOICE),
                                string(name: "SCADE_SUITE_TEST_ACTION", value: "all"),
                                string(name: "CLEARCASE_VIEW_ROOT", value: "myview")
                            ]
                        }
                    }
                }
            }
        }
    }
}
Configuration screenshot
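A note on a common cause of this symptom (an editorial observation, since no accepted fix is shown here): each matrix cell holds an executor on a 'UH60V && SCADE' node for as long as its build step waits, and if the downstream UH60V_SCADE_Suite_Testing job is restricted to the same four nodes, the 26 cells can occupy every executor while the downstream builds they spawn queue behind them indefinitely. A minimal sketch of one workaround, assuming you can live without per-cell downstream results, is to fire and forget with wait: false:

    stage("Test") {
        steps {
            echo "Running ${CHOICE}"
            // wait: false returns immediately instead of blocking this
            // executor until UH60V_SCADE_Suite_Testing finishes, so the
            // downstream build can be scheduled on the same label pool.
            build job: "UH60V_SCADE_Suite_Testing", wait: false, parameters: [
                string(name: "SCADE_SUITE_TEST_PROJECT", value: CHOICE),
                string(name: "SCADE_SUITE_TEST_ACTION", value: "all"),
                string(name: "CLEARCASE_VIEW_ROOT", value: "myview")
            ]
        }
    }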

Related

Using the same node / instance on all build jobs in a Jenkins Pipeline

pipeline {
    agent { label 'linux' }
    stages {
        stage("verify1") {
            steps {
                script {
                    build(job: "verfiy1", parameters: [string(name: 'verfiy1', value: "${params.verfiy1}")])
                }
            }
        }
        stage("verify2") {
            steps {
                script {
                    build(job: "verfiy2", parameters: [string(name: 'verfiy2', value: "${params.verfiy2}")])
                }
            }
        }
        stage("verify3") {
            steps {
                script {
                    build(job: "verify3", parameters: [string(name: 'verify3', value: "${params.verify3}")])
                }
            }
        }
    }
}
Hello,
Can anyone help me? Right now, the above pipeline builds all 3 jobs successfully, but every single job executes on a new EC2 slave instance instead of the instance where the pipeline started. What I expect is that once the pipeline starts, all the builds in it execute on the same node (EC2 instance).
Thanks in advance
You can pass the upstream job's agent node to the downstream job:
1. Add one more job parameter to accept the node.
2. Pass the upstream job's agent node via env.NODE_NAME when calling the build job step.
// verify 1 job
pipeline {
    agent { label "${params.agentNode}" }
    parameters {
        string(name: "agentNode",
               defaultValue: "<give default value in case it is run directly>")
    }
    // ... stages omitted in the original answer
}
// upstream job
build(job: "verify1", parameters: [
    string(name: 'agentNode', value: "${env.NODE_NAME}"),
    string(name: 'verify3', value: "${params.verify3}")
])
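This works because Jenkins treats every node's own name as an implicit label, so agent { label "${params.agentNode}" } pins the downstream build to exactly the node whose name arrived in env.NODE_NAME.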

How to build pipeline job inside the POST section in Jenkins pipeline

I have a Jenkins pipeline which, among multiple steps, should have a final step that executes regardless of the status of the previous steps. To achieve that, I tried using a post section, which looks like this:
pipeline {
    agent {
        label 'master'
    }
    stages {
        stage('Stage 1') {
            steps {
                build job: 'stage 1 job', parameters: [
                    ...
                ]
            }
        }
        stage('Stage 2') {
            steps {
                build job: 'stage 2 job', parameters: [
                    ...
                ]
            }
        }
    }
    post {
        always {
            build job: "cleanup", parameters: [
                ...
            ]
        }
    }
}
However, I'm getting the following error when trying to execute something like this:
No such DSL method '$' found among steps
Question: Is it even possible to use build job inside a post action? If not, what would be a good alternative for ensuring that the "cleanup" job always executes at the end (regardless of the status of the stages above)?
Yes, it is possible to use the build job step inside a post action. Here is the pipeline script:
pipeline {
    agent any
    stages {
        stage('1') {
            steps {
                script {
                    echo "Hello"
                }
            }
        }
    }
    post {
        always {
            build job: 'schedule-job', parameters: [string(name: 'PLATFORM', value: 'Windows')]
        }
    }
}
In the above example, I have a schedule-job that accepts the parameter PLATFORM, and it will always run, regardless of build status.
Here is the output (screenshot):
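One hedged addition, not part of the original answer: the build step also takes a propagate flag, and with propagate: false a failing cleanup build will not overwrite the pipeline's own result:

    post {
        always {
            // propagate: false runs the cleanup job but keeps a failed
            // 'schedule-job' build from changing this pipeline's status.
            build job: 'schedule-job', propagate: false,
                  parameters: [string(name: 'PLATFORM', value: 'Windows')]
        }
    }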

Jenkins pipeline stuck on build job

I recently created a new Jenkins pipeline that mainly relies on other build jobs. The strange thing is, the 1st stage's job gets triggered, runs successfully, and finishes in the "SUCCESS" state, but the pipeline keeps loading forever after "Scheduling project: run-operation".
Any idea what mistake I made below?
UPDATE 1: removed the params; the advertiser & query values are now hard-coded
pipeline {
    agent {
        node {
            label 'slave'
        }
    }
    stages {
        stage('1') {
            steps {
                script {
                    def buildResult = build job: 'run-operation', parameters: [
                        string(name: 'ADVERTISER', value: 'car'),
                        string(name: 'START_DATE', value: '2019-12-29'),
                        string(name: 'END_DATE', value: '2020-01-11'),
                        string(name: 'QUERY', value: 'weekly')
                    ]
                    def envVar = buildResult.getBuildVariables();
                }
            }
        }
        stage('2') {
            steps {
                script {
                    echo 'track the query operations from above job'
                    def trackResult = build job: 'track-operation', parameters: [
                        string(name: 'OPERATION_NAMES', value: envVar.operationList),
                    ]
                }
            }
        }
        stage('3') {
            steps {
                echo 'move flag'
            }
        }
        stage('callback') {
            steps {
                echo 'for each operation call back url'
            }
        }
    }
}
Console log (despite the job running, the pipeline doesn't seem to know; see log):
Started by user reusable
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/jobs/etl-pipeline/workspace
[Pipeline] {
[Pipeline] stage
[Pipeline] { (1)
[Pipeline] build (Building run-operation)
Scheduling project: run-operation
...
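Separately from the hang, a reader may notice a scoping problem in the posted pipeline (an observation, not the asker's confirmed fix): envVar is declared with def inside stage 1's script block, so stage 2 cannot see it and would fail once the pipeline got that far. The usual workaround is to hoist the variable to Groovy script scope, as in this minimal sketch:

    // Declared outside the pipeline block, so every stage can see it.
    def envVar

    pipeline {
        agent { node { label 'slave' } }
        stages {
            stage('1') {
                steps {
                    script {
                        def buildResult = build job: 'run-operation', parameters: [
                            string(name: 'QUERY', value: 'weekly')
                        ]
                        envVar = buildResult.getBuildVariables()  // note: no 'def'
                    }
                }
            }
            stage('2') {
                steps {
                    script {
                        build job: 'track-operation', parameters: [
                            string(name: 'OPERATION_NAMES', value: envVar.operationList)
                        ]
                    }
                }
            }
        }
    }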

Jenkins Pipeline, downstream job and Agent label

I have a Jenkins Pipeline that executes Job A and Job B. I have 10 agents/nodes on which Job A is executed.
If I specify Agent1 when I build the Pipeline, then Job A should execute on Agent1.
Issue:
The Pipeline runs on Agent1, but Job A gets picked up by any random available agent.
Script:
pipeline {
    agent none
    stages {
        stage('JOB A') {
            agent { label "${machine}" }
            steps {
                build job: 'JOB A', parameters: [a,b,c,d,e,f]
            }
        }
        stage('JOB B') {
            agent { label 'xyz' }
            steps {
                build job: 'JOB B', parameters: [a,b,c,d,e,f,]
            }
        }
    }
}
I'm using a different label for every agent.
Can someone help me understand how and where the Pipeline and the downstream jobs run?
Thanks!
As rightly pointed out by @yong, I had "specified the agent label for the stage, not for JOB A".
So I declared a label parameter in JOB A and passed it downstream via the Pipeline. It now correctly executes on the specified agent.
pipeline {
    agent { label 'master' }
    stages {
        stage('JOB A') {
            steps {
                build job: 'JOB A', parameters: [a, [$class: 'LabelParameterValue', name: 'Agent', label: "${Agent}"], b, c, d]
            }
        }
        stage('JOB B') {
            steps {
                build job: 'JOB B', parameters: [x,y,z]
            }
        }
    }
}
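Note, as an addition to the original answer: LabelParameterValue comes from the NodeLabel Parameter plugin, so that plugin must be installed and JOB A must define a matching node/label parameter named Agent for the value to take effect.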

Repeatable steps in a Jenkins declarative pipeline

I have a Groovy script that calls other jobs to stop and start tasks (see below). I would like to re-use the code inside the steps {} block over and over again. Can I do this without having to repeat the code?
Basically, I want a next stage for another API that I can start or stop, then another, and so on. These are then built with parameters in Jenkins, where radio buttons decide whether to stop or start.
#!/usr/bin/env groovy
pipeline {
    environment {
        containerInstanceIdsToStartOn = "463b8b6f-9388-4fbd-8257-b056e28c0a43"
        region = "eu-west-1"
        cluster = "mis-core-dev"
    }
    agent any
    stages {
        stage('Authentication API (dev)') {
            environment {
                apiName = "authentication_API"
                taskDefinitionFamily = "mis-core-dev-authentication-api"
                taskDefinition = "mis-core-dev-authentication-api"
            }
            steps {
                script {
                    if (params."${apiName}".contains('Stop Task')) {
                        build(job: 'Stop ECS Task (utility)',
                              parameters: [
                                  string(name: 'region', value: params."${region}"),
                                  string(name: 'cluster', value: params."${cluster}"),
                                  string(name: 'family', value: params."${taskDefinitionFamily}")
                              ])
                    }
                    else if (params."${apiName}".contains('Start Task')) {
                        build(job: 'Start ECS Task (utility)',
                              parameters: [
                                  string(name: 'region', value: params."${region}"),
                                  string(name: 'cluster', value: params."${cluster}"),
                                  string(name: 'taskDefinition', value: params."${taskDefinition}"),
                                  string(name: 'containerInstanceIds', value: params."${containerInstanceIdsToStartOn}")
                              ])
                    }
                    else if (params."${apiName}" == null || params."${apiName}" == "") {
                        echo "Did you forget to check a box?"
                    }
                }
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}
It is not possible to share parts of a declarative pipeline. The declarative pipeline DSL is processed in a special way at runtime, and you can't split out individual pieces. You can share some of the logic that runs inside blocks (like the code used inside a script block), but sharing is otherwise limited to the entire pipeline definition itself.
From the Shared Library documentation:
Only entire pipelines can be defined in shared libraries as of this time. This can only be done in vars/*.groovy, and only in a call method. Only one Declarative Pipeline can be executed in a single build, and if you attempt to execute a second one, your build will fail as a result.
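To illustrate the script-block sharing the answer mentions, here is a hedged sketch of a shared-library custom step (the ecsTaskControl name and vars/ path are illustrative, not from the original answer) that wraps the repeated start/stop logic:

    // vars/ecsTaskControl.groovy in a shared library (hypothetical name)
    def call(Map cfg) {
        // cfg.action carries the radio-button value; the other keys mirror
        // the parameters the two utility jobs expect.
        if (cfg.action?.contains('Stop Task')) {
            build(job: 'Stop ECS Task (utility)', parameters: [
                string(name: 'region', value: cfg.region),
                string(name: 'cluster', value: cfg.cluster),
                string(name: 'family', value: cfg.family)
            ])
        } else if (cfg.action?.contains('Start Task')) {
            build(job: 'Start ECS Task (utility)', parameters: [
                string(name: 'region', value: cfg.region),
                string(name: 'cluster', value: cfg.cluster),
                string(name: 'taskDefinition', value: cfg.taskDefinition),
                string(name: 'containerInstanceIds', value: cfg.containerInstanceIds)
            ])
        } else {
            echo "Did you forget to check a box?"
        }
    }

Each stage's steps block would then reduce to a single call such as ecsTaskControl(action: params."${apiName}", region: env.region, cluster: env.cluster, family: env.taskDefinitionFamily).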
