Jenkins declarative pipeline with multiple slaves - docker

I have a pipeline with multiple stages, some of them are in parallel. Up until now I had a single code block indicating where the job should run.
pipeline {
    triggers { pollSCM('0 0 * * 0') }
    agent {
        dockerfile {
            label 'jenkins-slave'
            filename 'Dockerfile'
        }
    }
    stages {
        stage('1') {
            steps { sh "blah" }
        } // stage
    } // stages
} // pipeline
What I need to do now is run a new stage on a different slave, NOT inside Docker.
I tried adding an agent statement for that stage, but it seems Jenkins still tries to run that stage within a Docker container on the second slave.
stage('test new slave') {
    agent { node { label 'e2e-aws' } }
    steps {
        sh "ifconfig"
    } // steps
} // stage
I get the following error message
13:14:23 unknown flag: --workdir
13:14:23 See 'docker exec --help'.
I tried setting the agent to none for the pipeline and using an agent for every stage, and ran into two issues:
1. My post actions show an error.
2. The stages that contain parallel stages also had an error.
I can't find any examples that are similar to what I am doing.

You can use the node block to select a node to run a particular stage.
pipeline {
    agent any
    stages {
        stage('Init') {
            steps {
                node('master') {
                    echo "Run inside a MASTER"
                }
            }
        }
    }
}
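For the layout described in the question, a minimal sketch (not from the original answer) is to declare agent none at the pipeline level and give each stage its own agent; the labels 'jenkins-slave' and 'e2e-aws' are taken from the question, the rest is illustrative:
pipeline {
    agent none
    triggers { pollSCM('0 0 * * 0') }
    stages {
        stage('1') {
            // runs inside the image built from Dockerfile, on a docker-capable slave
            agent {
                dockerfile {
                    label 'jenkins-slave'
                    filename 'Dockerfile'
                }
            }
            steps { sh "blah" }
        }
        stage('test new slave') {
            // plain node, no Docker involved
            agent { node { label 'e2e-aws' } }
            steps { sh "ifconfig" }
        }
    }
}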

Related

Passing Jenkins env variables between stages on different agents

I've looked at this Pass Artifact or String to upstream job in Jenkins Pipeline and this Pass variables between Jenkins stages and this How do I pass variables between stages in a declarative Jenkins pipeline?, but none of these questions seem to deal with my specific problem.
Basically I have a pipeline consisting of multiple stages, each running on its own agent.
In the first stage I run a shell script that generates two variables. I would like to use these variables in the next stage. The methods I've seen so far only seem to work when passing variables within the same agent.
pipeline {
    agent none
    stages {
        stage("stage 1") {
            agent {
                docker {
                    image 'my_image:latest'
                }
            }
            steps {
                sh ("""
                    export VAR1=foo
                    export VAR2=bar
                """)
            }
        }
        stage("stage 2") {
            agent {
                docker {
                    image 'my_other_image:latest'
                }
            }
            steps {
                sh ('echo "$VAR1 $VAR2"')
                // expecting to see "foo bar" printed here
            }
        }
    }
}
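The thread does not include an answer; one common approach (a sketch, not from the original thread) is to write the values to a file, stash it on the first agent, and unstash and read it on the second. The stash name and file name below are illustrative:
pipeline {
    agent none
    stages {
        stage("stage 1") {
            agent { docker { image 'my_image:latest' } }
            steps {
                // write the values to a file instead of exporting shell variables
                sh 'echo "foo bar" > vars.txt'
                stash name: 'vars', includes: 'vars.txt'
            }
        }
        stage("stage 2") {
            agent { docker { image 'my_other_image:latest' } }
            steps {
                unstash 'vars'   // copy the file into this agent's workspace
                script {
                    def parts = readFile('vars.txt').trim().split(' ')
                    env.VAR1 = parts[0]
                    env.VAR2 = parts[1]
                }
                sh 'echo "$VAR1 $VAR2"'   // prints "foo bar"
            }
        }
    }
}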

Force complete Jenkins job

I am running a Jenkins pipeline script.
From this pipeline we run one script that keeps on producing output, so the process never ends.
Because of this my Jenkins job never completes, but since all the steps have already finished I want to somehow mark the job complete after, say, 10 minutes.
Is there any way to complete the Jenkins job from the pipeline script after, say, 10 minutes?
Below is my pipeline script; runspbt.sh is the never-ending script.
pipeline {
    agent { label 'Executionmachine3089' }
    stages {
        stage('Run Script') {
            steps {
                bat "ssh rxx11pp#G0XXXX209 /home/rxx11pp/runspbt.sh"
            }
        }
    }
}
You can wait and then check for some success output (or maybe a marker file in the workspace):
steps {
    sh "sleep 600s"
    script {
        if (testSuccessMethod) {
            currentBuild.result = "SUCCESS"
            return
        } else {
            currentBuild.result = "FAILURE"
            return
        }
    }
}
You may want to use Linux's timeout command:
pipeline {
    agent { label 'Executionmachine3089' }
    stages {
        stage('Run Script') {
            steps {
                bat "ssh rxx11pp#G0XXXX209 timeout 90 /home/rxx11pp/runspbt.sh"
            }
        }
    }
}
This will start the script and signal it to exit after 90 seconds.
Note this is not related to Jenkins.
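Jenkins also has a built-in timeout step that enforces the limit on the Jenkins side. A minimal sketch (not from the original answers), assuming a 10 minute cap and that the build should stay green when the limit is hit; depending on the Jenkins version, the interruption may still mark the build ABORTED:
pipeline {
    agent { label 'Executionmachine3089' }
    stages {
        stage('Run Script') {
            steps {
                script {
                    try {
                        // abort the enclosed step after 10 minutes
                        timeout(time: 10, unit: 'MINUTES') {
                            bat "ssh rxx11pp#G0XXXX209 /home/rxx11pp/runspbt.sh"
                        }
                    } catch (err) {
                        // the timeout throws when it fires; keep the build marked successful
                        currentBuild.result = 'SUCCESS'
                    }
                }
            }
        }
    }
}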

Jenkins using docker agent with environment declarative pipeline

I would like to install Maven and npm via a Docker agent using a Jenkins declarative pipeline. But when I use the script below, Jenkins throws the error shown below. It might be caused by using agent none, but how can I use a node with a Docker agent in a declarative pipeline?
ERROR: Attempted to execute a step that requires a node context while
‘agent none’ was specified. Be sure to specify your own ‘node { ... }’
blocks when using ‘agent none’.
I tried setting agent any, but this time I received the error:
Still waiting to schedule task
Waiting for next available executor
pipeline {
    agent none
    // environment {
    //     proxy = https://
    //     stable_revision = sh(script: 'curl -H "Authorization: Basic $base64encoded"
    // }
    stages {
        stage('Build') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
                echo "$apigeeUsername"
                echo "Stable Revision: ${env.stable_revision}"
            }
        }
        stage('Test') {
            agent { docker { image 'maven:3-alpine' image 'node:8.12.0' } }
            environment {
                HOME = '.'
            }
            steps {
                script {
                    try {
                        sh 'npm install'
                        sh 'node --version'
                        //sh 'npm test/unit/*.js'
                    } catch (e) {
                        throw e
                    }
                }
            }
        }
        // stage('Policy-Code Analysis') {
        //     steps {
        //         sh "npm install -g apigeelint"
        //         sh "apigelint -s wiservice_api_v1/apiproxy/ -f codeframe.js"
        //     }
        // }
        stage('Promotion') {
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
        stage('Deployment') {
            steps {
                sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
                //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
            }
        }
    }
}
Basically your error message tells you everything you need to know:
ERROR: Attempted to execute a step that requires a node context while
‘agent none’ was specified. Be sure to specify your own ‘node { ... }’
blocks when using ‘agent none’.
So what is the issue here? You use agent none for your pipeline, which means you do not specify an agent for all stages. An agent executes a specific stage; if a stage has no agent, it cannot be executed, and that is your issue here.
The following two stages have no agent, which means there is no docker container / server or whatever where they can be executed:
stage('Promotion') {
    steps {
        timeout(time: 2, unit: 'DAYS') {
            input 'Do you want to Approve?'
        }
    }
}
stage('Deployment') {
    steps {
        sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
        //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
    }
}
So you have to add agent { ... } to both stages separately, or use a global agent like the following and remove the agents from your stages:
pipeline {
    agent {
        docker { image 'maven:3-alpine' }
    } ...
For further information see guide to set up master and agent machines or distributed jenkins builds or the official documentation.
I think you meant to add agent any instead of agent none, because each stage requires at least one agent (either declared at the top for the pipeline or per stage).
Also, I see some more issues.
Your Test stage specifies two images for the same stage: agent { docker { image 'maven:3-alpine' image 'node:8.12.0' } }, although your stage is executing only npm commands. I believe only one of the images will actually be downloaded.
To clarify a bit more on mkemmerz's answer: your Promotion stage is designed correctly. If you plan to have an input step in the pipeline, do not add an agent for the whole pipeline, because input steps block the executor context. See this link https://jenkins.io/blog/2018/04/09/whats-in-declarative/
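Putting the advice together, a simplified sketch (labels and images taken from the question, the mvn flags trimmed for brevity) keeps agent none at the pipeline level, gives the build/test/deploy stages their own Docker agents, and leaves the input stage without an agent so it does not hold an executor:
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { docker { image 'maven:3-alpine' } }
            steps { sh 'mvn --version' }
        }
        stage('Test') {
            agent { docker { image 'node:8.12.0' } }
            environment { HOME = '.' }
            steps {
                sh 'npm install'
                sh 'node --version'
            }
        }
        stage('Promotion') {
            // no agent here: the input step just waits and does not tie up an executor
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
        stage('Deployment') {
            agent { docker { image 'maven:3-alpine' } }
            steps {
                sh "mvn -f wiservice_api_v1/pom.xml install -Ptest"
            }
        }
    }
}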

How to run Jenkins scripted pipeline job on free slave?

I want to run a Jenkins scripted pipeline job on a specified slave at a moment when no other job is running on it.
After my job has started, no other jobs should run on this slave; they will have to wait until my job finishes.
All the tutorials I found let me run the job on the node when it is free, but did not prevent other jobs from being launched on this node.
Could you tell me how I can do this?
Because Pipeline has two syntaxes, there are two ways to achieve that. For scripted pipeline, please check the second one.
Declarative
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'slave-node' }
            steps {
                echo 'Building..'
                sh '''
                '''
            }
        }
    }
    post {
        success {
            echo 'This will run only if successful'
        }
    }
}
Scripted
node('your-node') {
    try {
        stage 'Build'
        node('build-run-on-this-node') {
            sh ""
        }
    } catch (Exception e) {
        throw e
    }
}
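Note that neither snippet by itself keeps other jobs off the slave while yours runs. Two common options (not part of the original answer) are to configure the node with a single executor, or, if only certain jobs must be kept apart, to have them all take the same lock via the Lockable Resources plugin. A scripted sketch, assuming a resource named 'exclusive-slave' has been defined in Jenkins:
node('your-node') {
    // every job that must not overlap requests the same named resource
    lock(resource: 'exclusive-slave') {
        stage('Build') {
            sh 'echo building while holding the lock'
        }
    }
}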

Jenkins pipeline step happens on master instead of slave

I am getting started with Jenkins Pipeline. My pipeline has one simple step that is supposed to run on a different agent - like the "Restrict where this project can be run" option.
My problem is that it is running on master.
They are both Windows machines.
Here's my Jenkinsfile:
pipeline {
    agent { label 'myLabel' }
    stages {
        stage('Stage 1') {
            steps {
                echo pwd()
                writeFile(file: 'test.txt', text: 'Hello, World!')
            }
        }
    }
}
pwd() prints C:\Jenkins\workspace\<pipeline-name>_<branch-name>-Q762JIVOIJUFQ7LFSVKZOY5LVEW5D3TLHZX3UDJU5FWYJSNVGV4Q.
This folder is on master. This is confirmed by the presence of the test.txt file.
I expected test.txt to be created on the slave agent.
Note 1
I can confirm that the pipeline finds the agent because the logs contain:
[Pipeline] node
Running on MyAgent in C:\Jenkins\workspace\<pipeline-name>_<branch-name>-Q762JIVOIJUFQ7LFSVKZOY5LVEW5D3TLHZX3UDJU5FWYJSNVGV4Q
But this folder does not exist on MyAgent, which seems related to the problem.
Note 2
This question is similar to Jenkins pipeline not honoring agent specification, except that I'm not using the build instruction, so I don't think the answer applies.
Note 3
pipeline {
    agent any
    stages {
        stage('Stage 1') {
            steps {
                echo "${env.NODE_NAME}"
            }
        }
        stage('Stage 2') {
            agent { label 'MyLabel' }
            steps {
                echo "${env.NODE_NAME}"
            }
        }
    }
}
This prints the expected output - master and MyAgent. If this is correct, then why is the workspace located in a different folder on master instead of being on MyAgent?
Here is an example:
pipeline {
    agent none
    stages {
        stage('Example Build') {
            agent { label 'build-label' }
            steps {
                sh 'env'
                sh 'sleep 8'
            }
        }
        stage('Example Test') {
            agent { label 'deploy-label' }
            steps {
                sh 'env'
                sh 'sleep 5'
            }
        }
    }
}
I faced a similar issue, and the following pipeline code worked for me (i.e. the file got created on the Windows slave instead of the Windows master):
pipeline {
    agent none
    stages {
        stage("Stage 1") {
            steps {
                node('myLabel') {
                    script {
                        writeFile(file: 'test.txt', text: 'Hello World!', encoding: 'UTF-8')
                    }
                    // This should print the file content on the slave (Hello World!)
                    bat "type test.txt"
                }
            }
        }
    }
}
I was debugging a completely unrelated issue when this fact was thrown in my face. Apparently the pipeline is processed on the built-in node (previously known as the master node), with the steps being forwarded to the agent.
So even though echo runs on the agent, pwd() will run on the built-in node. You can do sh 'pwd' to get the path on the agent.
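A quick way to see the difference described above (a sketch, not from the original answers; since both machines here are Windows, bat 'cd' plays the role of sh 'pwd'):
pipeline {
    agent { label 'myLabel' }
    stages {
        stage('Stage 1') {
            steps {
                echo pwd()   // value produced by the pipeline engine
                bat 'cd'     // the Windows agent's shell prints its own working directory
            }
        }
    }
}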
