Jenkins - not able to run pipelines from Jenkinsfile - jenkins
I have a Jenkinsfile where the build and tests run on the same Slave.
My requirement is that the build must be on one Slave (say A) and the tests must run on another slave (say B).
I just set up slave B, and I can see both slaves A and B under Jenkins -> Manage Nodes.
The problem is that the build stage works successfully, but my prepare_test and test stages are not executed on slave B.
Below are the problems I see:
1.) I get the following error after the build stage is successful:
"java.lang.NoSuchMethodError: No such DSL method 'agent' found among steps [archive, bat, build, catchError, checkout, deleteDir, dir, dockerFingerprintFrom, dockerFingerprintRun, dockerNode, echo, emailext, emailextrecipients, envVarsForTool, error, fileExists, findBuildScans, getContext, git, input, isUnix, junit, library, libraryResource, load, lock, mail, milestone, node, parallel, powershell, properties, publishHTML, pwd, readFile, readTrusted, resolveScm, retry, script, sh, sleep, stage, stash, step, svn, timeout, timestamps, tm, tool, unarchive, unstable, unstash, validateDeclarativePipeline, waitUntil, warnError, withContext, withCredentials, withDockerContainer, withDockerRegistry, withDockerServer, withEnv, wrap, writeFile, ws] or symbols [all, allOf, always, ant, antFromApache, antOutcome, antTarget, any, anyOf, apiToken, architecture, archiveArtifacts, artifactManager, attach, authorizationMatrix, batchFile, booleanParam, branch, brokenBuildSuspects, brokenTestsSuspects, buildButton, buildDiscarder, buildTimestamp, buildTimestamp
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java
at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
"
2.) I do not see the stages 'prepare_test' and 'test' for my branch, while I can see the build stage.
Here is my Jenkinsfile:
pipeline {
    agent none
    options {
        skipDefaultCheckout true
    }
    stages {
        stage('build') {
            agent {
                docker {
                    image 'XYZ'
                    args '-v /etc/localtime:/etc/localtime:ro'
                }
            }
            options { skipDefaultCheckout(true) }
            steps {
                echo '########################################## Building #########################################'
                // trigger the build
                sh 'scm-src/build.sh all-projects'
            }
        }
    }
}
pipeline {
    agent {
        label 'laptop-hp'
    }
    stages {
        stage('prepare_test') {
            agent {
                docker {
                    image 'ABC'
                    args '-v /home/jenkins/.ssh/:/home/jenkins/.ssh:ro -v /etc/localtime:/etc/localtime:ro'
                }
            }
            options { skipDefaultCheckout(true) }
            steps {
                echo '####################################### Prepare Test Environment ############################'
                sh 'scm-src/test.sh prepare'
            }
        }
        stage('test') {
            agent {
                docker {
                    image 'ABC'
                    args '-v /home/jenkins/.ssh/:/home/jenkins/.ssh:ro -v /etc/localtime:/etc/localtime:ro'
                }
            }
            options { skipDefaultCheckout(true) }
            steps {
                echo '########################################## Testing ##########################################'
                sh 'scm-src/test.sh run'
            }
        }
    }
}
The name of my slave B is 'laptop-hp', as seen in the Jenkinsfile.
Is there a problem with the Jenkinsfile, or am I missing some settings on my slave B?
Regards
kdy
You cannot have more than one pipeline {} block in a Declarative Pipeline. The correct approach is to append your test stages right after the build stage, inside the single stages {} block. Then set the label parameter inside each stage's docker {} agent so that the stage runs on the corresponding slave. Also, it is sufficient to set the skipDefaultCheckout true option once at the top level of the pipeline.
Your pipeline should now look like:
pipeline {
    agent none
    options {
        skipDefaultCheckout true
    }
    stages {
        stage('build') {
            agent {
                docker {
                    image 'XYZ'
                    label 'slave-A'
                    args '-v /etc/localtime:/etc/localtime:ro'
                }
            }
            steps {
                echo '########################################## Building #########################################'
                // trigger the build
                sh 'scm-src/build.sh all-projects'
            }
        }
        stage('prepare_test') {
            agent {
                docker {
                    image 'ABC'
                    label 'laptop-hp'
                    args '-v /home/jenkins/.ssh/:/home/jenkins/.ssh:ro -v /etc/localtime:/etc/localtime:ro'
                }
            }
            steps {
                echo '####################################### Prepare Test Environment ############################'
                sh 'scm-src/test.sh prepare'
            }
        }
        stage('test') {
            agent {
                docker {
                    image 'ABC'
                    label 'laptop-hp'
                    args '-v /home/jenkins/.ssh/:/home/jenkins/.ssh:ro -v /etc/localtime:/etc/localtime:ro'
                }
            }
            steps {
                echo '########################################## Testing ##########################################'
                sh 'scm-src/test.sh run'
            }
        }
    }
}
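As a side note, if a stage does not actually need to run inside a Docker container, declarative syntax also accepts a plain label agent on that stage. A minimal sketch reusing the test stage from above:

stage('test') {
    agent { label 'laptop-hp' }
    steps {
        echo '########################################## Testing ##########################################'
        sh 'scm-src/test.sh run'
    }
}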
Related
Jenkins using docker agent with environment declarative pipeline
I would like to install Maven and npm via a Docker agent using a Jenkins declarative pipeline. But when I use the script below, Jenkins throws the error shown here. It might be due to agent none, but how can I use a node with a Docker agent in a declarative pipeline?

ERROR: Attempted to execute a step that requires a node context while 'agent none' was specified. Be sure to specify your own 'node { ... }' blocks when using 'agent none'.

I tried to set agent any, but then I received the error "Still waiting to schedule task Waiting for next available executor".

pipeline {
    agent none
    // environment{ proxy = https://
    // stable_revision = sh(script: 'curl -H "Authorization: Basic $base64encoded"
    // }
    stages {
        stage('Build') {
            agent { docker { image 'maven:3-alpine' } }
            steps {
                sh 'mvn --version'
                echo "$apigeeUsername"
                echo "Stable Revision: ${env.stable_revision}"
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'maven:3-alpine'
                    image 'node:8.12.0'
                }
            }
            environment { HOME = '.' }
            steps {
                script {
                    try {
                        sh 'npm install'
                        sh 'node --version'
                        //sh 'npm test/unit/*.js'
                    } catch (e) {
                        throw e
                    }
                }
            }
        }
        // stage('Policy-Code Analysis') {
        //     steps{
        //         sh "npm install -g apigeelint"
        //         sh "apigelint -s wiservice_api_v1/apiproxy/ -f codeframe.js"
        //     }
        // }
        stage('Promotion') {
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
        stage('Deployment') {
            steps {
                sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
                //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
            }
        }
    }
}
Basically, your error message tells you everything you need to know:

ERROR: Attempted to execute a step that requires a node context while 'agent none' was specified. Be sure to specify your own 'node { ... }' blocks when using 'agent none'.

So what is the issue here? You use agent none for your pipeline, which means you do not specify an agent for all stages. An agent executes a specific stage; if a stage has no agent, it cannot be executed, and that is your issue here. The following two stages have no agent, which means there is no Docker container / server or whatever on which they can be executed:

stage('Promotion'){
    steps{
        timeout(time: 2, unit: 'DAYS') {
            input 'Do you want to Approve?'
        }
    }
}
stage('Deployment'){
    steps{
        sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
        //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
    }
}

So you have to add agent { ... } to both stages separately, or use a global agent like the following and remove the agent from your stages:

pipeline {
    agent { docker { image 'maven:3-alpine' } }
    ...

For further information, see the guide to setting up master and agent machines, distributed Jenkins builds, or the official documentation.
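As a minimal sketch of the per-stage variant (assuming, purely for illustration, that the maven:3-alpine image used elsewhere in the question is also suitable for the deployment step), the Deployment stage could declare its own agent so that its sh step has a node context:

stage('Deployment') {
    // illustrative per-stage agent; use an image or label that actually provides Maven
    agent { docker { image 'maven:3-alpine' } }
    steps {
        sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
    }
}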
I think you meant to use agent any instead of agent none, because each stage requires at least one agent (either declared at the top for the pipeline, or per stage). I also see some more issues. Your Test stage specifies two images for the same stage:

agent {
    docker {
        image 'maven:3-alpine'
        image 'node:8.12.0'
    }
}

although the stage executes only npm commands; I believe only one of the images will be used. To clarify mkemmerz's answer a bit more: your Promotion stage is designed correctly. If you plan to have an input step in the pipeline, do not add a global agent for the pipeline, because input steps block the executor context. See https://jenkins.io/blog/2018/04/09/whats-in-declarative/
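A minimal sketch of that pattern, assuming per-stage Docker agents and keeping the approval stage agent-less so the input step does not hold an executor while it waits:

pipeline {
    agent none   // no global agent, so the waiting input stage does not block an executor
    stages {
        stage('Build') {
            agent { docker { image 'maven:3-alpine' } }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Promotion') {
            // deliberately no agent here; timeout and input do not need a node context
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
        stage('Deployment') {
            agent { docker { image 'maven:3-alpine' } }
            steps {
                // placeholder command; the real deployment step goes here
                sh 'mvn --version'
            }
        }
    }
}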
How to setup sonar scanner in Jenkins Declarative Pipeline
I'm facing a problem implementing the SonarQube scanner for my repository in a Jenkinsfile. I don't know where I should add the SonarQube scanner properties in the Jenkinsfile. I've set up Jenkins locally on my Windows system. The projects are purely based on Python, Ruby & React.

agent {label 'master'}
triggers {
    GenricTrigger ([
        genricVariables: [
            key: 'pr_from_branch',
            value: '$.pullrequest.source.branch.name'],
        [ expressionType: 'JsonPath', regexpFilter: '', defaultValue: ''],
        token: 'test'])
}
options {
    buildDiscarder ( logRotator(numToKeepStr:'5'))
}
stages {
    stage ('Initialize & SonarQube Scan') {
        steps {
            def scannerHome = tool 'sonarScanner';
            withSonarQubeEnv('My SonarQube Server') {
                bat """
                ${scannerHome}/bin/sonar-runner.bat
                pip install -r requirements.txt
                """
            }
        }
    }
    stage('Quality Gate') {
        sleep time: 3000, unit: 'MILLISECONDS'
        timeout(time: 1, unit: 'MINUTES') { // Just in case something goes wrong, pipeline will be killed after a timeout
            def qg = waitForQualityGate() // Reuse taskId previously collected by withSonarQubeEnv
            if (qg.status != 'OK') {
                error "Pipeline aborted due to quality gate failure: ${qg.status}"
            }
        }
    }
    stage ('Smoke Test') {
        steps {
            bat """
            pytest -s -v tests/home/login_test.py
            currentBuild.result = 'SUCCESS'
            """
        }
    }
}

The properties include:

-----------------Sonarqube configuration........................
sonar.projectKey=<*****>
sonar.projectName=<project name>
sonar.projectVersion=1.0
sonar.login=<sonar-login-token>
sonar.sources=src
sonar.exclusions=**/*.doc,**/*.docx,**/*.ipch,/node_modules/,
sonar.host.url=http://<url>/
-----------------Sonar for bitbucket plugin configuration...................
sonar.bitbucket.repoSlug=<project name>
sonar.bitbucket.accountName=<name>
sonar.bitbucket.oauthClientKey=<OAuth_Key>
sonar.bitbucket.oauthClientSecret=<OAuth_secret>
sonar.analysis.mode=issues

I can manually add these properties to a sonar-project.properties file and put that file in my project root, but then the analysis runs locally rather than on the server. To avoid that, I want to add these properties to the Jenkinsfile.
We run the Sonar scanner as a Docker container, but this should give you a fair idea of how to pass your properties in the Jenkinsfile:

stage("Sonar Analysis") {
    sh "docker pull docker.artifactory.company.com/util-sonar-runner:latest"
    withSonarQubeEnv('sonarqube') {
        sh "docker run --rm -v ${workspace}:/opt/spring-service -w /opt/spring-service -e SONAR_HOST_URL=${SONAR_HOST_URL} -e SONAR_AUTH_TOKEN=${SONAR_AUTH_TOKEN} docker.artifactory.company.com/util-sonar-runner:latest /opt/sonar-scanner/bin/sonar-scanner -Dsonar.host.url=${SONAR_HOST_URL} -Dsonar.login=${SONAR_AUTH_TOKEN} -Dsonar.projectKey=spring-service -Dsonar.projectName=spring-service -Dsonar.projectBaseDir=. -Dsonar.sources=./src -Dsonar.java.binaries=./build/classes -Dsonar.junit.reportPaths=./build/test-results/test -Dsonar.jacoco.reportPaths=./build/jacoco/test.exec -Dsonar.exclusions=src/test/java/**/* -Dsonar.fortify.reportPath=fortifyResults-${IMAGE_NAME}.fpr -Dsonar.password="
    }
}
You run the pipeline step like this; the Sonar server properties can be defined under a profile in the pom.xml file:

steps {
    withSonarQubeEnv('SonarQube') {
        sh 'mvn -Psonar -Dsonar.sourceEncoding=UTF-8 org.sonarsource.scanner.maven:sonar-maven-plugin:3.0.2:sonar'
    }
}

The SonarQube scanner needs to be defined in the Jenkins Global Tool Configuration section.
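For a non-Maven project like the Python/React code in the original question, a declarative stage can instead combine the tool step with withSonarQubeEnv. This is only a sketch: it assumes a scanner installation named 'sonarScanner' in Global Tool Configuration and a server entry named 'My SonarQube Server' (names taken from the question), plus a hypothetical project key my-project:

stage('SonarQube Scan') {
    steps {
        withSonarQubeEnv('My SonarQube Server') {
            script {
                // resolve the scanner installation configured under Global Tool Configuration
                def scannerHome = tool 'sonarScanner'
                // analysis properties can be passed as -D arguments instead of a sonar-project.properties file
                bat "${scannerHome}\\bin\\sonar-scanner.bat -Dsonar.projectKey=my-project -Dsonar.sources=src"
            }
        }
    }
}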
Jenkins declarative pipeline with multiple slaves
I have a pipeline with multiple stages, some of them in parallel. Up until now I had a single code block indicating where the job should run:

pipeline {
    triggers {
        pollSCM '0 0 * * 0'
    }
    agent {
        dockerfile {
            label 'jenkins-slave'
            filename 'Dockerfile'
        }
    }
    stages{
        stage('1'){
            steps{
                sh "blah"
            }
        } // stage
    } // stages
} // pipeline

What I need to do now is run a new stage on a different slave, NOT in Docker. I tried adding an agent statement for that stage, but it seems to try to run that stage within a Docker container on the second slave:

stage('test new slave') {
    agent {
        node {
            label 'e2e-aws'
        }
    }
    steps {
        sh "ifconfig"
    } // steps
} // stage

I get the following error message:

13:14:23 unknown flag: --workdir
13:14:23 See 'docker exec --help'.

I tried setting the agent to none for the pipeline and using an agent for every stage, and ran into two issues: 1. my post actions show an error; 2. the stages that contain parallel stages also had an error. I can't find any examples similar to what I am doing.
You can use the node block to select a node on which to run a particular stage:

pipeline {
    agent any
    stages {
        stage('Init') {
            steps {
                node('master') {
                    echo "Run inside a MASTER"
                }
            }
        }
    }
}
How to build docker images using a Declarative Jenkinsfile
I'm new to using Jenkins. I'm trying to automate the production of an image (to be stashed in a repo) using a declarative Jenkinsfile. I find the documentation confusing (at best). Simply put, how can I convert the following scripted example (from the docs) to a declarative Jenkinsfile?

node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.push()
}
You can use scripted pipeline blocks in a declarative pipeline as a workaround:

pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                echo 'Starting to build docker image'
                script {
                    def customImage = docker.build("my-image:${env.BUILD_ID}")
                    customImage.push()
                }
            }
        }
    }
}
I'm using the following approach:

steps {
    withDockerRegistry([ credentialsId: "<CREDENTIALS_ID>", url: "<PRIVATE_REGISTRY_URL>" ]) {
        // the following commands are executed while logged in to the docker registry
        sh 'docker push <image>'
    }
}

Where CREDENTIALS_ID is the key in Jenkins under which you store the credentials to your Docker registry, and PRIVATE_REGISTRY_URL is the URL of your private Docker registry. If you are using Docker Hub, it should be empty.
I cannot recommend the declarative syntax for building a Docker image, because it seems that every important step requires falling back to the old scripted syntax. But if you must, a hybrid approach seems to work. First, a detail about the scm step: when I defined the Jenkins "Pipeline script from SCM" project that fetches my Jenkinsfile with a declarative pipeline from git, Jenkins cloned the repo as the first step in the pipeline even though I did not define an scm step. For the build and push steps, I can only find solutions that are a hybrid of old-style scripted pipeline steps inside the new-style declarative syntax. For example, see gustavoapolinario's work on Medium: https://medium.com/#gustavo.guss/jenkins-building-docker-image-and-sending-to-registry-64b84ea45ee9 which has this hybrid pipeline definition:

pipeline {
    environment {
        registry = "gustavoapolinario/docker-test"
        registryCredential = 'dockerhub'
        dockerImage = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git 'https://github.com/gustavoapolinario/microservices-node-example-todo-frontend.git'
            }
        }
        stage('Building image') {
            steps{
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Deploy Image') {
            steps{
                script {
                    docker.withRegistry( '', registryCredential ) {
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Remove Unused docker image') {
            steps{
                sh "docker rmi $registry:$BUILD_NUMBER"
            }
        }
    }
}

Because the first step here is a clone, I think he built this example as a standalone pipeline project in Jenkins (not a Pipeline script from SCM project).
Jenkins pipeline step happens on master instead of slave
I am getting started with Jenkins Pipeline. My pipeline has one simple step that is supposed to run on a different agent, like the "Restrict where this project can be run" option. My problem is that it is running on master. Both machines are Windows. Here's my Jenkinsfile:

pipeline {
    agent { label 'myLabel' }
    stages {
        stage('Stage 1') {
            steps {
                echo pwd()
                writeFile(file: 'test.txt', text: 'Hello, World!')
            }
        }
    }
}

pwd() prints C:\Jenkins\workspace\<pipeline-name>_<branch-name>-Q762JIVOIJUFQ7LFSVKZOY5LVEW5D3TLHZX3UDJU5FWYJSNVGV4Q. This folder is on master, which is confirmed by the presence of the test.txt file. I expected test.txt to be created on the slave agent.

Note 1: I can confirm that the pipeline finds the agent, because the logs contain:

[Pipeline] node
Running on MyAgent in C:\Jenkins\workspace\<pipeline-name>_<branch-name>-Q762JIVOIJUFQ7LFSVKZOY5LVEW5D3TLHZX3UDJU5FWYJSNVGV4Q

But this folder does not exist on MyAgent, which seems related to the problem.

Note 2: This question is similar to "Jenkins pipeline not honoring agent specification", except that I'm not using the build instruction, so I don't think that answer applies.

Note 3:

pipeline {
    agent any
    stages {
        stage('Stage 1') {
            steps {
                echo "${env.NODE_NAME}"
            }
        }
        stage('Stage 2') {
            agent { label 'MyLabel' }
            steps {
                echo "${env.NODE_NAME}"
            }
        }
    }
}

This prints the expected output - master and MyAgent. If this is correct, then why is the workspace located in a folder on master instead of being on MyAgent?
Here is an example:

pipeline {
    agent none
    stages {
        stage('Example Build') {
            agent { label 'build-label' }
            steps {
                sh 'env'
                sh 'sleep 8'
            }
        }
        stage('Example Test') {
            agent { label 'deploy-label' }
            steps {
                sh 'env'
                sh 'sleep 5'
            }
        }
    }
}
I faced a similar issue, and the following pipeline code worked for me (i.e. the file got created on the Windows slave instead of the Windows master):

pipeline {
    agent none
    stages {
        stage("Stage 1") {
            steps {
                node('myLabel') {
                    script {
                        writeFile(file: 'test.txt', text: 'Hello World!', encoding: 'UTF-8')
                    }
                    // This should print the file content on the slave (Hello World!)
                    bat "type test.txt"
                }
            }
        }
    }
}
I was debugging a completely unrelated issue when this fact was thrown in my face. Apparently the pipeline script itself is processed on the built-in node (previously known as the master node), with the steps being forwarded to the agent. So even though echo runs on the agent, pwd() will run on the built-in node. You can do sh 'pwd' to get the path on the agent.
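A minimal sketch of that check, assuming the 'myLabel' agent from the question (on a Windows agent, bat 'cd' would be the equivalent of sh 'pwd'):

pipeline {
    agent { label 'myLabel' }
    stages {
        stage('Where am I') {
            steps {
                // per the note above, pwd() is evaluated by the pipeline engine on the built-in node
                echo pwd()
                // the shell command itself runs on the agent, so this prints the agent-side path
                sh 'pwd'
            }
        }
    }
}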