I have this Jenkinsfile:
pipeline {
    agent {
        node {
            label 'SERVER'
        }
    }
    stages {
        stage('Notifying Job Start') {
            steps {
                bitbucketStatusNotify buildState: 'INPROGRESS'
            }
        }
        stage('Restore') {
            steps {
                powershell(script: 'dotnet restore', returnStatus: false)
            }
        }
        stage('Build') {
            steps {
                powershell(script: 'dotnet build ./path', returnStatus: false)
            }
        }
        stage('Test') {
            steps {
                powershell(script: 'dotnet test ./path', returnStatus: false)
            }
        }
    }
    post {
        always {
            echo "Finishing build..."
        }
        success {
            bitbucketStatusNotify buildState: 'SUCCESS'
        }
        failure {
            bitbucketStatusNotify buildState: 'FAILED'
        }
    }
}
My goal is to notify Bitbucket with the build status.
This Jenkinsfile works perfectly when I run it from a multibranch pipeline, but when I use it with a simple pipeline, it does not work.
So I went to the Jenkins logs, and the only related entry is...
But according to the plugin docs, these parameters are not mandatory.
My OAuth client is correctly configured on Bitbucket.
And on my Jenkins, the credentials are set up correctly.
What am I doing wrong?
How can I make it work?
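One thing worth trying: in a simple (non-multibranch) pipeline the plugin has no SCM source to infer the repository and commit from, so passing them explicitly might help. A minimal sketch, assuming the Bitbucket Build Status Notifier plugin's optional repoSlug and commitId parameters; the repoSlug value and the use of env.GIT_COMMIT are assumptions that depend on your repository and how the checkout is done:
stage('Notifying Job Start') {
    steps {
        // In a multibranch pipeline these values are inferred from the
        // branch source; in a plain pipeline job they may need to be
        // passed explicitly.
        bitbucketStatusNotify(
            buildState: 'INPROGRESS',
            repoSlug: 'my-repository',   // assumption: your Bitbucket repo slug
            commitId: env.GIT_COMMIT     // assumption: populated by a git checkout step
        )
    }
}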
What I'm trying to achieve:
I'm trying to execute a pipeline script where the SCM (AccuRev) is checked out on 'any' agent and the subsequent stages are executed on that same agent, using its local workspace. The build stage specifically expects the checked-out code to already be available in the workspace that is mapped into the container.
The problem:
When I have more than one agent added to the Jenkins configuration, the SCM step will check out the code on one agent and the build step will then start the container on the other agent, which is a problem because the code was checked out on the first agent.
What works:
Jenkins configured with a single agent/node
pipeline {
    agent none
    stages {
        stage('Checkout') {
            agent any
            steps {
                checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
            }
        }
        stage('Compile') {
            agent {
                docker {
                    image 'ubuntu'
                }
            }
            steps {
                sh '''#!/bin/bash
                    make -j16
                '''
            }
        }
    }
}
What I have tried, but doesn't work:
Jenkins configured with 2 agent(s)/node(s)
pipeline {
    agent {
        docker {
            image 'ubuntu'
        }
    }
    stages {
        stage('Checkout') {
            steps {
                checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
            }
        }
        stage('Compile') {
            steps {
                sh '''#!/bin/bash
                    make -j16
                '''
            }
        }
    }
}
The above doesn't work because it is expecting AccuRev to be installed in the container. I could go this route, but it is not really scalable and will cause issues on containers that are based on an older OS. There are also permission issues within the container.
I also tried adding 'reuseNode true' to the docker agent, as in the below:
pipeline {
    agent none
    stages {
        stage('Checkout') {
            agent any
            steps {
                checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
            }
        }
        stage('Compile') {
            agent {
                docker {
                    image 'ubuntu'
                    reuseNode true
                }
            }
            steps {
                sh '''#!/bin/bash
                    make -j16
                '''
            }
        }
    }
}
I'm somewhat aware of, or have read about, the 'automatic checkout scm' behavior, as in the following, but this is odd because there is no place to define the target stream/branch to check out. That is why I declare a specific stage to handle the SCM checkout. It is possible this would handle the checkout without needing to specify the agent, but I don't understand how to do this.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'cat Jenkinsfile'
            }
        }
    }
}
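For completeness, a minimal sketch of what an explicit checkout scm stage might look like. This assumes the job is one where the scm global variable is populated (multibranch or "Pipeline script from SCM" jobs); the target stream/branch lives in the job's SCM configuration rather than in the Jenkinsfile, which is why there is no place to define it here:
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // 'checkout scm' re-runs whatever checkout the job's SCM
                // configuration defines; the stream/branch is part of that
                // configuration, not the Jenkinsfile.
                checkout scm
            }
        }
        stage('Build') {
            steps {
                sh 'cat Jenkinsfile'
            }
        }
    }
}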
Edit: adding a solution that seems to work, but it needs more testing before I can confirm it.
The following seems to do what I want, executing the checkout stage on 'any' agent and then reusing the same agent to execute the build stage in a container.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
            }
        }
        stage('Compile') {
            agent {
                docker {
                    image 'ubuntu'
                    reuseNode true
                }
            }
            steps {
                sh '''#!/bin/bash
                    make -j16
                '''
            }
        }
    }
}
The below appears to have given me the functionality I needed. The pipeline starts on 'any' agent, allowing the host level to handle the Checkout stage, and 'reuseNode' tells the pipeline to start the container on the same node, where the workspace is located.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
            }
        }
        stage('Compile') {
            agent {
                docker {
                    image 'ubuntu'
                    reuseNode true
                }
            }
            steps {
                sh '''#!/bin/bash
                    make -j16
                '''
            }
        }
    }
}
I'm running our Maven project on a Jenkins server with multiple stages inside the pipeline.
Every time I decide that a branch's test run does not need to continue and click Abort in the Jenkins UI, I have to repeat this many times until the pipeline really stops.
I guess our Jenkinsfile does not really pick up that the job was aborted, so I need to abort every stage to reach the end.
Is there a way to help Jenkins get out of the pipeline?
For example, a variable I can check?
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        if (!currentBuild.isAborted) {
            stage('Unit Tests') {
                steps {
                    echo 'Unit Testing'
                }
            }
        }
        if (!currentBuild.isAborted) {
            stage('Deploy') {
                steps {
                    echo 'Deploying'
                }
            }
        }
        if (!currentBuild.isAborted) {
            stage('Backend Integration Tests') {
                steps {
                    echo 'Backend Int Tests'
                }
            }
        }
        if (!currentBuild.isAborted) {
            stage('Frontend Integration Tests') {
                steps {
                    echo 'Deploying....'
                }
            }
        }
        // done
    }
}
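A sketch of how that guard could be expressed in valid declarative syntax: bare if statements are not allowed between stages, but a when directive per stage plays the same role. This assumes checking currentBuild.result (as far as I know there is no currentBuild.isAborted property; the result is set to 'ABORTED' once Jenkins registers the abort):
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Unit Tests') {
            // Declarative has no bare 'if' between stages; a 'when'
            // directive with an expression is the equivalent. This skips
            // the stage once the build result is no longer successful.
            when {
                expression { currentBuild.result == null || currentBuild.result == 'SUCCESS' }
            }
            steps {
                echo 'Unit Testing'
            }
        }
        // repeat the same 'when' guard on the remaining stages
    }
}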
I am using a declarative Jenkinsfile for a multi-branch pipeline, as shown here. SCM is set to poll every 5 minutes.
pipeline {
    agent none
    stages {
        stage('Build Jar') {
            agent {
                docker {
                    image 'maven:3.6.0-jdk-11'
                    args '-v $HOME/.m2:/root/.m2'
                }
            }
            steps {
                sh 'mvn clean package release:clean release:prepare release:perform -Darguments="-Dmaven.deploy.skip=true" -DscmCommentPrefix="[skip ci]"'
            }
        }
        stage('Build Image') {
            steps {
                script {
                    app = docker.build("myname/myimage")
                }
            }
        }
        // other stages here
    }
}
Problem:
The maven release commits changes to the repo, which triggers another build, so builds get triggered indefinitely. I came across this SCM Skip plugin:
scmSkip(deleteBuild: true, skipPattern: '.*\\[skip ci\\].*')
But unfortunately it needs an agent to run!
I also tried using agent any, with no luck.
pipeline {
    agent any
    stages {
        stage('SCM Check') {
            steps {
                scmSkip(deleteBuild: true, skipPattern: '.*\\[skip ci\\].*')
            }
        }
        stage('Build Jar') {
            steps {
                sh 'mvn clean package release:clean release:prepare release:perform -Darguments="-Dmaven.deploy.skip=true" -DscmCommentPrefix="[skip ci]"'
            }
        }
        stage('Build Image') {
            steps {
                script {
                    app = docker.build("myname/myimage")
                }
            }
        }
        // other stages here
    }
}
How do you skip builds for certain commit messages?
I had to go with the plugin below, which excludes commits from a certain committer. It works great.
https://github.com/jenkinsci/ignore-committer-strategy-plugin
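An alternative sketch that stays inside the Jenkinsfile (untested here): declarative pipelines ship a built-in changelog condition for when, so each stage could be skipped when the triggering commit message matches the [skip ci] pattern, without the SCM Skip plugin or an extra agent:
stage('Build Jar') {
    // 'changelog' is a built-in when-condition that matches the commit
    // messages in the build's changeset against a regular expression;
    // wrapping it in 'not' skips the stage for release commits.
    when {
        not { changelog '.*\\[skip ci\\].*' }
    }
    steps {
        sh 'mvn clean package release:clean release:prepare release:perform -Darguments="-Dmaven.deploy.skip=true" -DscmCommentPrefix="[skip ci]"'
    }
}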
As far as declarative pipelines go in Jenkins, I'm having trouble with the when keyword.
I keep getting the error No such DSL method 'when' found among steps. I'm sort of new to Jenkins 2 declarative pipelines and don't think I am mixing up scripted pipelines with declarative ones.
The goal of this pipeline is to run mvn deploy after a successful Sonar run and send out mail notifications of a failure or success. I only want the artifacts to be deployed when on master or a release branch.
The part I'm having difficulties with is the post section. The Notifications stage is working great. Note that I got this to work without the when clause, but I really need it or an equivalent.
pipeline {
    agent any
    tools {
        maven 'M3'
        jdk 'JDK8'
    }
    stages {
        stage('Notifications') {
            steps {
                sh 'mkdir tmpPom'
                sh 'mv pom.xml tmpPom/pom.xml'
                checkout([$class: 'GitSCM', branches: [[name: 'origin/master']], doGenerateSubmoduleConfigurations: false, submoduleCfg: [], userRemoteConfigs: [[url: 'https://repository.git']]])
                sh 'mvn clean test'
                sh 'rm pom.xml'
                sh 'mv tmpPom/pom.xml ../pom.xml'
            }
        }
    }
    post {
        success {
            script {
                currentBuild.result = 'SUCCESS'
            }
            when {
                branch 'master|release/*'
            }
            steps {
                sh 'mvn deploy'
            }
            sendNotification(recipients,
                null,
                'https://link.to.sonar',
                currentBuild.result)
        }
        failure {
            script {
                currentBuild.result = 'FAILURE'
            }
            sendNotification(recipients,
                null,
                'https://link.to.sonar',
                currentBuild.result)
        }
    }
}
In the documentation of declarative pipelines, it's mentioned that you can't use when in the post block. when is allowed only inside a stage directive.
So what you can do is test the conditions using an if in a script:
post {
    success {
        script {
            if (env.BRANCH_NAME == 'master')
                currentBuild.result = 'SUCCESS'
        }
    }
    // failure block
}
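To get closer to the original goal (deploying only from master or a release branch), the same if can also guard the deploy step; a sketch, assuming env.BRANCH_NAME is populated by a multibranch pipeline:
post {
    success {
        script {
            // BRANCH_NAME is provided by multibranch pipelines; the regex
            // mirrors the intended 'master|release/*' pattern.
            if (env.BRANCH_NAME == 'master' || env.BRANCH_NAME ==~ /release\/.*/) {
                sh 'mvn deploy'
            }
        }
    }
}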
Using a GitHub Repository and the Pipeline plugin I have something along these lines:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh '''
                    make
                '''
            }
        }
    }
    post {
        always {
            sh '''
                make clean
            '''
        }
        success {
            script {
                if (env.BRANCH_NAME == 'master') {
                    emailext(
                        to: 'engineers@green-planet.com',
                        subject: "${env.JOB_NAME} #${env.BUILD_NUMBER} master is fine",
                        body: "The master build is happy.\n\nConsole: ${env.BUILD_URL}.\n\n",
                        attachLog: true
                    )
                } else if (env.BRANCH_NAME.startsWith('PR')) {
                    // also send email to tell people their PR status
                } else {
                    // this is some other branch
                }
            }
        }
    }
}
And that way, notifications can be sent based on the type of branch being built. See the pipeline model definition and also the global variable reference available on your server at http://your-jenkins-ip:8080/pipeline-syntax/globals#env for details.
Ran into the same issue with post. Worked around it by annotating the variable with @groovy.transform.Field. This was based on info I found in the Jenkins docs for defining global variables.
e.g.
#!groovy
pipeline {
    agent none
    stages {
        stage("Validate") {
            parallel {
                stage("Ubuntu") {
                    agent {
                        label "TEST_MACHINE"
                    }
                    steps {
                        sh "run tests command"
                        recordFailures('Ubuntu', 'test-results.xml')
                        junit 'test-results.xml'
                    }
                }
            }
        }
    }
    post {
        unsuccessful {
            notify()
        }
    }
}

// Make testFailures global so it can be accessed from a 'post' step
@groovy.transform.Field
def testFailures = [:]

def recordFailures(key, resultsFile) {
    def failures = ... parse test-results.xml script for failures ...
    if (failures) {
        testFailures[key] = failures
    }
}

def notify() {
    if (testFailures) {
        ... do something here ...
    }
}
I can create pipelines by putting the following code into a "Jenkinsfile" in my repository (called repo1) and creating a new item through the Jenkins GUI to poll the repository.
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /root/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
            post {
                always {
                    junit 'target/surefire-reports/*.xml'
                    archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
                }
            }
        }
        stage('Deploy') {
            steps {
                sh 'echo \'uploading artifacts to some repositories\''
            }
        }
    }
}
But I have a case where I am not allowed to create new items through the Jenkins GUI; instead there is a pre-defined job which reads Job DSL files from a repository I provide. So I need to create the same pipeline through Job DSL, but I cannot find the corresponding syntax for everything; for instance, I couldn't find an 'agent' DSL command.
Here is the Job DSL code I was trying to change:
pipelineJob('the-same-pipeline') {
    definition {
        cps {
            sandbox()
            script("""
                node {
                    stage('prepare') {
                        sh "echo 'hello'"
                    }
                }
            """.stripIndent())
        }
    }
}
Is it really possible to have the exact same pipeline by using Job DSL?
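One point worth noting (a sketch, not verified here): the script passed to cps is just an ordinary Pipeline script, so the full declarative syntax, including the agent directive, should be usable inside it verbatim:
pipelineJob('the-same-pipeline') {
    definition {
        cps {
            sandbox()
            // The cps script body is a normal Pipeline script, so the
            // declarative syntax (agent, stages, ...) is available here.
            script('''
                pipeline {
                    agent {
                        docker {
                            image 'maven:3-alpine'
                            args '-v /root/.m2:/root/.m2'
                        }
                    }
                    stages {
                        stage('Build') {
                            steps {
                                sh 'mvn -B -DskipTests clean package'
                            }
                        }
                    }
                }
            '''.stripIndent())
        }
    }
}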
I found a way to create the pipeline item through Job DSL. The following Job DSL creates another item which is just a pipeline.
pipelineJob('my-actual-pipeline') {
    definition {
        cpsScmFlowDefinition {
            scm {
                gitSCM {
                    userRemoteConfigs {
                        userRemoteConfig {
                            credentialsId('')
                            name('')
                            refspec('')
                            url('https://github.com/muatik/jenkins-as-code-example')
                        }
                    }
                    branches {
                        branchSpec {
                            name('*/master')
                        }
                    }
                    browser {
                        gitWeb {
                            repoUrl('')
                        }
                    }
                    gitTool('')
                    doGenerateSubmoduleConfigurations(false)
                }
            }
            scriptPath('Jenkinsfile')
            lightweight(true)
        }
    }
}
You can find the Jenkinsfile and my test repo here: https://github.com/muatik/jenkins-as-code-example