Jenkins Mercurial plugin does not detect changes

When my pipeline polls the Mercurial repo for changes it does not detect any change, and new builds are not triggered.
Following the plugin docs, I set up a push hook that triggers polling. The hook fires fine, but the poll never detects any changes. All I get is:
Mercurial Polling Log
Started on May 19, 2018 11:58:10 PM
no polling baseline in /var/lib/jenkins/workspace/test-repo on
Done. Took 0 ms
No changes
I am working with:
- Jenkins v2.107.3
- Mercurial plugin v2.3
I just created a test Mercurial repo with some files with random content to test the setup, and a Jenkins pipeline 'polling-test' which checks out the repo and echoes "hello world":
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout changelog: true,
                    poll: true,
                    scm: [
                        $class: 'MercurialSCM',
                        credentialsId: 'jenkins',
                        revision: 'default',
                        revisionType: 'BRANCH',
                        source: 'ssh://hg-user@hg-server/test-repo'
                    ]
            }
        }
        stage('Tests') {
            steps {
                echo "Hello World"
            }
        }
    }
}
Also, the Poll SCM option is checked, without any schedule.
I modify the repo with something like:
$ echo "foo" > bar
$ hg add bar
$ hg commit -m "change"
$ hg push
And then the polling is triggered with
$ curl "https://jenkins-server/mercurial/notifyCommit?url=ssh://hg-user@hg-server/test-repo"
Scheduled polling of polling-test
The polling log shows it has triggered, but found no changes.
What am I doing wrong? How can changes be detected?

I was able to make the polling work properly by adding a Mercurial installation under "global tools" and changing the pipeline script to:
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout([$class: 'MercurialSCM', credentialsId: 'jenkins', installation: 'Mercurial', source: 'ssh://hg-user@hg-server/test-repo'])
            }
        }
        stage('Tests') {
            steps {
                echo "Hello World"
            }
        }
    }
}
while keeping the Polling option checked, and of course running the pipeline once manually to get a reference changeset.
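If you prefer to keep the trigger in the Jenkinsfile rather than in the job configuration, a pollSCM trigger with an empty schedule should be equivalent (a sketch: the empty spec means no timer-based polling, but the notifyCommit hook can still schedule a poll):
pipeline {
    agent any
    // Equivalent of checking "Poll SCM" with an empty schedule:
    // no timer-based polling, but notifyCommit can still trigger a poll.
    triggers {
        pollSCM('')
    }
    stages {
        stage('Checkout') {
            steps {
                checkout([$class: 'MercurialSCM', credentialsId: 'jenkins', installation: 'Mercurial', source: 'ssh://hg-user@hg-server/test-repo'])
            }
        }
    }
}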

Related

jenkins configuration for building on different branches

I am doing code review with Gerrit Code Review and I need to create a Jenkins pipeline for CI/CD. I am using the events triggered by the Gerrit Trigger plugin.
I want to obtain this:
- PatchSet Created:
  - build starts on the refs/changes/**/**/** branch
  - report results to Gerrit for code review
- Change Merged (into develop) or Ref Updated (develop):
  - build starts on the origin/develop branch
  - deploy code to internal server
- Ref Updated (master):
  - build starts on the origin/master branch
  - deploy code to external server
Questions for which I didn't find good answers:
- do I need to use a simple pipeline or a multibranch pipeline?
- how do I start the build on the correct branch?
- how can I check out the correct branch using a Jenkinsfile instead of the configuration page?
You should create a multibranch pipeline and write your declarative/scripted pipeline in a Jenkinsfile.
Example pipeline:
pipeline {
    agent any
    tools {
        maven 'maven-3.3.6'
        jdk 'jdk-11'
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '5'))
    }
    stages {
        stage('Build/Test') {
            when {
                changeRequest()
            }
            steps {
                sh "mvn clean verify"
            }
            post {
                success {
                    gerritReview labels: [Verified: 1], message: "Successful build, ${env.RUN_DISPLAY_URL}."
                }
                unstable {
                    gerritReview labels: [Verified: 0], message: "Unstable build, ${env.RUN_DISPLAY_URL}"
                }
                failure {
                    gerritReview labels: [Verified: -1], message: "Failed build, ${env.RUN_DISPLAY_URL}"
                }
            }
        }
        stage('Deploy') {
            when {
                branch 'develop'
            }
            steps {
                sh 'mvn deploy'
            }
        }
    }
}
The Build/Test stage will run for any change request; any new change or patchset will trigger it.
The Deploy stage will be triggered for any change merged into develop.
You can have multiple stages for one branch; they will be executed in sequence.
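The Ref Updated (master) case from the question follows the same pattern: add another stage guarded by a branch condition. A sketch (the external-server deploy command and profile name are assumptions):
stage('Deploy External') {
    when {
        branch 'master'
    }
    steps {
        // assumed command for deploying to the external server
        sh 'mvn deploy -P external-server'
    }
}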

Best way to orchestrate a series of near-identical builds on the same hardware

I'm trying to test OS builds on bare metal via Jenkins.
Right now I've got a series of 60 or so jobs (monthly builds * 3 OSes * 1 job for build and one for validation), multiplied by 10 or so hosts, each set of ten chained in a loop.
This is...unwieldy to say the least.
I'm trying to use Jenkins Pipeline to cut this down - and I'd prefer to only kick off the first build when any of several relevant git repos changes, but then I'd like to have them cascade.
I have tried a loop in my Jenkinsfile, but then if any individual stage fails, all the stages after it fail.
What do you suggest?
My current "plan" is to have 1 project per OS version per host, and parameterize the "watch upstream" for each. This still seems like more hassle than it's worth.
pipeline {
    agent any
    options {
        ansiColor('xterm')
    }
    //properties([
    //    pipelineTriggers([
    //        [$class: "SCMTrigger", scmpoll_spec: "H/5 * * * *"],
    //    ])
    //])
    stages {
        stage ('Checkout Build Script') {
            steps {
                checkout([$class: 'GitSCM',
                    branches: [[name: '*/master']],
                    extensions: [],
                    userRemoteConfigs: [[credentialsId: 'xxx', url: 'xxx/host_rebuild.git']]])
            }
        }
        stage ("Build Test OS") {
            steps {
                sh """./build.py -H ${TARGET_HOST} -p ${PROFILE}"""
                archiveArtifacts allowEmptyArchive: true, artifacts: 'messages,anamon,ipaclient,hardware_info,firmware_info', caseSensitive: true, defaultExcludes: true, fingerprint: false, onlyIfSuccessful: false
            }
        }
        stage ('Checkout Validation Code') {
            steps {
                checkout([$class: 'GitSCM',
                    branches: [[name: 'master']],
                    // (two more lines folded in the original post)
                ])
            }
        }
        stage ("Validate OS") {
            steps {
                sh """kinit xxx.COM -k -t /var/lib/jenkins/secrets/xxx.keytab"""
                withEnv(["TARGET_OS=${MAJ_OS}", "TARGET_HOST=${TARGET_HOST}"]) {
                    sh """rake spec"""
                }
            }
        }
    }
}
Attempt at explaining the "short" form:
I want to run a series of projects that use a non-concurrent resource (a remote bare-metal host), one after the other. I want each one to be triggered by the one before it. I want to use declarative pipeline.
I want the subsequent projects to run regardless of the end state of the previous project.
I would prefer the first project to be triggered by an SCM check.
I would prefer all jobs to use the same jenkinsfile.
Is this possible?
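For the "loop in my Jenkinsfile" variant, one way to keep later stages running after an individual failure is to wrap each stage body in catchError, which records the failure but lets execution continue. A minimal sketch (host, profile, and commands are placeholders taken from the pipeline above):
pipeline {
    agent any
    stages {
        stage('Build Test OS') {
            steps {
                // mark the build and stage as FAILURE on error, but keep going
                catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
                    sh "./build.py -H ${TARGET_HOST} -p ${PROFILE}"
                }
            }
        }
        stage('Validate OS') {
            steps {
                catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
                    sh 'rake spec'
                }
            }
        }
    }
}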

How can I rename Jenkins' pull request builder's "status check" display name on GitHub

We have a project on GitHub which has two Jenkins Multibranch Pipeline jobs - one builds the project and the other runs tests. The only difference between these two pipelines is that they have different JenkinsFiles.
I have two problems that I suspect are related to one another:
In the GitHub status check section I only see one check with the following title:
continuous-integration/jenkins/pr-merge — This commit looks good,
which directs me to the test Jenkins pipeline. This means that our build pipeline is not being picked up by GitHub even though it is visible on Jenkins. I suspect this is because both the checks have the same name (i.e. continuous-integration/jenkins/pr-merge).
I have not been able to figure out how to rename the status check message for each Jenkins job (i.e. test and build). I've been through this similar question, but its solution wasn't applicable to us, as Build Triggers aren't available in Multibranch Pipelines.
If anyone knows how to change this message on a per-job basis for Jenkins Multibranch Pipelines that'd be super helpful. Thanks!
Edit (just some more info):
We've set up GitHub/Jenkins webhooks on the repository, and builds do get started for both our build and test jobs; it's just that the status check/message doesn't get displayed on GitHub for both (only for test, it seems).
Here is our JenkinsFile for the build job:
#!/usr/bin/env groovy
properties([[$class: 'BuildConfigProjectProperty', name: '', namespace: '', resourceVersion: '', uid: ''], buildDiscarder(logRotator(artifactDaysToKeepStr: '', artifactNumToKeepStr: '', daysToKeepStr: '', numToKeepStr: '5')), [$class: 'ScannerJobProperty', doNotScan: false]])
node {
    stage('Initialize') {
        echo 'Initializing...'
        def node = tool name: 'node-lts', type: 'jenkins.plugins.nodejs.tools.NodeJSInstallation'
        env.PATH = "${node}/bin:${env.PATH}"
    }
    stage('Checkout') {
        echo 'Getting our source code...'
        checkout scm
    }
    stage('Install Dependencies') {
        echo 'Retrieving tooling versions...'
        sh 'node --version'
        sh 'npm --version'
        sh 'yarn --version'
        echo 'Installing node dependencies...'
        sh 'yarn install'
    }
    stage('Build') {
        echo 'Running build...'
        sh 'npm run build'
    }
    stage('Build Image and Deploy') {
        echo 'Building and deploying image across pods...'
        echo "This is the build number: ${env.BUILD_NUMBER}"
        // sh './build-openshift.sh'
    }
    stage('Upload to s3') {
        if (env.BRANCH_NAME == "master") {
            withAWS(region: 'eu-west-1', credentials: '****') {
                def identity = awsIdentity();
                s3Upload(bucket: "****", workingDir: 'build', includePathPattern: '**/*');
                cfInvalidate(distribution: 'EBAX8TMG6XHCK', paths: ['/*']);
            }
        };
        if (env.BRANCH_NAME == "PRODUCTION") {
            withAWS(region: 'eu-west-1', credentials: '****') {
                def identity = awsIdentity();
                s3Upload(bucket: "****", workingDir: 'build', includePathPattern: '**/*');
                cfInvalidate(distribution: 'E6JRLLPORMHNH', paths: ['/*']);
            }
        };
    }
}
Try to use GitHubCommitStatusSetter (see this answer for declarative pipeline syntax). You're using scripted pipeline syntax, so in your case it will be something like this (note: this is just a prototype and will definitely need to be adapted to your project):
#!/usr/bin/env groovy
properties([[$class: 'BuildConfigProjectProperty', name: '', namespace: '', resourceVersion: '', uid: ''], buildDiscarder(logRotator(artifactDaysToKeepStr: '', artifactNumToKeepStr: '', daysToKeepStr: '', numToKeepStr: '5')), [$class: 'ScannerJobProperty', doNotScan: false]])
node {
    // ...
    stage('Upload to s3') {
        // 'context' is the status check name shown on GitHub, e.g. "ci/build"
        try {
            setBuildStatus(context, "In progress...", "PENDING");
            if (env.BRANCH_NAME == "master") {
                withAWS(region: 'eu-west-1', credentials: '****') {
                    def identity = awsIdentity();
                    s3Upload(bucket: "****", workingDir: 'build', includePathPattern: '**/*');
                    cfInvalidate(distribution: 'EBAX8TMG6XHCK', paths: ['/*']);
                }
            };
            // ...
            setBuildStatus(context, "Success", "SUCCESS");
        } catch (Exception e) {
            setBuildStatus(context, "Failure", "FAILURE");
        }
    }
}
void setBuildStatus(context, message, state) {
    step([
        $class: "GitHubCommitStatusSetter",
        contextSource: [$class: "ManuallyEnteredCommitContextSource", context: context],
        reposSource: [$class: "ManuallyEnteredRepositorySource", url: "https://github.com/my-org/my-repo"],
        errorHandlers: [[$class: "ChangingBuildStatusErrorHandler", result: "UNSTABLE"]],
        statusResultSource: [$class: "ConditionalStatusResultSource", results: [[$class: "AnyBuildResult", message: message, state: state]]]
    ]);
}
Please check these links for more details.
You can use the Github Custom Notification Context SCM Behaviour plugin: https://plugins.jenkins.io/github-scm-trait-notification-context/
After installing, go to the job configuration. Under "Branch sources" -> "GitHub" -> "Behaviors", click "Add" and select "Custom Github Notification Context" from the dropdown menu. Then you can type your custom context name into the "Label" field.
This answer is pretty much like @biruk1230's answer. But if you don't want to downgrade your GitHub plugin to work around the bug, you could call the API directly.
void setBuildStatus(String message, String state)
{
    env.COMMIT_JOB_NAME = "continuous-integration/jenkins/pr-merge/sanity-test"
    withCredentials([string(credentialsId: 'github-token', variable: 'TOKEN')])
    {
        // 'set -x' for debugging. Don't worry, the access token won't actually be logged.
        // Also, the sh command actually executed is not logged verbatim; it is further escaped when written to the log.
        sh """
            set -x
            curl \"https://api.github.com/repos/thanhlelgg/brain-and-brawn/statuses/$GIT_COMMIT?access_token=$TOKEN\" \
                -H \"Content-Type: application/json\" \
                -X POST \
                -d \"{\\\"description\\\": \\\"$message\\\", \\\"state\\\": \\\"$state\\\", \
                \\\"context\\\": \\\"${env.COMMIT_JOB_NAME}\\\", \\\"target_url\\\": \\\"$BUILD_URL\\\"}\"
        """
    }
}
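For completeness, a sketch of how such a helper might be called from a stage (the test script name is made up; GitHub's Statuses API accepts "pending", "success", "failure", and "error" as states):
node {
    stage('Sanity Test') {
        setBuildStatus("Sanity tests running", "pending")
        try {
            sh './run_sanity_tests.sh' // hypothetical test script
            setBuildStatus("Sanity tests passed", "success")
        } catch (e) {
            setBuildStatus("Sanity tests failed", "failure")
            throw e
        }
    }
}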
The problem with both methods is that continuous-integration/jenkins/pr-merge will be displayed no matter what.
This will be helpful with @biruk1230's answer.
You can remove Jenkins' status check named continuous-integration/jenkins/something and add a custom status check with GitHubCommitStatusSetter. The effect is similar to renaming the status check's context.
Install the Disable GitHub Multibranch Status plugin on Jenkins.
It can be applied by setting a behavior option on the Multibranch Pipeline job.
Thanks for your question and other answers!

Deploy to Heroku staging, then production with Jenkins

I have a Rails application with a Jenkinsfile which I'd like to set up so that a build is first deployed to staging; then, if I am happy with the result, it can be deployed to production.
I've set up 2 Heroku instances, myapp-staging and myapp-production.
My Jenkinsfile has a node block that looks like:
node {
    currentBuild.result = "SUCCESS"
    setBuildStatus("Build started", "PENDING");
    try {
        stage('Checkout') {
            checkout scm
            gitCommit = sh(returnStdout: true, script: 'git rev-parse HEAD').trim()
            shortCommit = gitCommit.take(7)
        }
        stage('Build') {
            parallel 'build-image': {
                sh "docker build -t ${env.BUILD_TAG} ."
            }, 'run-test-environment': {
                sh "docker-compose --project-name myapp up -d"
            }
        }
        stage('Test') {
            ansiColor('xterm') {
                sh "docker run -t --rm --network=myapp_default -e DATABASE_HOST=postgres ${env.BUILD_TAG} ./ci/bin/run_tests.sh"
            }
        }
        stage('Deploy - Staging') {
            // TODO. Use env.BRANCH_NAME to make sure we only deploy from staging
            withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'Heroku Git Login', usernameVariable: 'GIT_USERNAME', passwordVariable: 'GIT_PASSWORD']]) {
                sh('git push https://${GIT_USERNAME}:${GIT_PASSWORD}@git.heroku.com/myapp-staging.git staging')
            }
            setBuildStatus("Staging build complete", "SUCCESS");
        }
        stage('Sanity check') {
            steps {
                input "Does the staging environment look ok?"
            }
        }
        stage('Deploy - Production') {
            // TODO. Use env.BRANCH_NAME to make sure we only deploy from master
            withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'Heroku Git Login', usernameVariable: 'GIT_USERNAME', passwordVariable: 'GIT_PASSWORD']]) {
                sh('git push https://${GIT_USERNAME}:${GIT_PASSWORD}@git.heroku.com/myapp-production.git HEAD:refs/heads/master')
            }
            setBuildStatus("Production build complete", "SUCCESS");
        }
    }
}
My questions are:
Is this the correct way to do this, or is there some other best practice? For example, do I need two Jenkins pipelines for this, or is one project pipeline enough?
How can I use Jenkins' BRANCH_NAME variable to change dynamically depending on the stage I'm at?
Thanks in advance!
For the first question, using one Jenkinsfile to describe the complete project pipeline is desirable. It keeps the description of the process all in one place and shows you the process flow in one UI, so your Jenkinsfile seems great in that regard.
For the second question, you can wrap steps in if conditions based on the branch. So if you wanted to, say, skip the prod deployment and the step that asks the user if staging looks ok (since you're not going to do the prod deployment) when the branch is not master, this would work:
node('docker') {
    try {
        stage('Sanity check') {
            if (env.BRANCH_NAME == 'master') {
                input "Does the staging environment look ok?"
            }
        }
        stage('Deploy - Production') {
            echo 'deploy check'
            if (env.BRANCH_NAME == 'master') {
                echo 'do prod deploy stuff'
            }
        }
    } catch (error) {
        // handle or rethrow the error as needed
    }
}
I removed some stuff from your pipeline that wasn't necessary to demonstrate the idea, but I also fixed what looked to me like two issues: 1) you seemed to be mixing metaphors between scripted and declarative pipelines; I think you are trying to use a scripted pipeline, so I made it fully scripted, which means you cannot use steps, I think; 2) your try was missing a catch.
At the end of the day, the UI is a bit weird with this solution, since all stages will always show up in all cases, and they will just show as green, as if they passed and did what they said they would do (it will look like it deployed to prod, even on non-master branches). There is no way around this with scripted pipelines, to my knowledge. With declarative pipelines, you can do the same conditional logic with when, and the UI (at least the Blue Ocean UI) actually understands your intent and shows it differently.
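As a rough declarative sketch of that alternative (Blue Ocean will show the stage as skipped on non-master branches):
pipeline {
    agent any
    stages {
        stage('Deploy - Production') {
            // skipped (and shown as skipped) on non-master branches
            when {
                branch 'master'
            }
            steps {
                echo 'do prod deploy stuff'
            }
        }
    }
}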
Have fun!

Jenkinsfile DSL: how to specify the target directory

I'm exploring Jenkins 2.0 pipelines. So far my file is pretty simple.
node {
    stage "checkout"
    git([url: "https://github.com/luxengine/math.git"])
    stage "build"
    echo "Building from pipeline"
}
I can't seem to find any way to set the directory that git will check out to. I also can't find any documentation about it. I found https://jenkinsci.github.io/job-dsl-plugin/ but it doesn't seem to match what I see in other tutorials.
Clarification
Looks like you are trying to configure a Pipeline job (formerly known as Workflow). This type of job is very distinct from Job DSL.
The purpose of a Pipeline job is to:
Orchestrates long-running activities that can span multiple build slaves. Suitable for building pipelines (formerly known as workflows) and/or organizing complex activities that do not easily fit in free-style job type.
Whereas Job DSL:
...allows the programmatic creation of projects using a DSL. Pushing job creation into a script allows you to automate and standardize your Jenkins installation, unlike anything possible before.
Solution
If you want to check out your code to a specific directory, replace the git step with the more general checkout step.
The final Pipeline configuration should look like this:
node {
    stage "checkout"
    //git([url: "https://github.com/luxengine/math.git"])
    checkout([$class: 'GitSCM',
        branches: [[name: '*/master']],
        doGenerateSubmoduleConfigurations: false,
        extensions: [[$class: 'RelativeTargetDirectory',
            relativeTargetDir: 'checkout-directory']],
        submoduleCfg: [],
        userRemoteConfigs: [[url: 'https://github.com/luxengine/math.git']]])
    stage "build"
    echo "Building from pipeline"
}
As a future reference for Jenkins 2.0 and the Pipeline DSL, please use the built-in Snippet Generator or the documentation.
This can be done by using the dir directive:
def exists = fileExists '<your target dir>'
if (!exists) {
    // note: new File() runs on the Jenkins controller, not the agent;
    // dir() below creates the workspace subdirectory itself if needed
    new File('<your target dir>').mkdir()
}
dir ('<your target dir>') {
    git url: '<your git repo address>'
}
First, make sure that you are using Jenkins Job DSL.
You can do it like this:
scm {
    git {
        wipeOutWorkspace(true)
        shallowClone(true)
        remote {
            url("xxxx....")
            relativeTargetDir('checkout-folder')
        }
    }
}
https://jenkinsci.github.io/job-dsl-plugin/
The address above lets you simply type, in the upper-left area, for example 'scm', and it will show in which contexts 'scm' can be used. You can then select 'scm-freestylejob' and click on the '***' to see the details.
The general start point for Jenkins Job DSL is here:
https://github.com/jenkinsci/job-dsl-plugin/wiki
You can of course ask here on SO or on the Google group:
https://groups.google.com/forum/#!forum/job-dsl-plugin
pipeline {
    agent any
    stages {
        stage("Checkout") {
            steps {
                // create the 'git' directory via the helper script if it does not exist
                script {
                    if (!fileExists('git')) {
                        bat label: '', script: 'sh "mkdir.sh"'
                    }
                }
                dir ('cm') {
                    git branch: 'dev',
                        credentialsId: '<your credential id>',
                        url: '<yours git url>'
                }
            }
        } //End of Checkout stage
        stage("TestShellScript") {
            steps {
                bat label: '', script: 'sh "PrintNumber.sh"'
            }
        }
    } //End of stages
} // End of pipeline
Note: the mkdir.sh script contains:
#!/bin/bash
#Create a directory
mkdir git
You are using the Pipeline plugin, not the Job DSL plugin. In the Pipeline plugin, if you want to do something for which no function is available yet in the Pipeline syntax, you can define it yourself.
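For example, a minimal sketch of defining a helper yourself in a scripted Jenkinsfile (the checkoutTo function is made up for illustration):
// a small helper defined directly in the Jenkinsfile
def checkoutTo(String targetDir, String repoUrl) {
    dir(targetDir) {
        git url: repoUrl
    }
}

node {
    stage "checkout"
    checkoutTo('checkout-directory', 'https://github.com/luxengine/math.git')
}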
