How to trigger an Automation Pipeline after a Build Pipeline succeeds, and only when triggered by a specific user - Jenkins

Suppose Jenkins Build A is triggered successfully, and after that an Automation Pipeline is to be executed on that build.
The above scenario is possible using Jenkins Build Triggers: "Build after other projects are built".
But in addition, we want to trigger Automation only if the build was started by a specific user,
e.g. User A, User B, User C.
So the Automation Pipeline must be triggered only if the Build Pipeline was started by User A or User B.
The Automation Pipeline must not be triggered if the build was started by User C.

If I understand your requirement correctly, you have two jobs, Build and Automation. If the Build job is triggered by UserA, UserB, ... you want to trigger the Automation job. To achieve this, in your Build job, before triggering the Automation job, you can check who triggered the Build job and decide whether to trigger the Automation job or not. Check the example below.
pipeline {
    agent any
    stages {
        stage('BuildJob') {
            steps {
                script {
                    // Users whose builds should trigger the Automation job
                    def userList = ["admin2", "UserA", "UserB"]
                    def buildTrigger = currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')
                    // Check who triggered this build and trigger the Automation
                    // job only if that user is in the allowed list
                    if (buildTrigger && !buildTrigger.userId.disjoint(userList)) {
                        echo "Triggering the Automation job"
                        build job: 'AutomationJob'
                    }
                }
            }
        }
    }
}

Related

How to make a Jenkins declarative pipeline waitForWebhook without consuming a Jenkins executor?

We have a Jenkins declarative pipeline which, after deploying our product on a cloud VM, needs to run some product tests on it. Tests are implemented as a separate job on another Jenkins, and the main pipeline runs them by triggering the remote job on the second Jenkins using the Parameterized Remote Trigger plugin.
While this plugin works great, when using the option blockBuildUntilComplete it blocks until the remote job finishes but doesn't release the Jenkins executor. Since tests can take a lot of time to complete (up to 2 days), the executor will be blocked all this time just waiting for another job to complete. When setting blockBuildUntilComplete to false, it returns a job handle which can be used to fetch the build status, result, etc. Example here:
while (!handle.isFinished()) {
    echo 'Current Status: ' + handle.getBuildStatus().toString()
    sleep 5
    handle.updateBuildStatus()
}
But this still keeps consuming the executor, so we still have the same problem.
Based on comments in the article, we tried waitForWebhook, but even while waiting for the webhook it still keeps using the executor.
Based on the article we tried input, and we observed that it doesn't use an executor when the stage has an input block and the pipeline uses agent none:
pipeline {
    agent none
    stages {
        stage("test") {
            input {
                message "Should we continue?"
            }
            agent any
            steps {
                script {
                    echo "hello"
                }
            }
        }
    }
}
So the input block does what we want waitForWebhook to do, at least as far as not consuming an executor while waiting.
Our original idea was to have waitForWebhook inside a timeout, surrounded by a try/catch, surrounded by a while loop waiting for the remote job to finish:
while (!handle.isFinished()) {
    try {
        timeout(1) {
            data = waitForWebhook hook
        }
    } catch (Exception e) {
        log.info("timeout occurred")
    }
    handle.updateBuildStatus()
}
This way we could avoid using an executor for a long period of time and also benefit from the job handle returned by the plugin. But we cannot find a way to do this with the input step. input doesn't use an executor only when it's a separate block inside a stage; it does use one inside steps or steps->script. So we cannot use try/catch, cannot check the handle status, and cannot loop on it. Is there a way to do this?
Even if waitForWebhook worked the way input works, we could use that.
Our main pipeline runs on a Jenkins inside the corporate network, and the test Jenkins runs in the cloud and cannot initiate connections into our corporate network. So when using waitForWebhook, we would publish a message on a message pipeline which a consumer would read, fetch the webhook URL from a DB corresponding to the job, and POST to it. We were hoping to avoid this with our solution of using while and try/catch.
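One possible direction (a sketch, not from the original thread and not verified on this setup) is to use a scripted pipeline and keep the waiting loop outside any node block: code outside node runs on the controller's flyweight executor, so no heavyweight executor is held while waiting. This assumes the webhook-step plugin's registerWebhook/waitForWebhook steps and the handle returned by the Parameterized Remote Trigger plugin's triggerRemoteJob step; all job and Jenkins names below are hypothetical:
// Sketch: scripted pipeline; names are placeholders.
def hook = registerWebhook()
def handle

node {
    // Deploy, then trigger the remote job without blocking, passing
    // hook.url along so the remote side (or a relay) can POST to it.
    echo "Webhook URL: ${hook.url}"
    handle = triggerRemoteJob(remoteJenkinsName: 'test-jenkins', job: 'product-tests', blockBuildUntilComplete: false)
}

// Outside any node block this loop should occupy only a flyweight executor.
while (!handle.isFinished()) {
    try {
        timeout(1) {
            def data = waitForWebhook hook
        }
    } catch (Exception e) {
        echo 'timeout occurred while waiting for webhook'
    }
    handle.updateBuildStatus()
}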

Can manually triggered jobs of a pipeline take user input for parameters?

I have two jobs (Job1 & Job2). Both are parameterized with the same parameters, but the parameter values differ and are designed using the Active Choices parameter (uno) plugin.
I wish to run both jobs in a pipeline; however, below is the exact requirement.
When the pipeline is executed, Job1 executes and prompts the user to enter parameters (UI). The user enters/selects the values and triggers the build.
Once the build of Job1 completes, the user is prompted for approval to proceed to Job2. The user approves by clicking the "OK/Proceed" button, and thereby Job2 of the pipeline gets triggered.
Note: I have achieved this using the "input" feature of the Groovy pipeline script.
The parameter values of Job1 should be passed to and show up in Job2; however, the user should be able to see and modify the passed values for any parameter in Job2 (UI).
Note: I'm able to pass the parameter values using the Parameterized Trigger Plugin in the Post-Build Actions of Job1.
Problem statement:
Running the pipeline does not show the user a parameter screen (UI) for either Job1 or Job2, so the user cannot enter/select and change the parameters for either Job1 or Job2 during the pipeline run.
Note:
I'm able to overcome the problem statement by using the Build Pipeline Plugin, but the reasons I do not wish to consider this solution are:
I don't know how to inject the Groovy pipeline script input element which prompts for approval between jobs.
I have read that using the Pipeline plugin has advantages over using the Build Pipeline Plugin.
Below is the Groovy script (pipeline script):
pipeline {
    agent any // agent specifies where the pipeline will execute
    stages {
        stage("build PROD") { // an arbitrary stage name
            steps {
                build 'job1' // this is where we specify which job to invoke
            }
        }
        stage("build DR") { // an arbitrary stage name
            input {
                message "Press Ok to continue"
                submitter "user1,user2"
                parameters {
                    string(name: 'username', defaultValue: 'user', description: 'Username of the user pressing Ok')
                }
            }
            steps {
                echo "User: ${username} said Ok."
                build 'job2' // this is where we specify which job to invoke
            }
        }
    }
}
Any solution would be of great help. Thanks.
Is there a reason you are keeping the jobs separate? What I would do is re-evaluate your job flow and see if it makes more sense to merge the jobs into one pipeline.
You could simply use the parameters directive: https://jenkins.io/doc/book/pipeline/syntax/#parameters
Then you have the default user interface, which is simpler than your custom Groovy code.
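A minimal sketch of that merged approach (the parameter names TARGET_ENV and DR_ENV are hypothetical): the parameters directive gives the standard "Build with Parameters" UI up front, and the input directive between the stages lets the approver review and change a value before the second stage runs:
pipeline {
    agent any
    parameters {
        // Shown on the standard "Build with Parameters" screen
        string(name: 'TARGET_ENV', defaultValue: 'prod', description: 'Environment to build')
    }
    stages {
        stage('build PROD') {
            steps {
                echo "Doing job1's work for ${params.TARGET_ENV}"
            }
        }
        stage('build DR') {
            input {
                message 'Press Ok to continue'
                submitter 'user1,user2'
                parameters {
                    // The approver can review and edit this value before job2's work runs
                    string(name: 'DR_ENV', defaultValue: 'dr', description: 'Environment for the DR build')
                }
            }
            steps {
                echo "Continuing with ${DR_ENV}"
            }
        }
    }
}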

Jenkins - run long job nightly if new work done?

Right now, I have two sets of benchmarks, a short one and a long one. The short one runs on checkin for every branch. Which set to run is a parameter - SHORT or LONG. The long one always runs nightly on the dev branch. How can I trigger other branches to build and run the long benchmark if the branch was built successfully today?
If you want to run those long tests only overnight, I find it easiest to just duplicate the job and modify it so it's triggered at night and has additional checks added after the normal job, i.e. your post-commit jobs just do the short test, while the nightly-triggered job does the short test first and then (if no errors) the long one.
I find that much easier to handle than the added complexity of chaining jobs on some condition, like evaluating the time of day to skip some tests.
Example 1st job that runs after every commit
node() {
    stage('Build') {
        // Build
    }
    stage('Short Test') {
        // Short Test
    }
}
2nd job that triggers nightly
node() {
    stage('Build') {
        // Build
    }
    stage('Short Test') {
        // Short Test; fail the build here when not successful
    }
    stage('Long Tests') {
        // Long Test; runs only when the short test was successful
    }
}
Edit
A solution that gets it all into a single job; however, it adds a lot of complexity and makes some follow-up use cases harder to integrate, e.g. different notifications for the integration test branch, tracking of build durations, etc. I still find it more manageable to have it split into two jobs.
The following job must be configured to be triggered by a post-commit hook and a nightly timer. It runs the long test when
the last build is younger than the configured threshold (you don't want it to trigger off the last nightly),
the last run was successful (you don't want to run the long test on a broken build), and
it was triggered by said timer (you don't want to trigger on a check-in).
def runLongTestMaxDiffMillis = 20000
def lastRunDiff = (currentBuild.getStartTimeInMillis().toInteger() - currentBuild.getPreviousBuild().getStartTimeInMillis().toInteger())
def lastBuildTooOld = (lastRunDiff > runLongTestMaxDiffMillis)
def isTriggeredByTimer = currentBuild.getBuildCauses('hudson.triggers.TimerTrigger$TimerTriggerCause')
def lastBuildSuccessful = (currentBuild.getPreviousBuild().getResult() == 'SUCCESS')
def runLongTest = (!lastBuildTooOld && isTriggeredByTimer && lastBuildSuccessful)

node() {
    if (runLongTest) {
        println 'Running long test'
    } else {
        println 'Skipping long test'
    }
}
You can create another pipeline that calls the parameterized pipeline with the LONG parameter, for example:
stage('long benchmark') {
    build job: 'your-benchmark-pipeline', parameters: [string(name: 'type', value: 'LONG')]
}
When you configure this new pipeline you can tick the Build after other projects are built checkbox in the Build Triggers section and choose which short benchmarks should trigger it once they complete successfully (the default behavior).
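Alternatively, the same trigger can be declared in the new pipeline's code instead of through the checkbox, using the upstream trigger (a sketch; 'your-short-benchmark-pipeline' is a placeholder for the actual short-benchmark job name):
pipeline {
    agent any
    triggers {
        // Fires when the named upstream job completes successfully
        upstream(upstreamProjects: 'your-short-benchmark-pipeline', threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('long benchmark') {
            steps {
                build job: 'your-benchmark-pipeline', parameters: [string(name: 'type', value: 'LONG')]
            }
        }
    }
}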
You can use the Schedule Build Plugin to schedule a build of the long job when the short job succeeds.
The short job runs on every branch; when a build succeeds for a certain branch, it schedules a build of the long job (at night) with the branch as a parameter, so the long job will run on that particular branch.

Jenkins upstreamProjects not starting jobs

I created a pipeline job in Jenkins and want it to be triggered when another job ends.
I introduced it into my pipeline this way:
pipeline {
    agent any
    triggers { upstream(upstreamProjects: "jobname") }
    ...
}
It does not start when the first job ends. I tried the build trigger section of the web interface and it worked.
I wonder what I am missing to get it to work in the pipeline code.
I also tried "../folder/jobname" and "threshold: hudson.model.Result.SUCCESS".

Triggering build information available in child job

I have a job in Jenkins called notification_job which uses the "Build after other projects are built" trigger. The list of jobs will be around 25 and continue to grow. The notification_job needs to know the triggering build's name and build number.
I would like all of the configuration to be done through the notification_job, not through the triggering jobs as that list will grow and become a pain to manage. So how can I retrieve the build name and number in the child job?
Jenkins version is 2.19.3
Thank you,
Duke
I was able to pull the data with a Groovy script (run where the build object is bound, e.g. a system Groovy build step):
import hudson.model.Cause

// Walk the causes of this build and report any upstream trigger
for (cause in build.getCauses()) {
    if (cause instanceof Cause.UpstreamCause) {
        println cause.getUpstreamProject()
        println cause.getUpstreamBuild()
    }
}
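If the notification job is itself a pipeline, a similar lookup should be possible through currentBuild.getBuildCauses(), which returns the causes as a list of maps (a sketch):
// Filter the build causes down to upstream causes and report
// the triggering job's name and build number.
def upstreamCauses = currentBuild.getBuildCauses('hudson.model.Cause$UpstreamCause')
for (cause in upstreamCauses) {
    echo "Triggered by ${cause.upstreamProject} #${cause.upstreamBuild}"
}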
