Using the REST API to trigger a specific stage within a YAML pipeline

Is there a way to execute a specific stage within a running yaml pipeline which uses an environment with approvals?
I have an on-prem deploy stage and an on-prem destroy stage, both gated by manual approvals.
What I would like to do is run the on-prem destroy stage in past builds using the REST API.
What I've achieved so far is getting the 10 most recent builds, in descending order, for a specific source branch, let's call it feature/on-prem-enterprise. I then parse the results from the timeline endpoint to find past builds that had a successful deployment but a failed, cancelled, or skipped destroy stage. For those builds, I want to use the REST API to run/re-run the destroy stage.
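For illustration, a minimal Groovy sketch of that search using the Builds and Timeline endpoints; the organization, project, stage identifiers, and the AZDO_PAT environment variable are assumptions, not values from the original pipeline:
import groovy.json.JsonSlurper
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

def base   = 'https://dev.azure.com/myorg/myproject'               // placeholders
def auth   = 'Basic ' + (':' + System.getenv('AZDO_PAT')).bytes.encodeBase64()
def client = HttpClient.newHttpClient()

def get = { String url ->
    def req = HttpRequest.newBuilder(URI.create(url)).header('Authorization', auth).build()
    new JsonSlurper().parseText(client.send(req, HttpResponse.BodyHandlers.ofString()).body())
}

// 10 most recent builds for the feature branch, newest first.
def builds = get(base + '/_apis/build/builds?branchName=refs/heads/feature/on-prem-enterprise' +
                 '&$top=10&queryOrder=finishTimeDescending&api-version=6.0')

builds.value.each { build ->
    // Stage results live in the build's timeline records.
    def records = get(base + "/_apis/build/builds/${build.id}/timeline?api-version=6.0").records
    def deploy  = records.find { it.type == 'Stage' && it.identifier == 'OnPremDeploy' }  // assumed name
    def destroy = records.find { it.type == 'Stage' && it.identifier == 'OnPremDestroy' } // assumed name
    if (deploy?.result == 'succeeded' && destroy?.result != 'succeeded') {
        println "Build ${build.id}: deploy succeeded but destroy was ${destroy?.result ?: 'skipped'}"
    }
}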
We get into a situation where we have several deployments but nobody manually runs the destroy stage, and because this pipeline is shared amongst all developers for dev builds, it's very difficult to find those older builds manually.
If this cannot be achieved, the fallback would be to compile this list of builds and send an email out, but I would prefer less manual intervention here.

Is there a way to execute a specific stage within a running yaml pipeline which uses an environment with approvals?
The answer is yes.
You could use the REST API Runs - Run Pipeline with the request body below, skipping the other stages so that only the stage you want is triggered:
POST https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=6.0-preview.1
Request Body:
{
    "stagesToSkip": ["Dev", "Test"]
}
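A quick sketch of making that call outside Postman (Groovy; the organization, project, pipeline id, and the AZDO_PAT environment variable are placeholders):
import groovy.json.JsonOutput
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

def url  = 'https://dev.azure.com/myorg/myproject/_apis/pipelines/42/runs?api-version=6.0-preview.1'
def auth = 'Basic ' + (':' + System.getenv('AZDO_PAT')).bytes.encodeBase64()
// Skip every stage except the one you want to run.
def body = JsonOutput.toJson([stagesToSkip: ['Dev', 'Test']])

def request = HttpRequest.newBuilder(URI.create(url))
    .header('Authorization', auth)
    .header('Content-Type', 'application/json')
    .POST(HttpRequest.BodyPublishers.ofString(body))
    .build()
def response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
println "${response.statusCode()}: ${response.body()}"
Note that this queues a brand-new run of the pipeline rather than re-running a stage of an existing build; to retry a stage in a past build, see the Stages - Update answer below.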

You can use the Stages - Update REST API method. This is part of the Build resource methods but works fine for YAML Pipelines as well.
PATCH https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/stages/{stageRefName}?api-version=7.1-preview.1
It sounds like you're already getting the buildId programmatically. The stageRefName is the name of the stage as defined in your YAML. Your URI will look something like:
https://dev.azure.com/myorg/myproject/_apis/build/builds/1234/stages/DestroyStageName?api-version=7.1-preview.1
In the request body you'll need the following, where a state of "retry" re-runs the stage:
{
    "forceRetryAllJobs": false,
    "state": "retry"
}
forceRetryAllJobs may be unnecessary. There's an example implementation in PowerShell here.
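In the same spirit as that PowerShell example, here is a hedged Groovy sketch of the call; the build id, stage name, and the AZDO_PAT variable are placeholders, and java.net.http is used because HttpURLConnection cannot send PATCH:
import groovy.json.JsonOutput
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

def url  = 'https://dev.azure.com/myorg/myproject/_apis/build/builds/1234/stages/DestroyStageName?api-version=7.1-preview.1'
def auth = 'Basic ' + (':' + System.getenv('AZDO_PAT')).bytes.encodeBase64()
// forceRetryAllJobs: false retries only the jobs that need it.
def body = JsonOutput.toJson([forceRetryAllJobs: false, state: 'retry'])

def request = HttpRequest.newBuilder(URI.create(url))
    .header('Authorization', auth)
    .header('Content-Type', 'application/json')
    .method('PATCH', HttpRequest.BodyPublishers.ofString(body))
    .build()
def response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
println response.statusCode()   // a 2xx status means the retry was accepted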
If you're struggling to identify the appropriate API method to replicate something you do in the Azure DevOps GUI, opening your browser's developer tools and inspecting the requests in the network tab can often help you identify the call being used.

Related

Is there any way to link two different pipelines using a Jenkinsfile?

The developer team needs one pipeline to be allowed to start only if another related pipeline has completed, and they need to see both pipelines on the same page.
It is something like this: a set of stages completes, and the next stages start when the project manager starts the pipeline manually.
Simply put, they need to visualize both pipelines on a single page, like the picture below:
(https://puppet.com/sites/default/files/2016-09/puppet_continuous_diagram.gif)
You can use the step build job: '<first_pipeline>', parameters: [...] in, say, stage 1 of your second pipeline as an upstream job. Then define the steps of your second pipeline from stage 2 onwards. This makes sure the first pipeline is always built when you trigger the second, and it also works with the Delivery Pipeline view for single-page visualization.
Or, if you just want to check whether the first pipeline has completed without actually triggering it, query the API <jenkins_url>/job/<first_pipeline>/lastBuild/api/json in stage 1 of your second pipeline with a while loop that waits until the build has completed.
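A sketch of what that second pipeline could look like (scripted syntax; 'first_pipeline' and the parameter are placeholders):
import groovy.json.JsonSlurper

node {
    stage('Upstream') {
        // Option A: trigger the first pipeline and wait for it to finish.
        build job: 'first_pipeline', parameters: [string(name: 'ENV', value: 'dev')]

        // Option B: only poll its last build, without triggering it. This
        // needs script approval for URL/JsonSlurper in a sandboxed pipeline.
        // waitUntil {
        //     def json = new JsonSlurper().parseText(
        //         new URL("${env.JENKINS_URL}job/first_pipeline/lastBuild/api/json").text)
        //     json.result != null   // result stays null while the build runs
        // }
    }
    stage('Deploy') {
        echo 'Second pipeline work starts from here.'
    }
}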
I haven't heard of a way to link them, but you could write the steps in the same pipeline.
You can work with input steps to ask the project manager to confirm. Note that this holds an entire build executor until confirmed (or until a timeout you set expires).
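For example, a minimal sketch of that pattern, with a timeout so the executor is eventually released (the stage and submitter names are made up):
stage('Approval') {
    // Pause until the project manager confirms, or give up after two days.
    timeout(time: 2, unit: 'DAYS') {
        input message: 'Start the next set of stages?', submitter: 'project-manager'
    }
}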
You might also want to have a look at conditional steps.

Storing Jenkins pipeline job metadata?

Is there a way to store some metadata from a Jenkins pipeline job? E.g.:
We have a Jenkinsfile which builds a Gradle project, creates a Docker image, and pushes it to Google Cloud
Then a "subjob" is launched which runs integration tests (IT) on that Docker image. The subjob receives a couple of parameters (one of them the generated Docker image name)
Now sometimes that IT job fails, and I would like to re-run it from the main job view, so ideally:
we have a plugin which renders a custom button in the Blue Ocean UI on the main job
by clicking that button the subjob is invoked again with the same parameters (the plugin queries the Jenkins API, gets the params of this job, and resubmits the subjob).
The problem? How to get/set those parameters. I could not find a mechanism for that, except artifact storage. I could get away with creating a simple JSON/text file and uploading it as an artifact, then retrieving it in my plugin, but maybe there is a better way?
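For reference, a sketch of that artifact workaround (writeJSON and readJSON come from the Pipeline Utility Steps plugin; the file and parameter names are made up):
// In the main job, after triggering the subjob:
writeJSON file: 'it-params.json', json: [dockerImage: imageName, branch: env.BRANCH_NAME]
archiveArtifacts artifacts: 'it-params.json'

// Later, to re-run: fetch the archived file (e.g. via copyArtifacts or the
// build's /artifact/ URL), then:
// def params = readJSON file: 'it-params.json'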
Stage restart is not coming to Scripted Pipelines, so that does not look like an option.
Maybe you can use the Jenkins API to get the details of the build?
https://your_jenkins_url.com/job/job_name/lastBuild/api/json?pretty=true
Instead of lastBuild you can also use the build number or one of lastStableBuild, lastSuccessfulBuild, lastFailedBuild, lastUnstableBuild, lastUnsuccessfulBuild, lastCompletedBuild
There is a parameters key there with all parameter names and values used in the build.
More details on https://your_jenkins_url.com/job/job_name/api/
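A sketch of pulling those parameters out with Groovy (the job name and URL are placeholders; authentication is omitted):
import groovy.json.JsonSlurper

def api  = 'https://your_jenkins_url.com/job/job_name/lastBuild/api/json'
def json = new JsonSlurper().parseText(new URL(api).text)

// Parameters are exposed as an action of class hudson.model.ParametersAction.
def params = json.actions.find { it._class == 'hudson.model.ParametersAction' }?.parameters
params?.each { println "${it.name} = ${it.value}" }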
Also, any reason you can't use the replay button in the IT job?

How to pass JOB parameters using Jenkins HTTP POST plugin

I want to pass the build name and the build ID to a REST API using the Jenkins HTTP POST plugin.
How do I pass the parameters to it?
I am passing:
http://localhost:55223/api/Demo?BuildName=${JOB_NAME}&BuildID=${BUILD_ID}
I am receiving an error
It looks like the plugin does not expand environment variables, as evidenced by its source code. As the plugin has not been updated in two years, I don't think there's much chance of the developer adding this anytime soon. If you still want to use the plugin, you could make the changes necessary to expand the environment variables and then build it from source. For that, I would recommend looking at the Jenkins classes hudson.EnvVars and hudson.model.Run. More specifically, the Run method getEnvironment(TaskListener listener) and the EnvVars method expand(String s).
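If you do patch the plugin, the expansion itself is only a couple of lines; a sketch (the method is hypothetical, but the two Jenkins calls are the ones named above):
import hudson.EnvVars
import hudson.model.Run
import hudson.model.TaskListener

// Expand ${JOB_NAME}, ${BUILD_ID}, etc. in the configured URL before sending.
String expandUrl(Run<?, ?> run, TaskListener listener, String url) {
    EnvVars env = run.getEnvironment(listener)
    return env.expand(url)
}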

Setting Jenkins to email a build notification to the BitBucket user who pushed a branch

A project repository has been successfully connected to a Jenkins server using the BitBucket plugin, and a project set up such that:
Each push to a branch in BitBucket will trigger a webhook sent to the Jenkins server
When the Jenkins server receives the webhook it will build the changed branch (by specifying branch name as ** in the config)
After the build is complete a notification is sent back to BitBucket of the build status using the BitBucket notifier
Each of these has been easy to set up with just the instructions in the plugin and a few quick Googles. However I've now run into a problem which is maybe more a matter of wanting to run in an unconventional manner than anything else.
Using the normal emailer plugin or the Email-ext plugin it's possible to set emails to send to people involved in the creation of a build. For example the Email-ext plugin allows choice of:
Requester
Developers (all people who have commits in the build based off its last version)
Recipient list (a pre-set list)
Various "blame" settings for broken builds
The development process being followed involves each project being worked on by one developer in a named branch, e.g. userA/projectB. Obviously other developers could check that out and push to make changes but that's frowned upon. Even in that instance, the user who pushes the change to BitBucket should be notified.
None of the current settings support this. Requester is the closest, but that only works for manual builds. It seems a very simple requirement that the push to SCM that triggered a build should notify the user who pushed, but this is not documented anywhere that is easy to find.
After a lot of searching, it seems the only way to accomplish this is with a pre-send script. This is added under the Advanced settings of the Email-ext post-build step, and takes the form of Groovy code (Groovy being a JVM language that extends Java).
The script can take advantage of environment variables, but it is hard to test because there's no way to run the script with these in place. You can test simple Groovy scripts from Home -> Manage Jenkins -> Script Console.
One important gotcha with the environment variables is that they are textually substituted into the script rather than bound as variables or constants: before the script compiles and runs, the content of each variable is pasted in place of its $NAME. In the example below, the multi-line string syntax is used to include the BitBucket payload, whereas you might expect def payload = $BITBUCKET_PAYLOAD to simply work.
import javax.mail.internet.InternetAddress
import javax.mail.internet.MimeMessage
import groovy.json.JsonSlurper

// The payload is pasted into the script verbatim, hence the multi-line string
// (see the note above about how environment variables are substituted).
def jsonSlurper = new JsonSlurper()
def bitbucket = jsonSlurper.parseText('''
$BITBUCKET_PAYLOAD''')

// Map the BitBucket username that pushed to the address to notify.
switch (bitbucket.actor.username) {
    case "userA":
        msg.setRecipients(MimeMessage.RecipientType.TO, InternetAddress.parse("user.a@domain.com"))
        break
    case "userB":
        msg.setRecipients(MimeMessage.RecipientType.TO, InternetAddress.parse("user.b@domain.com"))
        break
}
The setRecipients call overwrites any existing recipients, so the recipient list or other email configuration should be set as a fallback in case the user is not recognised. Additionally, if there is nobody to send the email to, the script won't run at all. As added debugging, including the username in the body may help.
If the script fails, stack traces are printed to the console log output of the build, the build's pass/fail status isn't affected, and the normal email address setup is used instead. In the stack traces, look for lines with Script() in them, as that's the container which evaluates the Groovy script.

Jenkins plugin code that should execute before any kind of job is run in Jenkins

I am new to Jenkins plugin development. I'm trying to write a plugin that should execute before any multi-configuration job runs in Jenkins.
In this plugin I want to write rules that check which configuration parameters the user selected when submitting the job; based on the selected parameters, I want to decide whether to allow the job to run or to restrict it.
The user should be shown the reason the job cannot run in the console output.
Does anyone have any ideas which class I need to extend or which interface I need to implement in order to get a hook into Jenkins job run?
You could look at the Matrix Execution Strategy plugin, which allows a Groovy script to select which matrix combinations to run. I would think that if your script threw an exception, it would stop the build.
For background, multi-configuration projects run a control job (or flyweight) which runs the SCM phase and then starts all the actual combinations. This plugin runs after the flyweight SCM checkout.
If nothing else, this will give you a working plugin to start from.
Disclaimer: I wrote this plugin.
The Blocked Queue Job plugin was what I needed.
Out of the box, that plugin supports two ways to block jobs:
Based on the result of the last run of another project.
Based on the result of the last run of the current project.
In that plugin, BlockQueueItemTaskDispatcher.java extends Jenkins' QueueTaskDispatcher, providing a hook into Jenkins' logic to allow or block queued jobs from running.
I used this plugin as a starting point for developing a new plugin that allows us to restrict projects based on the selected parameters and the current time. The ultimate goal is to stop production migrations from running during the day.
Overriding the isBlocked() method of QueueTaskDispatcher gave me access to a hudson.model.Queue.Item instance as an argument. I then used the Item instance's getParams method to read the build parameters the user selected at runtime and parsed the lifecycle value from them. If the lifecycle was Production and the current time was during the day, I restricted the job by returning a non-null CauseOfBlockage from isBlocked(); otherwise I returned a null CauseOfBlockage, allowing the queued job to run.
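A stripped-down sketch of that idea, written against the canRun hook that QueueTaskDispatcher exposes (the parameter name and time window follow the description above; the class itself and the exact window are illustrative):
import hudson.Extension
import hudson.model.ParametersAction
import hudson.model.Queue
import hudson.model.queue.CauseOfBlockage
import hudson.model.queue.QueueTaskDispatcher
import java.time.LocalTime

@Extension
class ProductionWindowDispatcher extends QueueTaskDispatcher {
    @Override
    CauseOfBlockage canRun(Queue.Item item) {
        def lifecycle = item.getAction(ParametersAction)?.getParameter('lifecycle')?.value
        def hour = LocalTime.now().hour
        if (lifecycle == 'Production' && hour >= 8 && hour < 18) {
            // Returning non-null blocks the queued item and shows this message on it.
            return new CauseOfBlockage() {
                String getShortDescription() {
                    'Production migrations may only run outside working hours'
                }
            }
        }
        return null // null lets the job run
    }
}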
