How to batch releases in TFS?

In TFS 2018 we can enable "Batch changes while a build is in progress," so that if a Git push occurs while a build is running, the second build waits for the first to complete. In this way, we can stop multiple builds from running simultaneously.
However, there doesn't seem to be a similar option for releases.
Given my severely limited bandwidth, a given release can take far longer to complete than the build that triggered it. It's quite possible that this second build, even when queued, will trigger a new release when one is already in progress. I need to queue the entire pipeline until the current release finishes, not just the build.
I've been able to do this with a clunky and brittle series of PowerShell scripts (which is tedious to configure in its current state), but I'd like something a little more solid if possible.
How can I best accomplish this?
Test-PipelineStatus.ps1
$BuildDefinitionName = (Get-Item Env:BUILD_DEFINITIONNAME).Value
$ArtifactsDirectory = (Get-Item Env:BUILD_ARTIFACTSTAGINGDIRECTORY).Value
$SourcesDirectory = (Get-Item Env:BUILD_SOURCESDIRECTORY).Value
$LocatorFilePath = "$ArtifactsDirectory\Locator.txt"
$StatusDirectory = "$SourcesDirectory\Pipeline"
$StatusFilePath = "$StatusDirectory\Status.txt"
Set-Content $LocatorFilePath $StatusFilePath
If ((Test-Path $StatusDirectory) -eq $False) {
    Write-Output "Creating pipeline status directory"
    New-Item $StatusDirectory -ItemType Directory
}
Write-Output "Getting current pipeline status"
If (Test-Path $StatusFilePath) {
    $Status = Get-Content $StatusFilePath
    If ($Status -eq "Stopped") {
        Write-Output "Setting current pipeline status to [Running]"
        Set-Content $StatusFilePath "Running"
    } Else {
        Write-Error "Pipeline [$BuildDefinitionName] is already in progress. Failing this build."
        Exit 1
    }
} Else {
    Write-Output "Setting current pipeline status to [Running]"
    Set-Content $StatusFilePath "Running"
}
Get-StatusFilePath.ps1
$ArtifactsDirectory = (Get-Item Env:SYSTEM_ARTIFACTSDIRECTORY).Value
$ReleaseDefinition = (Get-Item Env:RELEASE_DEFINITIONNAME).Value
$LocatorFilePath = "$ArtifactsDirectory\$ReleaseDefinition\drop\Locator.txt"
$StatusFilePath = Get-Content $LocatorFilePath
Write-Output "Setting variable [StatusFilePath] to [$StatusFilePath]"
Write-Host "##vso[task.setvariable variable=StatusFilePath]$StatusFilePath"
Remove-Item $LocatorFilePath
Set-ReleaseComplete.ps1
[CmdletBinding()]
param(
    [Parameter(Mandatory)][string] $StatusFilePath
)
Write-Output "Marking pipeline as complete"
Set-Content $StatusFilePath -Value "Stopped"

You can accomplish this within the release definition editor, nothing special required. For all of the environments in the release, under the pre-deployment conditions (where you'd set pre-deployment approvals and gates), expand the deployment queue settings and change the number of parallel deployments to 1, with subsequent releases set to deploy latest and cancel the others.
This way, if you're running release 1 and release 2, 3, 4, 5, and 6 get queued up, it will cancel 2-5 and deploy 6 when 1 finishes.

Related

Failed to create Release artifact directory error after cancelling Release and starting new one

This seems to only happen if I cancel a release deployment and then start a new one. It forces me to go into the agents and manually restart them.
The actual error is:
"Failed to create Release artifact directory 'C:\agent\_work\r3\a'. ---> System.IO.IOException: The process cannot access the file '\\?\C:\agent\_work\r3\a' because it is being used by another process."
Is there a way in TFS to clean up any of these potential issues when creating a new release after a cancelled one? If I let it fully run its course, the new release runs fine no problem. This only happens when I cancel and attempt to start a new one.
You can use a utility like handle to write a script that releases locked files or folders.
For example:
$PathToRelease = $env:SYSTEM_DEFAULTWORKINGDIRECTORY
Write-Host "$PathToRelease is locked! Trying to kill the process..."
$processes = path\to\handle64.exe -nobanner -accepteula $PathToRelease
# Remove empty lines
$processes = $processes | Where-Object { $_ -ne "" }
$processes.ForEach({ Write-Host $_ })
if ($processes -notmatch "No matching handles found.")
{
    foreach ($process in $processes)
    {
        # Some excluded processes; decide which ones you want to skip
        if ($process -match "explorer.exe" -or $process -match "powershell.exe")
        {
            continue
        }
        # Parse the PID out of the handle.exe output line
        $pidNumber = $process.Substring(($process.IndexOf("pid") + 5), 6)
        $isProcessStillAlive = Get-Process | Where-Object { $_.Id -eq $pidNumber }
        if ($Null -ne $isProcessStillAlive)
        {
            Stop-Process -Id $pidNumber -Force
            Start-Sleep -Seconds 1
        }
    }
}
else
{
    exit 0
}
Configure the script to run even if the release is canceled.

jenkins complicated buildflow, is it possible?

I would like to have a Jenkins build flow that looks like this.
After the build is triggered, all slaves run the same job in parallel (a setup job).
If any slave fails this job, it should not continue on.
All the slaves that pass that job should then grab a job out of a pool of jobs that need to be completed, and once a slave completes a job it should go back and take another job from the pool.
I only started working with Jenkins a few weeks ago, and the way I have it set up now, each job picked up by a slave has to run the setup job first. This really slows down build times because I have about 30 jobs and the setup takes ~2 minutes.
I am using Jenkins as an automated testing platform and all the jobs in the job pool can run independently of each other. I have 5 slaves currently and ~30 jobs.
The following should do the trick:
def jobPool = new ArrayDeque()
jobPool.add({
    echo "Doing stuff on ${env.NODE_NAME}"
});
jobPool.add({
    echo "Doing other stuff on ${env.NODE_NAME}, a little slower"
    sleep 4
});
jobPool.add({
    echo "Doing more stuff on ${env.NODE_NAME}, even slower"
    sleep 10
});
jobPool.add({
    echo "Doing stuff quick on ${env.NODE_NAME}"
});
jobPool.add({
    echo "Doing stuff quicker on ${env.NODE_NAME}"
});
def par = [:]
for (x in ["master", "urban"]) {
    def nodeName = x; // needed due to variable scoping
    par[nodeName] = {
        node (nodeName) {
            try {
                echo "Doing setup on ${env.NODE_NAME}!"
                // Do your setup
                echo "Done with setup"
            } catch (Exception e) {
                echo "Will not use this node as it failed setup!"
                return;
            }
            while (true) {
                // echo "${jobPool.size()}"
                def subTask = jobPool.poll()
                //echo "${jobPool.size()} ${subTask}"
                if (subTask == null) {
                    break;
                }
                // You might want a try/catch around the next line if you want to continue when a job fails
                subTask()
            }
        }
    }
}
parallel par
if (!jobPool.isEmpty()) {
    error "Not all tasks were done!"
}
Simply add your "job pool jobs" to the jobPool variable and modify the setup part.
It seems like you want separate stages in the same job. This is made much easier in Jenkins 2's pipelines. There are some pictures here:
https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Stage+View+Plugin
The Groovy code ends up looking like this:
node {
    stage 'Checkout'
    svn 'https://svn.mycorp/trunk/'
    stage 'Build'
    sh 'make all'
    stage 'Test'
    sh 'make test'
}
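For comparison, and purely as a sketch (the answer above targets the early scripted syntax, and this has not been tested against your setup), the same three stages in today's declarative syntax would look roughly like this:
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { svn 'https://svn.mycorp/trunk/' }
        }
        stage('Build') {
            steps { sh 'make all' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
    }
}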

Do a manual abort in Jenkins job

My original requirement is to check disk space and, if there is enough, proceed with the job; otherwise abort it. I am not failing the job, because a separate mail and process is already attached to job failures. Thus I want something like:
if [ not enough space ]
then
... abort the job ....
fi
How can I abort the job from a shell script? If there is a better option, please share it.
Put this script in the pre-build step (add a shell build step before your job's build step):
# Max allowed usage, maybe the whole disk
LIMIT=<put limit>
# You may replace the "." with your workspace dir
USED=`df . | awk '{print $5}' | sed -ne 2p | cut -d"%" -f1`
# If used space is bigger than LIMIT
if [ $USED -gt $LIMIT ]
then
    # This ends the build and reports failure
    exit 1
fi
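If the job is (or can become) a pipeline job, a similar check can mark the build as aborted rather than failed, which is closer to what the question asks for. A minimal sketch, assuming GNU df on the agent and a 90% limit:
node {
    // Read the used-space percentage of the workspace's filesystem
    def usedPct = sh(script: "df --output=pcent . | tail -1 | tr -d ' %'", returnStdout: true).trim().toInteger()
    if (usedPct > 90) {
        // Mark the build as aborted instead of failed, then stop it
        currentBuild.result = 'ABORTED'
        error "Not enough disk space (${usedPct}% used); aborting."
    }
}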

Jenkins - Stop concurrent job with same parameter

I have a Jenkins job for a db rollback script that uses a choice parameter for each environment (using NodeLabel Parameter Plugin).
I want the jobs to able to be run concurrently, but only for different environments.
"Execute concurrent builds if necessary" is enabled.
E.g. If the job is running for LIVE, allow someone to run the job again for TEST (this works). However, if LIVE is already running and someone runs the job for LIVE again, then do not run.
This plugin seems to suit my needs but is not shown on the list of available plugins in Manage Jenkins.
https://wiki.jenkins-ci.org/display/JENKINS/Concurrent+Run+Blocker+Plugin
Are there any other ways around this?
There's a solution with existing Jenkins plugins:
Create a Freestyle project named something like Starter for concurrent builds exclusively on nodes.
☑ This build is parameterized
    Node [NodeLabel Parameter Plugin]
        Name: NODE
    Choice Parameter
        Name: JOB
        Choices: ... the jobs' names you'd like to start with this ...
Build
    Conditional step (single) [Conditional BuildStep Plugin]
        Run?: Not
        !: Execute Shell
            Command:
#!/bin/bash +x -e
# Bash 4 needed for associative arrays
# From http://stackoverflow.com/questions/37678188/jenkins-stop-concurrent-job-with-same-parameter
echo ' Build --> Conditional step (single) --> Execute Shell'
echo " Checking whether job '$JOB' runs on node '$NODE'"
echo ' Creating array'
declare -A computers
# ------------------------------------------------------------------------
# Declare your nodes and their executors here as mentioned, for instance,
# in the API URI 'http://<jenkins>/computer/(master)/executors/0/api/xml':
computers=( # ^^^^^^ ^
[master]="0 1 2 3"
[slave]="0 1"
)
# Note: Executor indices DO NOT conform to the numbers in Jenkins'
# Build Executor Status UI.
# ------------------------------------------------------------------------
echo " Checking executors of node '$NODE'"
for computer in ${!computers[@]} ; do
    for executorIdx in ${computers[$computer]} ; do
        if [[ $computer == $NODE ]] ; then
            if [[ "$computer" == "master" ]] ; then
                node="(${computer})"
            else
                node=$computer
            fi
            url="${JENKINS_URL}/computer/${node}/executors/${executorIdx}/api/xml?tree=currentExecutable\[url\]"
            echo "    $url"
            xml=$(curl -s $url)
            #echo $computer, $executorIdx, $xml
            if [[ "$xml" == *"/job/${JOB}"* ]] ; then
                echo "    Job '$JOB' is already building on '$computer' executor index '$executorIdx'"
                echo '    Exiting with 1'
                exit 1
            fi
        fi
    done
done
echo ' Exiting with 0'
    Builder: Set the build result
        Result: Aborted
Conditional step (single)
    Run?: Current build status
    Builder: Trigger/call build on other projects
        Build Triggers:
            Projects to build: $JOB [ignore the error message]
            Node Label parameter
                Name: NODE [or whatever you call it in your downstream job(s)]
                Node: $NODE
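If the job can be expressed as a pipeline instead, another option (not part of the answer above) is the Lockable Resources plugin: serialize only the builds that target the same environment by locking a per-environment resource name. A rough sketch, assuming a choice parameter named ENVIRONMENT:
node {
    // Builds requesting the same environment queue behind this lock;
    // builds for other environments run concurrently.
    lock(resource: "db-rollback-${params.ENVIRONMENT}") {
        sh './rollback.sh'   // placeholder for the actual rollback step
    }
}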

How to get the BUILD_USER in Jenkins when job triggered by timer?

I wanted to show the user who triggered a Jenkins job in the post-job email. This is possible by using the Build User Vars Plugin and the env variable BUILD_USER.
But this variable does not get initialized when the job is triggered by a scheduler.
How can we achieve this? I know there is a plugin called EnvInject Plugin that can be used...
But I just want to know how we can use it to achieve a solution...
Build user vars plugin wasn't working for me so I did a quick-and-dirty hack:
BUILD_CAUSE_JSON=$(curl --silent ${BUILD_URL}/api/json | tr "{}" "\n" | grep "Started by")
BUILD_USER_ID=$(echo $BUILD_CAUSE_JSON | tr "," "\n" | grep "userId" | awk -F\" '{print $4}')
BUILD_USER_NAME=$(echo $BUILD_CAUSE_JSON | tr "," "\n" | grep "userName" | awk -F\" '{print $4}')
SIMPLE SOLUTIONS (NO PLUGINS) !!
METHOD 1: Via Shell
BUILD_TRIGGER_BY=$(curl -k --silent ${BUILD_URL}/api/xml | tr '<' '\n' | egrep '^userId>|^userName>' | sed 's/.*>//g' | sed -e '1s/$/ \//g' | tr '\n' ' ')
echo "BUILD_TRIGGER_BY: ${BUILD_TRIGGER_BY}"
METHOD 2: Via Groovy
node('master') {
BUILD_TRIGGER_BY = sh ( script: "BUILD_BY=\$(curl -k --silent ${BUILD_URL}/api/xml | tr '<' '\n' | egrep '^userId>|^userName>' | sed 's/.*>//g' | sed -e '1s/\$/ \\/ /g'); if [[ -z \${BUILD_BY} ]]; then BUILD_BY=\$(curl -k --silent ${BUILD_URL}/api/xml | tr '<' '\n' | grep '^shortDescription>' | sed 's/.*user //g;s/.*by //g'); fi; echo \${BUILD_BY}", returnStdout: true ).trim()
echo "BUILD_TRIGGER_BY: ${BUILD_TRIGGER_BY}"
}
METHOD 3: Via Groovy
BUILD_TRIGGER_BY = "${currentBuild.getBuildCauses()[0].shortDescription} / ${currentBuild.getBuildCauses()[0].userId}"
echo "BUILD_TRIGGER_BY: ${BUILD_TRIGGER_BY}"
OUTPUT:
Started by user Admin / user@example.com
Note: Output will be both User ID and User Name
This can be done using the Jenkins Build User Vars Plugin which exposes a set of environment variables, including the user who started the build.
It gives environment variables like BUILD_USER_ID, EMAIL, etc.
When the build is triggered manually by a logged-in user, that user's userid is available in the BUILD_USER_ID environment variable.
However, this environment variable won't be replaced / initialized when the build is automatically triggered by a Jenkins timer / scheduler.
This can be resolved by adding a condition to the job using the Conditional BuildStep Plugin / Run Condition Plugin: initialize the variable BUILD_USER_ID only when the build is caused or triggered by the timer / scheduler, by matching the build cause with a regular expression.
Without Plugin ->
def cause = currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')
echo "userName: ${cause.userName}"
Install the 'Build User Vars Plugin' and use it like below (see https://plugins.jenkins.io/build-user-vars-plugin):
Be sure to check the Set jenkins user build variables checkbox under Build Environment in your Jenkins job's configuration.
I found a similar way that actually works on Jenkins 2.1.x, is easy to understand, and works without any plugins.
if (currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')['userId']) {
    // Runs only if a user triggered the build,
    // because in other cases this construction returns null
}
You can use any of the cause classes described here in this construction.
They return maps with usable values.
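Building on that idea (this is only a sketch, not from the original answers), you can combine the user-cause check with a timer-cause check so a notification always has something sensible to report; the helper name is illustrative:
def getTriggerInfo() {
    // Started manually by a user?
    def userCauses = currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')
    if (userCauses) {
        return userCauses[0].userId
    }
    // Started by the scheduler?
    def timerCauses = currentBuild.getBuildCauses('hudson.triggers.TimerTrigger$TimerTriggerCause')
    if (timerCauses) {
        return 'timer'
    }
    // Fall back to whatever the first cause says (SCM trigger, upstream build, ...)
    return currentBuild.getBuildCauses()[0]?.shortDescription ?: 'unknown'
}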
This gets the username who clicked "Build Now" in a Jenkins pipeline job.
@NonCPS
def getBuildUser() {
return currentBuild.rawBuild.getCause(Cause.UserIdCause).getUserId()
}
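A usage sketch; note that accessing rawBuild usually requires script-security approval in a sandboxed pipeline, and on a timer-triggered build there is no UserIdCause, so the call below throws a NullPointerException unless you guard it:
node {
    // Only safe when the build was actually started by a user
    echo "Build started by: ${getBuildUser()}"
}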
I'm using a combination of the 'Execute Shell' and 'Env Inject' plugin as follows:
Create an 'Execute Shell' build step that uses shell parameter substitution to default the value and echo that value into a file.
Use the 'Env Inject' step to read that file as properties to set.
The token $BUILD_CAUSE from the email-ext plugin is what you are looking for.
You can see the full content token reference when you click the ? just after the Attach build log combobox at the email content configuration.
Some tokens get added by plugins, but this one should be available by default.
Edit: As pointed out by bishop in the comments, when using the EnvInject plugin, the $BUILD_CAUSE token gets changed to behave differently.
I have written a Groovy script to extract the "started by" cause; it correctly identifies the source, whether user, SCM, or timer (more could be added). It recursively navigates the build tree to get the "original" started-by cause: https://github.com/Me-ion/jenkins_build_trigger_cause_extractor
I wanted to send build-initiator info to one of my Slack/Flock groups, so I used the following way to get the build initiator's email and name, written in declarative fashion.
I am just printing them here; you can store them in an environment variable or write them to a file at a path of your choosing, whichever is more convenient.
pipeline {
    agent any
    environment {
        BRANCH_NAME = "${env.BRANCH_NAME}"
    }
    stages {
        stage('Build-Initiator-Info') {
            steps {
                sh 'echo $(git show -s --pretty=%ae)'
                sh 'echo $(git show -s --pretty=%an)'
            }
        }
    }
}
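If you want to reuse those values later (for the Slack/Flock message, say), one way is to capture them inside a script block instead of just echoing them; a sketch with illustrative variable names:
stage('Build-Initiator-Info') {
    steps {
        script {
            // Illustrative names: store the commit author's email and name for later steps
            env.COMMIT_AUTHOR_EMAIL = sh(script: 'git show -s --pretty=%ae', returnStdout: true).trim()
            env.COMMIT_AUTHOR_NAME  = sh(script: 'git show -s --pretty=%an', returnStdout: true).trim()
        }
    }
}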
Just to elaborate on Musaffir Lp's answer. The Conditional Build Step plugin now supports the Build Cause directly - it requires the Run Condition Plugin also.
If you wanted to detect when the build was started by a timer you can select a Run? value of Build Cause, with Build Cause of: TimerTrigger
This is a little simpler and more robust than using a regex. There are also other triggers you can detect, for example when the build was a result of Source Control Management commit, you can select: SCMTrigger.
The following works for me.
Install the 'Build User Vars Plugin'.
Build Name = ${BUILD_NUMBER}_${TICKET}_${ENV,var="BUILD_USER_ID"}
I created a function that returns the triggering (upstream) job name:
String getTriggeredJob(CURRENT_BUILD) {
    if (CURRENT_BUILD.upstreamBuilds.size() > 0) {
        TRIGGERED_JOB = CURRENT_BUILD.upstreamBuilds[0].projectName
        if (!TRIGGERED_JOB.isEmpty()) {
            return TRIGGERED_JOB
        }
    }
    return "Self"
}
CURRENT_BUILD is the currentBuild global variable.
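Called from a pipeline it might look like this (usage sketch only):
echo "Triggered by job: ${getTriggeredJob(currentBuild)}"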
How to return Username & UserId:
UserName: currentBuild.rawBuild.getCause(Cause.UserIdCause).getUserName()
UserId: currentBuild.rawBuild.getCause(Cause.UserIdCause).getUserId()
There is another way to get a user id, where you don't need to install anything.
BUILD_USER_ID = sh (
    script: 'id -u',
    returnStdout: true
).trim()
echo "Build user: ${BUILD_USER_ID}"
For declarative pipeline syntax, here is a quick hack, based on @Kevin's answer.
For a declarative pipeline you need to enclose this in a node block, or you will get an error / build failure.
node {
    // buildURL is assumed to point at this build's .../api/json endpoint
    def BUILD_FULL = sh (
        script: 'curl --silent '+buildURL+' | tr "{}" "\\n" | grep -Po \'"shortDescription":.*?[^\\\\]"\' | cut -d ":" -f2',
        returnStdout: true
    )
    slackSend channel: '#ci-cd',
        color: '#000000',
        message: "The pipeline was ${BUILD_FULL} ${GIT_COMMIT_MSG} "
}
The output is a Slack notification sent to your Slack channel containing the build cause's short description and the commit message.
