TFS REST API - queue build on same machine - tfs

I am trying to see if it is possible to queue a build on the same VM as another build. Note, I am on the vNext system. I know it is possible to configure information about the agent queues, but I want to get a little more specific than that. I'm not sure if what I want falls outside of build best practices, but I'd like to ask anyway.
Consider the following setup for builds A & B:
There are 4 agents: 2 on server 1, 2 on server 2. Any agent can pick up build A or build B. I know how to set up the demands so that only agents on those VMs can pick up these builds.
Build A uses the REST API to queue build B and waits for it to complete. Right now, under the current configuration, the spawned build B can be picked up by any of the 3 remaining agents (the free agent on the same machine as build A, or either of the agents on the other server).
I want build B to run on the same server as build A when build A launches build B. In other words, if an agent on server 1 picks up build A, I want the build B it launches to use the same server (using the other agent on the machine). Conversely, if an agent on server 2 picks up build A, I want the build B it launches to be picked up by server 2 as well. Assuming I know things like the agent ID, agent machine name, etc., I wanted to use the REST API in C# to simulate launching build B and try to control which machine picks it up.
I could configure the agents so that both builds only run on 1 machine, but I am trying to avoid running too many agents on one machine. I could configure the above 4 agents to run on 1 machine, but I want to distribute the agents as much as possible. I know it is possible to specify information about agent queues / pools etc., but not the machine. I am trying to avoid having to restrict the number of machines that run a set of agents.
I know this may seem a bit unusual ... but I am dealing with a scenario where I want to share information between the builds that will be simplified if they run on the same machine. If build B is launched by itself, it doesn't matter what machine picks it up. If this isn't possible, I will try other ways to share information between the builds.
Is it possible to do this?

Yes, you can achieve that by setting the demands via the REST API when queuing the build.
For example, you can use the PowerShell script below to queue a build on a specific agent:
Param(
    [string]$collectionurl = "http://server:8080/tfs/DefaultCollection",
    [string]$projectName = "ProjectName",
    [string]$keepForever = "true",
    [string]$BuildDefinitionId = "1",
    [string]$user = "username",
    [string]$token = "password"
)

# Base64-encodes the Personal Access Token (PAT) appropriately
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user, $token)))

function CreateJsonBody
{
    $value = @"
{
  "definition": {
    "id": $BuildDefinitionId
  },
  "sourceBranch": "$/xxxx",
  "demands": ["Agent.Name -equals Agent1"]
}
"@
    return $value
}

$json = CreateJsonBody

$uri = "$($collectionurl)/$($projectName)/_apis/build/builds?api-version=2.0"
$result = Invoke-RestMethod -Uri $uri -Method Post -Body $json -ContentType "application/json" -Headers @{Authorization = ("Basic {0}" -f $base64AuthInfo)}
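Saved as, say, QueueBuild.ps1 (the file name is only for illustration), the script could be invoked like this to queue definition 2 with your own account and PAT:
.\QueueBuild.ps1 -collectionurl "http://server:8080/tfs/DefaultCollection" -projectName "ProjectName" -BuildDefinitionId "2" -user "username" -token "MyPAT"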
In your scenario (assuming Agent1, Agent2, Agent3, Agent4 are the agent names, which you can get from the agents' capabilities):
Server1 : Agent1, Agent2
Server2 : Agent3, Agent4
If build A is queued on Agent1, then you can set "demands":["Agent.Name -equals Agent2"] to queue build B.
Likewise, if build A is queued on Agent3, then you can set "demands":["Agent.Name -equals Agent4"] to queue build B.
You can also create a simple PowerShell script and add a PowerShell step at the end of definition A, so that the script queues build B on the paired agent once build A's own steps have completed.
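A minimal sketch of such a step, assuming the Agent1/Agent2 and Agent3/Agent4 pairing above; it reuses $collectionurl, $projectName and $base64AuthInfo from the earlier script, and $buildDefinitionIdOfB (the definition id of build B) is a hypothetical value you would fill in. The running agent's name is exposed to the step as the AGENT_NAME environment variable:
# Pick the "other" agent on the same server (assumed pairing: Agent1/Agent2 on server 1, Agent3/Agent4 on server 2).
$pairedAgent = switch ($env:AGENT_NAME) {
    "Agent1" { "Agent2" }
    "Agent2" { "Agent1" }
    "Agent3" { "Agent4" }
    "Agent4" { "Agent3" }
}

# Queue build B with a demand that pins it to the paired agent.
$body = @{
    definition = @{ id = $buildDefinitionIdOfB }          # hypothetical: definition id of build B
    demands    = @("Agent.Name -equals $pairedAgent")
} | ConvertTo-Json

$uri = "$($collectionurl)/$($projectName)/_apis/build/builds?api-version=2.0"
$queued = Invoke-RestMethod -Uri $uri -Method Post -Body $body -ContentType "application/json" -Headers @{Authorization = ("Basic {0}" -f $base64AuthInfo)}
If an agent outside this set happens to pick up build A, $pairedAgent will be empty, so you may want to fall back to queuing build B without the extra demand in that case.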
If you want to queue the build with the REST API in C#, you can reference the thread below:
Queue vNext build from c# using REST API (Tfs on premise)
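Since the question also says build A waits for build B to complete, here is a rough polling sketch (again reusing the variables from the scripts above, including the $queued response from the sketch earlier; the id, status and result fields are part of the Build REST response):
# Poll the queued build until it completes, then report its result.
$buildUri = "$($collectionurl)/$($projectName)/_apis/build/builds/$($queued.id)?api-version=2.0"
do {
    Start-Sleep -Seconds 30
    $buildB = Invoke-RestMethod -Uri $buildUri -Method Get -Headers @{Authorization = ("Basic {0}" -f $base64AuthInfo)}
} while ($buildB.status -ne "completed")
Write-Output "Build B finished with result: $($buildB.result)"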

Related

How to set up concurrent builds in Azure DevOps service - tfs

I need to build the code one pipeline after another in TFS. When the first build pipeline completes, the second build pipeline should be triggered automatically.
If you are using Azure DevOps service:
You could simply chain related builds together using build completion triggers.
Add a build completion trigger to run your build upon the successful completion of the triggering build. You can select any other build in the same project.
After you add a build completion trigger, select the triggering build. If the triggering build is sourced from a Git repo, you can also specify branch filters. If you want to use wildcard characters, then type the branch specification (for example, features/modules/*) and then press Enter.
Source Link
If you are using on-premises TFS and your TFS version does not support build completion triggers:
There are two ways to run another build in your current build.
Option 1: add a PowerShell task to your current build definition to queue the other build via the REST API.
Assuming the other build definition's id is 5, you can add a PowerShell task with this script:
$body = @{
    definition = @{
        id = 5
    }
}

$Uri = "http://account.visualstudio.com/DefaultCollection/project/_apis/build/builds?api-version=2.0"
$buildresponse = Invoke-RestMethod -Method Post -UseDefaultCredentials -ContentType application/json -Uri $Uri -Body (ConvertTo-Json $body)
Option 2: install a related extension from the Marketplace.
There are some extensions you can install for your VSTS account that add a task to queue another build, such as Queue Build(s) Task, Trigger New Build, Queue New Build, etc.

Establish relationship between two Jenkins jobs available on different Jenkins servers

I am building a Jenkins job for Test / QA automation scripts; let's name it TEST_JOB. For the application, I have a Jenkins build of the application source code; name it DEV_JOB.
My scenario is: when DEV_JOB completes execution (successfully), execute TEST_JOB immediately. I am aware of setting up upstream / downstream projects [ Build after other projects are built ] to accomplish this task. But here the problem is that DEV_JOB is on a different server than TEST_JOB, due to which TEST_JOB fails to recognize DEV_JOB.
Now, how would I achieve this scenario?
You can use the Jenkins API for remote triggering of a job.
Say you have DEV_JOB on JENKINS_1; add a penultimate step (or an upstream/downstream project having only this step) which invokes TEST_JOB using a remote API call to the JENKINS_2 server.
An example command would be:
$(curl --user "username:password" "http://JENKINS_2/job/TEST_JOB/buildWithParameters?SOMEPARAMETER=$SOMEPARAMETER")
username:password is a valid user on JENKINS_2.
Avoid using your own account here; use instead a 'build trigger' account that only has permission to start those jobs.

Jenkins - make agents wait for other agent to finish

I'm new to Jenkins and I'm trying to set up a project which will use a few build executors.
The flow shall be as follows:
two build executors with webservice label return their IP addresses and wait for the third build executor to finish its job
third build executor with tester label collects those IP addresses and performs some long running job (e.x. sends HTTP requests to the webservices deployed on those two agents)
How to achieve such behavior in Jenkins?
I've found that when a build executor finishes its job it is immediately released, and I don't know how to make it wait for the other build executors to finish their jobs.
Edit:
I forgot to mention that I want the build executors with the webservice label to be reserved (not available for other jobs) until the build executor with the tester label finishes its long-running job.
Also, each of these build executors should be on a separate slave. That means each slave has only one build executor.
I've finally managed to do this using a Pipeline and the script below:
node('webservice') {
    def firstHostname = getHostname()
    node('webservice') {
        def secondHostname = getHostname()
        node('tester') {
            println 'Running tests against ' + firstHostname + ' and ' + secondHostname
            // ...
        }
    }
}

def getHostname() {
    sh 'hostname > output'
    readFile('output').trim()
}
It acquires two build executors with the webservice label. I get their hostnames (I use them instead of the IP addresses) and pass them to the build executor with the tester label. Finally, the tester runs some long-running tests.
Those two webservice build executors are blocked until the tester finishes its job, and no other project may use them during that time.
As Alex O mentioned, you can configure the upstream/downstream relationship between the projects/executors inside Jenkins. There is an option for that: "Build Triggers" -> "Build after other projects are built",
or use a plugin to achieve it:
https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin
or
https://wiki.jenkins-ci.org/display/JENKINS/Join+Plugin
What you actually want is probably that your job uses three slaves at the same time.
Re-thinking the setup in that way, it won't be necessary to consider the collection of IPs and the subsequent usage of the slaves as three different steps that must be aligned in some way.
Unfortunately, Jenkins does not support using multiple slaves for one build out of the box, but it is possible to achieve what you want, e.g. using the Multijob plugin and the Join plugin that Aaron mentioned already.
See also this question for information on how to use two slaves at the same time.

Jenkins - Upstream project dependency issue

Here is something I want to achieve:
I have a Jenkins project which has 4 upstream projects. I don't want to trigger this project when the upstream jobs are done building; instead I want to trigger the project via the remote API and have it wait on the upstream projects until they are done building, if those projects are building.
Let's say all 4 upstream projects can build the source code from any branch passed via the API, but I want the downstream project to start only when a specific branch is passed to these upstream projects.
Scenario:
Let's say I have two clusters, A and B. For the sake of this question, I want to deploy my code to cluster A, i.e. the front-end and back-end code. I have one project to build the front end and one project to build the backend (these two projects can build code for cluster A or B, based on the branch passed). I also have two deploy projects for cluster A, which deploy the front end and the backend. So when I pass a branch to build code for cluster A, it triggers the build projects. But I only want the two deploy projects to start when this specific branch was passed.
If you want to control the builds remotely then use the Jenkins CLI - I have found it very useful: http://jenkinshost:8080/cli
You need to get the SSH key config right: add the public key of the user running the CLI to the user you want to run the job as in Jenkins, using the Jenkins user configuration (not on the command line).
Test the key setup with:
java -jar jenkins-cli.jar -s http://jenkinshost:8080 who-am-i
This should then report which user will be used to run the build in Jenkins.
But I think you can use the Conditional Build Step plugin for your problem
https://wiki.jenkins-ci.org/display/JENKINS/Conditional+BuildStep+Plugin
This will allow you to put a conditional wrapper around a build step i.e.
if branch==branchA then
trigger step - deploy to clusterA
if branch==branchB then
trigger step - deploy to clusterB
Personally I find this plugin a bit clunky and it makes the job config page a little messy
Another solution I came up with was to always call the child job and then let it decide if it runs.
So I have a script step at the start of the child job to see if it should run
if [ "${branch}" = "Not the right branch name" ]; then
    echo "EXIT_GREEN"
    exit 1
fi
You have now failed this job, which would cause the parent job to go red, but by using the Groovy Postbuild plugin https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin you can add a post-build step like this:
if (manager.logContains(".*EXIT_GREEN.*")) {
    manager.addBadge("info.gif", "This job had nothing to do")
    manager.build.@result = hudson.model.Result.SUCCESS
}
The child job has run green (with an info icon against the build) but has actually not done anything. Obviously, if the branch is one you want to deploy, the first script step does not run the exit 1 and the job continues as normal.

Team Build - Automatically Reenable Agent After Becoming Unreachable

We have a central Team Foundation Server (2008) deployment where all projects get stored. Each project sets up their own build server running Team Build to do their own automated builds.
Here's the problem. When a connection error is detected between TFS and the Team Build server, it moves the build agent's status to 'unreachable' which means it's not available for any subsequent builds. Our servers have scheduled reboot windows and when TFS can't communicate with those agents (or vice-versa) during that window, it moves the agent to 'unreachable'. Every morning we come in and find that we have to manually go in and reenable the agent.
Is it possible to have the team build agents come back online as soon they're available again? Or perhaps write a script that brings them back online automatically?
In TFS 2008, the AT should ping the unreachable build agent at a regular interval (15-30 minutes; I can't remember the exact interval at the moment) to see if it is back up. Are you not seeing this behaviour - do yours stay unreachable?
That said, it is possible to write a bit of .NET code that you could run periodically to set the status of the build agent. Alternatively you could run it as a scheduled task after start-up on the Windows machine that is running as your build agent, to go talk to TFS and set its status back to good.
To write the code, you want to use the TFS Build API (Microsoft.TeamFoundation.Build.Client). In particular you want to look at the IBuildAgent. Get the appropriate one from the IBuildServer, change the status and then call buildAgent.Save().
I've also seen that problem myself - here's a PowerShell script that will iterate over all build agents on all team projects and enable them. Note that the agents will be set to Enabled immediately regardless of whether they are actually reachable (so if the build server is still down when the script runs, the agent will revert to Unreachable as soon as a build triggers).
$serverName = "TFSRTM08"
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.Client")
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.WorkItemTracking.Client")
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.Build.Client")

$tfs = [Microsoft.TeamFoundation.Client.TeamFoundationServerFactory]::GetServer($serverName)
$wit = $tfs.GetService([Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItemStore])
$bld = $tfs.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])

$prjs = $wit.Projects
foreach ($proj in $prjs)
{
    $agents = $bld.QueryBuildAgents($proj.Name)
    foreach ($agent in $agents)
    {
        if ($agent.Status -ne "Enabled")
        {
            Write-Output "Enabling Build Agent: $($agent.Name) on Team Project: $($proj.Name); status was $($agent.Status)"
            $agent.Status = "Enabled"
            $agent.Save()
        }
    }
}
