What builds are using a specific build agent - TFS

My company has a bunch of different builds and half a dozen different build agents, and I need to update some software for one of the builds. I don't want to break any other builds that use that agent. I would like to get a list of all builds that use the agent so that I can validate them after my software updates on it. I would prefer not to review each build individually, as there are dozens, if not hundreds, of them. Is there some way to get this information quickly, either from the agent or from TFS somehow?

By default, builds are tied to controllers, not agents, and could therefore run on any of the agents bound to the controller. Unless, as suggested by Daniel Mann, you have your builds tagged to specific agents, you won't be able to get that level of detail. Without tagging, your report would be limited to a list of machines that each build could possibly run on.
What I do in this situation is have a separate, private build controller for testing build software. Upgrade the software on this controller, then queue test builds for the affected definitions, changing the controller to your test controller in the Queue Build parameters. Once you've verified that your changes won't break the builds, you can schedule downtime to upgrade the production agent machines.
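For completeness: if some of the builds are vNext (TFS 2015+) definitions rather than XAML ones, a rough way to gather such a list is to walk the build definitions via the REST API and inspect each definition's demands. A minimal Groovy sketch, assuming the REST API is available; the URL, credentials, and MyCapability are placeholders for your own values:
import groovy.json.JsonSlurper

// Hypothetical values; substitute your own collection/project and credentials.
def tfsUrl = 'http://tfs.example.com:8080/tfs/DefaultCollection/MyProject'
def auth   = 'user:password'.bytes.encodeBase64().toString()

def get = { String path ->
    def conn = new URL("${tfsUrl}${path}").openConnection()
    conn.setRequestProperty('Authorization', "Basic ${auth}")
    new JsonSlurper().parse(conn.inputStream)
}

// List all vNext build definitions, then fetch each one to inspect its demands.
def defs = get('/_apis/build/definitions?api-version=2.0')
defs.value.each { d ->
    def full = get("/_apis/build/definitions/${d.id}?api-version=2.0")
    // 'demands' holds capability requirements such as "MyCapability -equals SomeValue".
    def demands = full.demands ?: []
    if (demands.any { it.toString().contains('MyCapability') }) { // hypothetical capability name
        println "${full.name} demands: ${demands}"
    }
}
This only covers definitions pinned via demands; untagged definitions can still land on any agent in the pool, as described above.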

Related

Agents not picking new build requests in TFS 2017

I have 10 build agents configured on two servers under one agent pool. Whenever the first four agents are busy, new build requests queue up behind them, even though the other six agents are available; the builds are never queued to those agents.
It's been almost six months and agent 10 has not handled a single build. Agents 5 to 10 are hardly used. Why does this happen, and how can we make use of all the agents fairly?
TFS automatically selects an available build agent in the pool when running a build. It's more like a conditional random choice; it's not possible to prioritize build agents for now. There is also a related UserVoice request:
TFS 2015 build vNext agent prioritization
As a workaround, you could specify a build agent in a vNext build.
You can add a User Capability to that specific build agent. Then, in the build definition, you just need to put in that capability as a demand (General tab).
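For example (the capability name and value are made up): define a User Capability on the agent, e.g. MyAgentRole = DatabaseBuilds, and then add the matching demand on the build definition's General tab:
MyAgentRole -equals DatabaseBuilds
Only agents advertising that capability will then be eligible for the definition.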
It seems like the builds are queued on the 'oldest' agents first. So if agent 10 is the last agent you created, it will only be used if the first 9 are in use, assuming they all have the same capabilities.
The selection does not appear to be random, but based on the order in which the agents were created. Ironically, that means that if you add a new, more powerful build server, its agents will be at the bottom of the queue.
The UserVoice suggestion in PatrickLu-MSFT's answer asks for the ability to prioritize agents.
The workaround at this moment seems to be to remove all (or some) agents and re-create them in the order you want them to be used. That still means the last agent will be used least, but at least you can influence the distribution of the agents a bit.
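For a TFS 2017 vNext agent, re-creating an agent means unconfiguring and reconfiguring it from the agent's folder; roughly the following, where the URL, pool, and agent names are placeholders:
.\config.cmd remove
.\config.cmd --unattended --url http://tfs.example.com:8080/tfs --auth Integrated --pool Default --agent Agent10
Repeat in the order you want the agents to be picked.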
We are running into this issue as well. We have six build servers with three agents each, and builds are not distributed fairly. I also do not want to assign an agent per definition, but I guess we are going to have to puzzle it out.

How to create a complex value stream with multiple pipelines with Jenkins Workflow

How do you implement a complex value stream with multiple pipelines in Jenkins Workflow? Similar to what you can do with GoCD: How do I do CD with Go?: Part 2: Pipelines and Value Streams.
For a distributed system, I would like each dev team and the operations team to start with their own delivery pipeline. One change needs to trigger only the pipeline of the team that made the change. It then needs to trigger a new pipeline that takes the latest successful artifacts from each of the teams' pipelines and moves on from there. This means the artifacts from the other teams are not rebuilt or retested, as they were not changed. After the fan-in we can run a set of automated tests to verify the correct behaviour of the distributed system with the change.
In the documentation I only find that you can pull from multiple VCSs, but I assume everything is then built and tested on every change, which is something I want to avoid.
If each delivery pipeline is in its own Jenkins job, how can I visualize the complete pipeline, and what is the best way to pull in the last successful artifacts or versions from the other pipelines?
There is no direct equivalent in Jenkins for value streams, and Workflow jobs do not behave any differently in that respect: you can have upstream jobs and downstream jobs correlated with triggers (in this case the build step, or the core ReverseBuildTrigger), and use (for example) the Copy Artifact plugin to transfer artifacts to downstream builds. Similarly, you could use an external repository manager as the “source of truth” and define job triggers based on snapshots pushed to the repository.
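As a minimal sketch of that wiring (job names are made up, and it assumes a Copy Artifact version that understands Workflow jobs):
// Upstream flow (hypothetical job 'upstream-flow'): build, archive, then
// kick off the downstream job without waiting for it.
node {
    sh 'mvn -B clean package'
    archive 'target/*.jar'              // Workflow-era shorthand for archiving artifacts
    build job: 'downstream-flow', wait: false
}

// Downstream flow (hypothetical job 'downstream-flow'): copy the artifacts
// from the last successful upstream build via the Copy Artifact plugin.
node {
    step([$class: 'CopyArtifact', projectName: 'upstream-flow', filter: 'target/*.jar'])
    sh 'ls target'                      // the copied jars are now in the workspace
}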
That said, part of the purpose of Workflow is to avoid the need for complex job chains in most situations¹, since it is usually easier to reason about, debug, and customize a single script with standard control flow operators and local variables than to manage a set of interdependent jobs. If the main problem with a single flow is that you need to avoid rebuilding unmodified parts, one solution would be to use something like JENKINS-30412 to check the changelog of particular repository checkouts and skip build steps if it is empty. I think more features would be needed to make such a system work in the general case, where workspaces can be clobbered or discarded by other builds.
¹One case where you definitely need separate jobs is that for security reasons the teams contributing to different projects must not be able to see one another’s sources or build logs.
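To make the changelog-skip idea concrete: a rough sketch, assuming a Jenkins version where JENKINS-30412 has been implemented (changelog data exposed as currentBuild.changeSets), and keeping in mind that changeSets covers all checkouts in the build, not a single directory:
node {
    dir('moduleA') {
        git url: 'https://example.com/moduleA.git' // hypothetical repository
    }
    // changeSets is empty when no new commits were picked up (and on the very first build).
    if (currentBuild.changeSets.isEmpty()) {
        echo 'No changes detected; skipping the moduleA build steps.'
    } else {
        dir('moduleA') {
            sh 'mvn -B clean package'
        }
    }
}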
Assuming that each of your dev teams works on a different module of your project and that "one change needs to trigger only the pipeline of the team that made the change", I'd use Git submodules:
Submodules allow you to keep a Git repository as a subdirectory of another Git repository.
with one repo per team, each becoming a submodule of a main repo. This is transparent to the teams, since they keep working only on their designated repos.
The main module is also the aggregator project for your module projects in terms of the build tool. So, you have the options:
to build each repo/pipeline individually or
to build the whole (main) project at once.
A build pipeline comprising one or more build jobs is associated with every team/repo/module.
The main pipeline is merely a collection of downstream jobs that represent the starting points of the team/repo/module pipelines.
The build triggers can be manual, timed, or on source changes.
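A minimal Workflow sketch of such a main pipeline (job names are made up; each build step simply queues the corresponding team flow and waits for it):
// Main aggregator flow: fan out to each team/module pipeline as a downstream job.
build job: 'team-a-module-flow'   // blocks until the module flow completes
build job: 'team-b-module-flow'
// Once all module flows succeed, a whole-project build/test could follow here.
build job: 'main-project-flow'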
A decision also has to be made:
whether you version your modules individually, such that other modules depend on release versions only.
Advantages:
Others rely on released, usually more stable versions.
Modules can decide which version of a dependency they want to use.
Disadvantages:
Releases have to be prepared for each module.
It may take longer until the latest changes are available to others.
Modules have to decide which version of a dependency they want to use. And they have to adapt it every time they need functionality added in a newer version.
or whether you use one version for the entire project (which the modules then inherit): ...-SNAPSHOT during the development cycle, a release version when releasing the project.
In this case, if there are modules that are essential for others, e.g. a core module, a successful build of it should also trigger a build of the dependent modules, so that incompatibilities are recognized as early as possible (see the sketch after the lists below).
Advantages:
Latest changes are immediately available to others.
A release is prepared for the whole project only once it is to be delivered.
Disadvantages:
Latest changes that are immediately available to others may introduce less stable (snapshot) code.
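In the shared-version setup, the core-triggers-dependents wiring mentioned above could look roughly like this in a Workflow script (job names and the Maven commands are assumptions):
// End of the core module's flow: publish the new core -SNAPSHOT, then fan out
// to the dependent module flows. Job names are made up.
node {
    sh 'mvn -B clean deploy'   // pushes the new core -SNAPSHOT to the shared repo
}
// Only reached if the core build succeeded, so incompatibilities surface early.
build job: 'team-a-module-flow', wait: false
build job: 'team-b-module-flow', wait: false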
Re "How can I visualize the complete pipeline"
I'm not aware of any plugin that can do this with Workflows at the moment.
There's the Build Graph View Plugin, which was originally created for Build Flows, but it's more than two years old now:
Downstream builds are identified by DownStreamRunDeclarer extension point.
Default one is using Jenkins dependencyGraph and UpstreamCause and as such can detect common build chain.
build-flow plugin is contributing one to render flow execution as a graph
some Jenkins plugins may later contribute dedicated solutions.
(You know, "may" and "later" often become "will not" and "never" in development. ;)
There's the Build Pipeline Plugin, but apparently it is also not suitable for Workflows:
This plugin provides a Build Pipeline View of upstream and downstream connected jobs [...]
Re "way to pull in the last successful artifacts"
Apparently it's not that smooth with Gradle:
By default, Gradle does not define any repositories.
I'm using Maven, and there are local and remote repositories, where the latter can also be:
[...] internal repositories set up on a file or HTTP server within your company, used to share private artifacts between development teams and for releases.
Have you considered using a binary repository manager like Artifactory or Nexus?
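With such a manager in place, "last successful artifacts" can be resolved at build time. A hedged sketch against Artifactory's latestVersion search API (the base URL and coordinates are placeholders, and the Maven property name is an assumption about your POM):
// Ask Artifactory for the latest release version of a module, then build against it.
def base = 'http://artifactory.example.com/artifactory'
def latest = new URL("${base}/api/search/latestVersion?g=com.example&a=team-a-module&repos=libs-release-local").text
echo "Building against team-a-module ${latest}"
sh "mvn -B clean verify -Ddep.teamA.version=${latest}" // property consumed by the POM (assumption)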
From what I have seen, people are moving towards smaller, independent pieces of code delivery rather than monolithic deployments. But clearly, there will still be dependencies between different components. At the very least, for example, if you had one script that provisioned your infrastructure and another that built and deployed your app, you would want to be sure your infrastructure update script was run before your app deployment. On the other hand, your infrastructure does not depend on deploying your app code - it can be updated at its own pace, so long as it ideally passes some testing.
As mentioned in another post, you really have two options to accomplish this dependency:
Have a single pipeline (Workflow script) that checks out code from both repos and puts them through the same pipeline simultaneously. Any change to one requires running the full pipeline for everything.
Have two pipelines, which allows each to go at its own pace independent of what the other does. This isn't a problem for the infrastructure code, but it very well could be for the app code. If you pushed your app code to production without the infrastructure update having happened first, the results may not be pleasant.
What I've started to do with Jenkins Workflow is establish a dependency between my flows. Basically, I declare that one flow is dependent on a particular version (in this case, simply BUILD_NUM) of another, and before I do a production deploy I verify that that build (or a later one) of the other pipeline has completed successfully. I'm able to do this using the Jenkins API as part of my flow script, which waits for that build or greater to succeed, like so:
import hudson.EnvVars
import hudson.model.*

// Build number of the other pipeline that this flow depends on.
int independentBuildNum = 16
waitUntil {
    verifyDependentPipelineCompletion("FLDR_CM/WorkflowDepedencyTester2", independentBuildNum)
}

// Returns true once the last successful build of jobName is build number buildNum or later.
boolean verifyDependentPipelineCompletion(String jobName, int buildNum) {
    def jenkinsInstance = jenkins.model.Jenkins.instance
    Item dependency = jenkinsInstance.getItemByFullName(jobName)
    jenkinsInstance = null // drop the reference; it is not serializable across flow restarts
    def jobs = dependency.getAllJobs().toArray()
    def onlyJob = jobs[0] // always 1 job...I think?
    def targetedBuild = onlyJob.getLastSuccessfulBuild()
    // Pull BUILD_ID out of the build's characteristic environment variables.
    EnvVars envVars = targetedBuild.getCharacteristicEnvVars()
    int targetBuildNum = 0
    for (envVar in envVars.entrySet()) {
        if (envVar.getKey().equals("BUILD_ID")) {
            targetBuildNum = Integer.parseInt(envVar.getValue())
        }
    }
    return buildNum <= targetBuildNum
}
Disclaimer: I am just beginning this process, so I do not have much real-world experience with this yet, but I will update this thread if I have more relevant information. Any feedback welcome.

TFS, Jenkins and how to update work items with build numbers

We are using TFS and the TFS Build Service. We are considering migrating the build service to Jenkins, but we came across some issues. According to this site, there are some things that do not work very well with the TFS and Jenkins plugins, all of which we use a lot:
Associated Change sets – Team Build automatically associates a list of change sets that are included in the build
Associated Work Items – Team Build analyzes the relationships and also associates Work Items with a build. Indeed, it walks the work item tree (parent) and maintains that association in the chain.
Is this still true? We have this scenario:
A developer checks in code that fixes a bug or resolves a User Story. He does that by associating his check-in with the work item ID.
His check-in triggers a build that will associate the work item with his changeset. For bugs, the build will update the "Integrated in Build" field with the build number. We use this field to know in which version the bug was fixed.
Is there any way to make Jenkins behave the same way and do what the TFS Build Service does?
Another option is to mix the two, using dummy builds on the TFS side that set the records straight and kick off the Jenkins builds. Some hints:
How to trigger Jenkins builds remotely and to pass parameters, and "Fake" a TFS Build.
This approach requires a bit of effort but has many advantages:
No big-bang, use Jenkins opportunistically
Can continue using existing builds
Having a build identifier in TFS gives you overall monitoring and lets you use the Test features
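The remote kick-off itself is just an HTTP call to Jenkins' remote build trigger API. A sketch in Groovy (the URL, job name, token, and parameter are placeholders, and the job must have "Trigger builds remotely" enabled; in practice this could equally be a curl or PowerShell step in the TFS build):
// Queue a parameterized Jenkins build from a TFS-side step.
def url = new URL('http://jenkins.example.com/job/MyJob/buildWithParameters?token=SECRET&tfsBuild=MyProduct_20170101.1')
def conn = url.openConnection()
conn.requestMethod = 'POST'
println "Jenkins responded ${conn.responseCode}" // 201 Created on newer Jenkins versions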
I have a VSTS build definition for one of our projects that requires Jenkins to build, while all our other products use VSTS natively. To maintain consistency, this build definition triggers a Jenkins build. We configured the build definition not to sync sources, since Jenkins downloads them itself (saving time), and not to publish the artifacts back to the agent (I have another script for that, found here). This lets developers continue to use Git as normal, and the build/release process stays consistent with our other products, along with work item tracking and such.

Disallowing two Jenkins projects to build simultaneously

I have two projects in Jenkins that are not linked to each other in any way (a database build, and an application build/test build). The two must never build at the same time in Jenkins, because the tests access the database, and the database may not be building while the tests run in the other job. Is it possible to make sure that the two projects never build at the same time? Apparently it is possible to do this for child/parent builds, but these two have no formal relation to each other. Thanks.
I would recommend using the Throttle Concurrent Builds plugin. If you install the plugin, create a category, and assign that category to both jobs, you can be sure the two jobs will never build at the same time.
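If one of the projects is (or becomes) a Workflow job, newer versions of the plugin also expose a throttle step, so the same category can guard flow code; a sketch with a made-up category name and script:
// Both jobs wrap their work in the same category; the plugin ensures only one
// build in the 'database-exclusive' category runs at a time.
throttle(['database-exclusive']) {
    node {
        sh './run-database-build.sh' // or the test run in the other job
    }
}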

Queuing of TFS build against agents

I have a TFS 2010 build service installed with three agents: Database, Production, and Release, with mutually exclusive sets of builds running on each.
Unfortunately, when builds are queued, they appear to get processed in groups of three regardless of which agent they're destined for. This means we lose the parallelism I was hoping for: if more than three builds for a single agent are queued at once, they take up the entire queue.
Is there a way to make sure that builds are queued once their own agent becomes free so that we can have as many parallel builds as possible?
Using the TFS 2010 default build definition, you cannot select a build agent, only a build controller, unless you have customized the definition. Ideally you should have a single build controller with multiple build agents beneath it. In the build definition you select only the controller name, and the build controller pushes the build to whichever agent is free at that point. You can also use tags to make sure that a build runs only on a particular build agent.
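Concretely, in the default template the tag matching is configured under the build definition's Process tab; something along these lines, where the values are examples:
Advanced > Agent Settings
    Name Filter:              *
    Tags Filter:              Database
    Tag Comparison Operator:  MatchAtLeast
With MatchAtLeast, the build is routed only to agents carrying at least the listed tags, which gives you the per-agent routing (and parallelism) you are after.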
