TFS Build Dependencies

In TeamCity, you can create build dependencies where one build will not start until another finishes successfully. Is that possible with TFS 2012? Where can I find more information about how to set that up?

The short answer is that TFS doesn't have equivalent functionality, but you can achieve the same goals with a little work.
A common scenario I encounter is that a team wants to run a build on check-in that does some quick work (compile, fast unit tests), then immediately afterwards run another build that covers the slower work (integration tests, test deployments, etc.). I do this often with my teams: I'll set up a Gated Build that runs in, say, 5 minutes, then have a CI build that is kicked off as soon as the Gated Build checks in, which may take an hour to run. I like this approach as it gets the developers some feedback quickly, then more detailed feedback shortly thereafter.
Another supported scenario is having a build explicitly kick off its dependencies. If you look at the Lab Build Template, it does exactly this: it first kicks off the application TFS Build, then the Lab Build sits and waits for it to finish before continuing. In theory you could have Build A kick off Build B, which kicks off C and D, and so on.
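To make that concrete, here is a minimal sketch of the queue-and-wait pattern using the TFS 2012 client object model from a small console app. The collection URL, team project, and definition name are placeholders rather than values from the question; IQueuedBuild also offers a WaitForBuildCompletion helper, but the polling loop below sticks to calls I'm confident exist.

    // Sketch: queue a dependent build ("Build B") and block until it finishes,
    // similar to what the Lab Build Template does with the application build.
    using System;
    using Microsoft.TeamFoundation.Build.Client;
    using Microsoft.TeamFoundation.Client;

    class ChainedBuildKicker
    {
        static void Main()
        {
            var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                new Uri("http://tfs:8080/tfs/DefaultCollection"));   // placeholder URL
            var buildServer = collection.GetService<IBuildServer>();

            IBuildDefinition definition = buildServer.GetBuildDefinition("MyTeamProject", "Build B");
            IQueuedBuild queued = buildServer.QueueBuild(definition);

            // Poll until the queued build has left the queue (finished or cancelled).
            while (queued.Status != QueueStatus.Completed && queued.Status != QueueStatus.Canceled)
            {
                System.Threading.Thread.Sleep(TimeSpan.FromSeconds(15));
                queued.Refresh(QueryOptions.None);
            }

            Console.WriteLine("Dependent build finished with status: {0}",
                queued.Build != null ? queued.Build.Status.ToString() : "unknown");
        }
    }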
If your needs are more complex than that (for example, you have multiple applications, each with its own build, then a Product that includes some of those applications and needs to be built after any of them changes, then maybe a Product Suite build that needs to kick off whenever a Product changes; this is the scenario I dealt with), you can build something yourself. I basically implemented a custom build dependency system to handle this. We made an XML file that described the build dependencies, then wrote a simple ISubscriber plug-in that we deployed to TFS; it listened for Build Completed events, consulted the dependency config, and kicked off the appropriate build(s).
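As an illustration of that event-driven variant, here is a sketch of the helper such an ISubscriber plug-in (or any external listener for Build Completed events) could call. The XML schema, file path, and class/method names are invented for illustration; the TFS calls are from the 2012 client object model.

    // Hypothetical dependency map (BuildDependencies.xml):
    //   <dependencies>
    //     <build name="App1-CI">
    //       <triggers definition="Product-CI" />
    //     </build>
    //     <build name="Product-CI">
    //       <triggers definition="ProductSuite-CI" />
    //     </build>
    //   </dependencies>
    using System;
    using System.Linq;
    using System.Xml.Linq;
    using Microsoft.TeamFoundation.Build.Client;
    using Microsoft.TeamFoundation.Client;

    public static class BuildDependencyKicker
    {
        public static void OnBuildCompleted(string teamProject, string completedDefinition)
        {
            // Load the dependency map (path is a placeholder).
            var map = XDocument.Load(@"C:\TfsPlugins\BuildDependencies.xml");

            // Find the definitions that should run after the one that just completed.
            var downstream = map.Descendants("build")
                .Where(b => (string)b.Attribute("name") == completedDefinition)
                .SelectMany(b => b.Elements("triggers"))
                .Select(t => (string)t.Attribute("definition"));

            var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                new Uri("http://tfs:8080/tfs/DefaultCollection"));   // placeholder URL
            var buildServer = collection.GetService<IBuildServer>();

            foreach (string definitionName in downstream)
            {
                IBuildDefinition definition = buildServer.GetBuildDefinition(teamProject, definitionName);
                buildServer.QueueBuild(definition);   // fire and forget; the chain continues on its completion event
            }
        }
    }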

Related

Jenkins CI workflow with separate build and automated test both in source control

I am trying to improve our continuous integration process using Jenkins and our source control system (currently svn, but git soon).
Maybe I am thinking about this overly complicated, or maybe I have not yet seen the right hints.
The process I envisioned has three steps and associated roles:
one or more developers would do their job and ultimately submit the code changes for the actual software ("main software") as well as unit tests into source control (git, or something else). Jenkins shall build the software, run unit tests, and perhaps perform some other steps (e.g. static code analysis). If none of this fails, the work of the developers is done. As part of the build, the build number is baked into the main software itself as part of the version number.
one or more test engineers will subsequently pick up the build and perform tests. Some of them may be manual, but most of them are desired to be automated/scripted tests. These shall ultimately be submitted into source control as well and be executed through the build server. However, this shall not trigger a new build of the main software (since nothing in it has changed). If none of this fails, the test engineers are done. Note that our automated tests currently take several hours to complete.
As a last step, a project manager authorizes release of the software, which executes whatever delivery/deployment steps are needed. Also, the source of the main software, the unit tests, the automated test scripts, the Jenkins build script - and ideally all build artifacts ("binaries") - are archived (tagged) in the source control system.
Ideally, developers are able to also manually trigger execution of the automated tests to "preview" the outcome of their build.
I have been unable to figure out how to do this with Jenkins and Git - or any other source control system.
Jenkins pipelines seem to assume that all steps are carried out in sequence automatically. They also seem to assume that committing code into source control starts the pipeline from the beginning (which I believe should not be the case if the commit was "merely" automated test scripts). Triggering an unnecessary build of the main software really hurts our process, as it basically invalidates any manual testing and documentation, since it results in a new build number baked into the software.
If my approach is so uncommon, please point me toward how to do this correctly. Otherwise, I would appreciate pointers on how to get this done (conceptually).
I will try to reply with some points. This is indeed a conceptual approach; there are a lot of details and different possible approaches too, and this is only one of them.
You need git :)
You need to set up a git branching strategy which will allow multiple developers to work simultaneously, pushing code and validating it against the static code analysis. I would suggest that you start with Git Flow; it is widely used and can be adapted to whatever reality you have - you do not need to use it in its pure state, so give some thought to how to adapt it. Fundamentally, it will allow each feature to be tested. Then, each developer can merge it into the develop branch - from this point on, you have your features validated and you can start to deploy and test.
Have a look at multibranch pipelines. This will allow you to test the several feature branches that you might have and to have different flows for the other branches - develop, release and master - depending on your deployment needs. So, when you have a merge on the develop branch, you can trigger testing, or just use it to run static code analysis.
To overcome the problem that you mention in your second point, there are ways to read the change set in the pipeline, and if the changes only touch the testing scripts, you should not build your software; you can inspect the changes and use them to decide which stages of the pipeline actually need to run.
In case you still have manual testing taking place, pipelines can be paused, which means the pipeline can stop and ask for approval to proceed. Before approving, testers do whatever they have to, and whenever they are ready, they just approve the build so it proceeds to the next steps.
Regarding deployment authorization, it is done the same way as in the last point, with approvals, but in this case you can specify which users/roles are allowed to approve that step.
For whatever you need to keep from your builds, Jenkins has an archive artifacts utility. Let me just note that ideally you would look into a proper artifact repository such as Nexus.
To manually trigger a set of tests, you can have a manually triggered job in Jenkins, separate from your CI/CD pipeline, that only executes the automated tests. You can even trigger that same job from a pipeline stage.
Lastly let me say that the branching strategy is the starting point.
Think about your big picture: what SDLC flows you need to have, and set up those flows in your multibranch pipeline. Different git branches will facilitate whatever flows you need within the same Jenkinsfile - your pipeline definition. It really depends on how many environments you have to deploy to and what kind of steps you need.

TFS Chained Gated Check In Builds

Currently my TFS 2012 build environment features a Build Verification (BVT) build which mostly follows the LabDefaultTemplate.xaml workflow. The BVT build first queues a Continuous Build (which mostly follows the DefaultTemplate.11.xaml), waits for that build to finish, then performs the necessary tests.
Now, I would like to change this BVT build to be a Gated Check-In. Which is to say, I don't want any changes to be committed until the BVT is successful. The problem seems to be queuing the Continuous build definition in such a way that it will pick up the shelveset. This logic appears to be dependent on the build being started with Reason = "CheckInShelveset". However, it seems all builds queued from another build always have Reason "UserCreated". Has anyone found a way around this problem? Is it possible to chain builds together while still having Gated Check-Ins?

When should I "Release" my builds?

We just started using Visual Studio Release Management for one of our projects, and we're already having some problems with how we are doing things.
For now, we've created a single release stage, which is responsible for deploying our build artifacts to a dedicated virtual machine for testing. We intend to use this machine to run our integration tests later on.
Right now, we have a gated check-in build process: each check-in fires all the unit tests, and we configured the release trigger to happen on this build as well. At first, it seemed plausible that, after each check-in, the project was deployed and the integration tests were executed. However, we noticed that all released builds were polluting the console on Release Management, that all builds were being marked as "Retain Indefinitely", and that our drop folder location was growing fast (after seeing that, it makes sense that the tool automatically does this, since one could promote any build to another stage and the artifacts need to be persisted).
The question then is: what are we doing wrong? I've been thinking about this and it really does not make any sense to "release" every checkin. We should probably be starting this release process when a sprint ends, a point that can be considered a "release candidate".
If we do that though, how and when would we run our automated integration tests? I mean, a deployment process is required for running those in our case, and if we try to use other means to achieve that (like the LabTemplate build process) we will end up duplicating deployment code.
What is the best approach here?
It's tough to say without being inside your organization and looking at how you do things, but I'll take a stab.
First, I generally avoid gated checkin builds unless there's a frequent problem with broken builds. If broken builds aren't a pain point, don't use gated checkin. Why? Simple: If your build/test process takes 10 minutes to run, that's 10 minutes that I have to wait to know whether I can keep working, or if I'm going to get my changes kicked back out at me. It discourages small, frequent checkins and encourages giant, contextless checkins.
It's also 10 minutes that Developer B has to wait to grab Developer A's latest changes. If Developer B needs that checkin to keep working, that's wasted time. Trust your CI process to catch a broken build and your developers to take responsibility and fix them on the rare occasions when they occur.
It's more appropriate (depending on your branching strategy) to do a gated checkin against your trunk, and then CI builds against your dev/feature branches. Of course, that opens up the whole "how do I build once/deploy many when I have multiple branches?" can of worms. :)
If your integration tests are slow and require a deployment to succeed, they're probably not good candidates to run as part of CI. Have a CI/gated checkin build that just:
Builds
Runs fast unit tests
Runs high-priority, non-deployment-based integration tests
Then, have a second build (either scheduled, or rolling) that actually deploys and runs the whole test suite. You can schedule it according to your tastes -- I usually go with one at noon (or whatever passes for "lunch break" among the team), and one at midnight. That way you get a tested build from the morning's work, and one from the afternoon's work.
Using the Release Default Template, you can target your scheduled builds to just go as far as your "dev" (/test/integration/whatever you call it) stage. When you're ready to actually release a build, you can kick off a new release using that specific build that targets Production and let it go through all your stages normally.
Don't get tripped up on the word "Release". In MS Release Management (RM), creating a Release does not necessarily mean this code will be delivered to your customers, or even that it has the quality to move out of dev. It only means you are putting a version of the code on your Release Path. This version/release can stop right in the first stage and that is OK.
Let's say you have a Release Path consisting of Dev, QA, Prod. In the course of a month, you may end up releasing 100 times in Dev, but only 5 times in QA and once in Prod.
You should drive to get each check-in deployed and integration tested. If tests take a long time, only do the minimal during (gated or not) check-in (for example, unit tests + deployment), and the rest in your second stage of the Release Path (which should be automatically triggered after the first stage completes). It does not matter if the second stage takes a long time. As a dev, check in, and once the build (and first stage) completes successfully, expect the rest to go smoothly and continue on your next task. (Note that only the result of the first stage impacts your TFS build.)
Most of the time, deployment and the rest will run fine, so there won't be any impact on the dev. Every now and then, you will have a failure in the first stage; the dev will then interrupt their new work and get a resolution ASAP.
As for the issue that every build is kept indefinitely: for the time being, that is a side effect of RM. Current customers need to do the cleanup manually (or script it). In the coming releases, a new retention policy for releases/builds will be put in place to improve this. This has not been worked on yet, but the intention would be to, for example, instruct RM to keep all releases that went to Prod, keep only the last 5 that went to QA, and keep only the last 2 that went to Dev.
This is not a simple question, so the answer has to be articulated as well.
First of all, you will never keep all of your builds; the older a build, the less interesting it is to anyone, and a build that doesn't get deployed to production is overtaken by builds that do reach that stage.
A team must agree on the criteria that make a build interesting enough to keep around, and for how long. Define a policy for builds shipped to production or customers: how long do you support them? Until the next release, until the following one, for five years? Potentially shippable builds, still not in your customers' hands, are superseded by newer ones, so you can use a numeric or a temporal criterion (TFS implements only the first, as the second is more error-prone). Often you have more than one shippable build, when you want a safety-net option and the ability to select which one to deliver from a pool (the one with the more manageable bugs).
The TFS "Retain Indefinitely" flag should be used when you cannot automate the previous criteria, so you switch to a manually implemented policy. Indefinitely is not forever; it means for an unknown time interval.

Feedback on TFS Build setup for MVC app

I am new to TFS Build, but I have been able to create a build definition for my solution. I have a couple of questions that I would appreciate help with.
I have created 3 definitions - I wonder if this is the correct way to do this.
A definition that fires for every check in, builds the code and runs unit tests only
A definition that runs every night, builds everything, and runs all unit and integration tests
A definition that I specifically use for deployments - I specify the environment via a parameter and it builds the code, runs unit and integration tests, and deploys it with MSDeploy to the specified environment, again via parameter
When I branch my code etc., I will have to create 3 definitions for each branch, and this could become unmanageable. Feedback on this, please?
Is it true that each definition has its own set of build numbers? Can they be shared?
My application is an MVC4 app with VS2012 IDE.
Sadly, TFS Build doesn't have very good support for branches; yes, this typically means you duplicate your build definitions for each branch. There are a few custom build process templates that I've seen in the past which try to get around this, but nothing built in.
You could replace #2 with a Windows scheduler task that runs #1 with custom parameters. It's not the nicest solution, but it could be extended to queue every build definition at midnight with the integration test flag.
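As a rough sketch of that idea (not from the original answer), the scheduled task could be a small console app that queues the check-in definition with an overridden process parameter. The collection URL, definition name, and the RunIntegrationTests parameter are hypothetical, and your build process template would have to define such a parameter; to the best of my knowledge the serialization helper lives on WorkflowHelpers in Microsoft.TeamFoundation.Build.Workflow.

    using System;
    using System.Collections.Generic;
    using Microsoft.TeamFoundation.Build.Client;
    using Microsoft.TeamFoundation.Build.Workflow;
    using Microsoft.TeamFoundation.Client;

    class NightlyQueue
    {
        static void Main()
        {
            var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                new Uri("http://tfs:8080/tfs/DefaultCollection"));   // placeholder URL
            var buildServer = collection.GetService<IBuildServer>();

            // Definition #1 from the question (the per-check-in build).
            IBuildDefinition definition = buildServer.GetBuildDefinition("MyTeamProject", "CheckIn-CI");

            IBuildRequest request = definition.CreateBuildRequest();

            // Override a (hypothetical) custom process parameter so the same definition
            // also runs the integration tests for this nightly run.
            var parameters = new Dictionary<string, object> { { "RunIntegrationTests", true } };
            request.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(parameters);

            buildServer.QueueBuild(request);
        }
    }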
For #3, instead of using a build definition to deploy, I use an external tool called TFS Deployer. This allows me to use any build to deploy simply by changing the build quality of the build, reducing the number of build definitions that need to be set up by one for each branch.
Each build definition has its own build numbers; there's no built-in way to share them. However, I believe the number format is set by the build definition, so you may be able to hack around it somehow.

MSBuild task or custom activity to increment version number

I am working with Visual Studio 2012 .NET 4.5 ASP.NET MVC 4 project that uses TFS for source control and TFS Build for continuous integration (CI).
I want to create functionality so that, on each check-in, the build number gets updated before the CI build is kicked off.
From research, it seems that a custom activity can be created and integrated into the TFS 2010 build template.
I have also seen examples of how this can be achieved with an MSBuild task.
I haven't done work in this area before, so I am wondering which is the better or recommended approach given the options open to me. In general, when would I use MSBuild tasks as opposed to custom activities? For example, I will also be looking to run FxCop and StyleCop against check-ins in the future, so I would like a common approach to this.
In the case of incrementing the build number, I'd vote for the TFS Build Activity so that the implementation is not tied to your MSBuild implementation. This allows you to easily apply the TFS workflow activity to any number of branches without tying it to the branches directly. In addition, it keeps your MSBuild project files clean of the task so that it isn't mistakenly executed on developer machines.
Holistically, I'd say that you need to take a variety of factors into account when deciding between MSBuild and Workflow activities:
1 - Does MSBuild support the functionality out of the box (like Code Analysis/FxCop)?
2 - Does the build step need to run on developer boxes as well as servers (StyleCop/FxCop)?
3 - Does the build step need to interact with the TFS API or source control directly (checking out/in a version file for incrementing)?
4 - Are you going to change build job schedulers later to something free (for example, Jenkins)?
It's the combination of these things that determines the implementation of any given tool integration in my book. I'd implement FxCop, StyleCop and any other tool that should be run on a developer box build via MSBuild. I'd implement build steps such as version incrementing, bin-placing and CI deployment invocation (for example, deployment of a SharePoint webpart as a post-build step) via a Code Activity or some scriptware.
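To illustrate the Code Activity route for version stamping, here is a minimal sketch of a TFS 2012 workflow activity. The argument names, the AssemblyInfo.cs file pattern, and the regex are assumptions about a typical project layout rather than anything prescribed by TFS; adapt them to your own versioning scheme.

    using System;
    using System.Activities;
    using System.IO;
    using System.Text.RegularExpressions;
    using Microsoft.TeamFoundation.Build.Client;

    [BuildActivity(HostEnvironmentOption.All)]
    public sealed class StampAssemblyVersion : CodeActivity
    {
        // Sources directory on the build agent, passed in from the build template.
        public InArgument<string> SourcesDirectory { get; set; }

        // Version to stamp; if empty, fall back to the build number.
        public InArgument<string> Version { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            string sourcesDirectory = context.GetValue(SourcesDirectory);
            string version = context.GetValue(Version);

            if (string.IsNullOrEmpty(version))
            {
                // The build workflow registers IBuildDetail as an extension, so the activity
                // can read the build number directly (a real activity would normally parse a
                // numeric version out of it first).
                IBuildDetail buildDetail = context.GetExtension<IBuildDetail>();
                version = buildDetail.BuildNumber;
            }

            foreach (string file in Directory.GetFiles(sourcesDirectory, "AssemblyInfo.cs", SearchOption.AllDirectories))
            {
                // Files from a TFS workspace come down read-only; clear that before rewriting.
                File.SetAttributes(file, FileAttributes.Normal);

                string text = File.ReadAllText(file);
                text = Regex.Replace(text,
                    @"AssemblyFileVersion\(""[^""]*""\)",
                    string.Format(@"AssemblyFileVersion(""{0}"")", version));
                File.WriteAllText(file, text);
            }
        }
    }

Once compiled into an assembly checked in at the build controller's custom assemblies path, such an activity can be dropped into the build process template and wired to the sources directory variable, keeping the MSBuild project files untouched as the answer suggests.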
