We have a "continuous integration build" definition in our TFS project. Each time there is a check-in, the source is fetched, compiled, and some automated tests are run. On the "Repository" tab of the build definition, there is a Clean option, i.e.:
[Screen grab showing the Clean option on the Repository tab]
We are currently using TFVC, but are considering moving to Git.
Currently, we have Clean set to 'false', so when the build starts, the build agent does an incremental get of the source files. Throughout most of the day this is exactly what we want, since it shortens the build times. However, it seems prudent to have the agent periodically empty the build directories, e.g. once a day. It would be nice to automate this in some way, for example by having Clean be true whenever $(Rev) equals 1, i.e. for the first build of the day.
I've tried a few variations, entering "$(Rev) == 1" into the drop-down, or using a variable from the 'Variables' tab that takes that value, but neither seems to work. I've looked at the online MSDN documentation, but it doesn't even suggest that anything other than 'true' or 'false' is a valid value.
I'd welcome any guesses or suggestions for further tests, or ideas.
For now, Clean accepts only the two options true/false, no matter whether you are using TFVC or Git.
Clean:
If you set it to true, the build agent cleans the repo this way:
undo pending changes
scorch
Set this to false if you want to define an incremental build to improve performance.
Tip: In this case, if you are building Visual Studio projects, on the Build tab you can also uncheck the Clean check box of the Visual Studio Build or MSBuild step.
You could add a UserVoice suggestion here; the TFS product team will kindly review it. As a workaround, you could add a Scheduled Build at the end of the day that does the clean repo operation.
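If you don't want to dedicate a whole build to the cleanup, here is a minimal sketch of the same workaround as a nightly script, run from Windows Task Scheduler on the build agent. The C:\agent\_work path is an assumption; point it at your agent's actual working folder.

    # Hypothetical nightly cleanup for a TFS build agent, run from Windows
    # Task Scheduler. AGENT_WORK is a placeholder for the agent's working folder.
    import os
    import shutil

    AGENT_WORK = r"C:\agent\_work"  # assumption: adjust to your agent's build directory

    for entry in os.listdir(AGENT_WORK):
        path = os.path.join(AGENT_WORK, entry)
        if os.path.isdir(path):
            # Removing the sources forces a full (clean) get on the next build,
            # which is effectively Clean = true for the first build of the day.
            shutil.rmtree(path, ignore_errors=True)

Schedule it well outside working hours, since deleting a directory under an active build will fail that build.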
We are using TFS 2015 Update 2.1 [14.95.25229.0].
We have vNext build definitions with a retention policy of 21 days. The maximum retention policy is set to 90 days. Yet we have builds over 100 days old that have not been deleted (they still show up in the list of completed builds). It appears that the retention policy is not being applied at all. What can I do to verify that the retention policy cleanup process is actually running?
If you specify build retention policies, they will delete the items below:
The build record
Logs
Published artifacts
Automated test results
Published symbols
First, please make sure you have set Delete build record = true.
Also note that completed builds may be exempted from their associated retention policy by selecting Retain Indefinitely from their context menu in the builds view. Release Manager will set builds to Retain Indefinitely; please refer to the information in this User Voice.
Please view this build definition's builds in the Build Explorer window and check whether some of them are set to Retain Indefinitely there.
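If clicking through Build Explorer is tedious, a rough sketch of the same check over the TFS 2015 Build REST API is below; it lists completed builds and flags any with the keepForever (Retain Indefinitely) flag set. The server URL, project, and credentials are placeholders, and it assumes the requests and requests_ntlm packages.

    # Hypothetical check for builds marked "Retain Indefinitely" via the
    # TFS 2015 Build REST API. URL, project, and credentials are placeholders.
    import requests
    from requests_ntlm import HttpNtlmAuth

    TFS_PROJECT = "http://tfs:8080/tfs/DefaultCollection/MyProject"  # placeholder

    resp = requests.get(
        f"{TFS_PROJECT}/_apis/build/builds?api-version=2.0",
        auth=HttpNtlmAuth("DOMAIN\\user", "password"),  # placeholder credentials
    )
    resp.raise_for_status()

    for build in resp.json()["value"]:
        if build.get("keepForever"):
            # These builds are exempt from the retention policy.
            print(build["buildNumber"], "is set to Retain Indefinitely")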
You can also double-check the older build records to see whether they have been deleted. A deleted build record should look like the picture below:
[Screen grab showing a deleted build record]
There was an issue with the background job responsible for retention; Team Foundation Server 2017 Update 3.1 includes the fix.
Currently my TFS 2012 build environment features a Build Verification Test (BVT) build which mostly follows the LabDefaultTemplate.xaml workflow. The BVT build first queues a Continuous build (which mostly follows DefaultTemplate.11.xaml), waits for that build to finish, then performs the necessary tests.
Now, I would like to change this BVT build to be a Gated Check-in, which is to say I don't want any changes to be committed until the BVT succeeds. The problem seems to be queuing the Continuous build definition in such a way that it will pick up the shelveset. This logic appears to depend on the build being started with Reason = "CheckInShelveset"; however, it seems all builds queued from another build always have Reason "UserCreated". Has anyone found a way around this problem? Is it possible to chain builds together while still having gated check-ins?
We just started using Visual Studio Release Management for one of our projects, and we're already having some problems with how we are doing things.
For now, we've created a single release stage, which is responsible for deploying our build artifacts to a dedicated virtual machine for testing. We intend to use this machine to run our integration tests later on.
Right now, we have a gated checkin build process: each checkin runs all the unit tests, and we configured the release trigger to fire on this build as well. At first it seemed reasonable that, after each checkin, the project was deployed and the integration tests were executed. But then we noticed that all the released builds were polluting the console in Release Management, that all builds were being marked as "Retain Indefinitely", and that our drop folder location was growing fast (after seeing that, it makes sense that the tool does this automatically, since one could promote any build to another stage and the artifacts need to be persisted).
The question then is: what are we doing wrong? I've been thinking about this and it really does not make any sense to "release" every checkin. We should probably be starting this release process when a sprint ends, a point that can be considered a "release candidate".
If we do that though, how and when would we run our automated integration tests? I mean, a deployment process is required for running those in our case, and if we try to use other means to achieve that (like the LabTemplate build process) we will end up duplicating deployment code.
What is the best approach here?
It's tough to say without being inside your organization and looking at how you do things, but I'll take a stab.
First, I generally avoid gated checkin builds unless there's a frequent problem with broken builds. If broken builds aren't a pain point, don't use gated checkin. Why? Simple: If your build/test process takes 10 minutes to run, that's 10 minutes that I have to wait to know whether I can keep working, or if I'm going to get my changes kicked back out at me. It discourages small, frequent checkins and encourages giant, contextless checkins.
It's also 10 minutes that Developer B has to wait to grab Developer A's latest changes. If Developer B needs that checkin to keep working, that's wasted time. Trust your CI process to catch a broken build and your developers to take responsibility and fix them on the rare occasions when they occur.
It's more appropriate (depending on your branching strategy) to do a gated checkin against your trunk, and then CI builds against your dev/feature branches. Of course, that opens up the whole "how do I build once/deploy many when I have multiple branches?" can of worms. :)
If your integration tests are slow and require a deployment to succeed, they're probably not good candidates to run as part of CI. Have a CI/gated checkin build that just:
Builds
Runs fast unit tests
Runs high-priority, non-deployment-based integration tests
Then, have a second build (either scheduled or rolling) that actually deploys and runs the whole test suite. You can schedule it according to your tastes -- I usually go with one at noon (or whatever passes for "lunch break" among the team) and one at midnight. That way you get a tested build from the morning's work, and one from the afternoon's work.
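If your definitions are vNext-style (TFS 2015 or later), that scheduled run can be as simple as a timer job that queues the full-suite definition over the Build REST API. A sketch under that assumption, with a placeholder URL, definition id, and credentials, assuming the requests and requests_ntlm packages:

    # Hypothetical trigger for the noon/midnight full-suite build via the
    # TFS 2015 Build REST API. URL, definition id, and credentials are placeholders.
    import requests
    from requests_ntlm import HttpNtlmAuth

    TFS_PROJECT = "http://tfs:8080/tfs/DefaultCollection/MyProject"  # placeholder

    resp = requests.post(
        f"{TFS_PROJECT}/_apis/build/builds?api-version=2.0",
        json={"definition": {"id": 42}},  # assumption: id of the full-suite definition
        auth=HttpNtlmAuth("DOMAIN\\user", "password"),  # placeholder credentials
    )
    resp.raise_for_status()
    print("Queued build", resp.json()["id"])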
Using the Release Default Template, you can target your scheduled builds to just go as far as your "dev" (/test/integration/whatever you call it) stage. When you're ready to actually release a build, you can kick off a new release using that specific build that targets Production and let it go through all your stages normally.
Don't get tripped up by the word 'Release'. In MS Release Management (RM), creating a Release does not necessarily mean you will deliver this code to your customers, or even that it has the quality to move out of dev. It only means you are putting a version of the code on your Release Path. This version/release can stop right at the first stage, and that is OK.
Let's say you have a Release Path consisting of Dev, QA, Prod. In the course of a month, you may end up releasing 100 times in Dev, but only 5 times in QA and once in Prod.
You should drive toward getting each check-in deployed and integration tested. If the tests take a long time, do only the minimum during the (gated or not) check-in (for example, unit tests + deployment), and the rest in the second stage of your Release Path (which should be automatically triggered after the first stage completes). It does not matter if the second stage takes a long time. As a dev: check in, and once the build (and the first stage) completes successfully, expect the rest to go smoothly and continue on your next task. (Note that only the result of the first stage impacts your TFS build.)
Most of the time, the deployment and the rest will run fine, so there won't be any impact on the devs. Every now and then you will have a failure in the first stage; the dev will then interrupt his new work and get a resolution ASAP.
As for the issue that every build is kept indefinitely: for the time being, that is a side effect of RM. Current customers need to do the cleanup manually (or script it). In a coming release, a new retention policy for releases/builds will be put in place to improve this. It has not been worked on yet, but the intention would be, for example, to instruct RM to keep all releases that went to Prod, keep only the last 5 that went to QA, and keep only the last 2 that went to Dev.
This is not a simple question, so the answer needs to be articulated as well.
First of all, you will never keep all of your builds; the older a build, the less interesting it is to anyone, and a build that doesn't get deployed to production is overtaken by builds that reach that stage.
A team must agree on the criteria that make a build worth keeping around and on how long to keep it. Define a policy for builds shipped to production or customers: how long do you support them? Until the next release, until the one after that, for five years? Potentially shippable builds that are not yet in your customers' hands are superseded by newer ones, so you can use a numeric or a temporal criterion (TFS implements only the first, as the second is more error-prone). Often you have more than one shippable build, when you want a safety-net option and the ability to select which one of a pool to deliver (the one with the most manageable bugs).
The TFS "Retain Indefinitely" flag should be used when you cannot automate the previous criteria, so you switch to a manually implemented policy. Indefinitely is not forever; it means for an unknown time interval.
When I have a Gated Build defined and somebody checks in code, the Integration Build field of a work item changes to the Gated Build number (if the developer associates the check-in with work items, of course). Once a CI build is triggered, this field changes to the CI build number.
My question is: Is there any way of not changing the Integration Build field of a work item once a Gated Build is triggered?
EDIT
Let me be more clear about how we work.
We have several work items (some are user stories and some are bugs). When a developer checks in code, he or she associates the check-in with those user stories, which get the Resolved state and a "Gated x.x.x.x" in the Integration Build field. We never test gated builds. Instead, every night we manually trigger a build, and those work items get updated again, but this time with a "Release x.x.x.x" in the Integration Build field. The next day we test those work items, but the process continues and developers keep checking in more user stories or bugs (which will get the "Gated ..." value).
Sometimes we get confused and test work items that should not be tested because they are in the "Gated" state.
Even if we had branches, that would not solve our problem, because the developer associates a check-in with work items and we can't change that.
We do not test gated builds because our QA team is small: the dev team has 20 developers and the QA team has only 2. The process of deploying the application takes about 10 minutes, and it would be a pain to wait 10 minutes on every developer check-in. Also, changing the code while we are testing is never a good idea, because it can mess up our tests.
Somebody may think that our process is wrong and suggest a new approach. That would be very welcome, but what we do works very well apart from that small issue.
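One hedged workaround for the "which work items are actually testable" confusion, without changing how developers associate check-ins: query the Integration Build field and surface only items already stamped with a "Release" build. The sketch below uses WIQL over the REST API (TFS 2015 and later); the URL, credentials, and the CONTAINS text are assumptions to adjust to your build numbering, and it assumes the requests and requests_ntlm packages.

    # Hypothetical WIQL query listing only resolved work items whose
    # Integration Build field points at a "Release" build, so QA can skip
    # items that are still on a "Gated" build. URL and credentials are
    # placeholders.
    import requests
    from requests_ntlm import HttpNtlmAuth

    TFS_PROJECT = "http://tfs:8080/tfs/DefaultCollection/MyProject"  # placeholder

    wiql = {
        "query": (
            "SELECT [System.Id], [System.Title] FROM WorkItems "
            "WHERE [Microsoft.VSTS.Build.IntegrationBuild] CONTAINS 'Release' "
            "AND [System.State] = 'Resolved'"
        )
    }

    resp = requests.post(
        f"{TFS_PROJECT}/_apis/wit/wiql?api-version=1.0",
        json=wiql,
        auth=HttpNtlmAuth("DOMAIN\\user", "password"),  # placeholder credentials
    )
    resp.raise_for_status()

    for item in resp.json()["workItems"]:
        print(item["id"], item["url"])

QA could run this (or an equivalent saved work item query) each morning to build the day's test list, leaving the Gated-stamped items alone.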