We just started using Visual Studio Release Management for one of our projects, and we're already having some problems with how we are doing things.
For now, we've created a single release stage, which is responsible for deploying our build artifacts to a dedicated virtual machine for testing. We intend to use this machine to run our integration tests later on.
Right now we have a gated check-in build process: each check-in runs all the unit tests, and we configured the release trigger to fire on this build as well. At first this seemed reasonable: after each check-in, the project would be deployed and the integration tests executed. Then we noticed that all the released builds were polluting the Release Management console, that every build was being marked as "Retain Indefinitely", and that our drop folder was growing fast. (In hindsight it makes sense that the tool does this automatically, since any build could be promoted to another stage and its artifacts need to be persisted.)
The question then is: what are we doing wrong? I've been thinking about this and it really does not make any sense to "release" every checkin. We should probably be starting this release process when a sprint ends, a point that can be considered a "release candidate".
If we do that though, how and when would we run our automated integration tests? I mean, a deployment process is required for running those in our case, and if we try to use other means to achieve that (like the LabTemplate build process) we will end up duplicating deployment code.
What is the best approach here?
It's tough to say without being inside your organization and looking at how you do things, but I'll take a stab.
First, I generally avoid gated checkin builds unless there's a frequent problem with broken builds. If broken builds aren't a pain point, don't use gated checkin. Why? Simple: If your build/test process takes 10 minutes to run, that's 10 minutes that I have to wait to know whether I can keep working, or if I'm going to get my changes kicked back out at me. It discourages small, frequent checkins and encourages giant, contextless checkins.
It's also 10 minutes that Developer B has to wait to grab Developer A's latest changes. If Developer B needs that checkin to keep working, that's wasted time. Trust your CI process to catch a broken build and your developers to take responsibility and fix them on the rare occasions when they occur.
It's more appropriate (depending on your branching strategy) to do a gated checkin against your trunk, and then CI builds against your dev/feature branches. Of course, that opens up the whole "how do I build once/deploy many when I have multiple branches?" can of worms. :)
If your integration tests are slow and require a deployment to succeed, they're probably not good candidates to run as part of CI. Have a CI/gated checkin build that just does the following (one way to split the tests is sketched right after the list):
Builds
Runs fast unit tests
Runs high-priority, non-deployment-based integration tests
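If you need a way to keep the deployment-dependent tests out of the fast build, one common approach is to tag the slow tests with a test category and filter on that category. Here is a minimal command-line sketch using vstest.console.exe's /TestCaseFilter switch (the assembly name and category name are made up for illustration, not taken from the question):

REM Fast (CI/gated) build: skip tests tagged as needing a deployed environment
vstest.console.exe MyProject.Tests.dll /TestCaseFilter:"TestCategory!=RequiresDeployment"

REM Scheduled build: run the full suite, deployment-based tests included
vstest.console.exe MyProject.Tests.dll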
Then, have a second build (either scheduled, or rolling) that actually deploys and runs the whole test suite. You can schedule it according to your tastes -- I usually go with one at noon (or whatever passes for "lunch break" among the team), and one at midnight. That way you get a tested build from the morning's work, and one from the afternoon's work.
Using the Release Default Template, you can target your scheduled builds to go only as far as your "dev" (or test, or integration, or whatever you call it) stage. When you're ready to actually release a build, you can kick off a new release of that specific build that targets Production and let it go through all your stages normally.
Don't get tripped up by the word 'Release'. In MS Release Management (RM), creating a release does not necessarily mean the code will be delivered to your customers, or even that it has the quality to move out of dev. It only means you are putting a version of the code onto your release path. That version/release can stop right at the first stage, and that is OK.
Let's say you have a Release Path consisting of Dev, QA, Prod. In the course of a month, you may end up releasing 100 times in Dev, but only 5 times in QA and once in Prod.
You should aim to get each check-in deployed and integration tested. If the tests take a long time, do only the minimum during check-in (gated or not), for example unit tests plus deployment, and run the rest in the second stage of your release path (which should be triggered automatically after the first stage completes). It does not matter if the second stage takes a long time. As a dev: check in, and once the build (and the first stage) completes successfully, expect the rest to go smoothly and move on to your next task. (Note that only the result of the first stage affects your TFS build.)
Most of the time the deployment and the rest will run fine, so there is no impact on the dev. Every now and then there will be a failure somewhere along the path, and the dev will interrupt their new work and get a resolution ASAP.
As for the issue that every build is kept indefinitely, for the time being, that is a side effect of RM. Current customers need to do the clean up manually (or script it). In the coming releases, a new retention policy for releases/builds will be put in place to improve this. This has not been worked on yet, but the intention would be to, for example, instruct RM to keep all releases that went to Prod, keep only the last 5 that went to QA and keep only the last 2 that went to Dev.
This is not a simple question, so the answer has several parts.
First of all, you will never keep all of your builds; the older a build, the less interesting it is to anyone, and a build that never gets deployed to production is overtaken by the builds that do reach that stage.
A team must agree on the criteria that make a build worth keeping around, and for how long. Define a policy for builds shipped to production or to customers: how long do you support them? Until the next release, until the one after that, for five years? Potentially shippable builds that are not yet in your customers' hands are superseded by newer ones, so you can use a numeric or a temporal criterion (TFS implements only the first, as the second is more error-prone). You will often have more than one shippable build at a time, when you want a safety-net option and the ability to pick which one from the pool to deliver (say, the one with the most manageable bugs).
TFS's "Retain Indefinitely" flag should be used when you cannot automate the criteria above and have to fall back to a manually applied policy. "Indefinitely" is not forever; it means for an unknown time interval.
Related
As is good practice, I've got Jenkins set up at work to automatically build everything for continuous integration, pulling files from our Git repositories. On our development branches, builds get kicked off automatically whenever anyone commits a change. When we want to do formal testing, we pull the build from Jenkins and use that; and when we want to sign off a change request, we quote the Jenkins build number where the change went in. So far, so good.
The problem we have is that builds are a significant size. For our SDK, we have to build across multiple platforms so that we can check it works on all of them. At maybe 50MB per build, this starts to mount up! In the short term I can keep asking IT to give me more storage space, but in the longer term I'd like a more strategic solution.
The obvious answer in Jenkins is to set up deletion rules, whether deleting after some time or after some number of builds. The problem then though is that if we delete that older development build, we lose the traceability of what we tested. I'm sure most engineers at one time or another have had to do a binary chop through older builds to find an obscure bug/regression which was only spotted some time later. For me, it is unacceptable to lose that history.
The important feature of build history though is not the binary build artifacts, but the build log recording what Git commits (or anything else; toolchain versions for example) went into each build. That's what lets us go back to investigate older builds and recreate them if required. The build log is relatively small (and highly compressible, being a text file). We do still need to keep build artifacts for recent builds though, so that testers can use them. So I'm thinking a better alternative would be to preserve the build log in Jenkins for all builds, but to have Jenkins automatically delete build artifacts after some time.
Does anyone know of a way in Jenkins (perhaps a plugin?) which would let us automatically delete/archive build artifacts from older builds, but still keep the build details and log for those builds? I'm happy to do a Jenkins upgrade if necessary to get this feature. And of course this needs to be only for selected development build jobs - all release build jobs need their build artifacts to be preserved forever, as do any builds which have the "keep forever" button ticked.
If it's absolutely necessary, I could set up a separate cron job to do this on the Jenkins file area. That's a nasty hack though, and I suspect it's likely to cause some issues with Jenkins, so I'd rather not do something that brute-force if there's a better alternative.
I think you need this option in your Jenkinsfile:
buildDiscarder(logRotator(artifactNumToKeepStr: '10'))
artifactNumToKeepStr: only this many of the most recent builds have their artifacts kept. Older builds keep their build records and logs, but their archived artifacts are deleted, which is exactly the split you're after.
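For context, here is a minimal declarative-pipeline sketch showing where that option goes (the stage contents, artifact pattern and the exact numbers are just illustrative). The build records, including the console log and the list of commits that went in, are kept according to the normal build retention, while only the artifacts of older builds are discarded:

pipeline {
    agent any
    options {
        // Keep the last 100 build records (logs, changesets, test results),
        // but keep archived artifacts only for the 10 most recent builds.
        buildDiscarder(logRotator(numToKeepStr: '100', artifactNumToKeepStr: '10'))
    }
    stages {
        stage('Build') {
            steps {
                sh './build.sh'                       // illustrative build step
                archiveArtifacts artifacts: 'dist/**' // illustrative artifact pattern
            }
        }
    }
}

Release jobs can simply omit the artifact limits, and any build marked "Keep this build forever" is skipped by the discarder, so pinned builds keep their artifacts.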
The most aggressive build retention policy one can set for pull request builds is described in "Clean up pull request builds":
a policy that keeps a minimum of 0 builds
Still, it means that successful PR builds (with artifacts no one will ever need) are deleted only at the next automatic retention cleanup, usually the next day; in practice that leaves nearly two days' worth of no-longer-needed builds around.
In our particular case it seems desirable to clean up successful PR builds as soon as possible: they are frequent, and the sheer size of their artifacts can periodically strain our not-yet-fully-organized infrastructure dedicated to PR handling (it will be significantly improved, but not as soon as we'd like, and those successful PR builds would still be dead weight).
As far as I can see, the only way to do that would be to delete the builds manually.
While it is not too difficult to implement, I'd still like to check whether there is a simpler standard way to delete successful PR builds automatically.
P.S.: There is one particularity in our heavily customized build process: we have multiple dependent artifacts (create A, use it to build B, create C to test B, and so on). So trying not to run Publish Artifacts on an overall-successful build with a custom condition, as suggested below, is not exactly feasible for us.
Let's look at the problem from a different perspective: The problem isn't that builds are retained, the problem is that your PR builds are publishing artifacts.
You can make the Publish Artifacts steps conditional so that they don't run during PRs. Something like and(succeeded(), ne(variables['Build.Reason'], 'PullRequest')) will make the task only run if it's not a PR.
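If you're on YAML builds, a minimal sketch of that approach might look like the following (the task version, path and artifact name are illustrative, not taken from the question):

steps:
- task: PublishBuildArtifacts@1
  displayName: Publish artifacts (skipped for PR builds)
  inputs:
    pathToPublish: $(Build.ArtifactStagingDirectory)
    artifactName: drop
  # Publish only when the build succeeded and was not triggered by a pull request.
  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))

In a classic (designer) build, the same expression goes into the task's Control Options as a custom condition.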
I've always worked with the Continuous Integration (CI) build in TFS. However, in my last project we started to use the gated check-in trigger.
Are there any disadvantages when using a gated check-in? Because if it prevents the team from checking in broken code, what's the purpose of a CI trigger?
Gated checkin is a form of continuous integration build. In TFS, it creates a shelveset containing the code that's being validated, then runs a build of that code. Only if that code builds successfully and all configured unit tests pass does the code actually get committed.
Continuous integration is different -- in CI, the code is committed regardless of what happens as a result of the build. If a CI build fails due to bad code being committed, the code is still there, in source control, available for everyone to grab.
Now for the opinion-based part:
Gated checkin is great if you have a large number of developers of varying levels of skill/experience, since it prevents broken code from going into source control. The downside is that it increases the time between code being committed and code being available for others, and thus can lead to situations where people are sitting around twiddling their thumbs waiting for the build to complete successfully.
I recommend using gated check-in as a stopgap. If you have a ton of gated check-in builds failing, then it's doing its job and preventing bad code from getting committed. If, over time, the team matures and gated check-in builds fail only infrequently, then it's serving less purpose: switch over to continuous integration and correct failing builds as they come, instead of delaying every commit on the off chance there's a problem.
Gated check-ins are fundamentally flawed because they validate dirty local state, not versioned state, so they cannot replace independent checks run from a clean slate (such as in a CI pipeline). Similarly, QA often needs to be done against a matrix of environments (different compiler versions, different browsers, different operating systems, ...); that setup cost is better invested in a central CI system.
Gated checkins also make committing harder and slower. That is commonly a bad thing, because:
In TDD, members should be able to push commits with failing tests
Reporting bugs as failing tests is super useful
When cooperating on a WIP (work in progress) branch, members should be able to push dirty changes to make them available quickly to others
When working on a big change, it can be useful to let other members review unfinished work before investing the time to finish
Many people like regularly committing incomplete work as a form of snapshot/backup
Committing incomplete work is great when frequently switching between branches (stashing is only of limited help, particularly for new files)
QA cannot be time-limited, but committing should not take long
So, scenarios where gated check-ins are OK as a workaround or a quick-and-dirty mitigation:
Your VCS makes branching hard, or your company does not allow branching
The project is tiny
Only one developer
No CI present
Only specific long-lived branches are protected by the gates (but that's not an alternative to CI)
In our company we use gated check-in to make sure committed code doesn't have any problems, and we also run all of our unit tests there. We have around 450 unit tests.
Building the entire solution takes 15-20 seconds, and the tests take maybe 3 minutes on my local computer. When I build on the server it takes 10 minutes. Why is that? Is there extra stuff being run that I don't know about?
Be aware that there is additional overhead in the workflow before the actual build-and-test cycle (cleaning and getting the workspace is the main culprit most of the time). I have seen the same behaviour myself and never really got server build times close to what they are locally.
Once the build is running, you can view the progress and see where the time is being taken, this will also be in the logs.
In the build process parameters you can skip some extra steps if you just want to build the checked in code.
Set all these to False: Clean Workspace, Label sources, Clean build, Update work items with build number.
You could also avoid publishing (if you're doing it) or copying binaries to a drop folder (also, if you're doing it).
As others have suggested, take a look at the build log, it'll tell you what's consuming the time.
Using TFS, we have the following:
A main baseline
A development branch for each development effort. These get merged back to the baseline.
A release branch that is created with each release. Bug fixes are made here, released, and merged back to the baseline.
Using shelvesets, we can share code across development branches if needed without contaminating the baseline. Useful for code reviews.
When we deliver our development changes to baseline we have an automated build that kicks off and automatically places our changes on the test server.
The problem is that the business analysts can't see our changes until they're on the test server, and currently the only way to get our changes on the test server is to check them into baseline. So if the BA's find something wrong, the code is, unfortunately, already in baseline and we would have to go through the trouble of taking it back out.
Is there a way we can change our branching strategy or process to get the BA's what they want to see without contaminating our baseline?
Your branching strategy sounds exactly what we decided on at my company. I don't think the issue is with your branching strategy, I think the issue is that you have to check changes into the baseline in order to apply them to the test server.
At my company, changes aren't checked into the baseline until they are promoted and running in production. Release branches are what are deployed to the test servers... if bugs are found, or the BAs want to change something, we don't have to go through the pain of removing the changes from the baseline.
However, if you have a lot of concurrent releases, it can become a pain to merge all of the releases together before moving them to production, since you aren't merging into the baseline until later in the process. At my company we have a very strict release schedule and try to have only a single release working its way to production at a time. Because of this, waiting to merge a release into the baseline until it has been promoted to production hasn't created any issues or extra work for us so far.
How often do you do releases? Would you be able to deploy release branches onto your test servers, and have the baseline represent what is currently deployed in production?
(I'd make this a comment, but I'm still working on earning that privilege...)
I would not prefer that approach; I would suggest:
A main baseline that contains stabilized code. Code is merged into this branch from the respective release branch only after a successful release.
A release branch that gets created from Main for each release. This branch is used to generate release builds and is what gets deployed to the test environment.
A development branch created from the release branch. It is used for development efforts and is merged back to the release branch when I'm ready to hand my build over to testing.