I'm starting to dive into TFS 2012. I have a basic understanding of the tiers, of how build servers, controllers and agents work, and of how different build definitions can target different configurations and projects.
However, one of the things I'm struggling with is a requirement for our source control solution that says I need to be able to prove that a particular changeset or shelveset produced a particular build. That is, given a particular binary, I can point to the release changeset that generated it. I should also be able to point to the test changeset that was merged into the release branch. The idea here is not just separation of duties, but validating that, because the release and test changesets are identical, no code was injected into the project by a code reviewer.
I've read one blog post that talks about "binary promotions" -- would that concept be useful in my situation? I'm having a hard time finding out how binary promotion is set up in TFS.
Deployment
Out of the box, TFS doesn't really support deployments: it can deploy to one location at the end of a build, which is often a test server (think Lab Management). TFS 2012 has built-in support for Azure deployments, but those also happen at the end of a build, and the build artifacts cannot be automatically deployed to a new location afterwards.
You could modify the build template to allow releasing to different locations, but that would still mean a fresh build for every environment, not a true binary promotion.
TFS does, however, have a concept of build quality, and it fires off events when that quality is changed. TFS Deployer is a third-party tool that hooks into the quality-change event and can execute PowerShell scripts. This means that with a simple change of a dropdown value you can automatically kick off a script that releases to any environment you want. You can customize the build quality list (per team project collection) to be a list of environments (dev, UAT, staging, production, etc.), and the script then figures out where to release the specific build to.
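As an illustration, here is a minimal sketch of the kind of PowerShell script TFS Deployer could run on a quality change. The parameter names, share paths and quality-to-environment mapping are all hypothetical; adapt them to whatever your TFS Deployer configuration actually passes in:

    param(
        [string]$BuildQuality,    # e.g. 'UAT', 'Staging', 'Production'
        [string]$DropLocation     # UNC path to the build's drop folder
    )

    # Hypothetical mapping of build qualities to deployment targets
    $targets = @{
        'Dev'        = '\\devserver\wwwroot'
        'UAT'        = '\\uatserver\wwwroot'
        'Staging'    = '\\stageserver\wwwroot'
        'Production' = '\\prodserver\wwwroot'
    }

    if (-not $targets.ContainsKey($BuildQuality)) {
        Write-Host "No deployment configured for quality '$BuildQuality'; nothing to do."
        return
    }

    $destination = $targets[$BuildQuality]
    Copy-Item -Path (Join-Path $DropLocation '*') -Destination $destination -Recurse -Force

    # Record what went where, since TFS itself keeps no quality-change history
    Add-Content -Path '\\server\deployments\deployments.log' `
        -Value "$(Get-Date -Format s) $BuildQuality <= $DropLocation"

The logging step at the end matters more than it looks, for reasons covered below.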
VS2012 also has some nice improvements to Web Deploy, which means deployment configurations are stored in source control with the project; in theory they'll be available in the drop folder for TFS Deployer to make use of.
I don't believe TFS keeps a history of build qualities, which means you can't use quality changes to maintain a list of what is deployed to which environment. You could fairly easily record this information as part of the deployment script, though (as the sketch above does), or at the very least add a custom summary node to the build with information about the release.
TFS 2012 does have the ability to mark a build as deployed as part of the Azure deployment functionality; you could mark TFS Deployer builds as deployed using a script, but it doesn't feel very useful.
Octopus Deploy is another project that's worth checking out, and it could be used instead of TFS Deployer if your build template creates NuGet packages. It requires a bit more control over the production hardware, as you need to install agents in each environment to handle releases, but it solves a lot of other deployment issues.
Versioning
Once you have a nice, consistent way of releasing automatically that people don't bypass, you can look at enhancing the build template to inject the build version or changeset number as the assembly version for anything built as part of that automated build. There are a number of different ways to do it, and plenty of blog posts and tools to help you achieve it; one is sketched below.
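For example, a minimal sketch of a version-stamping step run before compilation; the parameter names and the 1.0.<changeset>.0 scheme are assumptions, not the only way to do it:

    param(
        [string]$SourcesDirectory,   # root of the workspace the build checked out
        [int]$Changeset              # changeset number being built
    )

    $version = "1.0.$Changeset.0"    # hypothetical versioning scheme

    # Rewrite the AssemblyVersion attribute in every AssemblyInfo.cs
    Get-ChildItem $SourcesDirectory -Recurse -Filter AssemblyInfo.cs | ForEach-Object {
        $file = $_.FullName
        (Get-Content $file) -replace 'AssemblyVersion\("[^"]*"\)', "AssemblyVersion(`"$version`")" |
            Set-Content $file
    }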
Alternatively you could just use automatic assembly versioning ([assembly: AssemblyVersion("1.0.*")]) to encode the date/time the build occurred. That ends up like 1.0.1234.123, where 1234 is the number of days since January 1st 2000, and 123 is the number of seconds since midnight divided by two.
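To make the scheme concrete, a small PowerShell calculation showing what today's automatic version parts would be:

    # 1.0.* semantics: build = days since 2000-01-01,
    # revision = seconds since local midnight, divided by two
    $now      = Get-Date
    $build    = ($now.Date - [datetime]'2000-01-01').Days
    $revision = [math]::Floor(($now - $now.Date).TotalSeconds / 2)
    "An assembly built right now would get version 1.0.$build.$revision"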
If you're deploying websites, then I highly recommend injecting the current build version into the HTML somewhere. This way you can check what version a website is running without needing access to the bin directory. The version can also be appended as a querystring to CSS/JS file imports to ensure no browser caching occurs between versions. A sketch follows below.
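For instance, a minimal sketch of a post-build step that stamps the version into the site; the {{BUILD_VERSION}} placeholder and the Site.Master path are hypothetical:

    param(
        [string]$SiteRoot,   # folder containing the built website
        [string]$Version     # e.g. '1.0.12345.0'
    )

    # Replace a placeholder in the master page so every rendered page
    # (and any css/js link written as app.css?v={{BUILD_VERSION}})
    # carries the build version
    $master = Join-Path $SiteRoot 'Site.Master'
    (Get-Content $master) -replace '\{\{BUILD_VERSION\}\}', $Version |
        Set-Content $master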
Thoughts
Personally I'm hoping Microsoft realise that the XAML build workflows are trying to do too much, and that they split the different concerns (build, test, deployment...) into different scriptable parts. Of course that would not be until the next major release of TFS, which is years away. Although with Team Foundation Service they are trying to iterate a lot quicker, so they may actually extend the Azure deployment stuff into something more useful in the nearer future.
Environment:
TFS 2018 with source code in TFS Git
developers are using gitflow-like workflow (main, develop and short-lived feature branches)
there is a build definition used for CI (off of develop branch)
... and another one for releases (off of main branch)
as the project evolves, build definitions get updated (new steps, etc.)
What is the best approach that allows reproducing previous builds (or, at a minimum, release builds)? (in case a previously made build is lost in a boating accident)
Ideally I need to be able to plug in a version (e.g. 8.5.12345.1) somewhere, press OK, and eventually receive output identical to that produced by the corresponding build in the past.
Your best approach is to switch to YAML builds and releases. That way your pipeline is versioned together with the code.
If you don't do that, you may need to clone your build and release definitions every time you make breaking changes.
Alternatively, use the version diff view in your pipeline to go back to an older version, or use the JSON to create a new definition through the API (sketched below).
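For the API route, a minimal PowerShell sketch against TFS 2018 (api-version 4.1); the server URL and definition id are placeholders:

    # Fetch a build definition's JSON so it can be archived alongside a
    # release, or POSTed back to _apis/build/definitions to recreate it
    $baseUrl = 'http://tfs:8080/tfs/DefaultCollection/MyProject'
    $defId   = 42   # hypothetical definition id

    $definition = Invoke-RestMethod -UseDefaultCredentials `
        -Uri "$baseUrl/_apis/build/definitions/${defId}?api-version=4.1"

    $definition | ConvertTo-Json -Depth 100 | Set-Content "definition-$defId.json"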
Upgrading to Azure DevOps Server 2020 will give you more advanced YAML features not yet available in Team Foundation Server 2018.
Note: for truly reproducible builds, you'll also need to find a way to lock the build tasks themselves; TFS and Azure DevOps automatically roll forward to the latest minor version of a given build task. While task authors should try to prevent breaking changes in those minor upgrades, there are no guarantees. You can also never rely on any tool installer that uses a v2.x notation, or on a task that relies on "latest". Azure DevOps isn't ideally suited for fully reproducible builds.
You can pin task versions in YAML now; if I remember correctly, this was added in Azure DevOps Server 2020.
You can set which minor version gets used by specifying the full version number of a task after the @ sign (example: GoTool@0.3.1). You can only use task versions that exist for your organization.
See: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml#task-versions
The Tasks docs offer special scripts to pin the versions of out-of-the-box tasks as well.
My team has a TFS build machine building check-ins.
Today we have more than a dozen build definitions to avoid building projects/running tests in unaffected areas. These were set up when we weren't using incremental builds.
Now that we have enabled incremental sync/build, I am thinking about creating one giant gated definition that includes all my team's source code. Since incremental sync/build is enabled, unchanged files don't get built anyway, but TFS still runs all the tests.
Is there a way to dynamically pick just test assemblies that were built recently?
I can do it by modifying the build template to filter out test assemblies that are more than x hours old, but before I go that route I wanted to check if there is something already available.
thanks
Customizing the template is the right path. Besides the official documentation, take a look at the ALM Rangers' Build Customization Guide.
The FindMatchingFiles activity has no date filtering, so you have to roll your own, along the lines of the sketch below.
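A minimal sketch of such a filter in PowerShell, to show the idea before you port it into a custom workflow activity; the drop path, name pattern and two-hour threshold are assumptions:

    # Pick only test assemblies written recently, i.e. rebuilt by this
    # incremental build, and emit their paths for the test runner
    $binariesDirectory = '\\buildserver\drops\TeamGate\latest'
    $cutoff = (Get-Date).AddHours(-2)

    Get-ChildItem $binariesDirectory -Recurse -Filter '*Tests.dll' |
        Where-Object { $_.LastWriteTime -gt $cutoff } |
        Select-Object -ExpandProperty FullName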
Our project group stored the binary files of the project we are working on in an SVN repository for over a year. In the end our repository grew out of control; taking backups of the SVN repo became impossible at one point, since each binary that is checked in is around 20 MB.
Now we have switched to TFS. We are not responsible for backing the repository up -- our IT team takes care of it, and we have more network and storage capacity for backups because of that -- but we still want to decide what to do with the binaries. As far as I know TFS stores deltas even for binary files, but those deltas will be huge, and we might end up reaching our disk space quota one day. I would like to plan things better from the start; I don't want to get caught in a bad situation when it's too late to fix the problem.
I would prefer not keeping builds in source control, but our project group insists on keeping a copy of every binary for reproducing the problems we see in the production system. I can't get them to fetch the source code from TFS, build it and create the binary, because according to them it is not straightforward.
Does TFS offer a better build versioning method? If someone can share some insight I'd really be grateful.
As a general rule you should not store build output in TFS. Occasionally you may want to store binaries for common libraries used by many applications, but tools such as NuGet get around that.
Build output has a few phases in its life, and each phase should be stored in a separate place, e.g.:
Build output: when code is built (by TFS / Jenkins / Hudson etc.) the output is stored in a drop location. This storage should be considered volatile, as you'll be producing a lot of builds, many of which will be discarded.
Builds that have been passed to testers: these are builds that have passed some very basic QA, e.g. the code compiles, static analysis tools are happy, unit tests pass. Once a build has been deemed good enough to be given to test, it should be moved from the drop location to another area. This could be a network share (non-production, as the build can be reproduced). A number of builds may get promoted during the lifetime of a project, and you will want to keep track of which versions the testers are using in each environment.
Builds that have passed test and are in production: your test team deems the build to be of a high enough quality to ship. As part of your go-live process, you should take the build that has been signed off by test and store it in a third location. In ITIL speak this is a Definitive Media Library (DML). This can be a simple file share, but it should be considered "production" and have the same backup and resilience criteria as any other production system.
The DML is the place where you store the binaries that are in production (and associated configuration items such as install instructions, symbol files etc.). The tool producing the build should also have labelled the source in TFS so that you can work out which code was used to produce the binary. Your branching strategy will also help with connecting the binary to the code. A promotion script along these lines is sketched below.
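As a rough illustration, a PowerShell sketch for promoting a signed-off build from the drop location into the DML; every path and the manifest format are assumptions:

    param(
        [string]$DropFolder,    # e.g. \\build\drops\MyApp_20121001.3
        [string]$DmlRoot,       # e.g. \\dml\releases
        [string]$BuildNumber,
        [string]$Changeset      # or label, to tie binaries back to source
    )

    $target = Join-Path $DmlRoot $BuildNumber
    Copy-Item -Path $DropFolder -Destination $target -Recurse

    # The manifest records which code produced these binaries
    @{
        buildNumber = $BuildNumber
        changeset   = $Changeset
        promotedOn  = (Get-Date -Format s)
    } | ConvertTo-Json | Set-Content (Join-Path $target 'manifest.json')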
It's also a good idea to have a "live-like" environment, separate from your regular dev and test environments. As the name suggests, it contains only the code that has been released to production. This enables you to reproduce production bugs quickly.
Two methods that may help you:
Use the Team Foundation Build system. One of its advantages is that you can set retention policies for finished builds: for example, you can tell TFS to keep the 10 latest successful builds and the two latest failed ones. You can also tell TFS to keep certain builds (e.g. "production builds"/final releases) indefinitely. These binaries folders can of course also be backed up externally, if needed.
Use a different collection for your binaries, with another (less frequent) backup schedule. TFS backups work on whole collections, but by separating out data that doesn't change as frequently as the source, you can lower the backup cost. This of course depends on how frequently you are required to back up the binaries.
You might want to look into creating build definitions in TFS to give your project group an easy "one button" way to grab the source code from a particular branch, build it, and drop the output to a location. That way they get to keep their binaries, and you don't have to put them under source control.
If you are using a branching strategy where you create Release or RTM branches when you push something to production, you can point your build definitions at those branches, and they can be triggered manually from the TFS portal or from within Visual Studio.
In 2009, there was an SO question on the same topic.
I'm wondering whether later versions of Team Foundation Server are better at longer build pipelines. Compare the features of Jenkins, TeamCity, and ThoughtWorks' Go (my employer).
The visualization of build pipelines is important to me, as is notification about individual stages passing or failing. That, and the easy clone-ability of, say, a "trunk" pipeline into one for a release branch as that branch comes into being.
Secondly, a personal holy grail is the CI server storing its config in the SCM that holds the buildable thing itself, and even silently picking up on the creation of branches to provision new pipelines. Can TFS be configured to store the CI definitions/scripts on its SCM side rather than in its accompanying SQL Server?
TFS build consists of three components:
The build definition - stored on the SQL Server data tier.
The build workflow - a XAML file stored in source control.
The supporting MSBuild scripts - these usually contain user-defined actions and are also stored in source control.
As the build progresses, you can see a visualization of the build steps, and you also get separate logs for the main build and for the MSBuild output.
The build definition in TFS is merely a collection of build settings, similar to CC.Net's config file or TeamCity's build configuration tab, both of which are stored on the file system as well. Assuming there's a backup plan for the database, you don't really need to store the build definitions in source control, but if you must, it's possible by exporting the tbl_BuildDefinition table.
The TFS Power Tools add cloning functionality for build definitions.
There's no OOTB support for provisioning build definitions from a new branch, though it's fairly feasible using the TFS API.
Bit late to the party, but just don't bother with TFS if you want advanced build pipeline automation. It simply doesn't cut it.
I have used both Jenkins and TFS extensively. TFS is just. pure. crap. Here's why.
No downstream/upstream builds.
No pipeline/orchestration builds (like Jenkins has).
Obscure ways of adding build steps, falling back to using MSBuild.
Slow, and still polls source control.
Ties you to MSTest.
And please don't point me to "Oh look, you can do everything if you write a custom activity". I'm not wasting time doing development for a closed-source, sub-par platform. If I am going to contribute something, it's to a FREE, OPEN-SOURCE platform.
I would like to hear the best practices or know how people perform the following task in TFS 2008.
I intend to use TFS for building and storing web application projects. Sometimes these projects can contain hundreds of files (*.cs, *.ascx, etc.).
During the lifetime of the website, a small bug will get raised, resulting in, say, a stylesheet change and a change to default.aspx.cs.
On checking in these changes to TFS, an automated build would be triggered (great!). However, for deploying the changes to the target production machine, I only need to deploy, for example:
style.css
default.aspx
MyWebApplications.dll
So my question is: can MSBuild be customized to generate a "code pack" of only the files which require deployment to the production server, based on the changeset which caused the rebuild?
You are probably going to have a hard time getting MSBuild itself to do this, but the ideal tool for your situation is the Web Deployment Tool, aka MSDeploy. With this tool you tell it to sync the changes to the target website; it determines which files have changed and deploys just those. You can also customize the deployment and do a whole bunch of other things. It's a really great tool. A sketch follows below.
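To make that concrete, here is a minimal sketch of an MSDeploy sync invoked from PowerShell; the install path, site paths and computer name are placeholders:

    # Sync the build output against the live site; MSDeploy diffs the two
    # and pushes only the changed files (style.css, default.aspx, the dll...)
    & 'C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe' `
        -verb:sync `
        '-source:contentPath=C:\Builds\MyWebApplication\_PublishedWebsites\MyWebApplication' `
        '-dest:contentPath=C:\inetpub\wwwroot\MyWebApplication,computerName=WEBSERVER01' `
        -whatif   # remove -whatif to deploy for real once the change list looks right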