Continuous TFS local deployment

I have CI configured with TFS. What are the best ways to organize post-build (or, even better, post-test) deployment? My binaries are a few libraries plus a single executable file.
Here is what I need:
Build on each commit. (This is configured and done)
When the build (or the tests) succeed, grab the binaries and drop them into a specific folder on the same build machine, fully replacing the previous files and folders. (I'd like the folder location to be configurable somehow.)
Launch the application with some parameters, with standard output redirected to a file. For example: App.exe param=paramValue > log.txt
And before starting the application I need to kill the previous instance of it. (This is a server-like process that is alive all the time.)
The most obvious solution I tried was to do this with a post-build script, but that attempt failed. See here

Use Release Management in conjunction with PowerShell (or better still, Desired State Configuration) scripts. Depending on your MSDN licensing, it could be free for you, and it's specifically designed from the ground up to manage releases.
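To give a feel for what the deployment side could look like once it's pulled out of the build, here is a minimal PowerShell sketch of the steps from the question, the sort of script Release Management (or a DSC resource) would invoke. The process name, folders, and arguments are placeholders, not anything TFS provides out of the box:

    # Hypothetical deployment script; adjust the process name, paths and arguments.
    param(
        [string]$DropFolder   = "\\buildserver\drops\MyApp\latest",  # where the build dropped the binaries
        [string]$DeployFolder = "C:\deploy\MyApp"                    # configurable target folder
    )

    # 1. Kill the previous instance, if it's still running.
    Get-Process -Name "App" -ErrorAction SilentlyContinue | Stop-Process -Force

    # 2. Fully replace the previous files and folders.
    if (Test-Path $DeployFolder) { Remove-Item $DeployFolder -Recurse -Force }
    Copy-Item $DropFolder $DeployFolder -Recurse

    # 3. Start the new version with parameters, redirecting standard output to a log file.
    Start-Process -FilePath (Join-Path $DeployFolder "App.exe") `
                  -ArgumentList "param=paramValue" `
                  -RedirectStandardOutput (Join-Path $DeployFolder "log.txt")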
Overextending the build process to also do deployment is an awful idea. The build tools were designed to build, and they're good at it! They're not good at the types of considerations you have when you're trying to do deployments.
The problem is that most CI solutions (TFS included) would get you to the point where you had binaries, then say "Welp, you're on your own! Have fun figuring out how to deploy this stuff!" This never ends well -- you end up with something inflexible and very difficult to troubleshoot and maintain.
The modern "devops" approach here is to have your application's requirements in source control, treated as code (in this case, as a DSC script or scripts).
One other consideration: It sounds like you're trying to treat a console application as a service. This is going to be a big, big pain for you, since most software that handles releases will not run in an interactive session. Turn it into a true Windows service and your life will be easier.
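If the application does become a proper Windows service (i.e. the executable hosts a ServiceBase implementation rather than running as a plain console program), registering and starting it is a couple of lines of PowerShell; the service name and path below are placeholders:

    # Hypothetical: register the (now service-capable) executable and start it.
    New-Service -Name "MyAppServer" `
                -BinaryPathName '"C:\deploy\MyApp\App.exe" param=paramValue' `
                -StartupType Automatic
    Start-Service -Name "MyAppServer"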

Related

How to setup Incremental Build in TFS?

I want to set up an incremental build in TFS, as we want to deploy only modified files to the physical path, not the entire codebase.
We want to build and deploy only the files that have changed since the previous deployment. This would reduce build and deployment time, and the developers wouldn't have to wait as long to see their changes deployed.
What you're describing is not an "incremental build". You are describing a much more complex situation than an incremental build.
What you're describing has never been an out-of-the-box option, and is in fact incredibly difficult to do properly, and ultimately would probably not impact things as much as you're hoping, anyway.
First of all, it's actually very difficult to determine a subset of files that have changed between deployments. And if you're building and deploying properly, then you're making a single build and deploying it along a pipeline of environments. This means that "what's different" at any given time is potentially different for every environment in your pipeline. Ex: DEV has version 5, QA has version 4, and PROD has version 3. So you have to start by assuming that you're going to use the oldest version. Build systems have no innate knowledge of "releases", so you'd have to build something into your build and release pipelines to track what source version constitutes the latest code in production.
Let's say you've solved that problem. You now have the ability to retrieve just the delta between what's deployed to production and the commit being built.
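As a sketch of what that delta lookup could look like, assuming the code lives in Git and your release pipeline maintains a tag (here called prod-current) pointing at whatever is in production; with TFVC you'd need the equivalent changeset bookkeeping:

    # Hypothetical: list the files changed between production and the commit being built.
    $prodTag = "prod-current"                    # tag maintained by your release pipeline
    $changedFiles = git diff --name-only $prodTag HEAD
    $changedFiles | Out-File changed-files.txt   # hand this list to the deployment step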
If you're working with compiled code, then you still need all of the source code, because you're going to have to rebuild the whole thing. Every assembly is going to get regenerated, and different metadata at compilation time is going to mean those assemblies are different even if the code that constitutes those assemblies is the same. And since assemblies can reference other assemblies, you have no straightforward way of determining at build time which assemblies have actually changed and need to be deployed. So you pretty much have no choice but to deploy all compiled assets every time. Note that this still applies to TypeScript or anything else that goes through a compiler/transpiler process; you need all of the code available, and it has to go through the entire build process.
So at this point, you still have to build your entire application to get the deployable output. Build time hasn't gone down at all. You've managed to bring down just a subset of static content (i.e. HTML pages, images, etc) to be deployed, though. That may have sped your deployments up a bit!
However, if the thing that's making your build and deployment process slow is that you have a ton of non-code-related static content, then you've gone through a very long and convoluted process to arrive at a much simpler solution: Move static content to a CDN and get it out of source control, or have a separate process that manages static content so that it can be deployed independently of unrelated application code.
You haven't really provided any information that can be used to provide a recommendation on how to proceed, but hopefully this answer is helpful in understanding why what you want to do is not going to solve your problem, unless you are dealing entirely with static content or scripts that don't require building.

Power tradeoff between buildscript and CI server

Although this question specifically involves Gradle and Bamboo, it really is a question about any build system (Ant/Maven/Gradle/etc.) and any CI tool (Bamboo/Jenkins/Hudson/etc.).
I was always under the impression that the purpose of a CI build is to:
Check out code from VCS
Run a buildscript (Gradle, etc.)
Deploy a binary (WAR, etc.) to an environment
Hence, all the guts and heavy-lifting (running automated tests, code analysis, test coverage, compiling, Javadocs, packaging, etc.) was all to be done from inside the buildscript.
But Bamboo seems to allow you to break this heavy-lifting out of the buildscript and into Bamboo itself. In Bamboo, you can add build stages and decompose the stages into tasks. Each task is something just as atomic/fundamental as an Ant task.
So it got me thinking: how much should one empower the CI tool? What typical buildscript functionality should be transferred over to Bamboo/CI? For instance, should I be compiling from a Gradle task, or from a Bamboo task? The same goes for all tasks/stages.
For some reason, I view this as the same problem as to whether or not to use stored procedures or put the data processing all at the application layer. What are the pros/cons of each approach?
TL;DR at the bottom
My experience is with Jenkins, so examples will relate to that.
One thing about any build system (be it a CI server or a buildscript) is that it should be stable, simple and self-contained, so that an untrained receptionist (with printed instructions and proper credentials) could run a build.
Ease of use and re-use
Based on the above, one would think that a buildscript wins. Not always. As with the receptionist example, it's about ease of use and ease of reproducibility.
If a buildscript has interdependent build targets that only work in correct order, dependence on pre-supplied property files that have to be adjusted for the correct branch ahead of build, reliance on environment variables that no-one remembers who created in the first place, and a supply of SCM revision numbers that have to be obtained by looking at the log of the commits for the last month... This is in no way better than a Jenkins job that can be triggered with a single button.
Likewise, a Jenkins workflow could be reliant on multiple dependent jobs, each manually pre-configured before the build, and need artifacts uploaded from one place to another... which no receptionist will do.
So, at this point, a good self-contained buildscript that only requires an ant build command to do everything from beginning to end is just as good as a Jenkins job that only requires the "Build Now" button to be pressed.
Self-contained
It is easy to think that, since Jenkins will (at some point) end up calling at least a portion of a buildscript (say, ant compile), Jenkins is "compartmentalizing" the buildscript into multiple steps, thus breaking away from being self-contained.
However, instead you should zoom out by one level and treat the whole Jenkins job configuration as a single XML file (which, by the way, can be stored and versioned through an SCM just like the buildscript).
So, at this point, it doesn't matter if the whole build logic is inside a single buildfile, or a single XML job configuration file. Both can be self-contained when done right.
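For example, a Jenkins job's entire configuration can be exported over its REST interface and committed next to the buildscript. The server URL and job name below are placeholders, and in practice you'd authenticate with a user/API token:

    # Hypothetical: export a Jenkins job's config.xml so it can be versioned in SCM.
    $jenkins = "http://jenkins.example.com"
    $job     = "my-app-build"
    $cred    = Get-Credential                    # Jenkins user with read access to the job

    New-Item -ItemType Directory -Force "jenkins-config" | Out-Null
    Invoke-WebRequest -Uri "$jenkins/job/$job/config.xml" `
                      -Credential $cred `
                      -OutFile "jenkins-config/$job.xml"
    # Commit jenkins-config/*.xml to the same repository as the buildscript.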
The devil you know
In the majority of cases, it comes down to what you know.
Some people find it easier to use the Jenkins UI to visually arrange their build workflow, reporting, emailing, and archiving (and, for anything that doesn't fit as wanted, to find a plugin). For them, figuring out a buildscript language is more time-consuming than simply trying it in the UI.
Others prefer to know exactly what every single line of their build script does, and don't like giving control to some piece of foreign code obfuscated by UI.
Both points have merit from all sides of the Quality-Time-Budget triangle.
The presentation
So far, things have been more or less balanced. However:
My Jenkins will email a detailed HTML report with a link to the job page and send it straight up to the (non tech-savvy) CEO. He can look at the list of the latest builds, along with the SCM changes for each build, linking him to the JIRA issues fixed in each build (all hyperlinks to the relevant places). He can select the build with the set of changes that he wants, and click "install iOS package" right off the iPad that he just used to view all this information. Meanwhile, I can go to the same job page, review the build logs and artifacts of each build, check the build-time trends, and compare the parameters that were used between the failing and succeeding jobs (and I didn't have to write any echoes to display that; it's just all there, because Jenkins does that for you).
With a buildscript, even if you piped the output to a file, would you send that to your (non tech-savvy) CEO? Unlikely. But wait, you know this devil very well. A few quick changes and hacks, a couple of Red Bulls... and months of thankless work (mostly after-hours) later... you've created a buildscript that will create and start a webserver, prepare HTML reports, collect statistics and history, email all the relevant people, and publish everything on a webpage, just like Jenkins did. (Ohh, if people could only see all the magic you did escaping and sanitizing all that HTML content in a buildscript.) But wait... this only works for a single project.
So, a full case of Red Bulls later, you've managed to make it general enough to build any project, and you've created...
Another Jenkins/Bamboo/CI-server
Congratulations. Come up with a name, market it, and make some cash off it, because this ultimate buildscript has just become another CI solution à la Jenkins.
TL;DR:
Provided the CI-server can be configured simply and intuitively so that a receptionist could run the build, and provided the configuration can be self-contained (through whatever storage method the CI-server uses) and versioned in SCM, it all comes down to the Quality-Time-Budget triangle.
If you have little time and budget to learn the CI server, you can still greatly increase the quality (at least of the presentation) by embracing the CI-server's way of organizing stuff.
If you have unlimited time and budget, by all means, make your own Jenkins with the buildscript.
But considering the "unlimited" part is rather unrealistic, I would embrace the CI-server as much as possible. Yes, it's a change. However, a little time invested in learning the CI-server and how it compartmentalizes the different parts of the build flow into tasks can go a long way toward increasing the quality.
Likewise, if you have no time and/or budget, figuring out the quirks of all the plugins/tasks/etc. and how it all comes together will only bring your overall quality down, or even drag the time/budget down with it. In such cases, use the CI-server for the bare minimum needed to trigger your existing buildscripts. However, in some cases the "bare minimum" is no better than not using the CI-server in the first place. And when you are at that point... ask yourself:
Why do you want a CI-server in the first place?
Personally (and with today's tools), I'd take a pragmatic approach. I'd do as much as feasible on the build side (clearly better from an automation perspective), and the rest (e.g. distribution of work across machines) on the CI server. Anything that a developer might want to do on his own machine should definitely be automated on the build level. As to the concrete steps you gave, I'd generally check out code from the CI server, and deploy binaries from the build. I'd try to make every CI job look the same, invoking the build tool in the same way (e.g. gradlew ciBuild).
In Bamboo, you can add build stages and decompose the stages into tasks. Each task is something just as atomic/fundamental as an Ant task.
To some extent, this overlap in functionality is natural, as neither build tool nor CI server can assume existence of the other, and both want to provide as complete a solution as possible.
For some reason, I view this as the same problem as to whether or not to use stored procedures or put the data processing all at the application layer.
It's not an unfair comparison, and hence opinions will be just as diverse, contextual, and nuanced.
Disclaimer: I'm a Gradle(ware) developer.

continuous integration for many languages

I want to set up a continuous integration system that, upon a commit or similar trigger, should:
run tests on a fortran/C/C++ code, if needed.
compile that code using cmake.
run tests on a rails app.
compile the rails app.
restart the server.
I'm looking at Jenkins. Is it the best choice for this kind of work? Also, what's the difference between using a bash script that does all that (if possible) and using Jenkins? I'm asking not because I'm thinking about using a script, but to better understand Jenkins.
It sounds like Jenkins would certainly be a reasonable choice for this. Apart from the ability to run arbitrary scripts as build steps, there's also a large number of plugins, which provide better integration with cmake for example.
Even if you're using a single bash script to do all of this, using Jenkins on top of it would still have a number of advantages. You get a web interface, email notifications and build history for free, with all that this entails. By integrating your tests "properly" with Jenkins, you can also get things like graphs that show how many tests succeeded/failed over time.
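As one small example of what integrating tests "properly" can mean for the CMake part: a recent CTest (CMake 3.21 or later) can emit JUnit XML directly, which Jenkins' JUnit plugin turns into exactly those pass/fail graphs. A hedged sketch of such a build step:

    # Hypothetical Jenkins build step for the CMake-based code (assumes CMake/CTest 3.21+).
    cmake -S . -B build                                       # configure
    cmake --build build                                       # compile the Fortran/C/C++ code
    ctest --test-dir build --output-junit test-results.xml    # JUnit XML for Jenkins to archive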
I am using Jenkins for Java projects and have to say it is easy to configure. I used to add lots of plugins for better configuration of build steps, but I tend to go back to using scripting languages for the build and deploy steps, for two main reasons. First, if I have a build script, it's easier to configure the same job on a different Jenkins server or to run the script manually if need be, and the build configuration is not so cluttered (I still have one Maven job with more than 50 post-build steps). Second, it is easier to version the scripts in SVN than it is to version the build configuration.
So, to answer your questions: I don't know if it is the 'best' tool, but it is good enough for me. Regarding scripting: use each tool for what it is built for. Jenkins is a glorified cron daemon with great options when it comes to displaying analysis. The learning curve for people who just use it is minimal (i.e. starting a job, seeing whether it failed). Configuring Jenkins needs a little more learning, but it's very easy to set up simple jobs and then move on to the more complicated tasks.
For the first four activities Jenkins will do the job and is arguably the best choice nowadays, but for things like restarting the server (which is really "remote execution"), have a look at:
http://saltstack.com/
or:
https://wiki.opscode.com/display/chef/Home
http://cfengine.com/
http://puppetlabs.com/
Libraries like Fabric (Python) or Capistrano (Ruby) might be useful too.

Working with MSBuild and TFS

I'm trying to work with MSBuild and TFS.
I've managed to create my own MSBuild script that works great from the command line. The script works with csproj files, and it compiles, obfuscates, signs, and copies everything that's needed.
However, looking at the documentation of TFS & Team Build, it appears that it expects solutions as the "input" for the script.
Also, I haven't found an easy/intuitive way of performing a "Get Latest Version" from TFS as part of the script. I'm assuming that Team Build automatically does a "Get Latest" on the solutions it's supposed to compile, but again - I don't (want to) work with solutions...
Any insights? any pointers? any links?
Team Build defines about 25 targets of its own. When you queue a Team Build, they are automatically run for you in the predefined order listed on MSDN. Don't modify this process. Instead, simply set a couple of the properties that determine how those targets behave. For example, set <IncrementalGet> to "true" if you want ordinary Get behavior, or "false" if you want something closer to tf get /force.
As far as running your own MSBuild script, again this shouldn't be necessary. Start with the TFSBuild.proj file that's provided for you. It should only require minimal modifications to do everything you describe. Call your obfuscation & signing code by overriding a target like AfterCompile or AfterTest. Put your auto-deploy code in AfterDropBuild. Etc.
Even really complex scenarios are possible if you refactor appropriately. See past answers #1 #2.
As far as the actual compile, you're right that Team Build operates on solutions. I recommend giving it what it wants. I'll be the first to admit that *.sln files are ugly and largely undocumented, but at least you're offloading the work to a well tested & supported product.
If you really wanted to, you could give it a blank/dummy solution and override the CoreCompile target with your custom compiler logic. But this is really asking for trouble. At a bare minimum, you lose all of Team Build's flexibility WRT building multiple platforms and flavors. More practically, you're bound to spend a lot of time debugging something that's designed to "just work" -- and there are no good MSBuild debuggers yet (that I know of). Not worth it, IMO.
BTW, the solution files do not affect the Get process. As you can see in the 1st link, the Get is done very early on, long before Team Build even reads the solution file(s). Apart from a few options like <IncrementalGet>, this is not controlled from MSBuild at all -- in particular, the paths to be downloaded are determined by the workspace mappings associated with the build definition. I.e., they are stored in the Team Build SQL database, not the filesystem, and managed with tools (like Team Explorer) that call the TFS webservice API.

How to Sandbox Ant Builds within Hudson

I am evaluating the Hudson build system for use as a centralized, "sterile" build environment for a large company with very distributed development (from both a geographical and managerial perspective). One goal is to ensure that builds are only a function of the contents of a source control tree and a build script (also part of that tree). This way, we can be certain that the code placed into a production environment actually originated from our source control system.
Hudson seems to run the Ant script with the full set of rights assigned to the user invoking the Hudson server itself. Because we want to allow individual development groups to modify their build scripts without administrator intervention, we would like a way to sandbox the build process to (1) limit the potential harm caused by an errant build script, and (2) avoid all the games one might play to insert malicious code into a build.
Here's what I think I want (at least for Ant, we aren't using Maven/Ivy right now):
The Ant build script only has access to its workspace directory
It can only read from the source tree (so that svn updates can be trusted and no other code is inserted).
It could perhaps be allowed read access to certain directories (Ant distribution, JDK, etc.) that are required for the build classpath.
I can think of three ways to implement this:
Write an ant wrapper that uses the Java security model to constrain access
Create a user for each build and assign the rights described above. Launch builds in this user space.
(Updated) Use Linux "jails" (chroot) to avoid the burden of creating a new user account for each build process. I know little about these, but we will be running our builds on a Linux box with a recent RHEL distro.
Am I thinking about this problem correctly? What have other people done?
Update: This guy considered the chroot jail idea:
https://www.thebedells.org/blog/2008/02/29/l33t-iphone-c0d1ng-ski1lz
Update 2: Trust is an interesting word. Do we think that any developers might attempt anything malicious? Nope. However, I'd bet that, with 30 projects building over the course of a year with developer-updated build scripts, there will be several instances of (1) accidental clobbering of filesystem areas outside of the project workspace, and (2) build corruptions that take a lot of time to figure out. Do we trust all our developers to not mess up? Nope. I don't trust myself to that level, that's for sure.
With respect to malicious code insertion, the real goal is to be able to eliminate the possibility from consideration if someone thinks that such a thing might have happened.
Also, with controls in place, developers can modify their own build scripts and test them without fear of catastrophe. This will lead to more build "innovation" and higher levels of quality enforced by the build process (unit test execution, etc.)
This may not be something you can change, but if you can't trust the developers, then you have a larger problem than what they can or cannot do to your build machine.
You could go about this a different way: if you can't trust what is going to be run, you may need a dedicated person (or people) to act as build master, to verify not only the changes to your SCM but also to execute the builds.
Then you have a clear chain of responsibility: builds are not modified after they are produced and only come from that build system.
Another option is to firewall off outbound requests from the build machine so that only certain resources are allowed, such as your SCM server and your other operational network resources (email, OS updates, etc.).
This would prevent people from using Ant to pull resources from outside the build system that are not in source control.
When using Hudson you can set up a Master/Slave configuration and then not allow builds to be performed on the Master. If you configure the Slaves to run in virtual machines that can be easily snapshotted and restored, then you don't have to worry about a person messing up the build environment. If you apply a firewall to these Slaves, then it should solve your isolation needs.
I suggest you have 1 Hudson master instance, which is an entry point for everyone to see/configure/build the projects. Then you can set up multiple Hudson slaves, which might very well be virtual machines or (not 100% sure if this is possible) simply unprivileged users on the same machine.
Once you have this set up, you can tie builds to specific nodes, which are not allowed - either by virtual machine boundaries or by Linux filesystem permissions - to modify other workspaces.
How many projects will Hudson be building? Perhaps one Hudson instance would be too big, given the security concerns you are expressing. Have you considered distributing the Hudson instances out - one per team? This avoids the permission issue entirely.
