TFS 2010 - How to set up for a new application

I have started at a new site that is using .NET applications for the first time. As a developer I am used to VSS, but that product is dying a death, so we are using TFS (Basic) instead.
I have been using TFS for source control up until now. But now we are having new servers installed for a live environment.
Now I am not sure what I should be doing. There are no books on TFS 2010 that I can find and I am wondering what tips you can give me. Does TFS need to be installed again, or should I use the existing installation? I am thinking I ought to set up a daily build for a test server. I have not been using TDD up until now, but for the next project this may change.
What must I absolutely get right, and what pitfalls should I avoid?

Without being there in your environment, it's hard to make appropriate recommendations. I've made some assumptions about your installation based on what you said, but these may be wildly wrong.
You say you're using TFS (Basic). I'm not sure exactly what you mean by that, but if you have TFS installed on one of the developers' workstations and you're starting to move towards a more robust development environment, I would recommend that you get a separate server (or servers) for your TFS installation.
It sounds like you're relatively small, so having your application tier and your data tier on the same machine shouldn't be that much of an issue. Just make sure that you have enough RAM on the machine to support both processes, and that you have enough disk space allocated for the growth of the database.
You talk about Test Driven Development (TDD), but what I think you're actually describing is Continuous Integration (CI). When you have a CI environment set up, builds happen automatically, either on a schedule or triggered by check-ins. Having this set up is never a bad idea, and I would recommend that you get into the rhythm of CI builds as soon as possible.
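To make the idea concrete, here is a minimal Python sketch of what a check-in-triggered build amounts to conceptually: watch the repository, and build when something new arrives. A real CI server (TFS Build, in your case) does all of this for you; the project path, solution name, and polling interval below are placeholder assumptions.

```python
import subprocess
import time

POLL_SECONDS = 60  # assumed polling interval

def latest_changeset() -> str:
    # Ask TFS source control for the most recent check-in
    # (tf.exe and these flags are real; the path is a placeholder).
    out = subprocess.run(
        ["tf", "history", "$/MyProject", "/recursive", "/stopafter:1", "/noprompt"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def build() -> None:
    # Compile the solution; a failing build raises CalledProcessError.
    subprocess.run(["msbuild", "MySolution.sln", "/p:Configuration=Release"], check=True)

last_seen = latest_changeset()
while True:
    time.sleep(POLL_SECONDS)
    current = latest_changeset()
    if current != last_seen:  # a new check-in arrived, so trigger a build
        build()
        last_seen = current
```

In practice you would configure the trigger in a TFS build definition rather than script it yourself; the loop above is only meant to demystify what a "CI trigger" does.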
If you're looking for a build server, you are probably going to be ok hosting the build agent on the combined application/data tier. If you find that you're getting performance hits when you do builds, you can move your builds to a different server without much effort.
You will also want to look at migrating your source code repository from your current environment to your future environment. The TFS installation wizard might be able to help you with that. If not, there are other options available, such as moving the database files to the new machine, or using the CodePlex-based TFS Integration Platform.


CI / CD: Principles for deployment of environments

I am not a developer, but I am reading about CI/CD at the moment, and I am wondering about good practices for automated code deployment. So far I have mostly read about deploying code to pre-existing environments.
My question is whether it is also good practice to use e.g. a Jenkins workflow to deploy an environment from scratch whenever a new build is created: for example, spinning up an environment to test a newly created build, then deleting it again after testing.
I know that there are various plugins to interact with AWS, Azure, etc. that could be used to build a job for deploying a virtual machine. There are also plugins to trigger Puppet to deploy infrastructure as code, and plugins to invoke an infrastructure orchestrator.
So everything needed to deploy the infrastructure and middleware before deploying the code is available (with some extra effort, of course).
Is this something that is used in real life? How is it done?
The background of my question is my interest in full automation of development with as few clicks as possible, and cost saving in a pay-per-use model by not having idle machines.
My question is whether it is also good practice to use e.g. a Jenkins workflow to deploy an environment from scratch whenever a new build is created
Yes it is good practice to deploy an environment from scratch. Like you say, Jenkins and Jenkins pipelines can certainly help with kicking off and orchestrating that process depending on your specific requirements. Deploying a full environment from scratch is one of the hardest things to automate, and if that is automated, it implies that a lot of other things are also automated, such as infrastructure, application deployments, application configuration, and so on.
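To sketch what that orchestration might look like, here is a minimal Python version of the "create, deploy, test, destroy" flow a Jenkins job could drive. The az CLI commands are real; the resource names, template file, and the deploy/test scripts are assumptions standing in for your own steps.

```python
import subprocess
import uuid

def run(*args: str) -> None:
    # Run a command and fail the pipeline if it fails.
    subprocess.run(args, check=True)

# Unique, disposable environment name per build.
env = f"ci-env-{uuid.uuid4().hex[:8]}"

try:
    # 1. Provision a fresh, isolated environment (infrastructure as code).
    run("az", "group", "create", "--name", env, "--location", "westeurope")
    run("az", "deployment", "group", "create",
        "--resource-group", env, "--template-file", "environment.json")

    # 2. Deploy the freshly built application into it (placeholder script).
    run("python", "deploy_app.py", "--target", env)

    # 3. Run the test suite against the new environment (placeholder script).
    run("python", "run_tests.py", "--target", env)
finally:
    # 4. Tear everything down again so nothing sits idle.
    run("az", "group", "delete", "--name", env, "--yes", "--no-wait")
```

A Jenkins pipeline would typically express the same four stages declaratively; the point is that each stage is just another scripted, repeatable command.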
Is this something that is used in real life?
Yes, definitely. A lot of shops do this. The simpler your environments, the easier it is, and therefore, a startup with one backend app would have relatively little trouble achieving this valhalla state. But even the creation of the most complex environments--with hundreds of interdependent applications--can be fully automated; it just takes more time and effort.
The background of my question is my interest in full automation of development with as few clicks as possible, and cost saving in a pay-per-use model by not having idle machines
Yes, definitely. The "spin up and destroy" strategy benefits all hosting models (since, after full automation, no one ever has to wait for someone to manually provision an environment), but those using public clouds see even larger benefits in terms of cost (vs always leaving AWS environments running, for example).
I appreciate your thoughts.
Not a problem. I will advise that this question doesn't fit Stack Overflow's question-and-answer sweet spot super well, since it is quite general. In the future, I would recommend chatting with your developers, finding folks who are excited about this sort of thing, and formulating more specific questions when you all get stuck in the weeds on something. Welcome to Stack Overflow!
All of these are used in various combinations; the objective is to deliver continuous value to the end user. My two cents:
Build & Release
It depends on what you are using. I personally recommend using what is available with your toolset. For example, VSTS (Visual Studio Team Services) offers a complete CI/CD pipeline. But if you have a unique need that can only be served by Jenkins, then use that; VSTS offers Jenkins integration out of the box.
IaC (Infrastructure as Code)
In addition to Puppet and similar tools, you can take advantage of Azure Resource Manager (ARM) templates to build and destroy an environment. Again, see what is available out of the box with the toolset you have.
Pay-per-use
What I have personally used is Azure DevTest Labs, with the code deployed to it via a CI/CD pipeline. You can then set a shutdown policy on the VMs so they auto-start and auto-shutdown at the times you provide. This is a great feature that lets you save cost on the resources being used and replicate environments.
For example, a UAT environment might not be needed until QA has signed off. But using IaC you can quickly spin up the environment automatically and then have a one-click deployment set up to push code to UAT.
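As a small illustration of the cost-saving half of this, here is a hedged Python sketch that applies a daily auto-shutdown schedule to a set of UAT VMs. The `az vm auto-shutdown` command is real; the resource group and VM names are assumptions, and note that auto-start is a DevTest Labs policy, so plain VMs only get the shutdown side of this.

```python
import subprocess

RESOURCE_GROUP = "uat-rg"              # hypothetical resource group
UAT_VMS = ["uat-web-01", "uat-db-01"]  # hypothetical VM names

for vm in UAT_VMS:
    # Schedule each VM to shut down at 19:00 UTC every day.
    subprocess.run(
        ["az", "vm", "auto-shutdown",
         "--resource-group", RESOURCE_GROUP,
         "--name", vm,
         "--time", "1900"],
        check=True,
    )
```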

Can I host my own Travis runner?

I work on a large open source project based on Ruby on Rails. We use GitHub, Travis, Code Climate, and others. Our test suite takes a long time to run, and we have many pull requests opened and updated throughout the day, which creates a large backlog. We even implemented a build killer in our bot to prevent unnecessary builds, but we still have a backlog. Is it possible for us to host our own runner to increase the number of workers?
There's Travis CI Enterprise (https://enterprise.travis-ci.com/), which lets people host their own runners, but that's probably only for paying customers. Have you switched over to container-based builds? That might speed things up a bit. What's the project?

How to debug an ASP.NET web application without disturbing the release build?

I'm working on an ASP.NET web application that I have deployed to a server, where it responds on port 80 to outside clients.
I want to fix bugs, so I need to run the application in debug mode and attach a debugger to the worker process. But this hurts performance and disturbs the QA team.
Can I have two instances of the application: one running in release mode so that QA activity is not disturbed, while in parallel I debug the other build to fix bugs or do further development?
I face the same problem during development: if multiple developers are working in parallel, only one can debug the application while the others have to wait.
Please suggest how I can get out of this situation.
I have only one server on which I can test this application.
This discussion could go on far too long, but I will try to offer you a few ideas:
1. Each developer should develop on his or her own machine (sources and database should be local).
2. In order to sync your work you should use:
a. a source control solution like TFS or SVN (which is free) for your sources;
b. update scripts for database changes, which can easily be generated using SQL Schema Compare directly from Visual Studio (you will need SQL Server Data Tools for this), Redgate SQL Compare, or another application that can compare database structures (there are many available online, some of them free); a minimal sketch follows this list.
3. You should have a separate server (DB and app) for testing.
4. You should have a separate server (DB and app) for production.
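Here is a hedged sketch of item 2b using SqlPackage.exe, which ships with SQL Server Data Tools. Its `/Action:Script` verb (which generates a diff script rather than applying changes) is real; the server, database, and file names are assumptions.

```python
import subprocess

# Generate a SQL update script by comparing the schema checked into source
# control (a .dacpac) against a developer's local database.
subprocess.run(
    ["SqlPackage.exe",
     "/Action:Script",                     # produce a diff script, don't apply it
     "/SourceFile:MyApp.Database.dacpac",  # desired schema from source control
     "/TargetServerName:localhost",
     "/TargetDatabaseName:MyAppDev",
     "/OutputPath:update_dev.sql"],        # review, then share with the team
    check=True,
)
```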
You say you have only one server on which to test the application. But I suppose each developer has his own computer, right? In that case you can skip #3 and use the same server for testing and production, but with different databases and application instances.
I suggest you check this site for similar questions (see "Best practice for test and production environments", for example) to find the solution that best applies to your case.

Steps to ensure our TFS 2012 deployment is robust and recoverable from disaster

Our TFS2012 deployment is currently quite simple:
A virtual Windows Server with TFS, SharePoint, Reporting, SQL Server, and builds all on the same machine!
Is using the TFS admin console backup tool and/or backup of the entire machine enough to recover from a disaster?
There are no clear-cut criteria; you may take a look at the TFS planning and disaster recovery guidance for a more comprehensive answer.
In short, you must at least be sure that:
Backups are saved on different hardware, and possibly copied to a remote location
Along with your backups you have the recovery instructions and install packages
This guarantees that you are able to recover, but recovery can take a long time, depending on the impact of the disaster (someone deleted a record vs. the server room burned down) and the size of your data.
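As a minimal illustration of those two rules, here is a hedged Python sketch: take a database backup, then copy it to different hardware. The BACKUP DATABASE statement and sqlcmd are real; the database name, paths, and remote share are assumptions, and a full TFS backup covers several databases (the admin console's scheduled backup tool handles that for you).

```python
import shutil
import subprocess
from datetime import date

DB = "Tfs_DefaultCollection"  # typical TFS collection database name (assumed)
backup_file = rf"D:\Backups\{DB}_{date.today()}.bak"

# 1. Take a full database backup to local disk.
subprocess.run(
    ["sqlcmd", "-S", "localhost",
     "-Q", f"BACKUP DATABASE [{DB}] TO DISK = N'{backup_file}'"],
    check=True,
)

# 2. Copy it to different hardware: here, a remote file share.
shutil.copy(backup_file, r"\\backup-server\tfs-backups")
```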

Are nightly builds useful for a web application?

I keep reading about how important nightly builds (and automated builds in general) are to the development process, and I would love to implement them on my projects. The problem is, I develop MVC web applications, and I cannot imagine how I would be able to easily run an arbitrary build. I would need to deploy it to a server, including any DLL, view page, and DB changes necessary, and possibly seed the DB with appropriate test data. That does not sound automatic or even useful.
Is there a useful way to do nightly builds for a web application? The best solution would be one where I can access the application at an address like nightlybuild.myapplication.com/buildnumber and the server would figure out which DLLs, view pages, and DB to use.
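One way to approach what the question describes, sketched under assumptions: build the site, then copy the output into a folder named after the build number so IIS can serve it at a per-build URL. MSBuild and its /p:Configuration switch are real; the paths, site layout, and per-build database idea are placeholders you would adapt.

```python
import pathlib
import shutil
import subprocess

BUILD_NUMBER = "1234"                                 # supplied by the build server
SITE_ROOT = pathlib.Path(r"C:\inetpub\nightlybuild")  # hypothetical IIS site root

# 1. Compile the web application in Release configuration.
subprocess.run(["msbuild", "MyWebApp.sln", "/p:Configuration=Release"], check=True)

# 2. Copy the build output into a folder named after the build number, so it
#    is reachable at nightlybuild.myapplication.com/<buildnumber>.
target = SITE_ROOT / BUILD_NUMBER
shutil.copytree(r"MyWebApp\bin\Release", target, dirs_exist_ok=True)

# 3. Point this build at its own database, e.g. by rewriting the connection
#    string in the copied Web.config to "MyApp_Nightly_1234" (not shown).
```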
