We have a dev team in Asia with TFS and one in the US. We would like to have another TFS in the US and sync both of them in real time, so that each can serve as the failover for the other.
Going forward we want each team to log in to its respective server.
How can we achieve replication in real time? How would merges and collisions be dealt with?
We already have a TFS proxy, but we want something better than that.
No, you can't replicate TFS in real time. You need to back up the databases and restore them on another server to get a full migration, or use tools like the TFS Integration Tools to migrate work items or changesets (a lossy migration).
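For the backup-and-restore route, the core of the move is a SQL Server backup and restore of the TFS databases, followed by pointing a (new) application tier at the restored data. A minimal sketch, assuming the default database names Tfs_Configuration and Tfs_DefaultCollection and made-up server names and paths:

    rem Back up the configuration and collection databases on the source data tier
    sqlcmd -S SourceSqlServer -Q "BACKUP DATABASE [Tfs_Configuration] TO DISK = N'D:\Backups\Tfs_Configuration.bak' WITH INIT"
    sqlcmd -S SourceSqlServer -Q "BACKUP DATABASE [Tfs_DefaultCollection] TO DISK = N'D:\Backups\Tfs_DefaultCollection.bak' WITH INIT"

    rem Restore them on the target data tier
    sqlcmd -S TargetSqlServer -Q "RESTORE DATABASE [Tfs_Configuration] FROM DISK = N'D:\Backups\Tfs_Configuration.bak' WITH RECOVERY"
    sqlcmd -S TargetSqlServer -Q "RESTORE DATABASE [Tfs_DefaultCollection] FROM DISK = N'D:\Backups\Tfs_DefaultCollection.bak' WITH RECOVERY"

You still have to configure the application tier against the restored databases afterwards (via the TFS Administration Console), and the result is a point-in-time copy, not real-time replication.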
Using Visual Studio Team Services may be a good option for your scenario. Visual Studio Team Services provides a set of cloud-powered collaboration tools that work with your existing IDE or editor, and there is no on-premises TFS to set up.
Get a database person in the picture and ask how to replicate and sync the database servers. Source code and changesets are stored in SQL Server, so that should help you proceed further.
Different developers check in code touching the same file, or different branches get merged. I am new to TFS on the admin side, but I do know how to do a basic check-in of code. How can I avoid code collisions from the admin side? We are using Microsoft Team Foundation Server as our version control.
Version control systems are built to let different users edit the same files and to provide a reasonable experience when those edits are merged. The ability for multiple developers to check out the same file is one of the things that allows teams to become highly productive. When multiple versions of the same product are being developed or maintained, it's impossible to prevent conflicts altogether.
There is a lot of additional guidance available through the ALM Rangers' guides. I highly recommend you and your developers read them.
Note:
Visual Studio 2013 offers a much better merge experience than older versions. Third-party tools like SemanticMerge improve the experience further by parsing the code being merged and applying additional intelligent logic to prevent conflicts.
For certain notoriously hard-to-merge files, like SSIS packages, there are additional specialist tools such as BIDS Helper Smart Diff.
Some things you can do:
Make sure developers communicate
Teams that do a daily scrum (stand-up meeting) or can use Team Rooms in TFS can signal intent and proactively keep others up to date on what they're doing. Make sure there is a dedicated communication channel available and that users have the Team Rooms extension installed if they're not co-located. Communication prevents many issues of this type and is the best solution once a merge issue does occur.
Have developers perform a get-latest and check-in frequently
While there is no server setting for this, training your developers on branching, merging and general source control patterns can help a lot. If a user regularly checks for incoming changes (get latest) and checks in as soon as they are reasonably confident about their code (say, after the first test passes when using TDD), the chance of conflicts is substantially lower.
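As a concrete habit, this boils down to a couple of tf.exe commands run often; a sketch (the server path and comment text are just placeholders):

    rem Bring the workspace up to date before starting work, and again before checking in
    tf get $/MyTeamProject /recursive

    rem Check in early and often, in small, self-contained changesets
    tf checkin /comment:"Implement order date validation" /recursive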
The Incoming Changes Lens
CodeLens gained an Incoming Changes lens in one of the updates released after RTM. The lens requires both the server and the client to be upgraded to at least Update 2 (TFS 2013 Update 2 and Visual Studio 2013 Ultimate Update 2). Once you start applying the updates it's recommended to stay current, so I'd recommend installing Update 4.
While CodeLens is an Ultimate feature, it will be moved into Professional with the release of Visual Studio 2015.
Use Exclusive Checkout
If users use the check-out-and-lock option to check out a file, they signal to other users that the file is undergoing major changes. This feature requires that all users have their workspace type set to "Server"; local workspaces, given their disconnected nature, ignore the lock flag. Though individual users can always override their workspace type, it is possible to set the default workspace type at the collection level.
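From the command line this looks roughly like the following (the file name is a placeholder, and the lock is only enforced for server workspaces):

    rem Check out the file and place a check-out lock so nobody else can check it out
    tf checkout /lock:checkout Package.dtsx

    rem Or add a lock to an item that is already checked out
    tf lock /lock:checkout Package.dtsx

    rem Release the lock when you're done
    tf lock /lock:none Package.dtsx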
Disable Merging and Multiple checkout for individual file types
A better solution is to mark file types that are hard to merge; old-style SSIS packages with lots of XML and GUIDs are a good candidate here. Open the Source Control settings and add the extensions of these "bad files". This setting is applied partially in Visual Studio regardless of the workspace type (Visual Studio will only offer Take Local or Take Server and will not offer to merge).
Disable Multiple Checkout
It is possible to set the project's Source Control options to disallow "Multiple Checkout"; this will automatically acquire a lock when a file is checked out. Turning this feature on is not recommended, as it introduces a lot of friction while working in Visual Studio (most importantly, adding any file requires acquiring a lock on the project file). As with exclusive checkout, this requires all users to use server workspaces.
Because this feature prevents any file from being checked out by multiple people, it usually applies far too much force to the problem. Only if your developers cause conflicts very frequently might you enable it temporarily while they receive training.
We are a few devs working on an internal TFS repository. This TFS can only be accessed from within our network.
But we also have a test lab which wasn't meant to be, but ended up being, a development environment. Because it has no access to our company's TFS, we used Microsoft's hosted TFS (visualstudio.com) instead.
We have no chance of changing ANYTHING in our network structure, so suggestions like "just allow your test machines to access your internal TFS" would be of no help.
Some of our dev machines can access both TFS instances (internal and visualstudio.com) and some can only connect to the internal one.
My idea would be to install some kind of "sync" on one of the dev machines that can access both TFS instances.
What would be the best way to do that?
Bottom Line Up Front: there is no way to do what you are asking in a scalable and maintainable way.
The only way to fix this issue is for all developers to have access to the TFS server that they are using. However, if you have a large budget for support and unlimited patience, you can use the TFS Integration Tools.
You can use them to sync both source and work items in a bi-directional manner. They will even maintain the relationships between source and work items.
Now the bad:
You will have to resolve all conflicts by logging onto the server running the migration.
You must have communication between the two servers
It's going to really annoy you
Another option would be to buy a commercial tool like OpsHub that will manage the synchronization and provide better conflict resolution. That will however be costly.
My advice (and I have done it every way before) is to fix the issue, not work around it. This is solvable in healthcare, in banking, and in defence. I have been there; your company is not special. Unless, of course, you are an insurance company, in which case you are too dysfunctional to save.
We are currently on TFS 2012 and planning to upgrade to TFS 2013 soon. I'm trying to better understand the best setup for TFS. Multiple teams in the company use it, and it is critical as a source control and ALM tool. And no, Visual Studio Online is not an option for us.
Let me know what you think, especially if someone else has a similar setup.
1) Should we have a DR environment for TFS so that if something goes wrong with the main TFS we can fail over? I know we can restore it from database backups, but that is time-consuming, especially if the TFS application tier goes down and has to be rebuilt.
2) Should we have a QA/dev environment that can be used to try the upgrade first and, if it looks good, then do it in production? It could also be used in the future to try out new features, etc.
In a recent environment I worked in, we had a "QA" TFS server on which we could test updates, template changes, plugins, etc. That worked out great for the whole team, and I would definitely recommend having a test server if that's an option for you.
I can't recommend what you should do for disaster recovery, as there are many factors involved that your team needs to decide on. My last team didn't maintain a completely separate failover environment, but there were nightly snapshots of our TFS servers, which were virtualized. We could restore from those snapshots fairly easily. That recovery plan was sufficient for that team given its risk, resources and potential downtime.
I hope that helps.
The ALM Rangers publish a TFS Planning Guide which has a section on how to approach DR with TFS: http://vsarplanningguide.codeplex.com/
You probably should also consider designing a High Availability (HA) TFS deployment. Details of how to do this are in the TFS Installation Guide: http://www.microsoft.com/en-us/download/details.aspx?id=29035
In general though, at the core of TFS is a SQL Server, and all the best practices around HA and DR for SQL apply here. Reconstructing an AT is relatively straightforward, and if you design an HA TFS deployment you will have multiple load-balanced ATs, so if one fails, traffic simply routes to the healthy AT(s).
I currently work in a company that uses FogBugz for issue and bug tracking and SourceGear Vault for source control.
We are now introducing Team Foundation Server. Clearly TFS will replace Vault for source control. My question is, with the following requirements:
A large existing base of FogBugz cases (some obviously open) that we need to keep supporting
Support desk needs to be able to raise bugs / support calls
Want changes to source to be linked to a case number
... what is the best split between using FogBugz cases and TFS WorkItems?
Is it possible to totally migrate from FogBugz to TFS?
If it is not possible to migrate from FogBugz to TFS then what is the best way to use the FogBugz case and TFS workitems together?
Initially I'd say bugs and defects stay in FogBugz, and items on the project plan become work items. You could get the developers to manually create a work item for each FogBugz case and associate their code with that work item, but I can hear the howls of derision already :-)
You might want to take a look at the TFS Integration Platform. I don't know of any adapters that link directly to FogBugz, but the platform is highly extensible. You could then decide either to migrate everything into TFS or to run both systems and synchronise them. Running both is nice, as each discipline can use the tool they are most familiar with: devs use TFS for everything, testers and support continue to use FogBugz, and the toolkit keeps everything in step.
My company are imposing Jira and Zephyr on us for defect tracking and test management. We're quite happily using TFS 2008 for both these jobs at the moment, but management have never let the fact that something isn't broken stop them from trying to fix it.
Are there any tools/plug-ins that will allow us to synchronise between the remotely hosted repositories and our in-house TFS server?
Probably too late, but the company might want to look at the new features for bug tracking and manual testing coming in the 2010 release. Nice as Jira is, I doubt it will integrate as well with the historical debugger, or let you attach a video of the test run and details of the test environment and have it all be part of the work item.