Team Foundation Server - Manage external teams

Our company has a small development team in-house but we mostly outsource our customer projects to external consulting firms which we don't manage directly. We only interact with their project manager and maybe a team lead.
I'm implementing TFS 2010 and Scrum for our internal team for project management, version control and SharePoint shared document access.
My problem is how to manage the external teams.
They won't use our TFS for version control, and I can't force them to use Scrum and report as such (reporting at the task level by updating remaining hours).
The solution I came up with is this:
Use the “MSF for Agile Software Development v5.0” template in Team Foundation Server.
Break the project into user stories and then create a task for each.
The tasks have these fields:
Original Estimate: since we'll track percentage of completion, this will always be 100.
Remaining: the percentage of remaining work.
Completed: the percentage of completed work.
Their team lead will update the remaining work percentage for each user story (at the task level).
If progress is reported correctly, I can print a "Stories Overview" report periodically and see the percentage complete for each user story.
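Here is a rough sketch of the roll-up I have in mind, in plain Python rather than anything TFS gives you; the story name and numbers are made up, and the dictionaries just stand in for the task fields above:

```python
# Roll-up sketch: every task's Original Estimate is fixed at 100 and
# Completed holds a 0-100 percentage, so a simple average of Completed
# across a story's tasks gives the story's percent complete.
# Illustrative data only, not a real TFS API.

def story_percent_complete(tasks):
    """Average the Completed percentage across a story's tasks."""
    if not tasks:
        return 0.0
    return sum(t["completed"] for t in tasks) / len(tasks)

stories = {
    "Customer can export invoices": [
        {"original_estimate": 100, "completed": 75, "remaining": 25},
        {"original_estimate": 100, "completed": 40, "remaining": 60},
    ],
}

for name, tasks in stories.items():
    print(f"{name}: {story_percent_complete(tasks):.0f}% complete")
```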
I'm sure there must be a better way out there, and I'd appreciate any help pointing me in the right direction.
Thanks

We are basically doing the same thing ... I have 10 in-house developers and teams around the world working on their projects. Most of the work we do overlaps between internal and external. We are using TFS 2010. We break a piece of development into user stories, then into lots of tasks and eventually bugs. We view the status of the external projects by looking at the breakdown of work on the individual work items.
Part of the development process flow is to get the code into TFS source control, and control of the logs changes hands as the work comes back into our system.
The external PMs then use the web interface or the spreadsheet upload to update the data on these logs (including the time spent / work remaining) so we can see the state of the work. You don't need a code upload to set a work item to test / complete.
The process flow we have for the external work is listed below; on a given user story you can then see the state of development for all of its tasks.
To Spec
Specified
Spec Agreed
Open For Work
WIP
Development Complete
External Test
Source Added to TFS
Delivered to Internal Test
Internal Test
Complete
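As a plain-Python sketch of what "seeing the state of development for all those tasks" boils down to (the snapshot data is made up; in practice this comes from the TFS work item queries):

```python
from collections import Counter

# The external-work states listed above, in order.
STATES = [
    "To Spec", "Specified", "Spec Agreed", "Open For Work", "WIP",
    "Development Complete", "External Test", "Source Added to TFS",
    "Delivered to Internal Test", "Internal Test", "Complete",
]

def story_state_summary(task_states):
    """Count how many of a story's tasks sit in each state."""
    counts = Counter(task_states)
    return {state: counts.get(state, 0) for state in STATES}

# Hypothetical snapshot of one user story's tasks.
snapshot = ["WIP", "WIP", "Development Complete", "External Test"]
for state, count in story_state_summary(snapshot).items():
    if count:
        print(f"{state}: {count}")
```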

Related

Automating Tasks in TFS

I am trying to find a way to make TFS automatically create a second task when another task has been completed. The idea here is that once a developer has completed a task and moved it to the done column, a second task is created off the first one for the QA/QC members to start looking at the code and doing what they need to do.
Any suggestions?
There is no built-in feature, but there are tools, many open source, that can help you implement your request.
One is TFS Aggregator: I am part of the core team, always looking for new contributors.
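The rule logic itself is simple to sketch. The snippet below only models it in plain Python: the work item shape, the field names and the on_state_change hook are assumptions for illustration, not TFS Aggregator's actual rule syntax; a real implementation would live in a TFS event subscriber or an aggregator rule.

```python
# Sketch of a "create a follow-up QA task on completion" rule.

def on_state_change(work_item, new_state, create_work_item):
    """When a dev task reaches Done, create a linked QA task."""
    if work_item["type"] == "Task" and new_state == "Done":
        create_work_item({
            "type": "Task",
            "title": f"QA review: {work_item['title']}",
            "assigned_to": None,             # left for QA/QC to grab
            "parent": work_item["parent"],   # same user story
            "predecessor": work_item["id"],  # link back to the dev task
        })

# Example wiring with an in-memory "create" callback.
created = []
on_state_change(
    {"id": 42, "type": "Task", "title": "Implement export", "parent": 7},
    "Done",
    created.append,
)
print(created)
```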
This should be managed via your user stories, either via having a task for testing that can be grabbed by the QA team once the user story is ready for QA, or by defining a board column for the user story to indicate that it's development-complete and ready for QA.
Once a user story starts development, you shouldn't be adding tasks to it. Each task should be defined and estimated prior to the sprint starting for capacity planning purposes.

TFS Integration Tool number of change groups doubling

I have started using TFS Integration Tools to migrate work items from one TFS 2010 project to another team project within the same collection. After some small trial runs and modifications to the field and value mappings, I started a migration on our entire product backlog. Approximately 170,000 change groups were discovered and analysis started. However, during the analysis the connection to the TFS server was lost, so the migration had to be restarted. After the restart approximately 340,000 change groups were identified (roughly double) without any significant changes having been made to work items in the backlog.
Has anyone experienced a similar problem, or is anyone aware of settings or changes that can be made in the tool to limit this increase in change groups? The amount of time taken to analyse so many groups is causing the migration to take much longer than was initially expected.
After several runs, I found that the count appears to be a running total; logically enough, when I experienced the break in connection, all change groups had to be re-analysed, causing the "doubling" in change groups.
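In other words the figure is cumulative: the re-analysis after the dropped connection simply added the same ~170,000 groups on top of the first pass. A toy sketch of the behaviour (plain Python, nothing to do with the tool's internals):

```python
# Toy illustration of a running total: the count of analysed change
# groups accumulates across passes instead of resetting when the
# analysis restarts after a dropped connection.

total_analysed = 0

def analyse(change_groups):
    global total_analysed
    total_analysed += len(change_groups)
    return total_analysed

backlog = range(170_000)
print(analyse(backlog))  # first pass: 170000
print(analyse(backlog))  # restarted pass: 340000, i.e. roughly double
```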

TFS - whole project start and end dates with sprints

I have a TFS 2012 server for managing my project.
I have several sprints, all with the same two-week length.
Is it possible to set whole-project start and end dates? How can you see the whole project's start and end dates along with all the sprint information, i.e. whether it is ongoing, finished or late?
Thanks
The sprints should be defined under a parent node. You can set this node's start and end dates just as you would the sprints.
As for seeing information about the project (i.e. whether or not it is ongoing, finished, late, etc.), that depends on which metrics you are looking for. SQL Server Reporting Services contains reports that run off both the warehouse database and the OLAP cube that resides in Analysis Services. The reports depend on which process template the project is based on (Agile, Scrum, CMMI, etc.). Nonetheless, the reports will give you overview information, release burndowns, sprint burndowns, etc.
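If you also want a quick programmatic check rather than a report, a sketch like the one below shows how the parent node's dates and the sprint dates classify as upcoming, ongoing or finished (the iteration names and dates are made up; "late" additionally needs remaining-work data, which is what the burndown reports provide):

```python
from datetime import date

# Hypothetical iteration tree: a parent project node plus its sprints.
# In TFS these dates live on the iteration nodes themselves.
iterations = [
    {"name": "Project",           "start": date(2013, 1, 7),  "end": date(2013, 4, 26)},
    {"name": "Project\\Sprint 1", "start": date(2013, 1, 7),  "end": date(2013, 1, 18)},
    {"name": "Project\\Sprint 2", "start": date(2013, 1, 21), "end": date(2013, 2, 1)},
]

def classify(iteration, today=None):
    """Label an iteration as upcoming, ongoing or finished by its dates."""
    today = today or date.today()
    if today < iteration["start"]:
        return "upcoming"
    if today > iteration["end"]:
        return "finished"
    return "ongoing"

for it in iterations:
    print(f"{it['name']}: {classify(it, today=date(2013, 1, 25))}")
```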

JIRA implementation for process control and scrum

I am reviewing JIRA for possible use within several development teams at the company I work for. We use Scrum as a base for our project management. We have good, self-organizing teams, almost no assigned work, etc. JIRA seems great for some of these items, but something we are struggling with is handling the management of process vs technical tasks and something we call "issue bundling".
Process control. Currently we will create a story, say "The graph on the Profit and Loss report has issues with overlapping legend text". Okay, good enough. We will then create a technical sub-task; for simplicity let's say it's "Research and correct the issue". Next we have a set of process sub-tasks that we create: Peer Review, Make Build, QA Testing, Merge, Track. Each of these can then be independently assigned to users and placed into the Pending bin on our scrum board (BTW, we use a Pending, Awaiting Action, In Progress, Done, Merged model rather than a to-do, in progress, done model). Pending basically means: I'm waiting if I'm next in priority. During development the programmers will grab the technical task, set themselves as assignee and move it to In Progress. When they are done they will move it to the Done bin, update the Peer Review process sub-task to Awaiting Action, and set the assignee to their code partner. Emails are sent, peer review is done. When that is completed, the peer review partner will move Peer Review to the Done bin, assign the Make Build sub-task to the build manager and move it to Awaiting Action. The build manager sees this, makes the build, moves it to Done and updates the QA Testing ticket to an Awaiting Action status; you get the point.
It's working, but are there any suggestions on alternatives, best practices, etc.? Is creating technical and process sub-tasks not the way to go? One thing I notice is that we have to filter the issue list to hide the sub-tasks, and the scrum board can get pretty overwhelming for the stakeholder who just wants to see the status of the parent story. Since the parent story does not move until the sub-tasks move, they don't see anything that is of interest to them, not even whether the story is "in progress" while the sub-tasks are moving. Ugh.
Issue bundling. We often have a set of issues that are related, sometimes tightly, but typically more loosely. For example, all issues that are related to reports in our software. At present there are, say, 15 known issues. These issues may be on different reports in the system, with specific steps to reproduce, etc. When we are gearing up for a sprint we will select bundles of these related issues. The reason is that QA can more efficiently test a bunch of small, generally related fixes in one pass rather than testing each report as a separate process.
Currently we move each issue to a subtask of a bundle. The bundle for example might be simply called "Report Fixes 1", and it will have, for example, 5 sub-tasks that are technical, each being a different report bug to be fixed. We can then add the process control items from above to the overall bundle. We also know that we won't merge until all items in the bundle are done, so they all get the same version.
However, as stated above, visibility is reduced as you cannot see the status easily of the subtasks now that they are in the bundle.
Again, best practice? Ideas? How are others handling this?
Brian,
Have you considered combining your process sub-tasks (Peer Review, Make Build) and your scrum board "bins" (Pending, Awaiting Action)? Sub-tasks are the usual way to provide parallel tasks in JIRA, but the way you describe the whole process, it sounds more linear. If each story really gets bounced from one assignee to another, just change the assignee and the status.
"Issue bundling" sounds like Epics in GreenHopper to me. You can also do a similar thing using a Labels field (standard or custom) to group issues.
~Matt
Here is how we are handling this -
The process sub-tasks you mentioned are statuses in our implementation. So I'll have the story broken into technical tasks that the dev finishes, and then the story moves to a "Pending Review" status, from where it goes to Make Build and QA Testing and so on. This is pretty much what Matt said as well. This workflow gives the stakeholders a sense of the progress being made on the sprint and is therefore very helpful.
As such, there is no single best practice in JIRA; it is very flexible and one can use it the way one wants or needs.
I agree with you on Epics not being a complete solution. We overcome this by adding labels and creating filters and swimlanes based on these filters.
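To make the "process steps as statuses" idea concrete, here is a plain-Python sketch: each former process sub-task becomes a status in a linear workflow, and a handoff is just a transition plus a reassignment (the status names and their order are illustrative, not an export of our actual JIRA workflow):

```python
WORKFLOW = [
    "In Development",
    "Pending Review",
    "Make Build",
    "QA Testing",
    "Merge",
    "Track",
    "Done",
]

def hand_off(issue, new_assignee):
    """Move an issue to the next status and hand it to the next person."""
    idx = WORKFLOW.index(issue["status"])
    if idx + 1 >= len(WORKFLOW):
        raise ValueError("issue is already Done")
    issue["status"] = WORKFLOW[idx + 1]
    issue["assignee"] = new_assignee
    return issue

story = {"key": "RPT-42", "status": "In Development", "assignee": "dev"}
hand_off(story, "code partner")   # -> Pending Review
hand_off(story, "build manager")  # -> Make Build
print(story)
```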

Tools to assist managing the application promotion process in an enterprise environment

I am curious on how others manage code promotion from DEV to TEST to PROD within an enterprise.
What tools or processes do you use to manage the "red tape", entry/exit criteria side of things?
My current organisation is stuck halfway between some custom online-forms functionality and paper-based processes for submitting documents and gathering approvals and reviews.
All this is left in the project manager's hands: tracking what has been submitted, passed review and been approved, and advising management if there are any roadblocks that may need approval to be "overlooked" before an application can be promoted to the next environment.
A browser-based application would be ideal... so what's out there? Please show me that your google-fu is better than mine.
It's hard to find a good one via Google. There is a vast array of tools out there for issue management, so I'll mention what we use and what we would like to use.
We currently use Serena products. They have worked well for us in the past. TeamTrack is our issue management tool and handles the life cycle of any issue we work on. Version Manager is our source control and has the feature of implementing promotional groups like DEV, TEST and PROD. We use DEV, TSTAGE, TEST, PSTAGE and PROD to signify the movement from one to the other, but it's much the same. The two products integrate nicely so that the source associated with the issues is linked, but we have no build process set up in this environment. It's expensive, but it works well.
We are looking to move to a more common system using Jira for issue management, Subversion for source control, Fisheye to link the two together and CruiseControl for build management. This is less expensive, totalling a few thousand for an enterprise licence, and it provides all the same features with the added bonus of SVN, which is a very nice version control system.
I hope that helps.
There are a few different scenarios that I've experienced over the years:
Dev -> Test: there is usually a code freeze date that stops work on new features; the code that has been tagged/labelled/archived gets built, the build is copied onto the test machines, and the tests are run. This is also usually the least detailed of any push.
Test -> Prod: this requires that production go down, which can mean that a "gone fishing" page goes up or IIS doesn't have any sites running while the code is copied over again. There are special cases where a load balancer can act as a switch, so that the promotion happens and none of the customers experience any downtime, since the ones on the older server move over once their session ends.
To elaborate on that switch idea, the set-up is to have two potentially live servers with just one taking requests: the load balancer sends all the traffic to one machine, and it can be switched over once the other server has the updated code and is ready to go live.
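That switch is essentially what is now called a blue/green set-up. A minimal sketch of the idea follows; the server names and the balancer object are made up, since real balancers expose this as configuration or an API call:

```python
# Minimal model of the two-server switch: both servers sit behind the
# balancer, only one is "live", and promotion is a pointer flip once
# the idle server has received the new code.

class LoadBalancer:
    def __init__(self, live, idle):
        self.live, self.idle = live, idle

    def promote(self):
        """Flip traffic to the server that just received the new code."""
        self.live, self.idle = self.idle, self.live
        return self.live

lb = LoadBalancer(live="web-a", idle="web-b")
print(f"deploy the new build to {lb.idle}; traffic stays on {lb.live}")
print(f"now serving from {lb.promote()}")
```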
There can also be a staging environment between test and production, where the process is similar in that there is a set date when the promotion happens.
Where I used to work there would be merge days where a developer spent most of a day in Perforce merging code so that it could be promoted from one environment to another.
Now there are a couple of cases where this isn't used:
"Hotfixes" or "Hot patches" would occur where I used to work and in this case the specific files were copied up into the staging and production environments on its own since the code change had to get into Production ASAP since something broke in production or some new thing that had to get done that takes 2 minutes gets done. In this case, the code change getting pushed in had to be reviewed and approved before going out.
Those are the different approaches I've seen used. Generally there are schedules, and timelines may have to be changed or additional resources brought in to hit a hard date, for example if a conference is on a particular weekend and such-and-such has to be ready for it.
Of course, in a few places there has been the "Oh, was that broken? Let me see..." and, a few minutes later, "No, see, it isn't broken for me," where someone changed things without asking permission or anything; a company like that still has what they call "cowboy programming."
Another point is the scale of the release:
1) Tiny - This is the case where one web page goes up so that user X can do Y.
2) Small - A handful or so of files that isn't really complicated but isn't exactly trivial.
3) Medium - Where going from one environment to another requires changing a bunch of files and usually has scripts to move.
4) Big - Where there are scheduled promotions and various developers are asked for who is taking which shifts when the live push is done. I had this in a case where there was a data migration to do in addition to a release of some new e-commerce sites.
5) Mammoth - Where everything is brand new including how this would be used. I don't think I've ever seen one of this size but I'd imagine Microsoft or Google would have releases of this size.
Most releases fall somewhere in that spectrum, so the amount of planning and preparation can vary quite a bit; and let's not forget that regulatory compliance can be its own pain in getting some things done.
