I have started using the TFS Integration Tools to migrate work items from one TFS 2010 project to another team project within the same collection. After some small trial runs and modifications to the field and value mappings, I started a migration on our entire product backlog. Approximately 170,000 change groups were discovered and analysis started. However, during the analysis the connection to the TFS server was lost, so the migration had to be restarted. After the restart, approximately 340,000 change groups were identified (roughly double), without any significant changes having been made to work items in the backlog.
Has anyone experienced a similar problem, or is anyone aware of settings or changes that can be made in the tool to limit this increase in change groups? The amount of time taken to analyse so many groups is causing the migration to take much longer than was initially expected.
After several runs, I found out that the count appears to be a running total, so, logically enough, when I experienced a break in connection all change groups had to be re-analysed, causing the "doubling" in change groups.
Our TFS database size was growing really quickly, and I figured out that the issue was with the tbl_TestResult table. I am not sure why it is growing that fast. It seems there is a record for each test case. In our case, we have more than 1000 test cases that are run on each check-in, and we average 20 check-ins a day. That is around 20,000 records a day.
My question is: can I manually delete the records in that table? Will it cause any problems for TFS other than losing the test results?
UPDATE:
We have TFS 2015
Deleting data manually or changing the schema in any way would result in your TFS instance no longer being supportable by Microsoft. It effectively invalidates your warranty.
In TFS 2015 you can change the Test Management retention settings on the Team Project admin page. The default is 30 days, but someone may have changed it.
Other than that, this is the normal metadata that is collected as part of your ALM/DevOps platform.
This was "fixed" in TFS 2017 because they changed the schema for the test results https://www.visualstudio.com/en-us/news/releasenotes/tfs2017-relnotes#test. Brian Harry mentioned a 8X reduction in storage from the new schema https://blogs.msdn.microsoft.com/bharry/2016/09/26/team-foundation-server-15-rc-2-available/
Currently we have the following states/columns in JIRA:
Open/Todo (-> Developer takes task and starts work)
In Progress (-> Developer sets tasks to done)
Done (-> QA tests on staging and sets task to ready to deploy or reopens)
Ready to deploy (-> Developer deploy these tasks at date of release)
Deployed (-> QA/Stakeholder tests task again on Live/Production and closes or reopens)
Done/Closed
In my current understanding this is wrong, because we try to handle two concerns in one status dimension: development and deployment. I would like to decouple sprints from releases/versions. Currently we can't end a sprint until all tickets are approved on production, which leads to bottlenecks.
What would be your suggestion? One idea I have in mind: limit the statuses to Open, In Progress, Done and Closed, and handle deployment/releases via JIRA's built-in versioning (rough sketch below). If a problem occurs on production, a bug ticket must be opened.
Otherwise I don't see a way forward, since the versioning/releasing in JIRA 6.4 does not seem to include status columns by itself.
Is releasing to production part of your team's 'definition of done'? If it is, then the workflow you have makes a lot of sense.
There is no separation of concerns between development and deployment. Code that has been developed but not deployed has no value to the business. Development is simply a step in the process towards release, which is the point at which value is realised.
A sprint is a timebox, not a set amount of work. When the timebox ends, the work that you still have in progress is not 'done'. If you are regularly unable to complete all the work you bring into a sprint, that suggests you are bringing in too much work. The team's velocity, which is a measure of the work that gets 'done' each sprint, should be a good indication of what your sprint capacity is.
If your bottleneck is the release to production and verification of the release, then perhaps you should focus some effort on improving this process. Possibly this could mean more release automation, or better coordination with the stakeholders over validating releases.
I have a TFS 2012 server for managing my project.
I have several sprints with the same length of two weeks.
Is it possible to set start and end dates for the whole project? How can you see the whole project's start and end dates, with information on all the sprints and whether each is ongoing, finished or late?
Thanks
The sprints should be defined under a parent node. You can set this node's start and end dates just as you would the sprints.
As for seeing information about the project (i.e. whether or not it is ongoing, finished, late, etc.), that depends on which metrics you are looking for. SQL Server Reporting Services contains reports that run off both the warehouse database and the OLAP cube that resides in Analysis Services. The reports depend on which process template the project is based on (Agile, Scrum, CMMI, etc.). Nonetheless, the reports will give you overview information, release burndowns, sprint burndowns, etc.
Our company has a small development team in-house but we mostly outsource our customer projects to external consulting firms which we don't manage directly. We only interact with their project manager and maybe a team lead.
I'm implementing TFS 2010 and Scrum for our internal team for project management, version control and SharePoint shared document access.
My problem is how to manage the external teams.
They won't use our TFS for version control, and I can't force them to use Scrum and report accordingly (reporting at a task level, updating remaining hours).
The solution I came up with is this:
Use the “MSF for Agile Software Development v5.0” template in Team Foundation Server.
Break the project into user stories and then create a task for each.
The tasks have these fields:
Original Estimate: since we'll track percentage of completion, this will always be 100.
Remaining: the percentage of remaining work.
Completed: the percentage of completed work.
Their team lead will update the remaining work, as a percentage, for each user story (at the task level).
If progress is reported correctly, I can print a "Stories Overview" report periodically and see the percentage complete for each user story.
I'm sure there must be a better way out there, and I'd appreciate any help in pointing me in the right direction.
Thanks
We are basically doing the same thing... I have 10 in-house developers and teams around the world working on their projects. Most of the work we do overlaps between internal and external. We are using TFS 2010. We break a piece of development into user stories, then into lots of tasks and eventually bugs. We view the status of the external projects by looking at the breakdown of work on the individual work items.
Part of the development process flow is to get the code into TFS source control, and the control of the logs changes as the code comes back into our system.
The external PMs then use the web interface's spreadsheet upload to update the data on these logs (including the time spent / work remaining), so we can see the state of the work. You don't need a code upload to set a work item to test / complete.
The process flow we have for the external work is below; on a given user story item you can then see the state of development for all those tasks.
To Spec
Specified
Spec Agreed
Open For Work
WIP
Development Complete
External Test
Source Added to TFS
Delivered to Internal Test
Internal Test
Complete
I am curious how others manage code promotion from DEV to TEST to PROD within an enterprise.
What tools or processes do you use to manage the "red tape", entry/exit criteria side of things?
My current organisation is stuck halfway between some custom online-forms functionality and paper-based dependencies to submit documents and gather approvals and reviews.
All this is left in the project managers' hands: tracking what has been submitted, passed review and been approved, and advising management if there are any roadblocks that may need approval to be "overlooked" before an application can be promoted to the next environment.
A browser-based application would be ideal... so what's out there? Please show me that your google-fu is better than mine.
It's hard to find a good one via Google. There is a vast array of tools out there for issue management, so I'll mention what we use and what we would like to use.
We currently use Serena products. They have worked well for us in the past. TeamTrack is our issue management tool and handles the life cycle of any issue we work on. Version Manager is our source control and has the feature of implementing promotional groups like DEV, TEST and PROD. We use DEV, TSTAGE, TEST, PSTAGE and PROD to signify the movement from one to the other, but it's much the same. The two products integrate nicely, so that the source associated with the issues is linked, but we have no build process set up in this environment. It's expensive, but it works well.
We are looking to move to a more common system using Jira for issue management, Subversion for source control, Fisheye to link the two together and CruiseControl for build management. This is less expensive, totalling a few thousand for an enterprise licence, and provides all the same features, with the added bonus of SVN, which is a very nice code version manager.
I hope that helps.
There are a few different scenarios that I've experienced over the years:
Dev -> Test: There is usually a code freeze date that stops work on new features, and a test environment gets the code that has been tagged/labelled/archived and built. This then gets copied onto the machines and the tests run there. This is also usually the least detailed of any push.
Test -> Prod: This requires the minor change that production has to go down, which can mean that a "gone fishing" page goes up, or IIS doesn't have any sites running, while the code is copied over again. There are special cases where a load balancer can act as a switch, so that the promotion happens and none of the customers experience any downtime, as the ones on the older server will move off once their session ends.
To elaborate on that switch idea, the setup is to have two potentially live servers, with the load balancer sending all the traffic to just one machine; the traffic can be switched over once the other server has the updated code and is ready to go live.
There can also be a staging environment between test and production, where the process is similar in that there is a set date when the promotion happens.
Where I used to work there would be merge days where a developer spent most of a day in Perforce merging code so that it could be promoted from one environment to another.
Now there are a couple of cases where this isn't used:
"Hotfixes" or "Hot patches" would occur where I used to work and in this case the specific files were copied up into the staging and production environments on its own since the code change had to get into Production ASAP since something broke in production or some new thing that had to get done that takes 2 minutes gets done. In this case, the code change getting pushed in had to be reviewed and approved before going out.
Those are the different approaches I've seen used. Generally there are schedules, and timelines potentially have to be changed, or additional resources brought in, to make a hard date, e.g. if a conference is on a particular weekend that such-and-such has to be ready for.
Of course, in a few places there has been the "Oh, was that broken? Let me see..." and, a few minutes later, "No, see, it isn't broken for me," where someone changed things without asking permission or anything: a company that still has what they call "cowboy programming".
Another point is the scale of the release:
1) Tiny - This is the case where one web page goes up so that user X can do Y.
2) Small - A handful or so of files; not really complicated, but not exactly trivial.
3) Medium - Where going from one environment to another requires changing a bunch of files, and there are usually scripts to do the move.
4) Big - Where there are scheduled promotions and various developers are asked for who is taking which shifts when the live push is done. I had this in a case where there was a data migration to do in addition to a release of some new e-commerce sites.
5) Mammoth - Where everything is brand new including how this would be used. I don't think I've ever seen one of this size but I'd imagine Microsoft or Google would have releases of this size.
Most releases fall somewhere in that spectrum, so how much planning and preparation is needed can vary quite a bit. And let's not forget that regulatory compliance can be its own pain in getting some things done.