How to handle a single team in parallel products? [closed] - tfs

We are in the process of reviewing scrum and seeing how we can implement it in our product. Since it's fairly new to us and most examples follow a very straightforward cycle, we have some questions about how it would apply to our situation.
We are one single team of a few developers. We have 1 product that has several parallel product versions. Example:
Foo/
/version_1
/version_2
/fork_a
/fork_b
Version 1 is our legacy version, which mostly receives bug fixes, but we need to back-port the occasional feature from our main development: version 2. Both fork_a and fork_b are special versions of our product, Foo, which can range from an alternative UI to a small extra feature. At the moment, when a fork is made and completed, it's treated as closed and nothing is back-ported to that branch.
Our problem is that all of these product versions are developed in parallel, and we can't visualize how to maintain this. (We are planning to use TFS 2010 as our tool, so any direct examples are useful.)
We thought of treating everything as a different product, each with its own releases and sprints. But that means a developer who needs to do work on feature A in version_1 and feature B in version_2 can be booked in parallel sprints. We would basically have to manage that manually, which means we cannot properly generate reports to visualize it.
An alternative idea was to treat everything as a single product and drop the release term, or use quarterly releases and have the sprints of all products under those. But that means we can have a product release in the first week of a one-month sprint. How do we cope with that? And how do we then properly view what has been done for a single product release? Because the work developer X did in sprints 1 and 2 can be of no use to the product release we're targeting.
Any real-world examples and ideas on how to manage this are greatly appreciated.

We deal with a similar situation with our Scrum team. We have an even wider range of products that we develop with one team (internal desktop app, web apps, web services). They all have a common technology base (C# .NET), but a wide array of clients and presentation technologies. We actually have enough software developers to break up into separate teams, but we don't have a large enough supporting cast (QA, DBA, BA) to do so. So we end up modifying the classic Scrum approach a little, in that not all of our stories can be worked on by the entire team; some are more web dev and some more desktop. We are still having a lot of success with getting feedback on all of our products every 2 weeks in a combined demo to the stakeholders. We release all of the products in sync as well.
I think the best thing to do is just get started with whatever process seems to fit and adjust along the way in your retrospectives. Don't get too hung up on being textbook as all agile processes are meant to be tailored to your specific situation. Doing some things "right" is better than doing nothing while trying to make up your mind.

To see work done on a specific release, use work item association upon check-in; that way you can simply query which items went into which branch.
Use automated builds to automatically associate changesets (and thus work items) to built versions of the product. That way it is easy to figure out which changes have gone into which binaries.
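For illustration, once changesets and their associated work items have been exported (for example from a work item query or a build report), the branch-level grouping is a simple join. This is only a sketch of the idea, not the TFS API; the record fields and IDs below are made up.

```python
from collections import defaultdict

# Hypothetical, simplified records: in practice these would come from a TFS
# work item query or build report; the field names here are illustrative.
changesets = [
    {"id": 101, "branch": "$/Foo/version_1", "work_items": [2001]},
    {"id": 102, "branch": "$/Foo/version_2", "work_items": [2002, 2003]},
    {"id": 103, "branch": "$/Foo/version_2", "work_items": [2003]},
]

def work_items_per_branch(changesets):
    """Group associated work item IDs by the branch they were checked into."""
    by_branch = defaultdict(set)
    for cs in changesets:
        by_branch[cs["branch"]].update(cs["work_items"])
    return {branch: sorted(items) for branch, items in by_branch.items()}

if __name__ == "__main__":
    for branch, items in work_items_per_branch(changesets).items():
        print(branch, "->", items)
```

The same grouping keyed on build instead of branch tells you which work items a given set of binaries contains.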
Then finally, just let the marketing people decide how to name/number each release; the technical version number doesn't have to match the marketing version. Most Microsoft products are a great example of this: Windows 7 doesn't have a version number of 7; internally it is version 6.1, build 7600.16385.090713-1255.

I have worked in a team that runs a trunk and branches like yours. At the time we didn't practise Agile, but that didn't stop us using our common sense to pull through. If I were to do it again now, I would have a main board for all the work that needs to happen in a sprint, but also a sub-board for each of the different forks/branches. The main board and sub-boards are updated at the same time each day. The sub-boards just provide a visualization of the stories and progress specific to each of the forks/branches. Everything on the main board is still completed to the done criteria every sprint. Releases need to be synced to the same sprint intervals (time can be different) to keep the team together (no parallel sprints).
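To make the main-board/sub-board idea concrete, here's a minimal sketch in which the sub-boards are just filtered views of the single main board; the story names, branches and statuses are made up.

```python
from collections import defaultdict

# Illustrative sprint backlog: every story lives on the main board and is
# tagged with the fork/branch it targets.
main_board = [
    {"story": "Fix crash on import", "branch": "version_1", "status": "In Progress"},
    {"story": "New reporting screen", "branch": "version_2", "status": "To Do"},
    {"story": "Alternative UI tweak", "branch": "fork_a", "status": "Done"},
]

def sub_boards(board):
    """Derive one read-only view per branch from the single main board."""
    views = defaultdict(list)
    for item in board:
        views[item["branch"]].append(item)
    return dict(views)

for branch, items in sub_boards(main_board).items():
    print(f"--- {branch} ---")
    for item in items:
        print(f"{item['status']:12} {item['story']}")
```

Because the sub-boards are derived, updating the main board once a day automatically keeps every branch view in sync.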

Related

TFS2012 vs Jetbrains TeamCity+YouTrack [closed]

We have used TFS2012 in the cloud and we don't like that there's no reporting service, so we're looking to move to on-premise TFS2012. At the same time, we're starting to like Git, and we're thinking that it may make more sense than TFS version control.
This obviously requires research and developers to "play admin", so we're taking the time to evaluate whether JetBrains' highly-praised solutions are a better fit.
Given a team of 6-8 people running Scrum, eager to be on the "best practice" train for agile, and a project that combines .NET technologies on the back-end with JavaScript (AngularJS) on the front-end, and considering a move from TFS2012 to a TeamCity/YouTrack/Git stack for scrum planning, source control, continuous integration, quality control and issue tracking:
What would/could we miss from TFS2012?
What are we going to enjoy from the new stack?
Is the new stack falling short in any respect that TFS is not and vice versa?
Note: This is a question specific to TFS2012 - there are several comparisons on SO and elsewhere for previous TFS versions and TeamCity, perhaps YouTrack too.
Here's a brief account of my two-week long experience with Git/YouTrack vs 6 months of TFS.
The new stack feels a lot more lightweight than TFS. Both installing (we tried on-premise TFS briefly) and using TFS give the sense of a very heavyweight enterprise suite for no reason. This is partially an illusion created by the UI design, but it seems that with YouTrack:
It takes fewer clicks to do anything, and even fewer if you learn some shortcuts and how to use the commands.
It's easier to navigate between the views - there are fewer of them, but they give a better overview than TFS. This is not because they present more info - in most cases they present less - but because they present the key information in a visually clean way.
The ability to run ad-hoc searches in YouTrack makes such a big difference! In TFS you have to create a query with a UI that tries to make it easier but ends up making it harder than just typing the query params. I mean, we are developers after all.
I've enjoyed the local commits of Git and how pull requests work to integrate work from other people into a main branch vs merging on TFS.
TeamCity has also been very lightweight to use - though I have no experience with CI on TFS. Having said that, it's an area I didn't delve into much because I was already spending a lot of time managing TFS.
Hiccups and things that I miss from TFS:
It's a little harder to manage releases with YouTrack or I haven't figured out how to do it effectively. The management and separation of product backlog, release backlog and sprint backlog is easier on TFS.
There's no way to plan a sprint based on the capacity of developers - I believe JetBrains is working on that though. (A rough sketch of that calculation follows below.)
You have to pay for private Git hosting - though YouTrack/TeamCity are free and full-featured for a few users.
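On the capacity point above: the check TFS does at sprint planning is simple enough to approximate by hand until YouTrack supports it. A rough sketch, with made-up names, hours and estimates:

```python
# Rough approximation of capacity-based sprint planning; the names, hours,
# and estimates below are made up for illustration.
developer_capacity_hours = {
    "alice": 6 * 9,   # 9 working days in the sprint, ~6 focused hours/day
    "bob":   6 * 7,   # on holiday for two days
}

candidate_stories = [
    ("Login via OAuth",      30),
    ("Angular grid rework",  45),
    ("Build server cleanup", 20),
]

total_capacity = sum(developer_capacity_hours.values())
planned, used = [], 0
for story, estimate in candidate_stories:
    if used + estimate <= total_capacity:
        planned.append(story)
        used += estimate

print(f"Capacity: {total_capacity}h, planned: {used}h")
print("Sprint backlog:", planned)
```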
I'll try and keep this up to date as I go.

How is TFS 2010 used within Microsoft for scrum sprinting? [closed]

We are a Microsoft shop, and so using other incompatible tools for Scrum does not make as much sense. So, we use TFS - for Scrum as well.
However, we found the TFS templates to be rather simplistic. There is no way that MSFT can release the next Visual Studio, or the next .NET Framework, by doing all of the planning using TFS tasks.
What is Microsoft hiding from the rest of the world?
Alternatively, how do you use TFS 2010 for scrum in enterprise (=huge size) software?
EDIT: Specifically, trying to figure out how different pieces fit together can be hard. Imagine the following epic (as if it were being developed in .NET 5.0 rather than having been done in .NET 3.5): we want to implement the LINQ library. Now, let us size this task ... before they can do so, they need to carefully define all of the stories IN DETAIL, and only then try to put it together. Still, the number of use cases is huge, as are the interactions between different parts of the system. Without lots of Wiki pages, lots of Word documents, a combination of the two, or perhaps something else, I do not see how they could keep track of things.
Microsoft has released a new scrum template for 2010.
http://blogs.msdn.com/b/bharry/archive/2010/06/07/a-scrum-process-template-for-tfs.aspx
There's a 9-post series on the old TeamsWitTools blog about how dev-div uses TFS. The first post in the series talks about the breakdown of epics. This is probably a good place to start.
I don't know about TFS 2010, but I do know that TFS 2008 has some features that are important to Scrum.
Once set up correctly, TFS can launch a bunch of tasks automatically for huge-scale projects, which Scrum was founded to manage very productively. One of these tasks is compiling a build at a scheduled time - in the case of Scrum, I would say after each Sprint, once the Team's commitments have been met. "Done" is a very important word in Scrum; it means that there is nothing left to do. So think of it as all kinds of testing, test automation, etc. being done. Your code works, 150% sure, bug free. Anyway, TFS can report the tests that failed and track down who the task was assigned to (the Team, not an individual).
After having a look at Shiraz Bhaiji's link, I think that TFS 2010 has everything you need for Scrum. You have the Burndown chart, whose purpose is to illustrate the work remaining over time, and you have the velocity chart, which gives significant information about how well the Team works together. Keep in mind that a Team's velocity should increase the longer the Team works together.
I see no problem at all in using TFS 2010 and setting it up to work with Scrum, as it can track the Product Backlog and should allow you to write the Team's Sprint Backlog as well. In fact, with the coming of VS2010 there is now the PSD certification - the Professional Scrum Developer certification.
Microsoft's developer tools (TFS and VS) are Scrum "compatible".

How to release often with Lean/Kanban? [closed]

I am quite new to Lean/Kanban, but I have pored over online resources over the last few weeks and have come up with a question that I haven't found a good answer for. Lean/Kanban otherwise seems such a good fit for our company, which is already using Scrum but has reached some limitations within that methodology. I hope someone here can give me a good idea.
As I see it, one of the biggest advantages of Scrum over Waterfall is the use of sprints. By having everything ready every 14 days you get short feedback cycles and can release often. However, as I have understood from reading about Lean, there are some costs associated with this (for example, time spent in sprint planning meetings and team commitment meetings, and some problems with finding something useful for everyone to do at the end of the sprints).
Lean/Kanban will remove these wastes, but only at the cost of not being able to release every 14 days. Or have I missed an important point? For, in Kanban, how can you work on new development tasks and release at the same time? How do you make sure you don't ship something that is only halfway done? And how can you test it properly?
My best "solutions/ideas" so far are:
Don't release often and allow the waste associated with running out of new development tasks. Not really a solution to the question asked though.
Develop in branches and then merge into the main trunk. This forces you to maintain at least two branches continuously internally.
Use some smart automatic labelling system to automatically build only certain finished tasks and not others.
As a summary, my question is: When you use Lean/Kanban, can you release often without introducing waste? Or is release often not part of Lean/Kanban?
Additional info specific to my company:
We use Team Foundation System & Source Control and have previously had some bad experiences in regards to branching and merging. Could this be solved simply by bringing in some expertise in this area?
The problem you describe seems more like a source control problem - how to separate done features from features in progress - than a Kanban problem. You seem to put a heavy penalty on running many branches, which is the case for source control systems not built around the idea of multiple branches. In distributed version control systems such as Git and Mercurial, everything is a branch, and creating and working with branches is lightweight.
I assume you read this blog about Kanban vs SCRUM, and the associated practical guide?
And, in answer to your question, yes, you can release often with Kanban.
You need to understand pull systems, which is what Kanban is designed to manage.
A customer (or product owner or similar) request for a feature in the running system is what triggers the process.
The request is a signal that goes to deployment. Deployment looks for a tested item with properties that match the request. If none is there, you write the tests and check with development whether there is a development slot that can be used to implement something that fulfils the tests. When development has done its work (perhaps asking for a suitable analysis first, and so on), testing does its testing, and deployment deploys.
The requests going backwards through the system are permissions to start working. As soon as the request has arrived, this triggers a lot of activity, where each activity should be completed as quickly as possible. There you have your turbo deployment.
Just like a request for a car goes to the dealer, who looks at the incoming shipment, which signals the car factory, which signals the suppliers.
Kanban is not about pushing requests through a system. It is about pulling functionality out of the system in exchange for a request that enters via the last step.
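A minimal sketch of that pull behaviour, just to make the mechanics concrete (the stage names and items are illustrative, not a real tool):

```python
# Minimal sketch of a pull system: a customer request enters at deployment and
# permission to start work propagates backwards. Stage names are illustrative.
STAGES = ["analysis", "development", "test", "deployment"]

# done[stage] holds items that stage has finished and the next stage may pull.
done = {stage: [] for stage in STAGES}
done["test"] = ["feature-42"]          # one tested item happens to be waiting

def pull(stage, request):
    """Satisfy a request at `stage` by pulling finished work from upstream."""
    upstream = STAGES.index(stage) - 1
    if upstream < 0:
        item = f"new-analysis({request})"        # nothing upstream: start fresh
    elif done[STAGES[upstream]]:
        item = done[STAGES[upstream]].pop(0)     # something is ready to pull
    else:
        item = pull(STAGES[upstream], request)   # the signal travels further back
    print(f"{stage}: working on {item}")
    return item

pull("deployment", "customer wants feature 42")
```

If a tested item is already waiting, deployment can ship it immediately; otherwise the signal travels backwards stage by stage until new work is started at analysis, which is exactly the "permission to start working" described above.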
The team I manage uses Kanban and we release around every two weeks. If you're strict about what gets integrated into your mainline code branch (tests passing, customer approved, etc.), Kanban allows you to release whenever you want. You need to make sure that the stories moving through your system aren't co-dependent in order to do this, but on my team that's usually not a problem - a large part of our work involves maintenance, which consists of several unrelated bug fixes / features per release.
The way we handled weekly releases on a sustained engineering project that used Kanban was to implement a branching strategy. The devs worked in a sandbox branch, and made one checkin per work item. Our testers would test the work item in the sandbox; if it passed the regression tests the checkin would be migrated to our release branch. We locked the release branch from noon Monday until the release went out (usually by Wednesday, occasionally by Thursday, the drop dead date was Friday), and re-ran the regression tests for all migrated checkins as well as integration tests for the product, dropping a release once all of the tests passed.
This strategy let devs continually be working on issues without being frozen out of their branch during the release process. It also let them work on issues that took more than a week to resolve; if it wasn't checked in and tested/approved it didn't get migrated.
If I were running Kanban for a new version of a project, I'd use a similar strategy but group all related checkins as a 'feature', migrating a feature en masse to the release branch once the feature was done and then performing additional unit/integration/acceptance/regression testing in the release branch before dropping a release with that feature. Note that a key concept of Kanban is limiting work in progress, so I might restrict my team to work on one feature at a time (this would probably be several work items/user stories).
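A sketch of that "migrate en masse once the feature is done" rule; the checkin records and statuses are made up for illustration, not pulled from any real tool:

```python
# Illustrative only: group checkins by feature and migrate a feature to the
# release branch only when every work item in it is tested and approved.
checkins = [
    {"id": 501, "feature": "export-to-pdf", "status": "approved"},
    {"id": 502, "feature": "export-to-pdf", "status": "approved"},
    {"id": 503, "feature": "new-login",     "status": "in test"},
]

def features_ready_to_migrate(checkins):
    by_feature = {}
    for ci in checkins:
        by_feature.setdefault(ci["feature"], []).append(ci["status"])
    return [f for f, statuses in by_feature.items()
            if all(s == "approved" for s in statuses)]

print("Migrate to release branch:", features_ready_to_migrate(checkins))
# -> only 'export-to-pdf'; 'new-login' stays in the sandbox branch
```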
There's more to this than just source control, but your choice of TFS is going to limit you. When the Burton project was conceived back in 2004, Microsoft wasn't paying attention to Agile, much less Lean. It's going to be your weakest mechanical link for some time. Your hackles should have been raised by CodePlex's own adoption of Mercurial after having been offered to the Microsoft community as the poster child of TFS implementation.
A more salient issue here is Work Design. It encompasses the order that you choose to implement features (work schedule), as well as prioritization and cost of delay, and the shape and size of work items.
Scrum is commonly interpreted to say that non-technical "Product Owners" can determine the work schedule based solely on their own concerns. If you follow this path, you're going to incur a lot of waste by not taking the opportunities to do work together that belongs together. Work that belongs together can't be determined solely by Product Owner wishes. Technical and workforce (skills) opportunities must also be taken into consideration.
For work to be done in the most productive way, the work itself has to be designed that way. This means that in a Lean Product Development team, decisions are made not by a non-technical worker, but by what Toyota calls someone of "Towering Technical Competence" who is close to the product, close to the customers, and close to the team.
This role is a stark contrast to Scrum's proposition. A Chief Engineer on a Lean team is himself (or herself) the voice of the customer, and the role of Product Owner is unnecessary.
Scrum's "Product Owner" is a recognition of an under-developed role in software development organizations, but it's far from a sustainable solution that consistently avoids waste. The role of "Software Architect" is often insufficient as well, as in some developer sub-cultures, the architect has become far too removed from the work.
Your issues of continuous deployment are only partially addressed with technology and tools. Look also to organizational issues, and perhaps give some thought to Scrum's purpose as a transitional approach from waterfall rather than one that can serve your organization indefinitely.
For source control I'd highly recommend Perforce. It makes branching and integrating changes from other branches relatively straightforward, and provides the best interface for source control that I've seen so far.
Continuous integration helps as well - i.e. lots of small, more than daily commits, instead of huge and potentially challenging merges. Tools like CruiseControl can help highlight when the source gets broken by a bad commit. Also, if everyone makes many small changes then conflicting changes will be rare.
I'd also advise not trying to follow things like Lean, Scrum, Kanban & co. too closely. Just solve the problems yourself, looking to these ideas for guidance rather than instruction. The specifics of your problems will more than likely require some flexibility for the best management.
How we do it:
We have a pipeline with the following stages:
Backlog
TODO
In progress (Develop and quick testing)
Code review
Test (Rigorous testing)
Integration test and general acceptance tests
Deploy
Each story is developed as a branch based on the latest version to leave the Deploy stage. They are then integrated as part of preparing the integration test.
QA pulls from the code review stage and can prepare releases at any pace they want. I think we have a pace of roughly one release every week.
By removing the "master" branch from git and not doing any merge before the code review stage we've made sure that there is no possibility to "sneak" code into releases. Which, as an interesting by-product, has forced us to visualize a lot of the work that used to be hidden.

Is Scrum for Team System a good tool for managing the scrum process? [closed]

We've had several sprints with the traditional whiteboard and PostIt notes and are ready to move forward to integrating the process into our Team System environment. One tool we're considering is Conchango's "Scrum for Team System" (http://www.scrumforteamsystem.com/en/)
Has anyone tried this tool in a real world scrum process? Was your experience positive or negative? Is the tool worth the licensing fee in your opinion?
We use Scrum For Team System and love it. It really does a good job of merging the TFS and Scrum processes.
We also got the task board (the part you have to pay for) and really like that too.
Even with Scrum for Team System, TFS via Visual Studio is not good for planning meetings (though it is OK for standups). The task board helps a lot in visualizing the remaining work and in moving it around.
Before we got the Task Board, we would use post-it notes for our planning meetings and then enter them into TFS afterwards. And even though the Task Board is nice, if you don't have at least 2 people working in it during a planning meeting, it is not enough. We have 3 laptops going for a team of 5 + 1 (Scrum Master) and that works great. If you don't have that, I would still think about doing post-it notes.
The task board allows you to refresh and see what the others are entering. We have one computer hooked up to a projector so that the others can see what is happening. We all then brainstorm like we would with post-it notes, but the people on laptops enter the data into TFS.
For us, it works great!
Later Note: If you do choose the Scrum for Team System template, then I STRONGLY recommend that you read the Process Guidance. We had to figure out a few things the hard way before we sat down and read it - especially how to handle defects (i.e. when is it a Bug and when is it a Sprint Backlog Item that goes back to "In Progress").
The templates are free. It is only the Task Board application that costs a modest fee. You can use the templates without the Task Board, although I highly recommend using it as well. I think the biggest advantage for my team has been that the Scrum for Team System templates integrate into VS and provide a seamless feel with the rest of the development environment.
We love the ability to attach the PBIs to check-ins and have them show up on the Daily Build report.
If you are missing something you need, you can fire up the VS template editor and tweak the templates to your liking. For us, we added a "Requested By" field and a "Testing Status" field to the PBI template.
The 2 shortcomings that annoyed us were, first, that the "State" values of PBIs are not the same as those of SBIs (no Ready For Test on the PBI). We do testing/validation at the feature level rather than the task level and wanted to track PBI status, so we had to add our own custom field. The second issue is that there is no out-of-the-box report for a PBI burndown/burnup at the Sprint level, so you can't see how you are doing at delivering stories, only tasks. You have to make your own.
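For what it's worth, the custom report boils down to summing the remaining story points per day; a minimal sketch with made-up snapshot data:

```python
from datetime import date

# Made-up snapshot data: remaining story points per PBI at the end of each day.
daily_snapshots = {
    date(2011, 3, 1): {"PBI-10": 8, "PBI-11": 5, "PBI-12": 3},
    date(2011, 3, 2): {"PBI-10": 8, "PBI-11": 2, "PBI-12": 0},
    date(2011, 3, 3): {"PBI-10": 4, "PBI-11": 0, "PBI-12": 0},
}

# A sprint-level PBI burndown is just the total remaining points per day.
for day in sorted(daily_snapshots):
    remaining = sum(daily_snapshots[day].values())
    print(day.isoformat(), "remaining story points:", remaining)
```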
We don't really use the "Bug" template much (we ship flawless code:) ). No really, there is no such thing as a bug against work in a sprint; so the only time we record a bug is if a client finds an issue in the production code where it didn't work as advertised.
As Vaccano said, it isn't nearly as fast as a whiteboard or post-its in a meeting environment but if you get a couple people really good at using the tool and a couple of laptops you can make it work.
I evaluated several products and the simplicity and price of ScrumForTeamSystem can't be beat.
Like others have said, beware that the Conchango template handles bugs very differently. The idea that unreleased Backlog Items are bug-free is not just a suggestion; there is literally no way to track bugs affecting the current sprint's work. I found that this disadvantage outweighed the advantages.
If you are searching for an online whiteboard, you can have a look at the Scrum tool Agilo. It was built especially for distributed teams that do not have the chance to work on a "real" whiteboard.
For a quick overview you can have a look at this video.
The 3.0 version of the template for VS 2010 changes how the tool models Scrum in ways that very effectively support multi-team projects and the typical interactions one will find in larger projects.
Regardless of version, it is currently my default answer for Scrum projects in Microsoft environments. As mentioned, the task board and the (new) ScrumMaster's workbench are incredibly valuable as well!
We build Urban Turtle, which extends the Microsoft ALM platform with an intuitive web interface and simplifies your agile project management. By providing a task board and a planning board directly in web access, you don't have to synchronize anything. The installation is a simple process: 2 minutes to install on the web access server, and nothing to set up on the client desktop.
Don't just take my word for it - have a look at what Brian Harry from Microsoft said about the product:
http://blogs.msdn.com/b/bharry/archive/2010/10/21/urban-turtle-3-5-released.aspx
Have a look at the website and send me your feedback
urbanturtle.com
Dominic Danis
Product Owner.

To beta or not to beta.. that is the question [closed]

I'm looking for a consensus on one or the other:
Beta: release software that is semi-functional to a limited audience and let users guide the process until the application is complete.
No Beta: define all functionality based on previous user feedback and/or business logic, build the software, and test it with whatever time you give yourself (or are given).
Hmmm... Beta is for a product you think is finished... until you give it to the Beta users to play with and they break it in ways you haven't thought of.
So, yes, go with Beta, unless you want to burn your product on many users.
I disagree with both of your cases.
How about internal QA? They find bugs too, you know. Only release a beta to customers when you have no known serious bugs.
Also, always test adequately. If time is pressing and you haven't finished all your tests, then that's just tough. Finish testing, whether you have time or not.
My answer is that there are many factors that would determine which makes more sense:
1) Consequences of bugs - If bugs would result in people dying, then I think a beta would be a bad idea. Could you imagine running beta software on a nuclear reactor or on a missile system? On the other hand, if the consequence is fairly minor, like a temporary outage on some fantasy sports site, it may not be so bad to put it out in beta.
2) Expectations of users. Think about the users of the application and how they would feel about using something that is a "beta". If it would make them scared to actually use the software, afraid that it is going to blow up on them regularly and be riddled with bugs, that may also play a role.
3) Size of the application. If you are going to build something very large like say an ERP to handle the legal requirements of 101 countries and contain hundreds of add on modules, then a beta may be more sound than trying to get it all done and never get to where you have customers.
4) Deployment. If you are setting up something where the code is run on your own machines and can easily be upgraded and patched, then a beta may be better than trying to get it all done right in the beginning.
5) Development methodology. If you take a waterfall approach, then no beta is likely a better option, while in an agile scenario a beta makes much more sense. The reason for the latter is that in the agile case there will be multiple releases that will improve the product over time.
Just a few things I'd keep in mind, as there are some cases where I can easily imagine using betas and other cases where I'd avoid betas as much as possible.
Undoubtedly beta!
Some benefits of running a beta period ...
Improve product quality, usability, and performance
Uncover bugs
Gain insight into how customers use your product
Gauge response to new features
Collect new feature requests
Estimate product support requirements
Identify customers
Earn customer testimonials
Prepare for final release
Generate buzz for product release
Launch quickly and iterate often.
Without knowing what the app is and what its audience will be, I think that would be a hard choice to make. However, if it's an Open Source project, it seems like the consensus is usually "Release Early and Release Often".
The real question is this: do you know exactly what your users want?
If you answer yes, then design and build everything and launch it with no Beta.
If you answer no, then Beta is the way to go - your users will help define your software and they will feel more a part of the process and have some ownership.
I say Beta. But I disagree with your premise. You should never release semi-anything software. The software released as Beta should at least be feature complete. This way your users know what they're getting into.
As far as finding bugs goes, internal testing is best. However, anyone who has released software knows that, no matter what, users will find new and interesting ways to break it.
So Beta 'til your heart's content.
I would suggest alpha testing first: release a build that is not feature complete to a select group, so they can help you find bugs and determine which features are needed.
Once you have what is thought to be feature complete, release it to a larger group, mainly to find bugs and to get comments on feature changes - but don't make feature changes at this point unless something is critical.
At this point you are ready for release, and you go back to step (1) for the next release.
After finishing (you think) your software, and believing that there are no serious bugs, conduct alpha testing: make the software available to the test staff within your company and fix the bugs reported.
Next, release the software to customers as beta testing, collect comments, fix bugs & improve features.
Only then you're ready for release.
Neither.
You shouldn't release a beta until you feel that the product is bug-free. And at that point, the beta users will find bugs. They will, trust me. As far as letting the beta users 'guide the design', probably not a good idea - you'll end up with software that'll look like the car that Homer Simpson designed. There's a reason why beta users aren't software designers.
As far as the second option - unless you are an extraordinary tester, you will not be able to test it as well as end-users (they will do the silliest things to your software). And if you "test it with whatever time you give yourself (or are given)", you will not have enough time.
