Combine Redmine and TFS

Our company wants to implement a bug tracker that will be usable by our partners. Currently we use TFS as our source control and ALM system. Now I'm confused about how to combine these two systems, because in TFS we have our product versions and work items (and possibly bugs). But once we start using Redmine, we won't have them in TFS any more. Or is there a way to use the two together?
I could not find any plugins or anything similar.
Maybe someone has experience in this area. Thank you!

First, things reported by customers are not bugs in the engineering sense. They are defects that may or may not be bugs.
Specifically, in TFS a Bug work item is used to represent the metadata of a failing test or exception.
To integrate the two you will either have to roll your own solution or buy a tool. I recommend talking to TaskTop, as they make the best work item tracking integration tool I know of...
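If you do end up rolling your own, Redmine's standard REST API makes the Redmine half fairly straightforward. Below is a minimal one-way sketch, not a finished integration: the URL, API key, project identifier, and CSV column names are all placeholders, and the TFS side is reduced to a CSV export because the TFS 2010 object model is .NET-only.

```python
# Minimal one-way sync sketch: push TFS bug work items into Redmine as issues.
# Assumptions (not from the original post): Redmine at https://redmine.example.com,
# an API key with create permission, and a project identifier "partner-portal".
# The TFS side is a stub; the real TFS 2010 object model is .NET-only, so you would
# typically export work items (e.g. to CSV from a TFS query) and feed them in here.
import csv
import requests

REDMINE_URL = "https://redmine.example.com"   # hypothetical
API_KEY = "your-redmine-api-key"              # hypothetical
PROJECT_ID = "partner-portal"                 # hypothetical

def read_exported_work_items(path):
    """Read bug work items exported from a TFS query (CSV with ID, Title, Repro Steps)."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def create_redmine_issue(item):
    """Create one Redmine issue via the standard Redmine REST API (POST /issues.json)."""
    payload = {
        "issue": {
            "project_id": PROJECT_ID,
            "tracker_id": 1,  # usually "Bug"; check Administration -> Trackers
            "subject": f"[TFS {item['ID']}] {item['Title']}",
            "description": item.get("Repro Steps", ""),
        }
    }
    resp = requests.post(
        f"{REDMINE_URL}/issues.json",
        json=payload,
        headers={"X-Redmine-API-Key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["issue"]["id"]

if __name__ == "__main__":
    for work_item in read_exported_work_items("tfs_bugs.csv"):
        new_id = create_redmine_issue(work_item)
        print(f"TFS work item {work_item['ID']} -> Redmine issue {new_id}")
```

Keeping the TFS work item ID in the issue subject gives you a cheap cross-reference, so product versions and work items can stay in TFS while partners report into Redmine.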

With all due respect, the accepted answer contains misleading statements:
First, things reported by customers are not bugs in the engineering sense. They are defects that may or may not be bugs.
Even when we talk about TFS/Redmine and the like, the concept of a bug is very different from what is being expressed here.
According to ISTQB (and other qualification boards related to software quality) there is a difference between an Error, a Failure, and a Defect.
Let me explain each concept:
Error: produced by a human being's actions (for now :) ). Its existence may or may not be detected, and it may or may not be classified as a software Defect.
Failure: how an Error is detected while using the Application Under Test (e.g. error messages, data inconsistency, unexpected behavior). A Failure may or may not be detected, but most often ends up being classified as a Defect.
Defect: the documented description of a software Failure. It usually includes actual software behavior and expected software behavior. More and varied information can be included as well (e.g. software version, environment).
Now, what is a "Bug"?
"Bug", as a term, is software-testing slang that dates back to the late 1940s.
Check this out:
http://www.computerhistory.org/tdih/September/9/
Taking all of the above into account, we can say that Work Items in TFS and Issues in Redmine both have a "Bug" category that refers to items which allow Defect management actions.
In conclusion, whether or not the unexpected behavior was found by a customer does not change the fact that a "Defect" is a "Defect", and therefore a "Bug" is a "Bug".

Related

Is PolymerDart dead? An architecture issue

Based on what was presented at both the Polymer and Dart conferences in the last week, neither of them mentioned the intersection of Polymer and Dart known as PolymerDart (beyond a passing, two-word statement on one slide at the Dart conference, which I only caught after watching it a second time). Given that the last commit was in July, and only added some tests to PolymerDart, I am leaning towards yes.
I have been looking into a variety of other sources, and it seems that while Polymer and Dart will each continue on their own paths, there is no longer any support for the union. Polymer is focused entirely on PolymerJs and web component standards, whereas Dart is pushing deep into AngularDart and its associated architecture. Looking deeper into it, it seems the Googlers who were managing this union were doing so as a side project with no dedicated resources, so there isn't a defined team in place for PolymerDart compatibility.
Google regularly releases a lot of things; some thrive and some die. They let the community determine the outcome, as has been shown over and over again throughout their many projects.
Has anyone else found deeper information on this? All the signs I see point to it being dead. If so, maybe I should be looking to migrate my team away from it and towards AngularDart.
I did my best to note only facts and not opinions of any sort, as my opinions are not on trial here; what matters is continued development in a fast-paced technology space, where a stagnant codebase implies a dead-in-the-water technology.
After watching the talks, it seems to me that the focus is more on making Angular 2 for Dart, the Angular 2 Dart components, and the dev compiler more stable and robust.
It's not necessarily dead at the moment, but priorities might have shifted.
Thus, you might want to give https://pub.dartlang.org/packages/polymerize a try.

JIRA - "Done" issues marked as "Unresolved"

I'm a bit of a novice with JIRA and I don't know why this is happening. Lately, whenever I mark an issue as "Done", the system doesn't update it to "Closed" but rather keeps it as "Unresolved". Why would this happen? I don't know what information I should provide to troubleshoot this, except that I'm using JIRA 6.1.3, self-hosted, with no extra plugins.
That issue is neither fully resolved nor necessarily related, but you might want to check into Fix apparent data integrity violation with closed issues not actually being closed (JRA-34222), in particular Andreas Knecht's comment, summarizing potential race conditions during workflow changes:
Yeah so bulk editing while doing workflow changes definitely has the potential to cause this sort of a problem. JIRA doesn't really lock down a project while doing workflow migrations AFAIK so this kind of thing can always happen if concurrent operations are happening during a migration.
It's a complicated problem with an even more complicated solution. For some reading on an analysis we did ages ago see [not accessible link to 'Concurrency+Problems+in+JIRA']. Also [not accessible link to 'Concurrency+bug']. It's a known problem, but the solution has huge performance implications for JIRA and will take considerable effort to implement and test.
The last comment from a Cisco employee seems to confirm Andreas' summary that this might be a generally applicable issue with JIRA for the time being:
JIRA 5.2.8 we have been having an issue like this for months. I can not view it, but see also: JSP-161469.
Recent investigation has correlated "Tried to reopen the IndexReader, but it threw AlreadyClosedException." message as closely following the execution of Jython scripts. [...]
Possible Remedy/Workaround
While not addressing the root cause, you can see from the screenshot attached by Michael Knight that Atlassian seems to have been able to fix the integrity issue at hand by using the Database Integrity Checker:
This aptly named JIRA feature is useful in a number of situations, e.g.
Before migrating a project to a new workflow [emphasis mine]
An external program is modifying JIRA's database
Troubleshooting a server crash
Using such a tool is obviously not without risks itself, so please note that Atlassian "strongly recommend[s] taking a backup of your data before correcting any data inconsistencies" accordingly.
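Before (or instead of) running the integrity checker, it can help to list the affected issues first. Here is a minimal sketch using JIRA's standard REST search endpoint; the base URL and credentials are placeholders, and the JQL assumes your "done" statuses are named Done or Closed.

```python
# Sketch: list issues whose status looks finished but whose resolution is still empty.
# Assumptions (not from the thread): the JIRA base URL and credentials below are
# placeholders; the REST search endpoint (/rest/api/2/search) is standard in JIRA 6.x.
import requests

JIRA_URL = "https://jira.example.com"   # hypothetical self-hosted instance
AUTH = ("admin", "password")            # hypothetical credentials

JQL = 'status in (Done, Closed) AND resolution = Unresolved ORDER BY updated DESC'

def find_inconsistent_issues():
    """Return (key, status) pairs for issues that look 'done' but are still Unresolved."""
    start_at, results = 0, []
    while True:
        resp = requests.get(
            f"{JIRA_URL}/rest/api/2/search",
            params={"jql": JQL, "fields": "status", "startAt": start_at, "maxResults": 100},
            auth=AUTH,
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        for issue in data["issues"]:
            results.append((issue["key"], issue["fields"]["status"]["name"]))
        start_at += len(data["issues"])
        if start_at >= data["total"] or not data["issues"]:
            break
    return results

if __name__ == "__main__":
    for key, status in find_inconsistent_issues():
        print(f"{key}: status={status}, resolution=Unresolved")
```

The same JQL can also be pasted into the issue navigator if you prefer not to script it.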
Good luck!
Here is a very good Confluence answer that explains why this happens, along with a few workarounds: Confluence Answer to Done issues as Unresolved.

Tips for avoiding second system syndrome [closed]

Lately I have seen our development team getting dangerously close to the 'second system syndrome' type ideas while we are planning the next version of our product. While I'm all for making improvements and undoing some of the mistakes of our past, I would hate to see us stuck in an endless loop of rewriting and never launching anything.
Is there a good design / development method that lends itself to building a better version 2.0 while avoiding second system scenarios?
I have experienced second system syndrome from both sides, as a customer/sponsor and as part of a development team.
A root cause of problems is when the team latches on to a Utopian vision of version 2, such as the desire to make the new software "flexible". In this scenario everything is abstracted to the nth degree. For example, instead of data objects that represent real-world entities, a generic "item" object is created that can represent anything. One flawed idea is that we can build so much flexibility into the software, in anticipation of future needs, that non-programmers will be able to just configure new capabilities. Often a single goal such as "flexibility" overshadows the effort to the point that the resulting software doesn't work.
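To make the "generic item" trap concrete, here is a contrived sketch; the class and field names are invented purely for illustration and are not from the original answer.

```python
# Contrived illustration of the "everything is an item" trap versus a plain domain object.
# The names here are invented for the example; they do not come from the original post.

# The "flexible" version: anything can be represented, so nothing is guaranteed.
class Item:
    def __init__(self, item_type, attributes):
        self.item_type = item_type          # "invoice", "customer", "shipment", ...
        self.attributes = dict(attributes)  # free-form key/value bag

    def get(self, key, default=None):
        return self.attributes.get(key, default)

# The concrete version: the reader (and the runtime) both know what an invoice is.
class Invoice:
    def __init__(self, number, customer_id, amount_cents):
        self.number = number
        self.customer_id = customer_id
        self.amount_cents = amount_cents

    def total_with_tax(self, tax_rate):
        return round(self.amount_cents * (1 + tax_rate))

# With Item, every caller has to re-discover the schema and handle missing keys:
generic = Item("invoice", {"number": "INV-42", "amount_cents": 12500})
amount = generic.get("amount_cents", 0)  # silently 0 if someone mistyped the key

# With Invoice, misuse fails loudly and the behaviour lives next to the data:
concrete = Invoice("INV-42", customer_id=7, amount_cents=12500)
print(concrete.total_with_tax(0.20))
```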
A balanced, practical consideration of usability, performance, scalability, features, maintainability, and flexibility goals can bring the team back to earth. "It would be great if..." should be banned from the team's vocabulary. The Scrum backlog is a good tool, and the team should often be heard saying "Let's backlog that; we don't need it for version 2."
"I would hate to see us stuck in an endless loop of rewriting and never launching anything."
Which is why people use Scrum.
Define a backlog of things to build.
Prioritize, so that things which lead to a release are first. Things which should be fixed are second.
Execute sprints to get to the release. Execute a release sprint.
Get someone who has written at least three systems to lead the project.
Try to focus on short delivery cycles, i.e. force yourself to deliver something tangible and useful to the users every few weeks or months. Prioritise the tasks based on their value to the customer. This way you always have a usable, functional application with satisfied users, while under the hood you can refactor the architecture in small steps if you wish (and if you can justify the need for it - that is, to management / the customers, not just teammates!).
It helps a lot if you have a stable build process with a good suite of automatic (unit / integration) tests.
Agile development methods like Scrum do this, and they come warmly recommended. But of course it is not always easy or even possible to adopt such a method in your team. Even if you can't, you can still take elements of it and apply them to your project's benefit (maybe without even mentioning the words "agile" or "Scrum" publicly ;-)
Make sure you document the requirements as well as possible. While you obviously also need to manage what gets into the requirements to avoid over-designing, having a fixed scope helps prevent developers from running off with ideas or gold-plating what needs to be done, and it helps keep management or clients from introducing scope creep. Define all requirements and how scope changes will be addressed.
I'm all for short development cycles (make sure you're writing tests) and agile methodology, but neither of those is a shield against second syndrome system. In some ways it's easier to keep adding on feature after feature if you're working in short sprints without stopping to look at the overall picture. Use agile practices to build the simplest thing that works, then keep adding your other requirements as simply as possible. Remember YAGNI, because when you build a system a second time, that's when you're most likely to build something you're sure you'll need at some point, something that will make the system "extensible" so there never has to be a third build. It's the best of intentions, but the road to hell and all that.
You can't get close to second system syndrome. Either you're in it, or you're away from it. You'll know when you're in it, but only after wasting a lot of resources.
My main tip: know about it (so basically we have that covered already). It's invaluable to know what a "second system" is, and even more valuable to have some experience with one. I think we all have some experience with that, hopefully benign.
That knowledge will often make you think twice and you'll find a solution without venturing into second-system limbo.
PS: Also know how to use your current system; that includes any documented solutions and other documentation.
Focusing on the system architecture should help, e.g.:
Having documented interfaces which support "loose coupling" between sub-systems
Having documented design decisions (avoid re-hashing previously beaten paths)
Hence, without going for an all out swap, the current system can be "upgraded" with more appropriate interfaces to help future growth.
Another good way to focus: assign a $$$ figure to each feature and prioritize accordingly ;-)
Anyhow, just my 2 cents.
I up-voted S. Lott's answer and would like to add some more suggestions:
Deliver a working product to your customer as frequently as possible. Iterations lasting between a few weeks and 2 months are ideal. Agile methodologies, such as Scrum, lend themselves well to this.
Use FogBugz for feature and bug tracking. Its features are very practical for agile projects. FogBugz allows easy prioritization according to releases and priorities. If your team enters estimated levels of effort for each task, you can also use this to calculate reasonable ship dates (a toy sketch of that kind of calculation follows this list of suggestions).
Prioritize which features you will develop according to the 80/20 rule (20 percent of the features will be used 80 percent of the time) and the most bang for the buck. This helps keep the system as simple as possible, helps prevent feature bloat, and saves development and testing time.
Give similar thought to both new features and bugs when you determine the priority of an issue. One point of the Joel Test is "Do you fix bugs before writing new code?". In most shops this doesn't happen, but do not make debugging the system an afterthought. Also, keep a working copy of the old system to compare against when bugs are found on the new system.
Also factor in the level of effort required to maintain, and if necessary rewrite, existing code. If you have not already done this, give the team some time to code review whole files that they find troublesome to change. If the system's code was difficult to maintain the first time, this will only get worse and cause your team to spend more time on new features down the road.
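As a toy illustration of the ship-date point above (this is not FogBugz's actual evidence-based scheduling algorithm, and every number in it is made up): if you record how far off past estimates were, a few lines of Monte Carlo over the remaining estimates give you a date with a confidence level rather than a single wishful number.

```python
# Toy ship-date estimator: scale each remaining estimate by a randomly chosen
# historical estimate-accuracy ratio (actual / estimated) and sum. This only
# illustrates the idea behind estimate-driven scheduling; it is not FogBugz's
# actual algorithm, and all the numbers below are invented.
import random
from datetime import date, timedelta

remaining_estimates_hours = [8, 3, 16, 5, 12, 2]        # open tasks for version 2.0
historical_ratios = [1.0, 1.4, 0.8, 2.0, 1.1, 1.6]      # actual / estimated, past tasks
hours_per_workday = 6                                    # focused hours per day

def simulate_ship_dates(start, trials=10_000):
    """Return projected ship dates at the 50th and 90th percentile of simulated outcomes."""
    totals = []
    for _ in range(trials):
        hours = sum(est * random.choice(historical_ratios) for est in remaining_estimates_hours)
        totals.append(hours)
    totals.sort()
    def to_date(hours):
        return start + timedelta(days=round(hours / hours_per_workday))
    return to_date(totals[len(totals) // 2]), to_date(totals[int(len(totals) * 0.9)])

if __name__ == "__main__":
    median, p90 = simulate_ship_dates(date.today())
    print(f"50% confidence ship date: {median}")
    print(f"90% confidence ship date: {p90}")
```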
It can never be avoided in its entirety, but being cautious can alleviate the problem.
You should formulate a rule of thumb based on the vital parameters (the scarcest resources) that define the success of the system. For example, reducing the potential number of bugs might directly decrease operational cost (potential service calls to the support center), but this might not be the case in every system. As another example, sparing use of CPU, memory, and other resources might be beneficial in some cases, but there could be environments where they are available in abundance.
So, simply to avoid "temptations", identify the scarcest resources (time, effort, money, etc.) and consider implementing only those features whose score exceeds a threshold value: f(s1, s2, ...) > threshold.
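A minimal sketch of that f(s1, s2, ...) > threshold idea, with invented feature names, scores, and weights purely for illustration:

```python
# Sketch of the f(s1, s2, ...) > threshold idea: score each proposed version-2 feature
# against the scarce resources it consumes and the value it returns, then only keep
# features whose score clears the bar. All numbers below are invented for illustration.

FEATURES = {
    # name: (customer_value, dev_weeks, added_complexity)
    "import from v1 data":   (9, 2, 1),
    "generic plugin engine": (3, 8, 9),
    "faster report export":  (7, 1, 2),
}

THRESHOLD = 1.5  # tune to your own scarcest resource

def score(value, dev_weeks, complexity):
    """Value per unit of the scarce resources (developer time and long-term complexity)."""
    return value / (dev_weeks + complexity)

def plan(features, threshold):
    keep, backlog = [], []
    for name, (value, weeks, complexity) in features.items():
        (keep if score(value, weeks, complexity) > threshold else backlog).append(name)
    return keep, backlog

if __name__ == "__main__":
    keep, backlog = plan(FEATURES, THRESHOLD)
    print("Build in version 2:", keep)
    print("Let's backlog that:", backlog)
```

The point is not this particular scoring function but forcing every "it would be great if..." feature through the same scarce-resource filter.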
Beyond iterative development, I would emphasize how technical debt is handled.
Links that are related to this:
Tech Debts: Effort Vs Time
Tech Debt Quadrant

What advantages does TFS 2010 have over Axosoft OnTime?

I am currently creating a business case for rolling out TFS 2010 as our source control and bug/release management tool.
We currently use OnTime for our bug tracking software and subversion for our SCM.
I was wondering what advantages TFS 2010 has over OnTime?
I have done some thinking so far and would love to hear responses:
TFS 2010 allows linking changesets->work items->builds
TFS 2010 provides greater customisation of workflow than OnTime
TFS 2010 is integrated into the Visual Studio IDE - this requires fewer apps to be open and less window switching
Thanks in advance.
TFS is one of the least intuitive Version Control systems I have ever had the misfortune to have to use. It may have numerous "bullet point" advantages over OnTime (and other comparable systems), in terms of raw feature-lists and capabilities, but the key factor is whether it can fit in with your working processes.
My experience with TFS is that you will be required to adapt to the TFS way of working, because adapting TFS to your way of working will be impossible or too difficult to justify.
We recently reviewed a number of possible alternatives to replace a system comprising SVN and a manual bug-tracking system (Excel spreadsheets). On-Time was evaluated but deemed too expensive and complex.
In the end we opted to continue using SVN, but drastically revised (simplified) our repository structures and chose to combine SVN with the FogBugz issue tracking system. The integration between these two systems was fairly rudimentary "out-of-the-box", but required only a little effort on our part to achieve the much closer level of integration we desired. Certainly far LESS effort than my previous experience of a TFS roll-out involved.
Our SVN/FogBugz system is also now integrated with a FinalBuilder build automation suite.
The result is a system that not only fits our working practices perfectly (since we devised the means by which the systems would integrate to achieve that) but which is also infinitely adaptable as our working practices evolve.
I think that it really depends on the size of your team(s), and what you want out of source control.
I used Bugzilla in combination with Perforce for a couple of years and found that both were really very good at their own individual things while working in a very small team (2-3 people), but they suffered from a lack of integration with each other and from some little idiosyncrasies that took time to get used to.
I recently moved to a new job where TFS is used extensively. There are 4 main teams in this company with 10-12 developers in each, split into further project teams below that level, and it is in this kind of environment that TFS really shines, in my opinion. Its biggest advantages in my view are:
1) The integration with Visual Studio - it's not just a case of having fewer windows open; it really does speed things up and make your life easier. Things like VS automatically checking out files for you as you work (no issues with accidental checkouts due to lockless editing), being able to synchronise local and TFS builds, being able to quickly compare the local version against previous ones... yes, you can get 3rd-party plugins to integrate, but none to this level and with the same stability.
2) The communication features - simple things like integration with Live Messenger (provided you configure TFS correctly) are great for large teams. We use WLM to communicate across the office and for collaboration, as it's just quicker than walking over to someone else every time you need to ask a quick question.
3) Linking builds/changelists to tasks - yes, other SCMs do this, but again it's done in a very nice, integrated fashion... I guess it's nothing special to TFS, but personally I like how it tracks this.
4) Ease of merging/lockless editing. I've had experience with some other merge tools and the TFS one works nicely enough, making merging after concurrent editing pretty simple. It's very similar to Perforce in this respect, but also with a usually pretty effective auto-merge tool, which I use for tiny edits that I know cannot cause any issues with edits other developers are working on.
5) Auto building/build management. Working with a couple of large solutions containing 20-30 projects that depend on each other, this is a godsend. We have it set to queue up a build every 20 minutes IF something has changed, and when one has happened it's listed in the history log... so it's easy to see when you need to update your local libraries.
I don't have any experience with configuring it other than build management, but I have heard that this is the worst part of TFS - that it's a bit of a pain to get everything running correctly.
So, translating that to a business case... I'd say that if you are a Microsoft software house with large or multiple teams, then the time savings and productivity improvements that you will see as a result of the above features are worth the investment in setting it up. It's free to use in most cases, as you will probably have an MSDN subscription (maybe some CAL issues, but I'm not sure), so your biggest cost will be in user training and configuration.
Firstly, I would suggest considering what your primary concern is - what problem are you trying to solve by rolling out TFS?
In terms of version control, I would recommend Martin Fowler's blog post on Version Control Tools and the follow-up results of a version control systems survey. Admittedly this is a subjective view of the subject, but one that seems to be pretty popular. TFS clearly loses in comparison to other version control systems.
I currently work with TFS 2008, and we have migrated from SourceSafe and IBM ClearCase/ClearQuest. There is no doubt that TFS is far better than any of the previous tools; still, it has its serious shortcomings, and the new version will only partially address those.
Addressing the individual points you have raised:
TFS allows you to link builds with changesets and work items, but so do many other systems
I have not used OnTime but the workflow customisation can be both an advantage and a hindrance. Potentially, there might be a lot of work involved in creating a custom process template and you would still need a sensible UI on top of it (Team Explorer or Web Access might not be sufficient)
Integration with Visual Studio is an advantage but there are add-ons to Visual Studio that allow integration with other source control providers
On the advantages of TFS I would probably mention
Distributed builds and separate build agents - if you do many builds
Full integration with Visual Studio via the Team Explorer
Extensive reporting infrastructure (though you can only take full advantage of it when using MSTest for all the testing)
SharePoint collaboration site for each project
Given the substantial cost of rolling out full TFS installation I would really consider what real business benefit would this solution give you that others don't.
Not sure about TFS, but the UI of OnTime is kind of unintuitive.
Also, I don't like that you have different fields for Bugs and Tasks. Of course you can always add your own fields, but the default layout should be ready to use.
We ended up using only "Bugs", even when an item is really a task.
I'm not saying it's a bad product, but if TFS has a better UI for bug tracking now (which it didn't have 4 years ago, when I had to use it and hated it), then that would be an argument for TFS.
Sorry to hear that you want to get rid of SVN. That's a hard decision.
I'm not sure about the licensing for Axosoft OnTime, but if you have an MSDN subscription then TFS comes at no additional cost. See the blog post here.
I've been using TFS 2008 only for version control, and while it's a nice upgrade from VSS, some things that we're trying to do aren't exactly in line with what is expected. That said, I've written a quick little web app that fills in those gaps. It was pretty easy to develop against using the API, and there are lots of add-ons to help with specific tasks.
Probably not the answer you want to hear, but I'd be doing my damnedest to make a business case against TFS.
In any event, my general advice would be to try it out yourself (or in a small team) on some very small, but real project - maybe some tool you need on a once-off basis, code that can be thrown away or easily migrated to another system because it's small. There's nothing like actually using the system!
I have used OnTime and Subversion. I have not used TFS as bug tracker, but I've used it for source control. The source control part of it is basically still the bad old Visual SourceSafe. If you are currently using Subversion you will be swearing your head off any time you need to rename a file or, heaven forbid, delete a file and then create one with the same name - never mind any branching or merging. It's hard to convey in a post just how primitive and fragile it is as a source control system - that's why you really have to use it. You'll see what I mean when you find yourself stuck with a file you can neither check in nor delete and some meaningless error. Not that Subversion is perfect - but it's a decade ahead of VSS!
The workflow part of TFS, which I've only briefly played with, seems very "heavy" to me. That is, it really restricts the user to that workflow and requires a lot of steps that are often unnecessary. This stuff can help, but it can also just as easily get in the way. A good system provides the workflow when it's needed, but allows you to bypass it when it would just get in the way. When we used OnTime, we found that even its relatively unobtrusive workflow was often just more trouble than it was worth. Of course, this all depends on the specifics of your situation. How are you using OnTime workflows now and what do you want out of TFS that OnTime doesn't provide?
Linking changesets to bugs can be done with Subversion as well. It supports an extensibility mechanism - I don't remember the details, but FogBugz uses it (we switched to FogBugz after OnTime). Linking them to builds can be done by adding a simple svn tag step to your build script. Visual Studio integration can be done with VisualSVN.
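For the build-tagging step, a sketch of what that could look like follows. Subversion has no dedicated tag command (a tag is just an svn copy into the tags/ area); the repository URL is a placeholder, and the "BugzID:" commit-message convention shown is the one FogBugz's Subversion integration is commonly configured to parse, so verify it against your own setup.

```python
# Sketch of a build-script step that tags the current trunk revision in Subversion.
# svn has no dedicated "tag" command; a tag is just an "svn copy" into the tags/ area.
# The repository URL and the "BugzID:" message convention are placeholders/assumptions.
import subprocess
import sys

REPO = "https://svn.example.com/myproject"   # hypothetical repository root

def tag_build(build_number, bug_ids=()):
    """Copy trunk to tags/build-<n>; mention bug IDs in the message so the tracker can link it."""
    message = f"Tagging build {build_number}."
    if bug_ids:
        message += " BugzID: " + ", ".join(str(b) for b in bug_ids)
    cmd = [
        "svn", "copy",
        f"{REPO}/trunk",
        f"{REPO}/tags/build-{build_number}",
        "-m", message,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"svn copy failed: {result.stderr.strip()}")
    print(f"Created tag tags/build-{build_number}")

if __name__ == "__main__":
    # e.g. called from the build script as: python tag_build.py 417 1023 1031
    tag_build(sys.argv[1], bug_ids=sys.argv[2:])
```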
The cost is also a huge downside of TFS. It is very expensive for what it does, especially when you take into account how well it does it. Yes, it's "free" if you have to have an MSDN subscription for every developer anyway - but do you have to, without TFS? Subversion is free, full stop. OnTime and FogBugz are far more reasonably priced.
I would strongly recommend against TFS. I once tried to restore the source code from a crashed instance, but I gave up after a few days, so the source code was lost (i.e. it failed to do the one thing a VCS should do). Of course, I might have done something wrong, but it's not easy to get everything right when the restore guide is two miles long, and a restore is something that should happen so rarely that nobody is experienced with it.
Now I use Subversion/Trac, which gets the job done (and customizing the workflow in Trac is so easy it's not fun, compared to TFS).
For the time being, avoid TFS!
I would stick with SVN + FinalBuilder and then choose between FogBugz or CounterSoft Gemini.

How can I figure out which programming methodology (if any) we're using?

My group is moving to Team Foundation Server soon. Actually, I'm heading up the effort.
One of the things you get to decide is which methodology you're using - Agile, CMMI, etc.
Thing is - I have no idea what methodology we use. By which I mean, we're not actively using one. And I'm not familiar enough with Agile or other methods to know which, if any, happen to apply to the way we're working.
Is there some default methodology? As in, if we go through some very blunt process (get requirements, code, test, push to QA, have QA test, push to production) is there even a name for it?
And as a bonus, with TFS, what is the penalty for picking the wrong one at the outset? How hard is it to switch gears later if we decide to go Agile or something?
There's no major penalty for switching methodologies - you just pick a default one when you install, and you can choose the one you'll use for any given project. In fact, it only affects how TFS configures the SharePoint project page initially - you can add whatever you want to your page once it's created, so if you decide to change a project's methodology, it's not difficult to do.
For the two templates that TFS gives you out of the box (Agile and CMMI/waterfall), it's really a question of your process - do you release "early and often", with smaller package releases as bugs come in, or do you run your projects in large iterations, releasing much less frequently but with obvious milestone releases?
A question to ask (not exactly accurate, but it always helps me): does the product have version numbers that will be meaningful to the end users? For example, many websites are Agile, as they're constantly releasing improvements and patches and don't often have huge overhauls, whereas a product like MS Office has a meaningful version number (2003, 2007, etc.), which is more likely CMMI.
If you don't have a stated methodology, it's a great time to develop one - decide which release cycle makes sense to you, create a project in each template, and review what TFS sets up for you automatically - do the progress indicators and SharePoint pages make sense? Is there anything obvious missing?
If you can't discern a methodology, then you are using an ad-hoc methodology. It may be similar to an existing methodology (by accident). Note however that following a methodology is not the same as being successful. I have seen plenty of methodology heavy projects that failed, and plenty of "seat of the pants" projects be resounding successes (if perhaps in need of a bit of refactoring when the dust settled).
Changing methodologies depends on your culture more than anything. Institutions tend to resist change, and so do some individuals. However, it is again situation dependent: if the existing situation is obviously broken, an institution can sometimes make snap changes that surprise everyone.
Some methodologies are "heavier" than others: those are harder to change to or from. Even Test Driven Development is "heavy" in that adopting it after the fact will mean adding a lot of tests to old code. Most real world transitions simply add the testing as files are edited for other reasons. Likewise, moving from TDD to some waterfall style would require a lot of code to be documented in large disused binders.
The most basic method tends to be the sequential "waterfall" method, because you just go from step to step to step. It doesn't seem to be very popular anymore, though.
