Where do you record Technical Debt in TFS?

I'd like to find a way to record the Technical Debt we incur in TFS.
I need to record each item outside of a specific iteration to ensure that it is visible and easily reported all the time. I've considered creating a separate Area for technical debt, but am unsure how well-suited that field actually is.
What are some common approaches that I might consider? Am I even barking up the right tree by trying to find the right place to put this?

I haven't found a need to track it separately; I just enter it as additional tasks. That way, they can be easily tracked and reported.

I find that there are two kinds of technical debt: debt you know about and can track until fixed, and debt that becomes apparent as the result of an unexpected bug. I like to track the outstanding known technical debt in a separate Iteration I call 'Maintenance Backlog', under the area 'Technical Debt'. I can then link relevant bugs from ANY iteration to the Technical Debt area, while still tracking issues I can't resolve yet. The key is that bugs still need to be associated with the iteration in which they are found and fixed, and linked to the originating requirements, for reporting purposes etc.

For what it's worth, I'm adding my 2¢ with a contemporary spin — Best practice for capturing Tech Debt work items in the backlog in Azure DevOps (the successor of TFS).
1. Use tags
Marking such tickets with a tech debt tag is always handy: it signals work whose value to the customer is implicit, and it lets you, for example, balance out the percentage of such tasks in a sprint.
2. Avoid tech debt features
There are 3 reasons to avoid tech debt features:
Tracking purposes. An epic or feature needs a well-defined goal to achieve, so it can be crossed off the list at some point to reflect progress. Refactoring and tech-debt-related tasks are never-ending, so bringing them under a feature would make it pointless for tracking progress.
Violation of the single-parent rule for tickets. It's convenient to practice a tree-like approach with a single parent feature for tickets and a single parent epic for features. There might be exceptions, but they should be rare. Having multiple parents on tickets would complicate tracking progress.
Tech debt tasks may belong to a "real" feature.
If a subset of tech debt tasks contributes to completing an ongoing or future feature/epic faster, then keeping those tickets under that feature/epic makes sense. Combine it with a corresponding tag just in case. Of course, if you later run out of time you may drop those tech debt tasks.
3. Tasks without a feature are no crime
Tasks don't always need a feature if you can manage them like this. Azure DevOps provides plenty of tools (e.g. work item queries) to find and manage what you want.
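For what it's worth, here is a minimal sketch of that querying idea: pulling every work item carrying a (hypothetical) tech-debt tag through the Azure DevOps WIQL REST endpoint. The organization, project, PAT and tag name below are placeholders, not anything prescribed by Azure DevOps itself.

```python
# Minimal sketch: list work items tagged "tech-debt" via the Azure DevOps WIQL REST API.
# ORG, PROJECT, PAT and the tag name are placeholders/assumptions; adjust to your setup.
import base64
import requests

ORG = "your-org"
PROJECT = "your-project"
PAT = "your-personal-access-token"

wiql = {
    "query": (
        "SELECT [System.Id], [System.Title], [System.State] "
        "FROM WorkItems "
        "WHERE [System.TeamProject] = @project "
        "AND [System.Tags] CONTAINS 'tech-debt' "
        "ORDER BY [System.ChangedDate] DESC"
    )
}

# PAT auth: basic auth with an empty user name and the token as the password.
auth = base64.b64encode(f":{PAT}".encode()).decode()
resp = requests.post(
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/wiql?api-version=7.0",
    json=wiql,
    headers={"Authorization": f"Basic {auth}"},
    timeout=30,
)
resp.raise_for_status()

# The WIQL response returns work item references (id + URL); print them as a quick inventory.
for item in resp.json()["workItems"]:
    print(item["id"], item["url"])
```

The same WIQL can of course live as a saved shared query behind a dashboard widget instead of a script; the point is just that a tag plus a query gives you an always-visible tech-debt inventory without touching the area or iteration structure.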
Do what makes sense for the team and your project rather than "ticking a box".

Related

Continuous Delivery / Deployment and Process Controls

From a "process controls" POV, in a continuous delivery/deployment context, how important is it to mandate that source control commits are associated with an Agile PM (or ticketing tool) "work-item?" Here "work-item" means any of: user story, task, defect, bug, etc.
The end-goal is to ensure that developers are not placing new features into production that were not derived from the product owner. Obviously code reviews are a critical part of a proper process-controls story, but having a code review presumes the reviewer can look at the associated statement of work (e.g. user story) to ensure the code changes reflect the requested work.
Herein lies the issue.
Context
I've always assumed to have a workflow where work-tickets are associated with commits, such as with Jira, but now I'm working with a corporation whose PM tool is incapable of associating work-items or defect-tickets with source commits.
With this client, I'm also seeing a catch-22. First, I'm told by representatives of the PMO that such ticket-to-commit associations are not needed. Second, the engineering org paid for an outside consultancy to audit and flag major process flaws. The #1 flaw identified was management's inability to know whether developer commits have any bearing on authorized work.
From my POV, I think the PMO needs to realize that they are the "tail wagging the dog" and that they need to embrace tooling changes or special integrations to overcome this problem (not to mention more maturity with Agile philosophy).
However, perhaps I'm the one who simply is over-concerned about the ticket-to-commit associations, and perhaps there is another way to achieve effective process controls without that particular mechanism?
When dealing with regulated industries, such as health care or governing bodies, full traceability from scope to code is a requirement. I once had to perform an audit to validate that every line of code correlates to a line in the SRS for FDA approval, although generally it's enough to demonstrate that there exists a method of traceability (such as a branch in github that is named to match a task / story in JIRA and code integration is enabled).
If you're not in a regulated industry, requirement-to-code traceability is not a requirement... but it is still immensely helpful. The advantages include, and are not limited to:
Full transparency to everyone on the team, tech or not. The amount of confidence that this evokes is amazing, as is the amount of chatter it eliminates.
Reports to identify which themes in the requirements are causing the greatest amount of code churn, because there's a heavy cost to that.
Identifying features affected by a PR. This is immensely helpful when a release is planned, some aspects of the release are unattainable or buggy, a lot of the code has already been merged, and the team needs to isolate what to release and what not to.
Confirmation of an opinionated truth by removing the opinion: "I'm sure I did it... let's double check... yup! (or oops, let's rectify that!)". This helps deter CYA behavior, which is a drain on morale and negatively influences efficiency.
Simple implementation with existing mainstream toolsets (JIRA, Trello, Asana, Freshdesk for tickets... Github, Bitbucket for repo and tickets... Zapier, IFTTT for integrations across systems that lack built-in integrations)
For every team I have ever managed or established (as dev manager, PMO, product manager, consultant or founder), it has been my explicit expectation that every line of code can be traced to the requirement for the reasons listed above. I advocate implementing this using the branch-per-topic pattern in git (Github or Bitbucket), where the branch is prefixed by the JIRA task/story/bug (e.g. XYZ-2443-fix-that-bug) so that JIRA's integration automatically displays a link of the branch to the issue.
Of course, this is not the only way, but it is my preferred process at this time and is meant to illustrate a concrete example.
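To make the branch-prefix convention a bit more than a verbal agreement, here is an illustrative local pre-push hook (not the tooling described above, just a sketch; the issue-key pattern and exempt branch names are assumptions modelled on the XYZ-2443 example):

```python
#!/usr/bin/env python3
# Illustrative pre-push hook: refuse to push from a branch whose name lacks an
# issue-key prefix such as XYZ-2443, so every branch traces back to a ticket.
# The key pattern and the exempt branches are assumptions; adapt to your tracker.
import re
import subprocess
import sys

EXEMPT_BRANCHES = {"main", "master", "develop"}
ISSUE_KEY = re.compile(r"^[A-Z][A-Z0-9]+-\d+")  # e.g. XYZ-2443-fix-that-bug

branch = subprocess.run(
    ["git", "rev-parse", "--abbrev-ref", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()

if branch not in EXEMPT_BRANCHES and not ISSUE_KEY.match(branch):
    sys.exit(
        f"Branch '{branch}' has no issue-key prefix "
        "(expected something like XYZ-2443-short-description)."
    )
```

Saved as .git/hooks/pre-push and marked executable, this stops untraceable branches before they leave the developer's machine; most hosting products offer server-side equivalents, but those settings vary by platform.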

Understanding Agile [closed]

I recently shifted to a new organization that follows an Agile mode of development. The current project we are working on has stalled due to many requirement changes that were reported recently. Since this is my first Agile assignment (after working 4 years in a non-agile environment), it's a bit hard to tell where the problem really lies.
Ruby on Rails is the platform that is being used for development. Since I can't ask a vague question, I will narrow it to this.
In agile, is it ok for the business team to relax and give requirements at will? (Some requirements given during the final sprints were altering the entire design of our app)
Or is it the development team's mistake for not foreseeing the numerous possibilities of the application and not having a concrete design that could have welcomed abnormal changes?
In agile, is it ok for the business team to relax and give requirements at will? (Some requirements given during the final sprints were altering the entire design of our app)
It is OK (albeit not wise ;-))... so an agile development team would then tell them: "Fine guys, this will cost this much extra time and consequently this much schedule slippage."
If they are willing to pay the price, all's well.
If they decide that maybe the new feature is not that urgent after all, all is well too.
If they insist on including the new requirements and keeping the original schedule, that project is not agile :-(
Or is it the development team's mistake for not foreseeing the numerous possibilities of the application and not having a concrete design that could have welcomed abnormal changes?
I don't think the design should be ready to welcome any kind of changes and any new features - that would only lead to bloatware, and a lot of extra work which in the end proves useless.
Agile projects should have some sort of roadmap too, so that the developers have at least a rough idea where the product is supposed to be in a year's time etc. This would allow them to plan forward and extend the design to make room for probable future changes.
If business doesn't give timely information about the roadmap (or if it proves unreliable), that is (usually - barring really unforeseen market/environment changes) business' fault. If the team did not use that info wisely, it's their fault.
Agile doesn't mean you don't have requirements or specifications, or that you can be free and easy with them. The business leads need to know what they want. The benefit of being agile is that if you do need to change paths, you can do so more easily, so you can adapt quickly.
Requirements changing will be painful no matter what your development methodology. There still has to be a strong clear plan, at least at that point in time.
Agile projects are supposed to have requirements gathering, design, analysis, coding, testing; just like the waterfall development model. However, in an agile project, the phases are supposed to cover less ground and therefore, happen faster.
I agree with Péter Török's answer, but it's also the responsibility of the agile team or the agile team manager to teach the business team that the project will deliver better results each round if they postpone new requirements until the next requirements phase. Since a round is supposed to be 30 - 90 days, most new requirements can wait. The few requirements that can't wait need to have a time and schedule cost.
Opinion of a Scrum Master here. It sounds like the business is lacking a clear knowledge of what 'agile' is, or how to implement it. Agile needs to be applied with structure, whether that be Scrum or Kanban or something similar. Too often it is a synonym for 'we don't design or document things'.
If you are meaning a team using Scrum, this would be manageable as long as the Product Owner and Scrum Master are on top of their game.
If the business team are giving requirements 'at will' that do not align, it is up to the Product Owner to take these on board, and prioritise a list of tasks prior to sprint planning. If they don't align with what has already been done, they may be estimated as large stories, due to the need for refactoring etc. by the team. It will then be up to the Product Owner to decide if it is worth proceeding with them despite the size, or working on alternative stories that align with work already done.
This would be a tough environment for the whole Scrum team to work in, but you would expect the business to see the value lost by changing direction between sprints, and hopefully align the product to a direction before too long. It certainly is not the development team's fault for not anticipating this; rather, the Scrum Master and Product Owner need to have some serious words with the business units involved.
There needs to be a buffer between the sprint and the backlog.
The Business Team own the backlog and the prioritisation of the stories in the backlog, but once the Dev team have committed to x number of stories in the sprint, it is unwise for the business team to tinker with its content. If you find this happens, I would suggest shortening the length of the sprints.
If however a super urgent new requirement comes up that just cannot wait for the next sprint then another story/stories of similar points value have to be removed.
It is important that the business team manage their backlog so that in the sprint planning meeting the next set of stories are fully specced, prioritised and ready to go.
In agile, is it ok for the business team to relax and give requirements at will?
Yes, it's OK to change requirements (maybe not relax), but in our teams we will not disrupt or change the scope of the sprint unless the work currently in scope of the sprint is made redundant as a result of the new requirements added to the product backlog by the product owner (not the sprint backlog). Also note that if you commit to 100 points' worth of work in a sprint, complete 80 points, and then the requirements change enough for the sprint to be disrupted, the team has still delivered 80 points that sprint based on the PO's original requirements (important when dealing with external clients).
Is it the development team's mistake for not foreseeing the numerous possibilities of the application and not having a concrete design that could have welcomed abnormal changes?
We have a very high-level understanding of the overall features/project being worked on; however, before we accept User Stories (broken-down chunks of the feature/project) we ensure that we fully understand what the Product Owner wants. If the Product Owner is unsure, then we will ask the person who does know. Note I am not saying that you plan out the whole project; you only nail down 1 or 2 sprints' worth of user stories.
(This is how I do it, but there are no prescriptive rules; every added rule makes Agile less agile. My opinion.)
No, it is not ok to give requirements at will.
There is a general notion that business and development work together on a daily basis, and do so not primarily in written form but also face to face (often to review or plan things). The idea is not to make too many assumptions about the future, but to make some and go with those.
Having done this as a coder, the only advice I can give is: set up for change. As you learn about your framework, the product side learns about the business.
If you run into problems like these, it's important that you actually fix them. This is what you have sprints for: fail at something, then fix it after a retro. Doing this for a while should lead to a sane, organized process for getting all the requirements at the right moment.
However, proper engineering has never hurt anyone, and neither have proper security and requirements engineering. If you need to do this despite your agile process, so be it.

Software development metrics and reporting [closed]

I've had some interesting conversations recently about software development metrics, in particular how they can be used in a reasonably large organisation to help development teams work better. I know there have been Stack Overflow questions about which metrics are good to use - like this one, but my question is more about which metrics are useful to which stakeholders, and at what level of aggregation.
As an example, my view is that code coverage is a useful metric in the following ways (and maybe others):
For a team's own internal use when combined with other measurements.
For facilitating/enabling/mentoring teams, where it might be instructive when considered on a team-by-team basis as a trend (e.g. if team A and B have coverage this month of 75 and 50, I'd be more concerned with team A than B if the previous month they'd had 80 and 40).
For senior management when presented as an aggregated statistic across a number of teams or a whole department.
But I don't think it's useful for senior management to see this on a team-by-team basis, as this encourages artificial attempts to bolster coverage with tests that merely exercise, rather than test, code.
I'm in an organisation with a couple of levels in its management hierarchy, but where the vast majority of managers are technically minded and able (with many still getting their hands dirty). Some of the development teams are leading the way in driving towards agile development practices, but others lag, and there is now a serious mandate from the top for this to be the way the organisation works. A couple of us are starting a programme to encourage this. In this sort of an organisation, what sort of metrics do you think are useful, to whom, why, and at what level of aggregation?
I don't want people to feel their performance is being assessed based on a metric that they can artificially influence; at the same time, the senior management are going to want some sort of evidence that progress is being made. What advice or caveats can you provide based on experience in your own organisations?
EDIT
We are definitely wanting to use metrics as a tool for organisational improvement not as a tool for individual performance measurement.
A tale from personal experience. Apologies for the length.
A few years ago our development group tried setting "proper" measurable objectives for individuals and team leaders. The experiment lasted for just one year, because hard metrics didn't really work very well for individual objectives (see my question on the subject for some links and further discussion).
Note that I was a team leader, and involved in planning it all with my technical boss and the other team leaders, so the objectives weren't something dictated from on high by clueless upper management -- at the time we really wanted them to work. It is also worth noting that the bonus structure inadvertently encouraged competition between developers. Here are my observations on the things we tried.
Customer-visible issues
In our case, we counted outages on the service we provided to customers. In a shrink-wrapped product it might be the number of bugs reported by customers.
Advantages: This was the only real measure that was visible to upper management. It was also the most objective, being measured outside the development group.
Disadvantages: There weren't that many outages -- just around one per developer for the whole year -- which meant that failing or exceeding the objective was a matter of "pinning blame" for the few outages that did occur in each team. This led to bad feeling and loss of morale.
Amount of work completed
Advantages: This was the only positive measure. Everything else was "we notice when bad things happen," which was demoralising. Its inclusion was also necessary because, without it, a developer who did nothing all year would exceed all the other objectives, which clearly wouldn't be in the interests of the company. Measuring the amount of work completed checked the natural optimism of developers when estimating task size, which was useful.
Disadvantages: The measure of "work completed" was based on estimates provided by the developers themselves (usually a good thing), but making it part of their objectives encouraged gaming of the system to inflate estimates. We had no other viable measure of work completed: I think the only possible valuable way of measuring productivity is "impact on the company bottom line," but most developers are so far removed from direct sales that this is rarely practical at an individual level.
Defects found in new production code
We measured defects introduced into new production code during the year, as it was felt that bugs from previous years should not count against any individual in this year's objectives. Defects spotted by internal quality teams were included in the count even if they didn't impact customers.
Advantages: Surprisingly few. The time lag between the introduction of a defect and its discovery meant that there was really no immediate feedback mechanism to improve code quality. Macro trends at a team level were more useful.
Disadvantages: There was a heavy focus on the negative, since this objective was only invoked when a defect was found and we needed someone to blame for it. Developers were reluctant to record defects they found themselves, and a simple count meant that minor bugs were as bad as severe problems. Since the number of defects per individual was still quite low, the number of minor and severe defects didn't even out as it might with a larger sample. Old defects were not included, so the group's reputation for code quality (based on all bugs found) did not always match the measurable introduced-this-year count.
Timeliness of project delivery
We measured timeliness as the percentage of work delivered to internal QA teams by the stated deadline.
Advantages: Unlike counting defects, this was a measure that was under immediate, direct control of the developers, as they effectively decided when the work was complete. The presence of the objective focused the mind on completing tasks. This helped the team commit to realistic amounts of work, and improved the perception by internal customers of the development group's ability to deliver on promises.
Disadvantages: As the only objective directly under the developers' control, it was maximised at the expense of code quality: on the day of a deadline, given the choice between saying a task is complete or doing further testing to improve confidence in its quality, the developer would choose to mark it complete and hope any resulting bugs never come to the surface.
Complaints from internal customers
To gauge how well developers communicated with internal customers during development and subsequent support of their software, we decided that the number of complaints received about each individual would be recorded. The complaints would be validated by the manager, to avoid any possible vindictiveness.
Advantages: Really nothing I can recall. Measured at a sufficiently large group level it becomes a more useful "customer satisfaction" score.
Disadvantages: Not only highly negative, but also a subjective measure. As with other objectives, the numbers for each individual were around the zero mark, which meant that a single comment about someone could mean the difference between "infinitely exceeded" and "did not meet".
General comments
Bureaucracy: While our task management tools held much of the data for these metrics, there was still quite a lot of manual effort involved to collate it all. The time spent obtaining all the numbers was not enjoyable, generally focused on negative aspects of our work and may not even have been reclaimed by increased productivity.
Morale: For the measures where individuals were blamed for problems, not only did those with "bad" scores feel demotivated, but so did those with "good" scores, as they didn't like the loss in team morale and sometimes felt they were ranked higher not because they were better but because they were luckier.
Summary
So what did we learn from the episode? In later years we tried to re-use some of the ideas but in a "softer" way, where there was less emphasis on individual blame and more on team improvement.
It is impossible to define objectives for individual developers that are objectively measurable, add value to the company and cannot be gamed, so don't bother to try.
Customer issues and defects can be counted at a wider team level, if the location of the defect is unequivocally the responsibility of that team -- that is, you don't ever have to play the "blame game".
Once you measure defects only at the level of responsibility for a code module, you can (and should) measure old bugs as well as new ones, since it is in that group's interest to eliminate all defects.
Measuring defect counts at a group level increases the sample size per group, and so anomalies between minor and severe defects are smoothed out and a simple "number of bugs" measure can mean something, such as to see if you are improving month-on-month.
Include something that upper management care about, because keeping them happy is your primary purpose as a development group. In our case it was customer-visible outages, so even if the measure is sometimes arbitrary or seemingly unfair, if it's what the bosses are measuring then you need to take notice too.
Upper management don't need to see metrics they don't have in their own objectives. That way you avoid the temptation to blame individuals for errors.
Measuring timeliness of project delivery did change developer behaviour and put a focus on completing tasks. It improved estimation and allowed the group to make realistic promises. If it were easy to collect the timeliness information then I would consider using it again at a team level to measure improvement over time.
All of this doesn't help when you are required to set measurable objectives for individual developers, but hopefully the ideas will be more useful for team improvement.
The key thing about metrics is knowing what you are using them for. Are you using them as a tool for improvement, a tool for reward, a tool for punishment, etc. It sounds like you're planning to use them as a tool for improvement.
The number one principle when setting metrics is to keep the information relevant so that the person receiving it can use it to make a decision. Most likely a senior manager cannot dictate the micro level of whether you need more tests, less complexity, etc. But a team leader can do that.
Therefore, I don't believe a measure of code coverage is going to be useful to management beyond the individual team. At the macro level, the organisation is probably interested in:
Cost of delivery
Timeliness of delivery
Scope of delivery & external quality
Internal quality won't be high on their list of things to cover off. It's a development team's mission to make it clear that internal quality (maintainability, test coverage, self-documenting code, etc) is a key factor in achieving the other three.
Therefore you should target metrics to more senior managers which cover off those three such as:
Overall Velocity (note that comparing velocity between teams is often artificial)
Expected vs Actual scope delivered to agreed timelines
Number of production defects (possibly per capita)
And measure things like code coverage, code complexity, cut 'n' paste score (code repetition using flay or similar), method length, etc at a team level where the recipients of the information can really make a difference.
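To make that split concrete, here is a small illustrative sketch; the team names and numbers are invented (echoing the 80-to-75 and 40-to-50 example from the question), and a real report would likely weight the aggregate by code size:

```python
# Illustrative sketch: coverage as a per-team trend for team leads, and a single
# aggregated figure for senior management. Team names and figures are invented.
coverage = {
    # team: (last month %, this month %)
    "Team A": (80.0, 75.0),
    "Team B": (40.0, 50.0),
}

print("Team-level view (for team leads and coaches):")
for team, (prev, curr) in coverage.items():
    print(f"  {team}: {curr:.0f}% ({curr - prev:+.0f} points month-on-month)")

# Senior management only sees the department-wide aggregate, not team-by-team figures.
department_average = sum(curr for _, curr in coverage.values()) / len(coverage)
print(f"Department aggregate (for senior management): {department_average:.0f}%")
```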
A metric is a way of answering a question about a project, team or company. Before you start looking for the answers, you need to decide what questions you want to ask.
Typical questions include:
what is the quality of our code?
is the quality improving or degrading over time?
how productive is the team? Is it improving or degrading?
how effective is our testing?
...and so on.
Each question will require a different set of metrics to answer. Collecting metrics without knowing what questions you want answered is at best a waste of time and at worst counterproductive.
You also need to be aware that there is an 'uncertainty principle' at work - unless you are very careful the act of collecting metrics will change people's behaviour, often in unexpected and sometimes detrimental ways. This is especially so if people believe they are being evaluated on the metrics, or worse still have the metrics tied to some reward or punishment scheme.
I recommend reading Gerald Weinberg's Quality Software Management Vol 2: First Order Measurement. He goes into a lot of detail on software metrics, but says the most important are often what he calls "Zero Order Measurement" - asking people their opinion on how a project is going. All four volumes in the series are expensive and hard to get hold of, but well worth it.
Software writing
What must be optimised?
CPU(s) use, memory(s) use, memory cache(s) use, user time use, code size at run-time, data size at run-time, graphics performance, file access performance, network access performance, bandwidth use, code conciseness and readability, electricity use, (count of) distinct API calls used, (count of) distinct methods and algorithms used, maybe more.
How much must it be optimised?
It must be optimised the minimum reasonable amount (except in areas where surpassing acceptance test criteria is desirable) required to pass acceptance tests, facilitate maintenance, facilitate audit and meet user requirements.
("... for legal/illegal input test data and legal/illegal test events in all test states at all required test data volumes and test request volumes for all current and future test integration scenarios.")
Why the minimum reasonable amount?
Because optimised code is harder to write and so costs more.
What leadership is required?
Coding standards, basic structure, acceptance criteria and guidance on levels of optimisation required.
How can success of software writing be measured?
Cost
Time
Acceptance test passes
Extent to which acceptance tests it is desirable to surpass are surpassed
User approval
Ease of maintenance
Ease of audit
Degree of absence of over-optimisation
What cost/time should be ignored in assessing aggregate performance of programmers?
Wasted cost/time incurred because of requirements (inc architecture) changes
Extra cost/time incurred because of deficiencies in platforms/tools
But this cost/time should be included in assessing aggregate performance of teams (inc architects, managers).
How can success of architects be measured?
Other measures plus:
Instances of "avoiding early" being affected by deficiencies in platforms/tools
Degree of absence of changes in architecture
As I said in What is the fascination with code metrics?, metrics include:
different populations, meaning the scope of interest is not the same for developers as for managers
trends, meaning any metric by itself is meaningless without its associated trend, which informs the decision to act on it or to ignore it.
We are using a tool able to provide:
lots of micro-level metrics (interesting for developers), with trends.
lots of rules with multi-level (UI, Data, Code) static analysis capabilities
lots of aggregation rules (meaning that vast number of metrics is condensed into several domains of interest, suitable for higher-level audiences)
The result is an analysis which can be drilled down, from high-level aggregation domains (security, architecture, practices, documentation, ...) all the way down to individual lines of code.
The current feedback is:
project managers can get defensive very quickly when some rules are not respected and their global score drops significantly.
Each study has to be re-tailored to respect each project's quirks.
The benefit is the definition of a contract where exceptions are acknowledged but the rules to be respected are defined.
higher levels (IT department, stakeholders) use the global scores just as one element of their evaluation of the progress made.
They will actually look more closely at other elements based on delivery cycles: how often are we able to iterate and put an application into production? How many errors did we have to solve before that release (in terms of merges, or in terms of a pre-production environment not correctly set up)? What immediate feedback is generated by a new release of an application?
So:
which metrics are useful to which stakeholders, and at what level of aggregation
At high level:
the (static analysis) metrics are actually the result of low-level metric aggregations, and organized by domains.
Other metrics (more "operational-oriented", based on the release cycle of the application, and not just on the static analysis of the code) are taken into account
The actual ROI is achieved through other actions (like six-sigma studies)
At lower level:
the static analysis is enough (but has to encompass multi-tier applications, sometimes with multi-language development)
the actions are driven by the trends and their importance
the study has to be approved/supported by all levels of hierarchy to be accepted/acted upon (in particular, budget for the ensuing refactoring has to be validated)
If you have some Lean background/knowledge, then I would suggest the system that Mary Poppendieck recommends (that I've already mentioned in this previous answer). This system is based on three holistic measurements that must be taken as a package:
Cycle time
From product concept to first release or
From feature request to feature deployment or
From bug detection to resolution
Business Case Realization (without this, everything else is irrelevant)
P&L or
ROI or
Goal of investment
Customer Satisfaction
e.g. Net Promoter Score
The aggregation level is the product/project level, and I believe that these metrics are helpful for everybody (developers should never forget that they don't write code for fun; they write code to create value, and should always keep that in mind).
Teams may (and actually do) use technical metrics to measure conformance to quality standards, which is integrated into the Definition of Done (as "no increase of the technical debt"). But high quality is not an end in itself; it's just a means to achieve a short cycle time (to be a fast company), which is the real target (along with Business Case Realization and Customer Satisfaction).
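For illustration only, cycle time in the first sense (feature request to feature deployment) is cheap to compute once you have the two timestamps per item; the dates and field names below are invented:

```python
# Illustrative sketch: compute cycle time (feature request to feature deployment).
# The field names and dates are invented for the example.
from datetime import date
from statistics import median

features = [
    {"requested": date(2024, 1, 3),  "deployed": date(2024, 1, 19)},
    {"requested": date(2024, 1, 10), "deployed": date(2024, 2, 2)},
    {"requested": date(2024, 1, 15), "deployed": date(2024, 1, 29)},
]

cycle_times = sorted((f["deployed"] - f["requested"]).days for f in features)

print(f"Median cycle time:  {median(cycle_times)} days")
# The worst case is often more actionable than the average when the goal is
# to become a faster company, as described above.
print(f"Longest cycle time: {max(cycle_times)} days")
```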
This is a bit of a side note to the main question, but I had a very similar experience to Paul Stephenson's answer above. One thing I would add to that is about the collection of data and visibility of metrics.
In our case, the development director was meant to collate a bunch of data from various disparate systems and distribute individual metric results once a month. This often didn't happen, as it was a time consuming job and he was a busy man.
The results of this were:
Unhappy developers, as performance bonuses were based on metrics and people didn't know how they were getting on.
Time-consuming multiple entry of data into various different systems.
If you are going down this route, you need to be sure that all metric data can be collated automatically and is easily visible to those it affects.
One of the interesting approaches that's currently getting some hype is Kanban. It's fairly Agile. What's particularly interesting is that it permits a metric of "work done" to be applied. I haven't used/encountered this in actual practice yet, but I'd like to work towards getting a kanban-ish flow going at my job.
Interestingly, I just finished reading Peopleware, and the authors strongly discourage making individual metrics visible to superiors (even direct managers), but say that aggregate metrics should be very visible.
As far as code specific metrics I think it's good for a team to know the state of the code at the current time, and to know the trends affecting the code as it matures and grows.
The question is obviously not focussed on .NET, but I think the .NET product NDepend has done a lot of work to define and document common metrics that are useful.
The documentation section on metrics is educational reading, even if you're not doing .NET.
Software metrics have been with us for a long time, and as best I can tell nothing to date has emerged, individually or in aggregate, that is capable of guiding projects during development. The nut of the problem is that we want to use objective measures, and these can only measure what has happened, not what is happening or about to happen. By the time we have measured, analyzed and interpreted some series of metrics, we are reacting to things that have already gone wrong, or very occasionally, gone right. I don't want to underplay the importance of learning from objective metrics, but I do want to point out that this is a reactive, not a proactive, response.
Developing a "confidence index" may be a better way of monitoring whether a project is on track or headed for trouble. Try developing a voting system where a reasonable number of representatives from each project area of interest are asked to anonymously vote their confidence from time to time. Confidence is voted in two areas: 1) things are on track; 2) things will continue to be on track, or will get back on track. These are purely subjective measurements from the people closest to the "action".
Feed the results into a Kanban-type chart where the columns represent the voting areas and you should have a pretty good idea where to focus your attention. Use question 1 to evaluate whether management reacted to the previous voting cycle appropriately. Use question 2 to identify where management should focus next.
This idea is based on each of us having a comfort level within our own area of responsibility. Our confidence level is a product of experience, knowledge within our domain of expertise, the number and severity of problems we are facing, the amount of time we have to accomplish our tasks, the quality of the information we are working with and a whole bunch of other factors.
MBWA (Management By Walking Around) is often touted as one of the most effective tools we have - this is a variation of it. This technique is not much use at the level of individual teams because it only reflects the general mood of the team. Kind of like using someone's watch to tell them the time. However, at higher levels of management it should be quite informative.
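As a purely illustrative sketch of the tallying (the area names, votes and 1-5 scale are my own assumptions, not part of the suggestion above):

```python
# Illustrative sketch: tally anonymous confidence votes (1 = low, 5 = high) per
# project area for the two questions described above. All data here is invented.
from statistics import mean

votes = {
    # area: ([Q1: things are on track], [Q2: things will stay / get back on track])
    "Requirements": ([4, 3, 4], [3, 3, 2]),
    "Development":  ([2, 3, 2], [4, 4, 3]),
    "Testing":      ([3, 3, 3], [3, 2, 3]),
}

print(f"{'Area':<14}{'On track':>10}{'Will stay on track':>20}")
for area, (q1, q2) in votes.items():
    print(f"{area:<14}{mean(q1):>10.1f}{mean(q2):>20.1f}")

# Low Q1 averages suggest the previous cycle's corrective actions did not land;
# low Q2 averages point at where management should focus next.
```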

Best practices for customer involvement in Agile development? [closed]

We need to involve our customer development partners in our development process. We're more or less following Agile methodologies. Some customer partners are remote, others closer. We need to minimize travel costs.
Our customers are in health care and tend to be busy, expensive, and hard to schedule.
What practices and technologies have worked to support customer involvement? We're using phone calls, phone conferences and email. We're curious about leveraging wiki techniques and would love to hear what's worked for others.
It doesn't matter whether the customer is in the same cubicle or halfway around the planet, except for communication delays - the critical factor is availability.
A customer that is too busy to answer your emails for several days is going to cause your iteration to be late, or to fail.
The customer has two critical commitments for agile:
Available to answer questions in a timely manner
Not to change their mind/priorities during an iteration
The customer must commit to a reasonable service-level agreement (SLA) on availability, e.g. 1-hour response time, or 24-hour response time, etc., and you will need to adjust all estimates and schedules by the lag factor. If the customer will not commit or does not follow through, cancel the iteration and re-plan, bringing the customer's commitment to the forefront again. Do not just "guess" at what you think the customer might want.
Bottom line: without a customer commitment, agile will not work.
My experience with Agile methods is mostly for desktop applications. When our customers are remote, we've spent time to get an engineer to the customer site to configure/install a demo rig. The engineer works with the customer on a test and demo setup/plan that will provide an environment that the customer believes replicates the important aspects of the deployment environment but isolates the demo system from existing infrastructure (so that we can push updates whenever we need to). The engineer also sets up deployment systems to move our applications into production, so that we can "deploy" without being on site. Our applications can self-update (either for each release or each build) and we carefully instrument the releases to log all errors and submit all crashes as bugs to our bug tracker. This way we at least know what went wrong, even if we don't know what's going right.
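As a rough sketch of the kind of instrumentation described (not the answerer's actual code), a desktop application can install a global exception hook that files unhandled crashes with the bug tracker; the tracker endpoint and payload fields below are hypothetical:

```python
# Rough sketch: report unhandled exceptions to a bug tracker before the app dies.
# The endpoint URL, payload fields and version string are hypothetical placeholders.
import platform
import sys
import traceback

import requests

BUG_TRACKER_URL = "https://bugs.example.com/api/crash-reports"  # hypothetical endpoint
APP_VERSION = "1.4.2"  # placeholder

def report_crash(exc_type, exc_value, exc_tb):
    """Send the stack trace and environment details to the tracker, then fall through."""
    payload = {
        "version": APP_VERSION,
        "platform": platform.platform(),
        "summary": f"{exc_type.__name__}: {exc_value}",
        "traceback": "".join(traceback.format_exception(exc_type, exc_value, exc_tb)),
    }
    try:
        requests.post(BUG_TRACKER_URL, json=payload, timeout=5)
    except requests.RequestException:
        pass  # never let crash reporting cause a second failure
    sys.__excepthook__(exc_type, exc_value, exc_tb)  # keep the default behaviour too

sys.excepthook = report_crash
```

Combined with logging on the customer's demo rig, this at least tells you what went wrong between site visits, which is the point the answer makes.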
For each release/build that shows up on the customer's test rig, we provide a (short) screencast, narrated by the project lead or primary developer, demo-ing any new features. The release notes contain any long-term issues or questions we want the customer to think about (i.e. issues that can't be resolved immediately by a phone call or email), and the application displays these notes for the user.
Finally, and possibly most importantly, we get the customer and/or the customer's liaison an account on our calendar server and configure their calendar app to make use of that account. This then goes both ways--we can schedule time (on site, phone, email, etc.) with the customer and they can do the same with our developers.
One option: Install a customer proxy at the "customer partner" site who can extract the information that you need when those customers are available. Have these proxies build the solid relationships that allow them to represent the customer view. Their time is all yours. And when questions arise that they cannot answer, they have ready access to your customer partners - even if in the coffee line.
The whole point of the customer in agile is to have open and free discourse with the developers (i.e. immediate feedback). If your actual customers cannot provide this, then you need an intermediary/proxy who can fill this role. You don't need actual customers, you just need someone who can represent the customers' interests well enough to meet your customers' needs.
Just a few ideas:
If you do choose to use a Wiki, make sure it supports a whole-wiki-wide "recent changes" list, and preferably one that is specific to the users. The more distant from development people are, the more likely they are to have email as a metaphor for their computer use. If they can't immediately tell when there's something new for them to see, they will never explore it. You also preferably need ways to signal to them that you need their attention on certain matters, or they will treat changes like CCs.
I'm a big believer in creating video screen captures of interactions (narrated) and distributing them to users. Unlike a real demo, customers don't feel like they need to interrupt, and they can rewind and re-watch the same interaction over and over, paying attention to little details.
Finally, if you do distribute prototypes, make sure to send someone (or at least a screen sharing session) to see how the prototypes are used. Contextual design is effective. You can count on people using your prototype differently from the way you expect, and you have to understand how they use it to really understand where the issues are, even if they don't report them.
Have you considered something like LogMeIn?
This would allow customers to either log-in to a PC on your network already running your application, or alternatively allow you to install/update the application on one of their computers.
This would solve the remote customer issue and would also support the ongoing continual customer feedback requirement in the agile process.
I used it at a previous company for technical support, but there is no reason (except maybe cost) that it would not work for your situation.
It is also a great way to actually see how users are using your application and therefore find out what works and what doesn't.
First of all, make sure that you have a product manager or a product owner close to the developers. This person will manage the relationship with the customer.
Then, the product manager can demonstrate the product to the customer at the end of each iteration and also ask the customers questions when the developers need feedback to implement a user story.
It is amazing the positive feedback you can get from customers when you involve them.
We did not use a wiki, and most of the communication is done via e-mail, phone, and a screen sharing application (we are using GoToMeeting, but there are tons of alternatives out there).
You should probably do a kick-off once with everyone at one place. Face-to-face time is invaluable. That includes all developers. Prepare some metaplan questions, but also have enough time to just mingle.
I think by most definitions of Agile processes that have high dependence on customer involvement you've already missed "best practice", which would be for an on-site, and preferably "in-team" customer present at all times. So I suppose we're looking for a "next-best practice". :)
There's the possibility of introducing a "proxy customer" on-site. I have to admit to being very sceptical about the value of a proxy customer. I'm concerned about the risk of introducing some sort of second-rate and otherwise unnecessary business analyst function to the mix, with the reduced signal-to-noise ratio and potential for garbled messages that brings. It also carries the risk of allowing busy real customers to reduce their involvement in the process, which is likely to lead to dissatisfaction. I wonder if there might be someone with good domain knowledge who has recently retired and might be available to act in this capacity as a consultant?
Communication bandwidth with remote customers is astonishingly lower than face-to-face, something I had not fully realised until I started dealing with users in another country. Even with video the loss is significant.
How long are your iterations? How hard is planning iterations? Might it be easier to go for longer iterations and get more planning done less frequently, or reduce iteration length and go to smaller, but more frequent planning sessions? Is more than one customer involved?
Do you have a useable and available build at the end of each iteration? Is there time for involved users to have hands-on time before the next planning session? Keeping users engaged by shipping frequently would seem on the surface to be a Good Idea, which perhaps legislates for small frequent iterations (a week? two weeks?)
The wiki idea might work: have you looked at the FIT Framework? It's a sort of integrated acceptance test/wiki, which might help in getting acceptance tests from remote customers. I think I'd also look to provide some sort of (separate or integrated) "project dashboard", possibly pushed regularly to key customers as well as available on demand. use it as a substitute for things like post-its on whiteboards, Big Visible Charts and the like. There are a number of open-source or low-cost options that may serve - writing your own simple alternative need not be too time-consuming or costly, either.
Above all, remember that "Agile" is a kind-of catch-all label for developments that are carried out with an emphasis on the values and principles espoused in the Agile manifesto. What is considered "best" in one situation may not be so in another. If you understand the principles and regularly review your methods with a critical eye then you're probably going to be close enough to the best practice application to your situation.
I haven't looked at it for some time but with Beck and Fowler on the author list, there should be something useful in Planning Extreme Programming.
In my previous position at drchrono.com I aggregated data/feedback/iteration requests from 20,000 clinicians across the country. The best way to do this is to evangelize a site like uservoice.com. I held "daily live web demonstrations" with sometimes 50 to 100 doctors (doctors signed up right from our website). In these demos I would demonstrate our current product and evangelize UserVoice to drive their feedback into a useful tool for our development team. All of this was done remotely and led to a 1,400% overall increase in recurring revenue growth.

How do I represent features v. tasks in FogBugz 6?

In FogBugz 6, how do I represent the concepts of a "feature" versus a "task"? As defined by Joel Spolsky, the owner of Fog Creek Software (which makes FogBugz), a feature is essentially a user-visible capability. To estimate the time to implement a feature, the developer should break the implementation into short tasks (2 days max) to ensure they think about each step.
FogBugz has only cases. I can't tell whether they're supposed to correspond to features or tasks. Some FogBugz documentation indicates that each case is a task, which is fine except there is no way to group all the tasks for a given feature together. This is especially odd given that, before FogBugz 6, Joel advocated using a spreadsheet that grouped all the tasks for each feature. But his own software doesn't appear to meaningfully support that grouping.
I realize that the Joel article I reference includes a disclaimer pointing to a later article. However, the later article does not settle this issue, in fact it doesn't discuss features versus tasks at all, which is surprising given how well Joel advocates for those concepts in the first article.
For FogBugz 6.0 and earlier:
Make a case for each work item (task). FogBugz calls them "Features," only to distinguish them from bugs, but you do want one case for each task.
The best way to group a bunch of tasks is to make a Release (Fix-For) and assign all of the tasks to that release.
Responding to AviD's comment/question to Joel:
So, if you have 10 new features coming in the next version, with each feature needing 5 tasks to implement, you recommend creating 10 releases? And how do I define that these are the features/"releases" that are to be included in the upcoming release?
Here is how we dealt with this specific problem in our development process:
First, we made a regular release schedule: monthly internal releases and quarterly external releases. This schedule never changes but task assignment / feature completion does. This is hugely important in terms of simplifying our inter-human communication: don't try to argue with the calendar.
Major features ("10 new features" in your example) are turned into cases (e.g., case 101 to case 110).
Each task that is a sub-component of a major feature also gets created as a sub-case with a description of what makes this chunk of work an important part of the larger picture. Previously, in Fogbugz 6, we used the "See also" feature by allowing it to search the text for us ("This is a sub-component of case 101" for example). This was effectively the same thing but less aesthetic.
Now that we've broken down the work to its finest level of usefulness, we bring the actual developers into the discussion. Each task and major feature is individually assigned to a particular developer.
The developer determines when they can get their assigned work done by picking the appropriate internal release date that they think they can commit to.
At this point, we have a rough sketch of what will get done for each release. Further refinements continue as the working people actually estimate the hours that they'll need to do the work, enabling evidence-based scheduling, etc.
For AviD's question, though, he would have the release-assignment problem solved by step 5 above.
However, I think point 6 is the most interesting as that's where you really get a solid schedule. For example, if developers are having trouble estimating a larger task, they break it down into sub-cases even further. Notice how my assessment of "finest level of usefulness" can differ (perhaps greatly) from the person who really needs to get the work done.
This is also a time when a developer can reach out to someone else and say "I can do most of this but it would really help if person X could help me with this little piece Y." This is actually where I get most of my development tasking: I personally sit in multiple places during this process, from large-scale planning meetings to little fiddly tasks that no-one else has time to do.
PS: Making it a personal goal to get this answer rated higher than Joel's.... ;-)
PPS: My original response is now overcome by events since Fogbugz 7 has lovely sub-tasks. Program managers love those reports.
You may have better luck asking your questions in the FogBugz Discussion Forum
We use a combination of projects in order to accomplish your grouping goals. We also commonly setup a project "parking" Wiki where links to development cases, technical documentation, systems requirements, user documentation, external links to resources etc. can all be placed. It provides a good "one-stop-shop" for everything related to that project.
As part of that Wiki, we would then set up two specific projects. One relates to the large overall goals/outlines, similar to what might correspond to your project management charts and whatnot. The other relates to the development tasks of each feature as they are broken down into smaller and more manageable chunks. You can then, as was mentioned, use case linking both to reference the "master" cases in the other project and to reference the project Wiki itself, so that you can quickly and easily get back to all of your project-related information, which is conveniently in one spot.
You can accomplish a pile of different organizational structures using FogBugz, you just have to approach things a little differently sometimes in order to hit each and every situation.
Hope that helps.
Haha, that article has a disclaimer, but I understand what you are saying.
We use FogBugz, and the only 'Feature' that I am aware of is under Category, and I don't think you can associate it with sub-tasks.
You can type "Case N is the feature for this task" if you just want to reference it in the case text.
That kind of stuff sounds like it lies more in the project management domain than in software used to track bugs.
That's a good question; I have asked that myself, too.
We are currently test-driving FogBugz for 45 days in a group of 5 developers, and we currently create a "release" for each major feature. In fact we do not release it by itself, but ship multiple "releases" together when something is ready.
There should definitely be some sort of advanced task grouping in FogBugz.
