How do I represent features v. tasks in FogBugz 6?

In FogBugz 6, how do I represent the concepts of a "feature" versus a "task"? As defined by Joel Spolsky, the owner of Fog Creek Software (which makes FogBugz), a feature is essentially a user-visible capability. To estimate the time to implement a feature, the developer should break the implementation into short tasks (2 days max) to ensure they think about each step.
FogBugz has only cases. I can't tell whether they're supposed to correspond to features or tasks. Some FogBugz documentation indicates that each case is a task, which is fine, except there is no way to group all the tasks for a given feature together. This is especially odd given that, before FogBugz 6, Joel advocated using a spreadsheet that grouped all the tasks for each feature. But his own software doesn't appear to meaningfully support that grouping.
I realize that the Joel article I reference includes a disclaimer pointing to a later article. However, the later article does not settle this issue; in fact, it doesn't discuss features versus tasks at all, which is surprising given how well Joel advocates for those concepts in the first article.

For FogBugz 6.0 and earlier:
Make a case for each work item (task). FogBugz calls them "Features" only to distinguish them from bugs, but you do want one case for each task.
The best way to group a bunch of tasks is to make a Release (Fix-For) and assign all of the tasks to that release.

Responding to AviD's comment/question to Joel:
"So, if you have 10 new features coming in the next version, with each feature needing 5 tasks to implement, you recommend creating 10 releases? And how do I define that these are the features/"releases" that are to be included in the upcoming release?"
Here is how we dealt with this specific problem in our development process:
1. First, we made a regular release schedule: monthly internal releases and quarterly external releases. This schedule never changes, but task assignment / feature completion does. This is hugely important in terms of simplifying our inter-human communication: don't try to argue with the calendar.
2. Major features ("10 new features" in your example) are turned into cases (e.g., case 101 to case 110).
3. Each task that is a sub-component of a major feature also gets created as a sub-case, with a description of what makes this chunk of work an important part of the larger picture. Previously, in FogBugz 6, we used the "See also" feature by letting it search the case text for us ("This is a sub-component of case 101", for example). This was effectively the same thing, but less elegant.
4. Now that we've broken down the work to its finest level of usefulness, we bring the actual developers into the discussion. Each task and major feature is individually assigned to a particular developer.
5. The developer determines when they can get their assigned work done by picking the appropriate internal release date that they think they can commit to.
6. At this point, we have a rough sketch of what will get done for each release. Further refinements continue as the developers doing the work estimate the hours they'll need, enabling evidence-based scheduling, and so on.
For AviD's question, though, he would have the release-assignment problem solved by step 5 above.
However, I think point 6 is the most interesting, as that's where you really get a solid schedule. For example, if developers are having trouble estimating a larger task, they break it down into sub-cases even further. Notice how my assessment of the "finest level of usefulness" can differ (perhaps greatly) from that of the person who actually has to do the work.
This is also a time when a developer can reach out to someone else and say "I can do most of this but it would really help if person X could help me with this little piece Y." This is actually where I get most of my development tasking: I personally sit in multiple places during this process, from large-scale planning meetings to little fiddly tasks that no-one else has time to do.
PS: Making it a personal goal to get this answer rated higher than Joel's.... ;-)
PPS: My original response is now overcome by events, since FogBugz 7 has lovely sub-tasks. Program managers love those reports.

You may have better luck asking your questions in the FogBugz Discussion Forum.

We use a combination of projects to accomplish the grouping goals you describe. We also commonly set up a project "parking" wiki where links to development cases, technical documentation, system requirements, user documentation, external resources, etc. can all be placed. It provides a good "one-stop shop" for everything related to that project.
As part of that wiki, we then set up two specific projects: one for the large overall goals/outlines, similar to what might correspond to your project-management charts and the like, and one for the development tasks of each feature as they are broken down into smaller, more manageable chunks. You can then, as was mentioned, use case linking both to reference the "master" cases in the other project and to reference the project wiki itself, so that you can quickly and easily get back to all of your project-related information, which is conveniently in one spot.
You can accomplish a pile of different organizational structures using FogBugz; you just have to approach things a little differently sometimes to handle each situation.
Hope that helps.

haha, that article has a disclaimer, but I understand what you are saying.
We use FogBugz, and the only 'Feature' that I am aware of is under Category; I don't think you can associate it with sub-tasks.
You can type 'Case N is the feature for this task' if you just want to reference it in the case text.
That kind of thing sounds like it lies more in the project-management domain than in software used to track bugs.

That's a good question; I have asked that myself, too.
We are currently evaluating FogBugz on a 45-day trial with a group of 5 developers, and we create a "release" for each major feature. In fact we do not ship each one individually, but rather bundle multiple releases together when something is ready.
There should definitely be some sort of advanced task grouping in FogBugz.

Related

How should BDD statements be properly constructed? Is there a convention used in teams?

Is there a preferred way of creating BDD scenarios in small agile teams and amongst the community? I'm using Courgette, and it gives an example at https://courgette-testing.com/bdd:
Scenario: Refunded items should be returned to stock
Given a customer previously bought a black sweater from me
And I have three black sweaters in stock.
When they return the black sweater for a refund
Then I should have four black sweaters in stock.
Does this sound like a good idea? Would this work well for communication in teams?
I've used their web steps bit, and am now doing the refactor bit to make it clear to the business.
Any links would help. Thanks
The conversations in BDD are more important than the tools. Rather than starting with the finely-grained specification in Courgette's example, try talking to the business first. Ask them for an example of the kind of behaviour they want.
When you write it down, start by just writing it the way they describe it. It's amazing how few people listen properly! After you've got the example from them, take a look at it. Can you see which bits are the contexts (Givens) and which are the outcomes (Thens)? Which is the step that's associated with triggering the behaviour you're interested in (Whens)?
Once you've worked that out, there are a couple more questions I like to ask:
Is there any other context which, for this same event, gives a different outcome?
Is there any other outcome that's important?
For instance, if I was implementing this behaviour for a big supermarket, I might come across an example like:
"Oh! No, don't add food back to stock. We don't know how it's been stored. We refund it if there's something wrong with it, but we bin it."
You can probably see how that might change your code!
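To make that concrete, here is a sketch (purely illustrative wording, not taken from Courgette's documentation) of how that conversation might turn into a new scenario:
Scenario: Refunded food items are not returned to stock
Given a customer previously bought a carton of milk from me
And I have three cartons of milk in stock
When they return the carton of milk for a refund
Then I should still have three cartons of milk in stock
And the returned carton should be recorded as binned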
Testers are really great at asking these questions and spotting missing scenarios! This leads us to the "Three Amigos" pattern. I like to include:
A business person, Product Owner, subject matter expert or person with the problem
A tester
A dev (or a pair of devs).
You can also include UI designers, technical writers, etc. - Matt Wynne says it's "Three Amigos where three is a number between 3 and 7".
I really like it when the developer writes the scenarios down, in any form that allows them to get to the "Given, When, Then". Sometimes I'll do it in the meeting; sometimes I do it later and show it or send it to my business person.
Courgette's example is something that typically happens when people don't have these conversations. If you start with the conversations, you're much more likely to get something that matches the above. Not only are those declarative steps easier for business to read and for the whole team to talk about, but they're also easier to maintain, as the detail of how they're achieved is hidden (usually in Step Definitions, and further in Page Objects).
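If it helps to see where that hidden detail ends up, here is a minimal sketch of step definitions, using Python's behave library purely as an illustration (Courgette's own binding API may differ, and the InventoryStore class is an invented stand-in for real domain code or page objects):

from behave import given, when, then

class InventoryStore:
    """Invented in-memory stand-in for the real stock system."""
    def __init__(self):
        self._stock = {}
    def set_stock(self, item, count):
        self._stock[item] = count
    def stock(self, item):
        return self._stock.get(item, 0)
    def refund(self, item):
        # For this sketch, a refunded item goes back into stock,
        # matching the sweater scenario above.
        self._stock[item] = self._stock.get(item, 0) + 1

@given('a customer previously bought a black sweater from me')
def given_previous_purchase(context):
    context.previous_purchase = 'black sweater'

@given('I have {count:d} black sweaters in stock')
def given_stock(context, count):
    context.store = InventoryStore()
    context.store.set_stock('black sweater', count)

@when('they return the black sweater for a refund')
def when_refund(context):
    context.store.refund('black sweater')

@then('I should have {count:d} black sweaters in stock')
def then_stock(context, count):
    assert context.store.stock('black sweater') == count

The scenario text stays declarative; the "how" lives entirely in the bindings, which is what makes the scenarios cheap to maintain.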
There's all kinds of useful posts for BDD newcomers on my blog if you want to know more!

Domain Layer and BDD

Has anyone used BDD for driving their domain Layer?
Yes, we have found this process to work very well, and we used SpecFlow to deliver this approach fairly easily. We have over 2000 scenarios implemented in our domain layer alone, and we have also used this approach for testing the controllers in our UI layer (another 2000+ tests).
If you are working on a large project, it's a good idea to put some thought into how to organise the steps before you start, as you will quickly collect a large number of steps and finding a particular step can become a bit of a challenge.
The biggest issue we had was having multiple people on the team writing scenarios: they would often write the same step with slightly different wording, resulting in the same step being implemented twice.
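As a sketch of what that duplication looks like (written with Python's behave syntax purely for illustration; SpecFlow bindings in C# have the same shape, and the helper is invented):

from behave import given

def create_open_order():
    # Invented helper standing in for real domain setup.
    return {"status": "open"}

@given('the customer has an open order')
def given_open_order(context):
    context.order = create_open_order()

# Same meaning, different wording, silently bound a second time:
@given('an order is open for the customer')
def given_open_order_again(context):
    context.order = create_open_order()

The fix is as much social as technical: agree on one phrasing (or a parameterised step), search the existing step files before adding a new one, and consolidate duplicates as they appear.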
Yes, although lately we've been looking at Cuke and Specification by Example as a higher level to start driving from. See http://specificationbyexample.com/
Yes, that's what it's for!
I've found BDD's main benefits are how it, in a natural way, lets you:
Drive the design (plan, then do)
Discover and emphasize the domain's ubiquitous language
Document project progress and current state (specs map to stories and sprint plans)
If it also results in acceptance or unit tests, that's great, but I think most of the value comes from the above. It also helps new team members get a grip on things, and it makes it very easy to return to a project's domain mentally after being away for a while.
I also agree with the previously mentioned "step duplication" problem; time spent refactoring and consolidating steps to keep them well structured is time well spent.

What should I ask the previous development team during my only (1-3hr) meeting?

There is a Ruby on Rails (1.8, 2.3.2) project. The first version of the project was built by another organisation. I will implement the next versions of this project without any help from that organisation. I will be able to talk with developers from the previous development team during a single meeting (1-3 hours).
Project statistics: ~10k LOC, 1.0/0.6 code-to-test ratio, RSpec
What questions about project can you recommend to ask?
First, review the entire project and figure out as much as possible on your own, so you have context and can actually understand what they tell you.
Ask
If you can record the conversation
For an architectural overview
Why they made certain architectural decisions over others
A complete list of dependencies (if you can't figure that out on your own)
What the biggest problems are
Which parts of the project are always / never being fixed
What the Achilles' Heel of the project is
What will cause the biggest headaches
What security issues there are, and what the constraints are on fixing them
What would you do next if you were me?
What you should know that you didn't ask (most important question)
Also, don't be judgemental; you want them to reveal any problems they know about. There are probably tons of things wrong with the app that they are embarrassed about, which you need to know sooner rather than later. They're not going to open up to you if they don't trust you.
I would ask for a code walkthrough. Not line-by-line, but more for the overall structure of the project, relationships between individual modules, etc.
Find out the whys. The how is easy enough to see in the codebase, but the why is sometimes impossible to figure out, and it will bite you in the ass.
For instance...
Which parts of the application were the biggest performance issues? Which of those issues were resolved? Which are still issues?
Why did you opt for pattern / tool / library x? What other things did you consider? Why?
This will hopefully (knock on wood) help keep you from having to trudge through the same learning curve and mistakes that the first team had to deal with, and it should give you good insight into where the first team actually made a poor choice, rather than a choice based on factors you have not accounted for yet.
Ask if the new features will cause any major changes to the existing code (architecturally) and what the implication of that will be with other dependent parts of the application.
Also get their emails, as you will have more questions.
One of the most important things, in my opinion, is to get as much technical documentation as you can prior to meeting with them. You should try to go into the meeting as informed as possible, so that you not only know what areas you need to focus on the most, but also to have a preexisting knowledge of how some of the subsystems relate to each other.
Also, do not be afraid to ask what they would have done differently, if given the chance. Some of the best ideas come too late in the development process to be implemented - be it from library availability, change in requirements, change in team, etc.
Bring cookies (or pizza, beer, or wine as appropriate); you will want them to have positive memories of you for when you call with questions.
Edit: to put my answer in the form of a question: "May I offer you a home-baked cookie?"
Perhaps you have done this already, but I would make sure you can:
Check out the latest version
Run all the migrations
Run all the tests
Deploy (even if to a staging server)
Run the application locally
before you go to the meeting, so that if you can't, you can make sure you can by the time it is over.
Other things that might be useful
data model
UI wireframes
bug tracker data / issue tracker data
who are the customers / people representing customers
development environment configuration
source control locations, etc.
explanation of special configuration settings
Wow! All great answers, right down to the cookies.
My contribution assumes that this is your one and only chance to access the old dev team, therefore you need to kick it up a notch:
Agenda. Split the meeting into several parts, for instance:
A quick (15 min) introduction and arch overview
One on one with team members.
Design review as a group, etc.
Positive Energy. Especially if the relationship is inherently difficult, keep a positive focus by asking what improvements they would put into the next version (a rewrite is not an option, right, Joel?). Capture every nuance, and drill down past their comfort level only nearer the end.
Facilitator. Use a trained design meeting facilitator. They can help prep for the meeting, conduct pre-meeting interviews, design the agenda. During the meeting they can drive the intensity, and keep the focus. They can also suggest forms of capturing what can be a fair amount of information.
Also, I would try to identify all design artifacts beyond the code, if any, and come to an understanding of how accurate they are. This may include doing design reviews of key elements of these documents vis-à-vis the as-built system.
Don

Difference between BPM and App. workflow?

I know there is a lot of talk about BPM these days, and I am conscious that some may see it as a craze rather than a fundamentally important piece of software.
As someone from what most would call 'The Business', I have been doing my best to learn about BPM to ensure we continue to make decisions that not only make sense to the business, but IT as well.
I have noticed while reading that mention is made to application workflow when sometimes discussing BPM. I hadn't given this much thought until recently.
Therefore, what is the difference? When would you use one and not the other?
BPM is about the process and improving it, which takes into account users and potentially more than one application; e.g., an ERP system may have more than one application to it, though there may be other uses of the term. Note that the process can be described without reference to which applications or technologies are used.
Application workflow is how an application is used to go from A to B. Here it is a specific set of code that is used, and what happens over the course of the application getting from A to B. In this case, the application is front and center rather than the process.
Does that provide an answer? Another way to think of it is that multiple application workflows can make up a system which is used in a process that can have BPM applied to it.
Late to the game, but workflow is to database as BPMS is to DBMS. (Convenient how the letters line up, huh?)
IOW, BPM(S) is traditionally meant to refer to a particular framework/application that allows you to manage business processes: defining them, storing them, versioning them, measuring them, etc. This is similar to how a DBMS manages databases.
Now, a workflow is a definition, much like a database schema is a definition. In the former case, it is a definition of operations/work (Fulfill Order), the steps thereof (Send Invoice), and rules/constraints on the work (if no stock, send notice). In the latter, similar case, it is a definition of data structure (CREATE TABLE) and constraints (InvoiceTotal must be > $0.00).
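To make the analogy concrete, here is a minimal sketch in plain Python (the process name, steps, and rules are taken from the example above; this is not any particular BPMS's API):

# A workflow, like a table schema, is a definition: steps plus rules/constraints.
fulfill_order = {
    "name": "Fulfill Order",
    "steps": ["Check Stock", "Send Invoice", "Ship Goods"],
    "rules": [
        {"if": "no stock", "then": "send notice and stop"},
        {"constraint": "invoice total must be > 0.00"},
    ],
}

# A BPMS stores, versions, executes, and measures many such definitions,
# just as a DBMS manages many table definitions.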
I think this is a potentially confusing subject, particularly as some development environments use a type of process-flow model to generate user-facing applications (I'm thinking of Outsystems here, for example).
But, for me, the distinction is crystal clear. Application workflow, as people talk about it, refers to a user's path through an application, i.e. the pages they visit/complete, the data they enter, etc., on their way to completing a transaction of some sort. "Application workflow" is a poor term for this, though; I think "application flow" would be more meaningful.
BPM, on the other hand, is about modelling and executing a workflow process. By workflow, in this context, I mean a series of discrete steps (or tasks) that have to be completed (either programmatically or via human interaction) in a certain order to complete a process. These tasks can be implemented as individual application modules (each with their own "application workflow"; see above). The job of the workflow engine is to make sure that these separate steps are assigned to the right people (or groups of people) in the right sequence, and that overall the process completes in an orderly way.
I don't think there's a clear answer to this at all. These are words, as opposed to theoretical concepts. If you add the word "checklist" into the mix, that just turns out to be a linear version of a process (though you can have conditionals in checklists, making them a workflow).
I am not sure how to help in reframing this question, but it's almost as if no answer can ever be possible. My own thoughts are at https://tallyfy.com/improving-efficiency-workflow-vs-business-process-management/

What is the most critical piece of code you have written and how did you approach it?

To put it another way: what code have you written that cannot fail? I'm interested in hearing from those who have worked on projects dealing with heart monitors, water testing, economic fundamentals, missile trajectories, or the O2 concentration on the space shuttle.
How did you prepare for writing this sort of code: methodologically, intellectually, and emotionally?
Edit
I've marked this wiki in case the rep issue is keeping people from replying. I thought there would be a good deal more perspective on this issue than there has been.
While I am not personally involved in what is described there, this article will hopefully contribute to the spirit of your question: They Write the Right Stuff.
I wrote a driver for a blood pressure measuring device for hospital use. If it "fails", the patient will not have his blood pressure checked at the scheduled time; if his blood pressure is abnormal, no alarm (in the larger system) will be triggered. Such an event could be clinically significant.
My approach was to thoroughly read the spec/documentation in a non-work environment (to avoid the temptation to start coding right away), then read it again at work. After that, I summarized the possible states and actions on paper and "flowcharted" an algorithm, and annotated all the potential real-world "bad events" (cables getting unplugged, batteries dying, etc). Finally, I wrote and rewrote the driver three times, each with different mechanisms (e.g. FSM), and compared their results. Each iteration helped me identify weaknesses I hadn't yet discovered. The third rewrite was the "official" result. I reviewed each iteration with my co-worker.
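As an illustration of the state-machine mechanism mentioned above (a generic sketch in Python, not the actual driver; the states and events are invented for the example):

# Minimal finite-state-machine sketch for a measurement driver.
IDLE, MEASURING, REPORTING, FAULT = "idle", "measuring", "reporting", "fault"

TRANSITIONS = {
    (IDLE,      "start_requested"):  MEASURING,
    (MEASURING, "reading_complete"): REPORTING,
    (MEASURING, "cable_unplugged"):  FAULT,
    (MEASURING, "battery_low"):      FAULT,
    (REPORTING, "report_sent"):      IDLE,
    (FAULT,     "fault_cleared"):    IDLE,
}

def next_state(state, event):
    # Any (state, event) pair not listed above is treated as a fault,
    # so unexpected real-world events are never silently ignored.
    return TRANSITIONS.get((state, event), FAULT)

assert next_state(IDLE, "start_requested") == MEASURING
assert next_state(MEASURING, "cable_unplugged") == FAULT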
Emotional preparation consisted of convincing myself that should the unthinkable happen, at least I wasn't willfully negligent -- just incompetent (the old "I'm only human" excuse). ;-)
I have written a computer interface to an MRI machine. It had no chance of hurting the end user, as it was just record management, but it could potentially have given an incorrect diagnosis or omitted important information.
Tests, lots and lots of tests.
Unit tests, mid- and high-level tests. Simulate all possible input combinations. Also a great deal of testing with the hardware itself. Testing must be done in a complete and methodical way. It should take a great deal more time to test than to write.
Error Reporting
All errors must be reported and be obvious. If it won't hurt the patient to do so, fail fast.
For something that is actively keeping a person alive things are even worse. It must never stop working. If it fails it needs to restart and keep trying. Redundant internals are also a must in case the hardware fails.
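A very rough sketch of the "restart and keep trying" idea (plain Python for illustration only; supervision in a real life-critical device would live in hardware watchdogs and a certified RTOS, not a script like this):

import time

def run_cycle():
    """Placeholder for one monitoring/control cycle of the device."""
    ...

def supervise(max_backoff_seconds=5.0):
    # Never stop: report the failure loudly, then restart the cycle.
    backoff = 0.1
    while True:
        try:
            run_cycle()
            backoff = 0.1                      # a healthy cycle resets the backoff
        except Exception as error:
            print(f"ERROR: cycle failed: {error!r}; restarting")
            time.sleep(backoff)
            backoff = min(backoff * 2, max_backoff_seconds)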
At the wrong company it can be a really difficult kind of environment to work in. However, if things are going well, you are well funded, and release pressure is not high, it can be a very rewarding space to work in.
Not really an answer, but:
I've got a friend who writes embedded control software for laser eye surgery machines. When he had laser eye surgery himself, he made sure to go to an ophthalmologist who used his company's system. I have great admiration for this guy. I can't think of a piece of software I've ever written whose level of quality was high enough that I'd trust my own eyesight to it.
Right now I'm working on some base code for a system that retrieves medical patient information from clinics and hospitals for a medical billing office. We're starting out with a smaller client and a long break-in period to ensure quality, but eventually this code needs to securely handle a large variety of report formats from a number of clients at different facilities.
It's not quite in the same scale as your examples, but a bad mistake could result in the wrong people being billed or the right person billed to a defunct address (screwing up credit reports) or open people up to identity theft, so it's still pretty critical. Oh yeah, and it could mean doctors don't get paid quite as quick. That's important, too, especially from a business perspective, but not in the same class as data protection and integrity.
I've heard crazy stories of the processes used to write code at NASA for the space shuttles. Every line of code has about 10-20 lines of documentation, along with tests, full revision history, etc. Every time a bug is found, not only is the code evaluated and repaired, but the entire procedure of writing code, the entire command chain, etc., is reviewed to answer the question: "What went wrong in our process that allowed this bug to be included in the first place?"
While nothing quite so important as an MRI machine or a blood pressure monitor, I did get tapped to do a rewrite of Blackjack when I worked for an online gambling provider. Blackjack is by far the most popular online game, and millions of dollars was going to go through this software (and did).
I wrote the game engine separate from the server and the client, and used Test Driven Development to ensure that what I was assuming was coming through in the results. I also had a wrapper "server" that had console output that would allow me to play. This was actually only useful in that it mimicked the real server interface, since playing a text version of blackjack isn't very fun or easy ("You draw a 10. You now have a 10 and a 6, while the dealer has a 6 showing. [bsd] >")
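As a flavour of that test-driven approach, here is an invented fragment (not the actual engine) of the kind of rule the tests pinned down first, such as ace handling in hand values:

def hand_value(cards):
    # Blackjack hand value: aces count as 11 unless that would bust the hand.
    value = sum(11 if c == 'A' else 10 if c in ('K', 'Q', 'J') else int(c) for c in cards)
    aces = cards.count('A')
    while value > 21 and aces:
        value -= 10          # demote one ace from 11 to 1
        aces -= 1
    return value

def test_ace_counts_high_then_low():
    assert hand_value(['A', '6']) == 17        # soft 17
    assert hand_value(['A', '6', '9']) == 16   # ace drops to 1 to avoid busting
    assert hand_value(['A', 'A', '9']) == 21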
The game is still being run on some sites out there, and to my knowledge, has never had any financial bugs after years of play.
My first "real" software job was writing a GUI app for planning stereotactic brain surgery. Testing, testing, testing... absolutely no formal methods, engineering-style thoughts, just younger programmers cranking it out. When they started talking about using the software to control a robotic arm with a laser, without any serious engineering methods in place, i got a bit worried, left for more officey lands.
I created an information system application for the local government culture and tourism department on the island of Bali, which was installed in several tourism destinations, providing extensive information about the culture, maps, accommodations, etc.
If it failed, then tourists probably couldn't get the right information they needed most, might get cheated by brokers, or get lost somewhere :)
