Continuous Delivery / Deployment and Process Controls (Jira)

From a "process controls" POV, in a continuous delivery/deployment context, how important is it to mandate that source control commits are associated with an Agile PM (or ticketing tool) "work-item?" Here "work-item" means any of: user story, task, defect, bug, etc.
The end-goal is to ensure that developers are not placing new features into production that were not derived from the product owner. Obviously code reviews are a critical part of a proper process-controls story, but having a code review presumes the reviewer can look at the associated statement of work (e.g. user story) to ensure the code changes reflect the requested work.
Herein lies the issue.
Context
I've always assumed a workflow where work-tickets are associated with commits, such as with Jira, but now I'm working with a corporation whose PM tool is incapable of associating work-items or defect-tickets with source commits.
With this client, I'm also seeing a catch-22. First, I'm told by representatives of the PMO that such ticket-to-commit associations are not needed. Second, the engineering org paid an outside consultancy to audit the process and flag major flaws. The #1 flaw identified was management's inability to know whether developer commits have any bearing on authorized work.
From my POV, I think the PMO needs to realize that they are the "tail wagging the dog" and that they need to embrace tooling changes or special integrations to overcome this problem (not to mention more maturity with Agile philosophy).
However, perhaps I'm simply over-concerned about the ticket-to-commit associations, and perhaps there is another way to achieve effective process controls without that particular mechanism?

When dealing with regulated industries, such as health care or government, full traceability from scope to code is a requirement. I once had to perform an audit to validate that every line of code correlated to a line in the SRS for FDA approval, although generally it's enough to demonstrate that a method of traceability exists (such as a branch in GitHub named to match a task/story in JIRA, with the code integration enabled).
If you're not in a regulated industry, requirement-to-code traceability is not a requirement... but it is still immensely helpful. The advantages include, but are not limited to:
Full transparency to everyone on the team, tech or not. The amount of confidence that this evokes is amazing, as is the amount of chatter it reduces.
Reports that identify which themes in the requirements are causing the greatest amount of code churn, because there's a heavy cost to that.
Identifying features affected by a PR. This is immensely helpful when a release is planned, some aspects of the release are unattainable or buggy, a lot of the code has already been merged, and the team needs to isolate what to release and what not to.
Confirmation of an opinionated truth by removing the opinion: "I'm sure I did it... let's double check... yup! (or oops, let's rectify that!)". This helps deter CYA behavior, which is a drain on morale and negatively influences efficiency.
Simple implementation with existing mainstream toolsets (JIRA, Trello, Asana, Freshdesk for tickets... GitHub, Bitbucket for repos and tickets... Zapier, IFTTT for integrations across systems that lack built-in integrations)
For every team I have ever managed or established (as dev manager, PMO, product manager, consultant or founder), it has been my explicit expectation that every line of code can be traced to a requirement, for the reasons listed above. I advocate implementing this using the branch-per-topic pattern in git (GitHub or Bitbucket), where the branch name is prefixed with the JIRA task/story/bug key (e.g. XYZ-2443-fix-that-bug) so that JIRA's integration automatically displays a link from the branch to the issue.
Of course, this is not the only way, but it is my preferred process at this time and is meant to illustrate a concrete example.
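To make that convention enforceable rather than merely customary, it can be checked mechanically at commit time. Below is a minimal sketch of a client-side git commit-msg hook in Python; the issue-key pattern (XYZ-2443 style) and the wording of the error message are illustrative assumptions, not part of any official Jira integration, and a real setup might apply the same rule server-side in a pre-receive hook instead.

```python
#!/usr/bin/env python3
"""Sketch of a commit-msg hook that rejects commits with no traceable issue key.

The key pattern (XYZ-2443 style) and messages are illustrative assumptions,
not part of any official Jira integration.
"""
import re
import subprocess
import sys

ISSUE_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # e.g. XYZ-2443


def current_branch() -> str:
    """Return the current branch name, or '' if it cannot be determined."""
    result = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True,
    )
    return result.stdout.strip() if result.returncode == 0 else ""


def main() -> int:
    msg_file = sys.argv[1]  # git passes the path to the commit message file
    with open(msg_file, encoding="utf-8") as fh:
        message = fh.read()

    # Accept the commit if either the message or the branch names an issue.
    if ISSUE_KEY.search(message) or ISSUE_KEY.search(current_branch()):
        return 0

    sys.stderr.write(
        "Commit rejected: reference a work item (e.g. XYZ-2443) in the commit\n"
        "message, or work on a branch named after it (e.g. XYZ-2443-fix-that-bug).\n"
    )
    return 1


if __name__ == "__main__":
    sys.exit(main())
```

Saved as .git/hooks/commit-msg and made executable, this blocks untraceable commits on the developer's machine; a server-side pre-receive hook can apply the same check to pushes.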

Related

Locking down desired but undefined behavior: Tests, Acceptance Criteria or User Stories?

My team is using scrum and TFS to manage a new software project. Some members of the team want to solve a problem in a rather unusual way, and I need to know if anyone has dealt with something similar. (This is partly a scrum/project management question, and partly a TFS one, because if TFS makes one approach much easier than another, that will influence the decision.)
There are parts of this software system that have not yet been specified in any manner—not via User Stories, Acceptance Criteria, tests or anything. They are to some extent "corner cases" or error-handling cases. Yet the software already behaves in a particular way when these cases are encountered. (This may be by displaying a generic error message or going down a common path to some resolution.) This situation exists because the software is designed to be error-tolerant.
Team members want to define and thereby lock down the unspecified behavior, so it won't change. If it does change, and in particular if it changes for the worse (e.g., crashing instead of displaying a generic error message), they want to catch that.
But they are proposing to do this by writing tests or Acceptance Criteria that match what the system already does in the corner cases. My position is that any currently-unspecified behavior that we want to keep stable should be defined via new User Stories first, not via Acceptance Criteria or tests. What is the proper scrum/TFS way to document and lock down as-yet-unspecified behavior in an existing body of software (preferably with the least effort)?
It is something that the users of the system will benefit from, so do it as a user story:
As a user I want the product to be stable and not crash when a specific behaviour occurs so that I have a good user experience
To satisfy this user story you may well write a technical task that leads to tests that simply match the existing edge case.
Why bother making it a user story? Well this work item will need to be prioritised against other backlog items, including new functionality. By creating a user story we allow it to be prioritised using the standard Scrum backlog approach.

Where do you record Technical Debt in TFS?

I'd like to find a way to record the Technical Debt we incur in TFS.
I need to record each item outside of a specific iteration to ensure that it is visible and easily-reported all the time. I've considered creating a separate Area for technical debt, but am unsure how well-suited that field actually is.
What are some common approaches that I might consider? Am I even barking up the right tree by trying to find the right place to put this?
I haven't found a need to track it separately; I just enter it as additional tasks. That way, they can be easily tracked and reported.
I find that there are two kinds of technical debt: debt you know about and can track until fixed, and debt that becomes apparent as the result of an unexpected bug. I like to track the outstanding known technical debt in a separate Iteration I call 'Maintenance Backlog', under the area 'Technical Debt'. I can then link relevant bugs from ANY iteration to the Technical Debt area, while still tracking issues I can't resolve yet. The key is that bugs still need to be associated with the iteration in which they are found and fixed, and linked to the originating requirements, for reporting purposes.
For what it's worth, I'm adding my 2¢ with a contemporary spin: best practice for capturing tech debt work items in the backlog in Azure DevOps (the successor to TFS).
1. Use tags
Marking such tickets with a tech debt tag to indicate implicit value for the customer is always handy (e.g. to balance out the % of such tasks in the sprint).
2. Avoid tech debt features
There are 3 reasons to avoid tech debt features:
Tracking purposes. An epic or feature needs a well-defined goal, so it can be crossed off the list at some point to reflect progress. Refactoring and tech-debt work is a never-ending process, so bringing it under a feature would make that feature pointless for tracking progress.
Violation of the single-parent rule for tickets. It's convenient to practice a tree-like approach with a single parent feature for tickets and a single parent epic for features. There might be exceptions, but they should be rare. Having multiple parents on tickets would complicate tracking progress.
Tech debt tasks may belong to a "real" feature.
If a subset of tech debt tasks helps complete an ongoing or future feature/epic faster, then keeping those tickets under that feature/epic makes sense. Combine it with a corresponding tag just in case. Of course, if you later run out of time you can drop those tech debt tasks.
3. Tasks without a feature are no crime
Tasks do not always need a feature if you can manage them some other way. Azure DevOps provides plenty of tools (e.g. querying on tickets) to find and manage what you want.
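As an illustration of that query-driven approach, here is a rough sketch that pulls every open work item tagged 'tech debt' through the Azure DevOps WIQL REST API; the organisation, project, tag name and personal access token are placeholders, and error handling is kept minimal.

```python
"""Sketch: list open work items tagged 'tech debt' via the Azure DevOps WIQL REST API.

The organisation, project, tag and personal access token (PAT) below are
placeholders; error handling is kept minimal for brevity.
"""
import base64

import requests

ORG, PROJECT = "my-org", "my-project"          # placeholder organisation/project
PAT = "<personal-access-token>"                # placeholder credential
AUTH = {"Authorization": "Basic " + base64.b64encode(f":{PAT}".encode()).decode()}

# Work Item Query Language (WIQL): find open tickets carrying the tag.
wiql = {
    "query": (
        "SELECT [System.Id], [System.Title] FROM WorkItems "
        "WHERE [System.Tags] CONTAINS 'tech debt' "
        "AND [System.State] <> 'Closed' "
        "ORDER BY [System.ChangedDate] DESC"
    )
}

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/wiql?api-version=7.0"
response = requests.post(url, json=wiql, headers=AUTH)
response.raise_for_status()

for item in response.json()["workItems"]:
    print(item["id"], item["url"])
```

A saved query or dashboard widget built on the same WIQL achieves the equivalent result inside the Azure DevOps UI.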
Do what makes sense for the team and your project rather than "ticking a box".

How do you maintain technical contracts between development teams?

For example team A and team B are working on different applications that need to implement a similar feature. The feature in question relies on a database and the database is under the control of team B. Even though the user interfaces of the two applications are based on different technologies, the functionality is supposed to be roughly the same. Both teams have their own requirements and design documents. The functionality can be changed based on feedback from either team but then both teams have to update their requirement and design documents.
The teams are geographically distributed and members of each team itself are also geographically distributed. Both teams work with the same client entity but different people. Each team has their own business analyst (requirements specialist).
I am trying to make the technical communication between the teams more formal than email so that we can avoid misunderstandings.
How do you make sure that if team B changes the database and/or the feature functionality, the other team gets properly notified? Do you use formal text-based documents such as interface contracts? Can you share any templates for those? Or do you use some other mechanism?
A couple of things from my own experience (which sounds very similar to yours):
You should try to have a single design document for the database part of the solution which, as djna suggests, should be posted on a wiki or similar, with a defined public contract for interaction with the data. This is a good step in the right direction, as it gives everybody a kind of 'shared vision' which helps people converge towards doing the right thing. The contracts should ensure that data access is done in a standardised way.
However, from experience, the code does not always follow the spec exactly, so I would also assign a single owner from one of the teams whose responsibility is the integration of both systems with the database.
I would then implement a continuous nightly build process with tests, and this build should include the database. This will hopefully flag any issues earlier in the process.
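To make the idea concrete, here is a rough sketch of a schema "contract test" that such a nightly build could run. The table and column names are hypothetical, and sqlite3 is used only to keep the example self-contained; a real build would point the same check at team B's shared database (e.g. via its information_schema).

```python
"""Sketch of a schema 'contract test' for the nightly build.

The tables and columns are hypothetical; sqlite3 is used only to keep the
example self-contained. In the real build the same check would run against
team B's shared database (e.g. via its information_schema).
"""
import sqlite3
import unittest

# The agreed public contract: table -> columns both teams rely on.
CONTRACT = {
    "orders": {"id", "customer_id", "status", "created_at"},
    "customers": {"id", "name", "email"},
}


class SchemaContractTest(unittest.TestCase):
    def setUp(self):
        # Stand-in for a connection to the integration database.
        self.conn = sqlite3.connect(":memory:")
        self.conn.executescript(
            "CREATE TABLE orders (id, customer_id, status, created_at);"
            "CREATE TABLE customers (id, name, email);"
        )

    def test_contract_columns_present(self):
        for table, required in CONTRACT.items():
            rows = self.conn.execute(f"PRAGMA table_info({table})").fetchall()
            actual = {row[1] for row in rows}  # column name is the second field
            missing = required - actual
            self.assertFalse(missing, f"{table} is missing columns: {missing}")


if __name__ == "__main__":
    unittest.main()
```

The point is that the agreed contract lives in code: the build turns red the moment the database drifts from it, instead of the other team finding out weeks later.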
On the project I worked on, we still had occasional disagreements and breakdowns; eventually we merged both teams, which was the best solution of all for us!
Hope this helps a little
What about having a team site (shared by both teams as one) or a wiki, so that both teams are aware of changes?
Regular stand-up meetings, via a conference call. Stand-up == brief, highly focused, information-centered. Delegate detailed discussion to individual conversations outside the meeting, reporting back at the next one.
There does need to be an overall authority though, to mediate where agreement cannot be reached and to ensure overall solution integrity.
I agree with Wiki or other collaborative site for publishing the current reality.

Best practices for customer involvement in Agile development? [closed]

We need to involve our customer development partners in our development process. We're more or less following Agile methodologies. Some customer partners are remote, others closer. We need to minimize travel costs.
Our customers are in health care and tend to be busy, expensive, and hard to schedule.
What practices and technologies have worked to support customer involvement? We're using phone calls, phone conferences and email. We're curious about leveraging wiki techniques and would love to hear what's worked for others.
It doesn't matter whether the customer is in the same cubicle or halfway around the planet, except for communication delays; the critical factor is availability.
A customer that is too busy to answer your emails for several days is going to cause your iteration to be late, or to fail.
The customer has two critical commitments for agile:
available to answer questions in a timely manner
not to change their mind/priorities during an iteration
The customer must commit to a reasonable service-level agreement (SLA) on availability, e.g. 1-hour response time, or 24-hour response time, etc., and you will need to adjust all estimates and schedules by that lag factor. If the customer will not commit or does not follow through, cancel the iteration and re-plan, bringing the customer's commitment to the forefront again. Do not just guess at what you think the customer might want.
Bottom line: without a customer commitment, agile will not work.
My experience with Agile methods is mostly for desktop applications. When our customers are remote, we've spent time to get an engineer to the customer site to configure/install a demo rig. The engineer works with the customer on a test and demo setup/plan that will provide an environment that the customer believes replicates the important aspects of the deployment environment but isolates the demo system from existing infrastructure (so that we can push updates whenever we need to). The engineer also sets up deployment systems to move our applications into production, so that we can "deploy" without being on site. Our applications can self-update (either for each release or each build) and we carefully instrument the releases to log all errors and submit all crashes as bugs to our bug tracker. This way we at least know what went wrong, even if we don't know what's going right.
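As a rough sketch of the "submit all crashes as bugs" instrumentation described above, something like the following Python exception hook could be used; the tracker URL and payload fields are hypothetical placeholders rather than any particular bug tracker's real API.

```python
"""Sketch: report unhandled exceptions to a bug tracker automatically.

The tracker URL and payload shape are hypothetical placeholders; real trackers
(Jira, FogBugz, Azure DevOps, ...) each have their own issue-creation API.
"""
import platform
import sys
import traceback

import requests

TRACKER_URL = "https://bugs.example.com/api/new-case"  # placeholder endpoint


def report_crash(exc_type, exc_value, exc_tb):
    payload = {
        "title": f"Crash: {exc_type.__name__}: {exc_value}",
        "details": "".join(traceback.format_exception(exc_type, exc_value, exc_tb)),
        "build": "1.2.3",                               # stamped in at release time
        "platform": platform.platform(),
    }
    try:
        requests.post(TRACKER_URL, json=payload, timeout=5)
    except requests.RequestException:
        pass  # never let the reporter mask the original failure
    # Still show the traceback locally via the default handler.
    sys.__excepthook__(exc_type, exc_value, exc_tb)


sys.excepthook = report_crash
```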
For each release/build that shows up on the customer's test rig, we provide a (short) screencast, narrated by the project lead or primary developer, demo-ing any new features. The release notes contain any long-term issues or questions we want the customer to think about (i.e. issues that can't be resolved immediately by a phone call or email), and the application displays these notes for the user.
Finally, and possibly most importantly, we get the customer and/or the customer's liaison an account on our calendar server and configure their calendar app to make use of that account. This then goes both ways--we can schedule time (on site, phone, email, etc.) with the customer and they can do the same with our developers.
One option: Install a customer proxy at the "customer partner" site who can extract the information that you need when those customers are available. Have these proxies build the solid relationships that allow them to represent the customer view. Their time is all yours. And when questions arise that they cannot answer, they have ready access to your customer partners - even if in the coffee line.
The whole point of the customer in agile is to have open and free discourse with the developers (i.e. immediate feedback). If your actual customers cannot provide this, then you need an intermediary/proxy who can fill this role. You don't need actual customers, you just need someone who can represent the customers' interests well enough to meet your customers' needs.
Just a few ideas:
If you do choose to use a wiki, make sure it supports a whole-wiki "recent changes" list, and preferably one that is specific to each user. The more distant from development people are, the more likely they are to have email as the metaphor for their computer use. If they can't immediately tell when there's something new for them to see, they will never explore it. You also need ways to signal to them when a matter needs their attention, or they will treat changes like CCs.
I'm a big believer in creating video screen captures of interactions (narrated) and distributing them to users. Unlike a real demo, customers don't feel like they need to interrupt, and they can rewind and re-watch the same interaction over and over, paying attention to little details.
Finally, if you do distribute prototypes, make sure to send someone (or at least a screen sharing session) to see how the prototypes are used. Contextual design is effective. You can count on people using your prototype differently from the way you expect, and you have to understand how they use it to really understand where the issues are, even if they don't report them.
Have you considered something like LogMeIn?
This would allow customers to either log in to a PC on your network already running your application, or alternatively allow you to install/update the application on one of their computers.
This would solve the remote customer issue and would also support the ongoing continual customer feedback requirement in the agile process.
I used it at a previous company for technical support, and there is no reason (except maybe cost) that it would not work for your situation.
It is also a great way to actually see how users are using your application and therefore find out what works and what doesn't.
First of all, make sure that you have a product manager or a product owner close to the developers. This person will manage the relationship with the customer.
Then, the product manager can demonstrate the product to the customer at the end of each iteration and also ask customers questions when the developers need feedback to implement a user story.
It is amazing the positive feedback you can get from customers when you involve them.
We did not use a wiki; most of the communication was done via e-mail, phone, and a screen-sharing application (we are using GoToMeeting, but there are tons of alternatives out there).
You should probably do a kick-off once with everyone at one place. Face-to-face time is invaluable. That includes all developers. Prepare some metaplan questions, but also have enough time to just mingle.
I think by most definitions of Agile processes that have high dependence on customer involvement you've already missed "best practice", which would be for an on-site, and preferably "in-team" customer present at all times. So I suppose we're looking for a "next-best practice". :)
There's the possibility of introducing a "proxy customer" on-site. I have to admit to being very sceptical about the value of a proxy customer. I'm concerned about the risk of introducing some sort of second-rate and otherwise unnecessary business-analyst function to the mix, with the reduced signal-to-noise ratio and potential for garbled messages. It also carries the risk of allowing busy real customers to reduce their involvement in the process, which is likely to lead to dissatisfaction. I wonder if there might be someone with good domain knowledge who has recently retired and might be available to act in this capacity as a consultant?
Communication bandwidth with remote customers is astonishingly lower than face-to-face, something I had not fully realised until I started dealing with users in another country. Even with video the loss is significant.
How long are your iterations? How hard is planning them? Might it be easier to go for longer iterations and get more planning done less frequently, or to reduce iteration length and hold smaller but more frequent planning sessions? Is more than one customer involved?
Do you have a usable and available build at the end of each iteration? Is there time for involved users to have hands-on time before the next planning session? Keeping users engaged by shipping frequently would seem on the surface to be a Good Idea, which perhaps argues for small, frequent iterations (a week? two weeks?).
The wiki idea might work: have you looked at the FIT Framework? It's a sort of integrated acceptance-test/wiki, which might help in getting acceptance tests from remote customers. I think I'd also look to provide some sort of (separate or integrated) "project dashboard", possibly pushed regularly to key customers as well as available on demand. Use it as a substitute for things like post-its on whiteboards, Big Visible Charts and the like. There are a number of open-source or low-cost options that may serve; writing your own simple alternative need not be too time-consuming or costly, either.
Above all, remember that "Agile" is a kind-of catch-all label for developments that are carried out with an emphasis on the values and principles espoused in the Agile manifesto. What is considered "best" in one situation may not be so in another. If you understand the principles and regularly review your methods with a critical eye then you're probably going to be close enough to the best practice application to your situation.
I haven't looked at it for some time but with Beck and Fowler on the author list, there should be something useful in Planning Extreme Programming.
In my previous position at drchrono.com I aggregated data/feedback/iteration requests from 20,000 clinicians across the country. The best way to do this is to evangelize a site like uservoice.com. I held "daily live web demonstrations" with sometimes 50 to 100 doctors (doctors signed up right from our website). In these demos I would demonstrate our current product and evangelize UserVoice to drive their feedback into a useful tool for our development team. All of this was done remotely and led to a 1,400% overall increase in recurring revenue growth.

How do I represent features v. tasks in FogBugz 6?

In FogBugz 6, how do I represent the concepts of a "feature" versus a "task"? As defined by Joel Spolsky, the owner of Fog Creek Software (which makes FogBugz), a feature is essentially a user-visible capability. To estimate the time to implement a feature, the developer should break the implementation into short tasks (2 days max) to ensure they think about each step.
FogBugz has only cases. I can't tell whether they're supposed to correspond to features or tasks. Some FogBugz documentation indicates that each case is a task, which is fine except there is no way to group all the tasks for a given feature together. This is especially odd given that, before FogBugz 6, Joel advocated using a spreadsheet that grouped all the tasks for each feature. But his own software doesn't appear to meaningfully support that grouping.
I realize that the Joel article I reference includes a disclaimer pointing to a later article. However, the later article does not settle this issue; in fact, it doesn't discuss features versus tasks at all, which is surprising given how well Joel advocates for those concepts in the first article.
For FogBugz 6.0 and earlier:
Make a case for each work item (task). FogBugz calls them "Features," only to distinguish them from bugs, but you do want one case for each task.
The best way to group a bunch of tasks is to make a Release (Fix-For) and assign all of the tasks to that release.
Responding to AviD's comment/question to Joel:
So, if you have 10 new features coming in the next version, with each feature needing 5 tasks to implement, you recommend creating 10 releases? And how do I define that these are the features/"releases" that are to be included in the upcoming release?
Here is how we dealt with this specific problem in our development process:
1. First, we made a regular release schedule: monthly internal releases and quarterly external releases. This schedule never changes, but task assignment / feature completion does. This is hugely important in terms of simplifying our inter-human communication: don't try to argue with the calendar.
2. Major features ("10 new features" in your example) are turned into cases (e.g., case 101 to case 110).
3. Each task that is a sub-component of a major feature also gets created as a sub-case, with a description of what makes this chunk of work an important part of the larger picture. Previously, in FogBugz 6, we used the "See also" feature by letting it search the text for us ("This is a sub-component of case 101", for example). This was effectively the same thing but less aesthetic.
4. Now that we've broken down the work to its finest level of usefulness, we bring the actual developers into the discussion. Each task and major feature is individually assigned to a particular developer.
5. The developer determines when they can get their assigned work done by picking the appropriate internal release date that they think they can commit to.
6. At this point, we have a rough sketch of what will get done for each release. Further refinements continue as the working people actually estimate the hours that they'll need to do the work, enabling evidence-based scheduling, etc.
For AviD's question, though, he would have the release-assignment problem solved by step 5 above.
However, I think point 6 is the most interesting as that's where you really get a solid schedule. For example, if developers are having trouble estimating a larger task, they break it down into sub-cases even further. Notice how my assessment of "finest level of usefulness" can differ (perhaps greatly) from the person who really needs to get the work done.
This is also a time when a developer can reach out to someone else and say "I can do most of this but it would really help if person X could help me with this little piece Y." This is actually where I get most of my development tasking: I personally sit in multiple places during this process, from large-scale planning meetings to little fiddly tasks that no-one else has time to do.
PS: Making it a personal goal to get this answer rated higher than Joel's.... ;-)
PPS: My original response has now been overtaken by events, since FogBugz 7 has lovely sub-tasks. Program managers love those reports.
You may have better luck asking your questions in the FogBugz Discussion Forum
We use a combination of projects in order to accomplish your grouping goals. We also commonly set up a project "parking" wiki where links to development cases, technical documentation, systems requirements, user documentation, external links to resources, etc. can all be placed. It provides a good "one-stop shop" for everything related to that project.
As part of that wiki, we then set up two specific projects. One relates to the large overall goals/outlines, similar to what might correspond to your project-management charts. The other relates to the development tasks for each feature as they are broken down into smaller, more manageable chunks. You can then, as was mentioned, use case linking both to reference the "master" cases in the other project and to reference the project wiki itself, so that you can quickly and easily get back to all of your project-related information, conveniently in one spot.
You can accomplish a pile of different organizational structures using FogBugz; you just have to approach things a little differently sometimes in order to handle each and every situation.
Hope that helps.
haha, that article has a disclaimer, but I understand what you are saying.
We use FogBugz, and the only 'Feature' that I am aware of is under Category, and I don't think you can associate it with sub-tasks.
You can type "Case N is the feature for this task" in the case text if you just want to reference it.
That kind of thing sounds like it lies more in the project-management domain than in software used to track bugs.
That's a good question; I have asked that myself, too.
We are currently test-driving FogBugz for 45 days in a group of 5 developers, and we currently create a "release" for each major feature. In fact we do not release it on its own, but ship multiple releases together when something is ready.
There should definitely be some sort of advanced task grouping in FogBugz.

Resources