We use Jira for issues, bugs, estimates, and timesheets.
I've seen two approaches to using Jira, and I want to hear what other people are doing.
Approach 1:
Log one feature, such as "Allow user to save as CSV". The task is assigned to a Developer, and the workflow progresses through Not Started, In Progress, and Complete. Once done, it's assigned to a Tester, who moves it to Testing and then to Tested/Completed.
Approach 2:
Log a task/user story called "Allow user to save as CSV". The developer then logs sub-tasks such as "Front end" and "Back end", and the tester logs sub-tasks such as "Create test plan" and "Test right-clicking". Once all dev and test sub-tasks are complete, someone marks the parent task as completed.
I prefer the first way. I've heard the second way is better for tracking time, but it seems harder to manage what's going on when there's a sea of issues in Jira.
My company uses the first approach, and it seems to be working so far (about a year now). With either approach, I really love how everything gets logged in JIRA for history tracking.
I recommend using sub-tasks when you need work to proceed in parallel, or when the parent task is really large and the sub-tasks are around a few days each. But don't create sub-tasks unless they are needed.
I am trying to find a way to make TFS automatically create a second task when another task has been completed. The idea is that once a developer has completed a task and moved it to the done column, a second task is created from the first one for the QA/QC members to start looking at the code and doing what they need to do.
Any suggestions?
There is no built-in feature, but there are tools, many of them open source, that can help you implement this.
One is TFS Aggregator; I am part of the core team, and we are always looking for new contributors.
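If you would rather roll your own, a "work item updated" service hook pointed at a small web service can do it: when the event shows the task moving to Done, the service creates the QA task through the work-item REST API. Here is a minimal sketch of that creation call; the collection URL, API version, and field values are assumptions about your setup, not known facts about it:

    require "net/http"
    require "json"
    require "uri"

    # Hypothetical on-prem collection/project URL and a personal access token.
    TFS_URL = "http://tfs:8080/tfs/DefaultCollection/MyProject"
    PAT     = ENV["TFS_PAT"]

    # Create a QA task linked as a child of the completed work item.
    # parent_url and title would come from the service-hook payload.
    def create_qa_task(parent_url, title)
      uri   = URI("#{TFS_URL}/_apis/wit/workitems/$Task?api-version=4.1")
      patch = [
        { op: "add", path: "/fields/System.Title", value: "QA: #{title}" },
        { op: "add", path: "/relations/-",
          value: { rel: "System.LinkTypes.Hierarchy-Reverse", url: parent_url } }
      ]
      req = Net::HTTP::Post.new(uri, "Content-Type" => "application/json-patch+json")
      req.basic_auth("", PAT)          # PATs go in the password slot
      req.body = JSON.generate(patch)
      Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
    end

The duplicate prevention that TFS Aggregator gives you for free (not re-creating the QA task if the item bounces back and forth between states) is something you would have to add yourself.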
This should be managed via your user stories, either by having a task for testing that the QA team can grab once the user story is ready for QA, or by defining a board column for the user story to indicate that it's development-complete and ready for QA.
Once a user story starts development, you shouldn't be adding tasks to it. Each task should be defined and estimated before the sprint starts, for capacity-planning purposes.
Currently we have the following states/columns in JIRA:
Open/Todo (-> Developer takes the task and starts work)
In Progress (-> Developer sets the task to Done)
Done (-> QA tests on staging and sets the task to Ready to Deploy, or reopens it)
Ready to Deploy (-> Developer deploys these tasks on the release date)
Deployed (-> QA/stakeholder tests the task again on live/production and closes or reopens it)
Done/Closed
In my current understanding this is wrong, because we are handling two concerns in one status dimension: development and deployment. I would like to decouple sprints from releases/versions. Currently we can't end a sprint until all tickets are approved on production, which leads to bottlenecks.
What would be your suggestion? One idea I have in mind: limit the statuses to Open, In Progress, Done, and Closed, and handle deployment/releases via JIRA's built-in versioning. If a problem occurs on production, a bug ticket must be opened.
Otherwise I don't see an option, since the versioning/releasing in JIRA 6.4 does not seem to include status columns of its own.
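For example, "what is done but not yet live" could become a saved filter on the version rather than a workflow status; something like this (the project key and version name are made up):

    project = SHOP AND fixVersion = "2016.07" AND status = Done ORDER BY priority DESC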
Is releasing to production part of your team's 'definition of done'? If it is, then the workflow you have makes a lot of sense.
There is no separation of concerns between development and deployment. Code that has been developed but not deployed has no value to the business. Development is simply a step in the process towards release, which is the point at which value is realised.
A sprint is a timebox, not a set amount of work. When the timebox ends, the work you still have in progress is not 'done'. If you are regularly unable to complete all the work you bring into a sprint, that suggests you are bringing too much work in. The team's velocity, which is a measure of the work that gets 'done' each sprint, should be a good indication of what your sprint capacity is.
If your bottleneck is the release to production and verification of the release, then perhaps you should focus some effort on improving this process? Possibly this could mean more release automation or better coordination with the stakeholders over validating releases.
I am reviewing JIRA for possible use within several development teams at the company I work for. We use Scrum as a base for our project management. We have good, self-organizing teams, almost no assigned work, etc. JIRA seems great for some of this, but we are struggling with managing process vs. technical tasks, and with something we call "issue bundling".
Process control. Currently we will create a story, say "The graph on the Profit and Loss report has issues with overlapping legend text". Okay, good enough. We will then create a technical sub-task; for simplicity let's say it's "Research and correct the issue". Next we create a set of process sub-tasks: Peer Review, Make Build, QA Testing, Merge, Track. Each of these can then be independently assigned to users and placed into the Pending bin on our scrum board. (BTW, we use a Pending, Awaiting Action, In Progress, Done, Merged model rather than a to-do, in progress, done model. Pending basically means: I'm waiting until I'm next in priority.)

During development the programmers will grab the technical task, set themselves as assignee, and move it to In Progress. When they are done they will move it to the Done bin, then update the Peer Review process sub-task to Awaiting Action and set the assignee to their code partner. Emails are sent, peer review is done. When that is completed, the peer-review partner moves Peer Review to the Done bin, sets the Make Build sub-task's assignee to the build manager, and moves it to Awaiting Action. The build manager sees this, makes the build, moves it to Done, and updates the QA Testing ticket to an Awaiting Action status... you get the point.
It's working, but are there any suggestions on alternatives, best practices, etc.? Is creating technical and process sub-tasks not the way to go? One thing I notice is that we have to filter the issue list to hide the sub-tasks, and the scrum board can get pretty overwhelming for a stakeholder who just wants to see the status of the parent story. Since the parent story does not move until the sub-tasks move, they don't see anything of interest to them; they can't even tell the story is "in progress" while the sub-tasks are moving. Ugh.
Issue bundling. We often have a set of issues that are related, perhaps tightly, but typically more loosely. For example, all issues related to reports in our software. At present there are, say, 15 known issues. These issues may be on different reports in the system, with specific steps to reproduce, etc. When we are gearing up for a sprint we will select bundles of these related issues, because QA can more efficiently test a bunch of small, generally related fixes in one pass rather than testing each report as a separate process.
Currently we convert each issue into a sub-task of a bundle. The bundle, for example, might simply be called "Report Fixes 1", and it will have, say, 5 technical sub-tasks, each being a different report bug to be fixed. We can then attach the process-control items from above to the overall bundle. We also know that we won't merge until all items in the bundle are done, so they all get the same version.
However, as stated above, visibility is reduced: you cannot easily see the status of the sub-tasks now that they are in the bundle.
Again, best practice? Ideas? How are others handling this?
Brian,
Have you considered combining your process sub-tasks (Peer Review, Make Build) and your scrum board "bins" (Pending, Awaiting Action)? Sub-tasks are the usual way to provide parallel tasks in JIRA, but the way you describe the whole process, it sounds more linear. If each story really gets bounced from one assignee to another, just change the assignee and the status.
"Issue bundling" sounds like Epics in GreenHopper to me. You can also do a similar thing using a Labels field (standard or custom) to group issues.
~Matt
Here is how we are handling this:
The process sub-tasks you mentioned are statuses in our implementation. So I'll have the story broken into technical tasks that the dev finishes, and then the story moves to a "Pending Review" status, from where it goes to Make Build, QA Testing, and so on. This is pretty much what Matt said as well. This workflow gives the stakeholders a sense of the progress being made in the sprint, which is very helpful.
As such, there is no single "best practice" in JIRA; it is very flexible, and one can use it the way one wants/needs.
I agree with you on Epics not being a complete solution. We overcome this by adding labels and creating filters and swimlanes based on these filters.
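For example, a stakeholder-facing board filter or swimlane that groups a bundle by its label and hides the sub-task noise might look like this (the label name is just an illustration):

    labels = report-fixes AND issuetype not in subTaskIssueTypes()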
My organization is building a new version of our ticketing site and is looking for the best way to build an online waiting room when the number of users in our purchase path exceeds a certain limit. The best version of this queue would let new users in after existing users have either completed their purchase or have exceeded a timeout limit after entering the path.
I'm trying to get an idea of how this has been implemented by other organizations. Has anyone out there done something similar or have any experience with this? We have some ideas, but I'd like to get a sense of what solutions have been tried and what problems those solutions have run up against.
Just to be complete, this site is being built in Ruby on Rails, though I'd love to hear about how people have solved this regardless of platform.
Edit: To clarify: the need for the queue is not primarily to reduce load, but to limit the speed at which the web channel purchases tickets relative to people buying in other ways, like over the phone.
Before I outline one method for this, I want to point out that what you want to do doesn't make a lot of sense. Services on the web aren't like a physical store, where I can walk up and see that it's crowded and decide to stay or not. Queueing people on your site strikes me as shifting the blame from you (unable or unwilling to adequately provision resources) to me (punishing me for trying to use your site).
If you're selling something like show tickets, where quantity is limited and each item is tied to a seat, I think it's better to reserve items and time out those reservations if they aren't paid for in a timely manner. Ticketmaster does this, and I think it's a much better solution than blocking people at the door.
If you still want to go down this path, then I'd design the system like this:
As customers come to your site, record their arrival time. As they interact with the site, record a "last seen" time. "Last seen" will be used to determine activeness. You'll need a background job running very frequently to expire sessions quickly.
Once your limit is hit, you have an ordered queue of people who are blocked. As customers complete their transaction or time out, you'll mark the next person in the queue for entry into the purchase path.
For queued users, their browsers will make a request on a regular basis, checking to see if you've let them in yet. If yes, they proceed to the purchase path. If no, they continue to wait.
The purchase path needs a mechanism to check whether someone is trying to circumvent your waiting area, and send them back.
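To make that concrete, here is a minimal sketch of the bookkeeping in Ruby with Redis; the key names, capacity, and timeout are assumptions, and real code would also need to handle races between concurrent requests:

    require "redis"   # assumes the redis gem

    REDIS       = Redis.new
    CAPACITY    = 200   # max concurrent users in the purchase path (assumed)
    SESSION_TTL = 300   # seconds of inactivity before a slot is reclaimed

    # Call on every request from a user inside the purchase path,
    # so their "last seen" time stays current.
    def touch(user_id)
      REDIS.zadd("active", Time.now.to_i, user_id)
    end

    # Background job, run every few seconds: expire idle sessions.
    def reap_idle_sessions
      REDIS.zremrangebyscore("active", 0, Time.now.to_i - SESSION_TTL)
    end

    # Called by the polling endpoint for queued users. Arrivals go into
    # a FIFO queue (a sorted set scored by arrival time) and are admitted
    # once they rank within the currently free capacity.
    def admitted?(user_id)
      reap_idle_sessions
      REDIS.zadd("waiting", Time.now.to_f, user_id, nx: true)  # no-op if already queued
      free = CAPACITY - REDIS.zcard("active")
      rank = REDIS.zrank("waiting", user_id)
      return false if free <= 0 || rank.nil? || rank >= free
      REDIS.zrem("waiting", user_id)
      touch(user_id)
      true
    end

The queued browser polls an endpoint that calls admitted? every few seconds and redirects into the purchase path on true; the purchase path itself can treat membership in the "active" set as the check that sends queue-jumpers back.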
You might find the Online queuing for ticketing guide helpful. Check their repository on GitHub.
They have integrations with Ruby on Rails, PHP, .NET, iOS, Android, and similar platforms.
Queue-it enables you to gain control of website overload during extreme traffic peaks by offloading end users into an online queue.
When a peak traffic event occurs on a website, the online queue system sends users to the virtual waiting room environment where the users wait and are redirected back to the website at a rate it can handle.
I am curious how others manage code promotion from DEV to TEST to PROD within an enterprise.
What tools or processes do you use to manage the "red tape", entry/exit criteria side of things?
My current organisation is half stuck between some custom online-forms functionality and paper-based processes for submitting documents and gathering approvals and reviews.
All of this is left in the project managers' hands: tracking what has been submitted, reviewed, and approved, and advising management of any roadblocks that may need approval to be "overlooked" before an application can be promoted to the next environment.
A browser-based application would be ideal... so what's out there? Please show me that your google-fu is better than mine.
It's hard to find a good one via Google. There is a vast array of tools out there for issue management, so I'll mention what we use and what we would like to use.
We currently use Serena products. They have worked well for us in the past. TeamTrack is our issue management tool and handles the life cycle of any issue we work on. Version Manager is our source control and supports promotional groups like DEV, TEST, and PROD. We use DEV, TSTAGE, TEST, PSTAGE, and PROD to signify the movement from one to the other, but it's much the same. The two products integrate nicely so that the source associated with the issues is linked, but we have no build process set up in this environment. It's expensive, but it works well.
We are looking to move to a more common stack using Jira for issue management, Subversion for source control, Fisheye to link the two together, and CruiseControl for build management. This is less expensive, totaling a few thousand for an enterprise license, and provides all the same features, with the added bonus of SVN, which is a very nice version-control system.
I hope that helps.
There are a few different scenarios that I've experienced over the years:
Dev -> Test: There is usually a code-freeze date that stops work on new features; the code that has been tagged/labelled/archived is built, copied onto the test machines, and the tests are run. This is also usually the least detailed of any push.
Test -> Prod: This requires production to go down briefly, which can mean a "gone fishing" page goes up, or IIS doesn't have any sites running while the code is copied over. There are special cases where a load balancer can act as a switch, so the promotion happens without customers experiencing any downtime: the ones on the older server move over once their session ends.
To elaborate on that switch idea: the setup is two potentially live servers, with the load balancer sending all traffic to just one machine; the switch is flipped once the other server has the updated code and is ready to go live.
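As an illustration, with something like nginx in front, the "switch" can be as simple as which pool the live upstream points at (the server names here are made up, and a graceful session drain needs more than this sketch):

    # all traffic goes to the blue pool while green receives the new code;
    # swap the commented line and reload to switch
    upstream live {
        server app-blue.internal:8080;
        # server app-green.internal:8080;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://live;
        }
    }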
There can also be a staging environment between test and production, where the process is similar in that there is a set date when the promotion happens.
Where I used to work there were merge days, when a developer spent most of a day in Perforce merging code so that it could be promoted from one environment to another.
Now there are a couple of cases where this isn't used:
"Hotfixes" or "Hot patches" would occur where I used to work and in this case the specific files were copied up into the staging and production environments on its own since the code change had to get into Production ASAP since something broke in production or some new thing that had to get done that takes 2 minutes gets done. In this case, the code change getting pushed in had to be reviewed and approved before going out.
Those are the different approaches I've seen. Generally there are schedules, and timelines may have to be changed or additional resources brought in to hit a hard date, such as making sure such-and-such is ready for a conference on a particular weekend.
Of course, in a few places there has been the "Oh, was that broken? Let me see..." followed a few minutes later by "No, see, it isn't broken for me," where someone changed things without asking permission or anything; some companies still have what they call "cowboy programming."
Another point is the scale of the release:
1) Tiny - This is the case where one web page goes up so that user X can do Y.
2) Small - A handful or so of files; not really complicated, but not exactly trivial.
3) Medium - Going from one environment to another requires changing a bunch of files, usually with scripts to move them.
4) Big - There are scheduled promotions, and various developers are asked who is taking which shift when the live push happens. I had this in a case where there was a data migration to do in addition to a release of some new e-commerce sites.
5) Mammoth - Everything is brand new, including how it will be used. I don't think I've ever seen one of this size, but I'd imagine Microsoft or Google have releases like this.
Most releases fall somewhere in that spectrum, so the amount of planning and preparation can vary quite a bit. And let's not forget that regulatory compliance can be its own pain in getting some things done.