Currently we have the following states/columns in JIRA:
Open/To Do (-> Developer takes the task and starts work)
In Progress (-> Developer sets the task to Done)
Done (-> QA tests on staging and sets the task to Ready to Deploy, or reopens it)
Ready to Deploy (-> Developer deploys these tasks on the release date)
Deployed (-> QA/Stakeholder tests the task again on Live/Production and closes or reopens it)
Done/Closed
In my current understanding this is wrong, because we are handling two concerns in one status dimension: development and deployment. I would like to decouple sprints from releases/versions. Currently we can't end a sprint until all tickets are approved on production, which leads to bottlenecks.
What would be your suggestion? One idea I have in mind: limit the statuses to Open, In Progress, Done, and Closed, and handle deployment/releases via JIRA's built-in versioning. If a problem occurs on production, a bug ticket must be opened.
Otherwise I don't see another option, since the versioning/releasing in JIRA 6.4 does not seem to come with status columns of its own.
Is releasing to production part of your team's 'definition of done'? If it is, then the workflow you have makes a lot of sense.
There is no separation of concerns between development and deployment. Code that has been developed but not deployed has no value to the business. Development is simply a step in the process towards release, which is the point at which value is realised.
A sprint is a timebox, not a set amount of work. When the timebox ends, the work you still have in progress is not 'done'. If you are regularly unable to complete all the work you bring into a sprint, that suggests you are bringing too much work in. The team's velocity, which is a measure of the work that gets 'done' each sprint, should be a good indication of what your sprint capacity is.
If your bottleneck is the release to production and verification of the release, then perhaps you should focus some effort on improving this process? Possibly this could mean more release automation or better coordination with the stakeholders over validating releases.
We use Jira for issues, bugs, estimates, and timesheets.
I've seen 2 approaches to using Jira and I want to hear what other people are doing.
Approach 1:
Log one feature, such as "Allow user to save as CSV". The task is assigned to a developer and the workflow progresses from Not Started to In Progress to Complete. Once done, it's assigned to a tester, who moves the workflow to Testing and then to Tested/Completed.
Approach 2:
Log a task/user story called "Allow user to save as CSV". The developer then logs sub-tasks such as Front End and Backend, and the tester logs sub-tasks such as Create Test Plan and Test Right-Clicking. Once all dev and test sub-tasks are complete, someone marks the task as completed.
I prefer the first way; I've heard the second way is better for tracking time, but it seems harder to manage what's going on with a sea of issues in Jira.
My company uses the first approach. This seems to be working so far (about a year now). With either approach, I really love how everything is logged in JIRA for history tracking.
I recommend using sub-tasks when you need work to proceed in parallel, or if the parent task is really large and the sub-tasks are around a few days each. But don't create sub-tasks unless they are needed.
I'm using GitHub's integration of Travis CI with Coverity Scan (the free versions of all these services) to test my FLOSS code.
The problem I'm facing is that when working on the code continuously, I hit the Coverity quota pretty quickly.
Since I'm working on multiple projects simultaneously, it can easily happen that I switch away from a given project before I'm allowed to submit another Coverity build, so flaws can sit in the code for weeks even though Coverity would have caught them easily.
I would like to avoid this.
The first measure to avoid hitting the quota too frequently is to use a dedicated branch (usually coverity_scan) which does not receive pushes as often as the master and/or feature branches.
However, this puts cognitive load on the user (me), which I would also like to avoid.
Also, sometimes I still hit the quota (some of my projects are in the 100k-500k lines-of-code range, so they have a lower submission threshold than usual).
What I would like is to be able to automatically re-trigger a Coverity scan once the quota has expired, if (and only if) the current build hit the quota.
Is something like this possible with plain Travis CI/Coverity features?
Or would I have to set up a separate hook that monitors the Coverity quota and Travis CI builds?
You don't need to run Coverity on every check-in. It's just too slow.
You should configure your (Coverity build) system to poll your repo for changes, but check infrequently, something like a few times per day.
This will trigger the build when things change, but not on every change that is detected.
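If you do want to automate that infrequent trigger instead of remembering to push to the coverity_scan branch, a scheduled script (run from cron a few times per day, say) can ask Travis to build that branch. The sketch below is only illustrative: it assumes the Travis CI API v3, a token in a TRAVIS_TOKEN environment variable, and a placeholder repository slug, none of which come from the original setup.

```python
#!/usr/bin/env python3
"""Periodically trigger a Travis CI build of the coverity_scan branch.

Sketch only: assumes the Travis CI API v3 and an API token in the
TRAVIS_TOKEN environment variable; run it from cron a few times per day.
"""
import json
import os
import urllib.request

REPO_SLUG = "your-user%2Fyour-project"  # hypothetical, URL-encoded "owner/repo"
API = "https://api.travis-ci.com"       # or api.travis-ci.org, depending on your account


def trigger_coverity_branch():
    # Ask Travis to run a build of the coverity_scan branch only.
    body = json.dumps({"request": {"branch": "coverity_scan"}}).encode()
    req = urllib.request.Request(
        f"{API}/repo/{REPO_SLUG}/requests",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Travis-API-Version": "3",
            "Authorization": "token " + os.environ["TRAVIS_TOKEN"],
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())


if __name__ == "__main__":
    trigger_coverity_branch()
```

The Coverity quota itself isn't queried here; triggering only a few times per day, as suggested above, keeps you under it without needing to.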
I am reviewing JIRA for possible use within several development teams at the company I work for. We use Scrum as a base for our project management. We have good, self-organizing teams, almost no assigned work, etc. JIRA seems great for some of these items, but something we are struggling with is managing process vs. technical tasks, and something we call "issue bundling".
Process control. Currently we will create a story, say "The graph on the Profit and Loss report has issues with overlapping legend text". Okay, good enough. We will then create a technical sub-task; for simplicity let's say it's "Research and correct the issue". Next we have a set of process sub-tasks that we create: Peer Review, Make Build, QA Testing, Merge, Track. Each of these can then be independently assigned to users and placed into the Pending bin on our scrum board (BTW, we use a Pending, Awaiting Action, In Progress, Done, Merged model rather than a To Do, In Progress, Done model). Pending basically means: I'm waiting if I'm next in priority. During development the programmers will grab the technical task, set themselves as assignee, and move it to In Progress. When they are done they will move it to the Done bin, then update the Peer Review process sub-task to Awaiting Action and set the assignee to their code partner. Emails are sent, peer review is done. When that is completed, the peer review partner will move Peer Review to the Done bin, assign the Make Build sub-task to the build manager, and move it to Awaiting Action. The build manager sees this, makes the build, moves it to Done, and updates the QA Testing ticket to an Awaiting Action status; you get the point.
It's working, but are there any suggestions on alternatives, best practices, etc.? Is creating technical and process sub-tasks not the way to go? One thing I notice is that we have to filter the issue list to hide the sub-tasks, and the scrum board can get pretty overwhelming for the stakeholder who just wants to see the status of the parent story. Since the parent story does not move until the sub-tasks move, they don't see anything that is of interest to them, not even whether the story is "in progress" while the sub-tasks are moving. Ugh.
Issue bundling. We often have a set of issues that are related, sometimes tightly but typically only loosely, for example, all issues related to reports in our software. At present there are, say, 15 known issues. These issues may be on different reports in the system, with specific steps to reproduce, etc. When we are gearing up for a sprint we will select bundles of these related issues. The reason is that QA can test a bunch of small, generally related fixes more efficiently in one pass rather than testing each report as a separate process.
Currently we convert each issue into a sub-task of a bundle. The bundle might simply be called "Report Fixes 1", and it will have, for example, 5 technical sub-tasks, each being a different report bug to be fixed. We can then add the process control items from above to the overall bundle. We also know that we won't merge until all items in the bundle are done, so they all get the same version.
However, as stated above, visibility is reduced, as you cannot easily see the status of the sub-tasks now that they are in the bundle.
Again, best practice? Ideas? How are others handling this?
Brian,
Have you considered combining your process sub-tasks (Peer Review, Make Build) and your Scrum board "bins" (Pending, Awaiting Action)? Sub-tasks are the usual way to provide parallel tasks in JIRA, but the way you describe the whole process, it sounds more linear. If each story really gets bounced from one assignee to another, just change the assignee and the status.
"Issue bundling" sounds like Epics in GreenHopper to me. You can also do a similar thing using a Labels field (standard or custom) to group issues.
~Matt
Here is how we are handling this -
The process sub-tasks you mentioned are statuses in our implementation. So I'll have the story broken into technical tasks that the dev finishes, and then the story moves to a "Pending Review" status, from where it goes to Make Build, QA Testing, and so on. This is pretty much what Matt said as well. This workflow gives the stakeholders a sense of the progress being made on the sprint and is therefore very helpful.
As such, there is no single best practice in JIRA; it is very flexible and one can use it the way one wants/needs.
I agree with you on Epics not being a complete solution. We overcome this by adding labels and creating filters and swimlanes based on these filters.
Currently, after a build/deployment, our app (58 projects, large ASP.NET MVC 3 front end) takes ~15-20 seconds to load as it goes through the whole 'recycling the app pool' phase (release configuration).
We do have a web farm if that alters people's answers, but the question really is:
What are people doing in large-scale applications where a maintenance window isn't viable (we're a 24/7, very active website) to minimize that initial 'first hit' on the app pool recycle after a deploy?
We've used a number of tools to analyze that startup time and there doesn't really seem to be any way to bring it down, so what I'm looking for is the techniques people employ to minimize the impact of a large application deploy on users.
By default, if you change 15 files in an ASP.NET application at once (even via FTP), the app pool is automatically recycled. You can change that number of files, but as soon as web.config or bin files are changed it needs to recycle. So in my opinion the ideal solution for an environment like yours would be as follows:
4 web servers (this is an arbitrary number)
Each server has a status.aspx page that the load balancer looks at. Use TeamCity to take 2 of these servers "offline" (off the load balancer) and wait 20 seconds for the traffic to filter across. A distributed cache will help keep user experience problems to a minimum.
Use TeamCity to deploy to those 2 servers, run your automated tests, etc., and once you are happy put them back into the farm, then take the other 2 offline and deploy to those.
This can all be scripted/automated. The only issue is that any schema changes that are not backwards compatible may not allow running the new version of the site in parallel with the old version for the 20 seconds it takes the load balancer to kick back in.
This is good old-fashioned canary releasing; there are some patterns here http://continuousdelivery.com/patterns/ to take into consideration. I'd also suggest a copy of the Continuous Delivery book; it's like a continuous delivery bible and has got me out of a few situations :)
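For what it's worth, here is a rough sketch of how the rotate-out/deploy/rotate-in steps above could be scripted. Everything in it is a placeholder: the health-check toggle, the deploy command, and the server list are stand-ins for whatever TeamCity or your own scripts actually do.

```python
#!/usr/bin/env python3
"""Rolling deployment sketch: drain, deploy, warm up, re-enable, half the farm at a time.

All server names, paths and commands below are hypothetical placeholders.
"""
import subprocess
import time
import urllib.request

SERVERS = [["web1", "web2"], ["web3", "web4"]]  # deploy to half the farm at a time
DRAIN_SECONDS = 20                              # time for the load balancer to stop sending traffic


def set_health_check(server, enabled):
    # The load balancer polls status.aspx; disabling it makes the check fail
    # so the server drops out of rotation (hypothetical remote command).
    action = "enable" if enabled else "disable"
    subprocess.run(["ssh", server, f"toggle-status-page {action}"], check=True)


def deploy(server):
    # Placeholder for the real deployment step (TeamCity, robocopy, msdeploy, ...).
    subprocess.run(["deploy-to", server], check=True)


def warm_up(server):
    # Hit the site once so the app pool recycle happens before real users arrive.
    urllib.request.urlopen(f"http://{server}/status.aspx", timeout=120).read()


for group in SERVERS:
    for server in group:
        set_health_check(server, enabled=False)
    time.sleep(DRAIN_SECONDS)  # let in-flight traffic filter across
    for server in group:
        deploy(server)
        warm_up(server)
        set_health_check(server, enabled=True)
```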
At the very least you could run a tinyget script against the application after the deployment completes, which will "warm up" the application; however, if a customer hits your site before the script can run, they will still face a delay. What post-deployment steps do you currently have in place?
In a farm environment you could stage deployments too: take one server out of the load balancer, update it, bring it back online after deployment, then take the other out, complete the deployment, and reintroduce it into the farm. How is your SQL Server set up, clustered?
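For the tinyget-style warm-up mentioned above, a minimal sketch (the URLs are placeholders for your own most-visited routes) could be run as the last step of the deployment:

```python
#!/usr/bin/env python3
"""Post-deployment warm-up: request key pages so the compile/recycle cost is paid up front.

The URLs are placeholders for your own most-visited routes.
"""
import urllib.request

WARM_UP_URLS = [
    "https://www.example.com/",
    "https://www.example.com/account/login",
    "https://www.example.com/reports/profit-and-loss",
]

for url in WARM_UP_URLS:
    try:
        urllib.request.urlopen(url, timeout=120).read()
        print("warmed", url)
    except Exception as exc:  # a failed warm-up hit shouldn't abort the deployment
        print("warm-up failed for", url, exc)
```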
copy and paste from my post here
We operate a blue/green deployment strategy on a 4-tier architecture which has a website spread over 4 servers at the top tier. Due to the complexity the architecture introduced for deployments, we needed a way to deploy without disturbing any traffic to the "live" site. Following Fowler's advice, but not quite in the same way, we came up with a solution where we have 2 sites on each server (a blue and a green, or in our case site A and site B). The live site has the appropriate host header, and once we have deployed to and tested the non-live site, we flip the host headers of the 2 sites so that what was live is now the non-live site, and vice versa. The effect is a robust deployment that can be done in business hours with the highest level of confidence.
This of course complicates your configuration and deployment slightly, but it's worth the effort. I guess it kind of goes without saying that you want to script both the deployment and the host header swapping.
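As a very rough illustration of that host header flip (not the poster's actual scripts), here is a sketch assuming IIS sites managed with appcmd.exe; the site names and host names are placeholders, and a temporary binding is used so that no two started sites ever share a host header mid-swap.

```python
#!/usr/bin/env python3
"""Blue/green host-header swap sketch for IIS using appcmd.exe.

Site names and host names are placeholders; bindings use the standard
appcmd "protocol/ip:port:hostheader" form.
"""
import subprocess

APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"
LIVE_HOST = "www.example.com"       # the host header customers hit
STAGE_HOST = "staging.example.com"  # the host header used for pre-live testing


def set_binding(site_name, host):
    # Replace the site's bindings with a single HTTP binding for the given host header.
    subprocess.run(
        [APPCMD, "set", "site", f"/site.name:{site_name}",
         f"/bindings:http/*:80:{host}"],
        check=True,
    )


def flip(currently_live, currently_stage):
    # Use a temporary host header so no two started sites ever share a binding,
    # then complete the swap: the freshly deployed site takes the live host header.
    set_binding(currently_stage, "swap-in-progress.local")
    set_binding(currently_live, STAGE_HOST)
    set_binding(currently_stage, LIVE_HOST)


if __name__ == "__main__":
    flip(currently_live="SiteA", currently_stage="SiteB")
```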
Firstly, unless you're running Google or something bigger, does a 15-20s load time at 3am for a handful of users really matter that much? I'd say the effort invested in eliminating the occasional lag would far outweigh the 15-20s inconvenience to a couple of users.
I consider it a necessary evil of using ASP.NET unfortunately. Using a pre-compiled site (.DLLs instead of the code-behind files) will lessen the time but not necessarily eliminate it.
The best thing you can do is use something like a status notification bar to warn users they may experience some "issues" during "essential maintenance".
But even then, I'd say in terms of user experience it'd be better to keep quiet and have a handful of people blame their "slow internet" when your site takes 20s to load on one occasion, than announce to all and sundry that it will be slow.
You can also try this approach: http://weblogs.asp.net/scottgu/archive/2009/09/15/auto-start-asp-net-applications-vs-2010-and-net-4-0-series.aspx
Without knowing anything about your site, my first thought is that you might be able to break it down into smaller sites so that they start faster individually.
Second, with your web farm, I assume you have some sort of load balancing device in front of it from which you can pull machines out of the pool while they are being deployed to. Don't put them back in the pool until after you have sent a request against the site to get it started up. You should be able to script this so that you are pretty much clicking a button that takes a machine out, deploys to it, and sends a request after it's back up and happy.
You could consider using aspnet_compiler.exe to precompile your application, because I think the delay after deployment is caused by the compilation phase rather than the app pool recycle itself.
I am curious how others manage code promotion from DEV to TEST to PROD within an enterprise.
What tools or processes do you use to manage the "red tape", entry/exit criteria side of things?
My current organisation is stuck halfway between some custom online forms functionality and paper-based dependencies for submitting documents and gathering approvals and reviews.
All of this is left in the project manager's hands: tracking what has been submitted, passed review, and been approved, and advising management if there are any roadblocks that may need approval to be "overlooked" before an application can be promoted to the next environment.
A browser-based application would be ideal... so what's out there? Please show me that your google-fu is better than mine.
It's hard to find a good one via Google. There is a vast array of tools out there for issue management, so I'll mention what we use and what we would like to use.
We currently use Serena products. They have worked well for us in the past. TeamTrack is our issue management tool and handles the life cycle of any issue we work on. Version Manager is our source control and has the feature of implementing promotional groups like DEV, TEST, and PROD. We use DEV, TSTAGE, TEST, PSTAGE, and PROD to signify the movement from one to the other, but it's much the same. The two products integrate nicely so that the source associated with the issues is linked, but we have no build process set up in this environment. It's expensive, but it works well.
We are looking to move to a more common system using Jira for issue management, Subversion for source control, Fisheye to link the two together, and CruiseControl for build management. This is less expensive, totaling a few thousand for an enterprise licence, and provides all the same features, with the added bonus of SVN, which is a very nice version control system.
I hope that helps.
There are a few different scenarios that I've experienced over the years:
Dev -> Test: There is usually a code freeze date that stops work on new features; the code that has been tagged/labelled/archived gets built and handed to a test environment. It then gets copied onto the machines and the tests are run. This is also usually the least detailed of any push.
Test -> Prod: This requires the minor change that production has to go down, which can mean that a "gone fishing" page goes up or IIS doesn't have any sites running while the code is copied over again. There are special cases where a load balancer can act as a switch, so that the promotion happens and none of the customers experience any downtime, as the ones on the older server will move over once their session ends.
To elaborate on that switch idea, the setup is to have 2 potentially live servers with just one taking requests: the load balancer sends all the traffic to one machine, and the switch is flipped when the other server has the updated code and is ready to go live.
There can also be a staging environment between test and production, where the process is similar in that there is a set date when the promotion happens.
Where I used to work there would be merge days where a developer spent most of a day in Perforce merging code so that it could be promoted from one environment to another.
Now there are a couple of cases where this isn't used:
"Hotfixes" or "Hot patches" would occur where I used to work and in this case the specific files were copied up into the staging and production environments on its own since the code change had to get into Production ASAP since something broke in production or some new thing that had to get done that takes 2 minutes gets done. In this case, the code change getting pushed in had to be reviewed and approved before going out.
Those are the different approaches I've seen used. Generally there are schedules, and timelines may have to be changed or additional resources brought in to make a hard date, e.g. if a conference is on a particular weekend and such and such has to be ready for it.
Of course, in a few places there has been the "Oh, was that broken? Let me see..." and, a few minutes later, "No, see, it isn't broken for me," where someone changed things without asking permission or anything, i.e. a company that still has what they call "cowboy programming."
Another point is the scale of the release:
1) Tiny - This is the case where one web page goes up so that user X can do Y.
2) Small - A handful or so of files; not really complicated, but not exactly trivial either.
3) Medium - Where going from one environment to another requires changing a bunch of files, usually with scripts to move them.
4) Big - Where there are scheduled promotions and various developers are asked which shifts they are taking when the live push is done. I had this in a case where there was a data migration to do in addition to a release of some new e-commerce sites.
5) Mammoth - Where everything is brand new, including how it will be used. I don't think I've ever seen one of this size, but I'd imagine Microsoft or Google would have releases like this.
Most releases fall somewhere in that spectrum, so the amount of planning and preparation can vary quite a bit; and let's not forget that regulatory compliance can be its own pain in getting some things done.