Which software development practices provide the highest ROI?

The software development team in my organization (which develops APIs and middleware) is gearing up to adopt at least one best practice at a time. The following are on the list:
Unit Testing (in its real sense),
Automated unit testing,
Test Driven Design & Development,
Static code analysis,
Continuous integration capabilities, etc..
Can someone please point me to a study that shows which 'best' practices, when adopted, have a better ROI and improve software quality faster? Is there such a study out there?
This should help me (support my claim to) prioritize the implementation of these practices.

"a study that shows which 'best' practices when adopted have a better ROI, and improves software quality faster"
Wouldn't that be great! If there was such a thing, we'd all be doing it, and you'd simply read it in DDJ.
Since there isn't, you have to make a painful judgement.
There is no "do X for an ROI of 8%". Some of the techniques require a significant investment. Others can be started for free.
Unit Testing (in its real sense) - Free - ROI starts immediately.
Automated unit testing - not free - requires automation.
Test Driven Design & Development - Free - ROI starts immediately.
Static code analysis - requires tools.
Continuous integration capabilities - inexpensive, but not free
You can't know the ROI. So you can only prioritize on investment. Some things are easier for people to adopt than others. You have to factor in your team's willingness to embrace the technique.
Edit. Unit Testing is Free.
"time spend coding the test could have been taken to code the next feature on the list"
True, testing means developers do more work, but support does less work debugging. I think this is not a 1:1 trade. A little more time spent writing (and passing) formal unit tests dramatically reduces support costs.
"What about legacy code?"
The point is that free is a matter of managing cost. If you add unit tests to legacy code, the cost isn't free. So don't do that. Instead, add unit tests as part of maintenance, bug-fixing and new development -- then it's free.
"Traning is an issue"
In my experience, it's a matter of a few solid examples, and management demand for unit tests in addition to code. It doesn't require more than an all-hands meeting to explain that unit tests are required and here are the examples. Then it requires everyone report their status as "tests written/tests passed". You aren't 60% done, you're 232 out of 315 tests.
"it's only free on average if it works for a given project"
Always true, good point.
"require more time, time aren't free for the business"
You can either write bad code that barely works and requires a lot of support, or you can write good code that works and doesn't require a lot of support. I think that the time spent getting tests to actually pass reduces support, maintenance and debugging costs. In my experience, the value of unit tests for refactoring dramatically reduces the time to make architectural changes. It reduces the time to add features.
"I do not think either that it's ROI immediately"
Actually, one unit test has such a huge ROI that it's hard to characterize. The first test to pass becomes the one thing that you can really trust. Having just one trustworthy piece of code is a time-saver because it's one less thing you have to spend a lot of time thinking about.
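As a minimal sketch of what that first trustworthy test can look like (the question doesn't name a stack, so Python and pytest are assumed here, and the function is invented for illustration):

```python
# test_normalize.py -- run with: python -m pytest -q test_normalize.py
def normalize_row(row: dict) -> dict:
    """Trim whitespace and upper-case the customer id in one input row."""
    return {
        "customer_id": row["customer_id"].strip().upper(),
        "amount": float(row["amount"]),
    }

def test_normalize_row_trims_and_uppercases():
    # The one behaviour we now never have to re-check by hand.
    row = {"customer_id": "  ab123 ", "amount": "10.50"}
    assert normalize_row(row) == {"customer_id": "AB123", "amount": 10.5}
```

Once that test passes, normalize_row is the piece of the loader you no longer have to re-verify mentally on every change.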
War Story
This week I had to finish a bulk data loader; it validates and loads 30,000-row files we accept from customers. We have a nice library that we use for uploading some internally developed files. I wanted to use that module for the customer files. But the customer files are different enough that I could see the library module's API wasn't really suitable.
So I rewrote the API, reran the tests and checked the changes in. It was a significant API change. Much breakage. Much grepping the source to find every reference and fix them.
After running the relevant tests, I checked it in. And then I reran what I thought was a not-closely-related test. Oops. It had a failure. It was testing something that wasn't part of the API, which also broke. Fixed. Checked in again (an hour late).
Without basic unit testing, this would have broken in QA, required a bug report, required debugging and rework. Look at the labor: 1 hour of QA person to find and report the bug + 2 hours of developer time to reconstruct the QA scenario and locate the problem + 1 hour to determine what to fix.
With unit testing: 1 hour to realize that a test didn't pass, and fix the code.
Bottom Line. Did it take me 3 hours to write the test? No. But the project got three hours back for my investment in writing the test.

Are you looking for something like this?
The ROI of Software Process Improvement: A New 36-Month Case Study, by Capers Jones
Agile Practices with the Highest Return on Investment

You're assuming that the list you present constitutes a set of "best practices" (although I'd agree that it probably does, btw).
Rather than try to cherry-pick one process change, why not examine your current practices?
Ask yourself this:
Where are you feeling the most pain? What might you change to reduce/eliminate it?
Repeat until pain-free.

You don't mention code reviews in your list. For our team, this is probably what gave us the greatest ROI (yes, the investment was steep, but the return was even greater). I know Code Complete (the original version at least) mentioned statistics on the efficiency of reviews in finding defects vs. testing.

There are some references for ROI with respect to unit testing and TDD. See my response to this related question; Is there hard evidence of the ROI of unit testing?.

There is such a thing as a “local optimum”. You can read about it in Goldratt's book The Goal. It says that an innovation is only of value if it improves overall throughput. The decision to implement a new technology should be related to the critical paths inside your projects. If the technology speeds up a process that is already fast enough, it only creates an unnecessary backlog of ready modules, which does not necessarily improve the overall speed of project development.

I wish I had a better answer than the other answers, but I don't, because what I think really pays off is not conventional at present. That is, in design, to minimize redundancy. It is easy to say but takes experience.
In data it means keeping the data normalized, and when it cannot be, handling it in a loose fashion that can tolerate some inconsistency, not relying on tightly-bound notifications. If you do this, it simplifies the code a lot and reduces the need for unit tests.
In source code, it means if some of your "input data" changes at a very slow rate, you could consider code generation, as a way to simplify source code and get additional performance. If the source code is simpler, it is easier to review, and the need for testing it is reduced.
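As a hedged sketch of that code-generation idea (Python assumed; the file names and data are invented), slowly-changing "input data" is baked into a generated module at build time, so the runtime code shrinks to a plain lookup:

```python
# generate_countries.py -- hypothetical build-time step: turn slowly-changing
# reference data into a generated Python module, so runtime code is a dict
# lookup with nothing left to parse, branch on, or unit test.
COUNTRY_NAMES = {"DE": "Germany", "FR": "France", "JP": "Japan"}  # stand-in for a data file

def main() -> None:
    lines = ["# AUTO-GENERATED by generate_countries.py -- do not edit by hand", "COUNTRIES = {"]
    for code, name in sorted(COUNTRY_NAMES.items()):
        lines.append(f"    {code!r}: {name!r},")
    lines.append("}")
    with open("countries_generated.py", "w", encoding="utf-8") as out:
        out.write("\n".join(lines) + "\n")

if __name__ == "__main__":
    main()
```

The generated file is trivially reviewable, and when the source data changes you regenerate instead of hand-editing, which is where the reduced need for testing comes from.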
Not to be a grump, but I'm afraid, from the projects I've seen, there is a strong tendency to over-design, with way too many "layers of abstraction" whose correctness would not have to be questioned if they weren't even there.

One practice at a time is not going to give the best ROI. The practices are not independent.

Related

Unit Testing Strategy, Ideal Code Coverage Baseline

There's still not much information out there on real-world Xcode 7 and Swift 2.0 experiences from a unit testing and code coverage perspective.
While there are plenty of tutorials and basic how-to guides available, I wonder what the experience and typical coverage stats are on different iOS teams that have actually tried to achieve reasonable coverage for their released iOS/Swift apps. I specifically wonder about the following:
1) While the code coverage percentage doesn't represent the overall quality of the code base, is it being used as an essential metric on your team? If not, what other measurable way do you use to assess the quality of your code base?
2) For a somewhat more robust app, what is your current code coverage percentage? (Just FYI, we have a hard time getting over 50% for our current code base.)
3) How do you test things like:
App life-cycle, AppDelegate methods
Any code related to push/local notifications, deep linking
Defensive programming practices, various peace-of-mind (hardly reproducible) safeguards, exception handling, etc.
Animations, transitions, rendering of custom controls (CG) etc.
Popups or Alerts that may include any additional logic
I understand some of the above is more of a subject for actual UI tests, but it makes me wonder:
Is there a reasonable way to get the above tested from the UTs perspective? Should we be even trying to satisfy an arbitrary minimal code coverage percentage with UTs for the whole code base or should we define that percentage off a reasonably achievable coverage given the app's code base?
Is it reasonable to make the code base more inflexible in order to achieve higher coverage? (I'm not talking about a medical app where lives would be at stake here.)
are there any good practices on testing all the things mentioned above, other than with UI tests?
Looking forward to a fruitful discussion.
You do ask a very big and good question. Although your question includes:
I wonder what is the experience and typical coverage stats on different iOS teams ...
I think the issue is language/OS agnostic. Sure, some languages and platforms are more unit-testable than others, so some are more expensive to unit test (as opposed to other forms of automated/coded testing). I think you are searching for a cost/benefit equation to maximize productivity. Ah, the fun of software development processes.
To jump to the end to give you the quick sound grab answer:
You should unit test all code that you want to work and is appropriate to unit testing.
So now why the all and why the emphasis on unit testing ...
What is a unit test?
The language in the development community is corrupted, so please bear with me. Unit testing is just one type of automated testing. Others are automated acceptance tests, application tests, integration tests, and component tests. These all test different things. They have different purposes.
However, when I hear unit testing two things pop into mind:
What is a unit test?
As part of TDD (Test Driven Development)?
TDD is about writing tests before writing code. It is a very low-level coding practice/process (from XP, eXtreme Programming): you write a test in order to write a statement, and then another test. It is very much a coding practice but not an application/requirements practice, as it is about writing code that does what you intended, not what the product requirements are (oh gosh, I feel the points being lost).
Writing code and then unit testing it is ... in my experience ... fun, short term team building, but not productive. Sure some defects are found, but not many. TDD leads to better "healthy" code.
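To make that test-then-statement rhythm concrete, here is a tiny sketch (Python/pytest assumed; the discount rule is invented purely for illustration):

```python
# Step 1 (red): this test is written before discount() exists; at that point it
# fails with a NameError, which is the signal to write the next statement.
def test_discount_is_ten_percent_over_100():
    assert discount(150.0) == 135.0

# Step 2 (green): write just enough code to make that one test pass.
def discount(amount: float) -> float:
    return amount * 0.9 if amount > 100 else amount

# Step 3: write the next small test (say, amounts at or below 100), run it,
# and keep alternating between a failing test and the code that satisfies it.
```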
My point here is that unit testing is:
A subset of automated/coded testing.
Is part of a coding process.
Is about code health (maintainability).
Does not prove that your application works (sound of falling points).
Why all?
If your team delivers zero-defect software all the time without unit testing (ZDFD is real and achievable... but that's a flat-earth discussion), then this is nonsense and you would not be asking any questions here.
The only valid reason for a team to include unit testing as part of its coding process is to improve productivity. If all team members commit to team productivity then the only issue is identifying which code profits from unit testing. This is the context of the all.
The easiest way I think to illustrate this is to list types I do not unit test:
Factories - They only instantiate types.
Builders / wiring (IoC) - Same as factories - no domain logic.
Third party libraries - We call 3rd party libraries as documented. If you want to test these then use integration/component tests.
Cyclomatic complexity of one - Every method of the type has a CC of 1. That is, no conditions. Unit tests will tell you nothing useful; peer review is more useful.
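To make the contrast concrete, a small sketch (Python used here for brevity even though the thread is about Swift; the names are invented): the factory has a CC of 1 and gains little from a unit test, while the branching pricing rule is exactly the code that profits from one.

```python
class Invoice:
    def __init__(self, amount: float, country: str):
        self.amount = amount
        self.country = country

# Cyclomatic complexity of 1, no domain logic: peer review is enough here.
def make_invoice(amount: float, country: str) -> Invoice:
    return Invoice(amount, country)

# Branching domain logic: this is the code worth covering with unit tests.
def vat_rate(country: str) -> float:
    if country == "DE":
        return 0.19
    if country == "FR":
        return 0.20
    return 0.0
```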
The practical answer
My teams have expected 100% unit test coverage on all new code that should be unit tested. This is achieved by attributing code that does not meet the unit testing criteria. All code must go through code review, and the attributes must be specific to the "why" options listed above. -- Simple.
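The "attribute the code that isn't meant to be unit tested" idea has an equivalent in most toolchains; as one hedged example (not necessarily the poster's stack), Python's coverage.py lets you exclude a whole block with a pragma comment so the coverage report only counts code that is meant to be tested:

```python
class Widget:
    pass

# Because the pragma sits on the def line, coverage.py excludes the whole
# function, so the report reflects only the code we expect to unit test.
def make_widget() -> Widget:  # pragma: no cover
    return Widget()
```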
A long answer, and perhaps not easy to digest, nor what people want to hear. But, from long experience, I know it is the best answer that can lead to best profitability.
Post comment
My answer is aimed at the unit testing aspects of the question. As for defensive programming and other practices, TDD is a process that mitigates that by making it harder to do the wrong thing. Static code analysis tools in the build system may also help you catch these issues before they get to peer review (they can fail a build on new issues). Look at tools like SonarQube, ReSharper, CppDepend, and NDepend (yes, language dependent).

Using cucumber for writing the functional requirements document for a Rails app

In the light of BDD, would it be a good idea to use Cucumber features and scenarios to write up the functional requirements document at the start of a new Rails project?
Probably not. This would be a case of BDUF (http://en.wikipedia.org/wiki/Big_Design_Up_Front).
It's unlikely you'll be able to think of all the scenarios up front. You should split the high-level requirements into features to help estimate and prioritise them, but leave the detailed scenario writing to just before you're ready to begin implementing each feature.
If you are not the one making all the decisions and someone thinks you need these documents, then Cucumber might appear to be a better choice than MS Word.
I actually joined a project with a million unimplemented features, so in theory we had loads of integration tests; none of them were actually implemented. It's months later and we still have some, and it's really difficult working in an environment where everything is failing at once. It's better to have one failing step at a time.
I also think features should be written by application developers who understand user-flow realities. I like the business or client to explain things to me in a conversational style, a bit at a time, not their entire vision for world domination in 10,000 words; I will keep that for bedtime.

Tips for avoiding second system syndrome [closed]

Lately I have seen our development team getting dangerously close to the 'second system syndrome' type ideas while we are planning the next version of our product. While I'm all for making improvements and undoing some of the mistakes of our past, I would hate to see us stuck in an endless loop of rewriting and never launching anything.
Is there a good design / development method that lends itself to building a better version 2.0 while avoiding second system scenarios?
I have experienced second-system syndrome from both sides, as a customer/sponsor and as part of a development team.
A root cause of problems is when the team latches on to a Utopian vision for version 2, such as the desire to make the new software "flexible". In this scenario everything is abstracted to the nth degree. For example, instead of data objects that represent real-world entities, a generic "item" object is created that can represent anything. One flawed idea is that we can build so much flexibility into the software, anticipating future needs, that non-programmers will be able to just configure new capabilities. Often one goal such as "flexibility" overshadows the effort to the point that the resulting software doesn't work.
A balanced practical consideration of usability, performance, scalability, features, maintainability, and flexibility goals can bring the team back to earth. "It would be great if..." should be prohibited from the vocabulary of the team. The Scrum backlog is a good tool and the team should be heard saying often... "Let's backlog that...we don't need that for version 2."
"I would hate to see us stuck in an endless loop of rewriting and never launching anything."
Which is why people use Scrum.
Define a backlog of things to build.
Prioritize, so that things which lead to a release are first. Things which should be fixed are second.
Execute sprints to get to the release. Execute a release sprint.
Get someone who has written at least three systems to lead the project.
Try to focus on short delivery cycles, i.e. force yourself to deliver something tangible and useful to the users every few weeks or month. Prioritise the tasks based on their value to the customer. This way you always have a usable, functional application with satisfied users, while under the hood you can refactor the architecture in small steps if you wish (and if you can justify the need for it - that is, towards management / the customers, not just teammates!).
It helps a lot if you have a stable build process with a good suite of automatic (unit / integration) tests.
Agile development methods like Scrum do this, and they are warmly recommended. But of course it is not always easy or even possible to adapt such a method in your team. Even if you can't, you can still take elements of it and apply it to your project's benefit (maybe without even mentioning the words "agile" or "Scrum" publicly ;-)
Make sure you document the requirements as well as possible. While obviously you need to also manage what gets into the requirements to avoid over-designing, having a fixed scope helps prevent developers from running off with ideas or gold-plating what needs to be done and it helps keep management or clients from introducing scope creep. Define all requirements and how scope changes will be addressed.
I'm all for short development cycles (make sure you're writing tests) and agile methodology, but neither of those is a shield against second syndrome system. In some ways it's easier to keep adding on feature after feature if you're working in short sprints without stopping to look at the overall picture. Use agile practices to build the simplest thing that works, then keep adding your other requirements as simply as possible. Remember YAGNI, because when you build a system a second time, that's when you're most likely to build something you're sure you'll need at some point, something that will make the system "extensible" so there never has to be a third build. It's the best of intentions, but the road to hell and all that.
You can't get close to second system syndrome. Either you're in it, or you're away from it. You'll know when you're in it, but only after wasting a lot of resources.
Tips are: know about it (so basically we got that covered already). It's invaluable information to know what a "second system" is, and even more to have some experience with that. I think we all have some experience with that, hopefully benign.
That knowledge will often make you think twice and you'll find a solution without venturing into second-system limbo.
PS: Also know how to use your current system; that includes documented solutions and other documentation, where they exist.
Focusing on the system architecture should help, e.g.:
Having documented interfaces which support "loose coupling" between sub-systems
Having documented design decisions (avoid re-hashing previously beaten paths)
Hence, without going for an all out swap, the current system can be "upgraded" with more appropriate interfaces to help future growth.
Another good way to focus: assign a $$$ figure to each feature and prioritize accordingly ;-)
Anyhow, just my 2cents
I up-voted S. Lott's answer and would like to add some more suggestions:
Deliver a working product to your customer as frequently as possible. Iterations lasting between a few weeks and 2 months are ideal. Agile methodologies, such as Scrum, lend themselves well to this.
Use FogBugz for feature and bug tracking. Its features are very practical for agile projects. FogBugz allows easy prioritization according to releases and priorities. If your team enters their estimated levels of effort for each task, you can also use this to calculate reasonable ship dates.
Prioritize which features you will develop according to the 80/20 rule (20 percent of the features will be used 80 percent of the time) and the most bang for the buck. This helps keep the system as simple as possible, helps prevent feature bloat, and saves development and testing time.
Give similar thought to both new features and bugs when you determine the priority of an issue. One point of the Joel Test is "Do you fix bugs before writing new code?". In most shops this doesn't happen, but do not make debugging the system an afterthought. Also, keep a working copy of the old system to compare against when bugs are found on the new system.
Also factor in the level of effort required to maintain, and if necessary rewrite, existing code. If you have not already done this, give the team some time to code review whole files that they find troublesome to change. If the system's code was difficult to maintain the first time, this will only get worse and cause your team to spend more time on new features down the road.
It can never be avoided in its entirety, but being cautious can alleviate the problem.
You should formulate a rule of thumb based on the vital parameters (the scarcest resource) that define the success of the system. For example, reducing the potential number of bugs might directly decrease operational cost (potential service calls to the support center). But this might not be the case in every system. As another example, sparing use of CPU, memory and other resources might be beneficial in some cases, but there are environments where those resources are available in abundance.
So, simply to avoid "temptations", identify the scarcest resources (time, effort, money, etc.) and consider implementing only those things that exceed a threshold value [f(s1, s2, ...) > threshold].
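As a rough illustration of that f(s1, s2, ...) > threshold rule (the numbers, names, and scoring function are all invented; Python is used only as a scratchpad):

```python
# Score each candidate practice by estimated monthly hours saved minus the
# amortized monthly cost of adopting it; adopt only what clears the threshold.
candidates = {
    "unit tests on new code": {"hours_saved": 12, "monthly_cost": 4},
    "static analysis tool":   {"hours_saved": 3,  "monthly_cost": 5},
    "continuous integration": {"hours_saved": 8,  "monthly_cost": 2},
}
THRESHOLD = 2  # net hours per month a practice must pay back before we adopt it

for name, c in candidates.items():
    net = c["hours_saved"] - c["monthly_cost"]
    verdict = "adopt" if net > THRESHOLD else "defer"
    print(f"{name}: net {net:+d} h/month -> {verdict}")
```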
Even with iterative development, I would also put emphasis on how technical debt is handled.
Links that are related to this:
Tech Debts: Effort Vs Time
Tech Debt Quadrant

What kind of software development process should a lone developer have? [closed]

I work as a lone developer in a very small company. My work is quite chaotic and I'm looking for ways to make it more organized.
One problem is that my projects have practically no management. Rarely does anyone ask me what I'm doing, or whether I have any problems. At some point there was talk about weekly status meetings, but that was some time ago. It seems that if I wanted something like that, I would have to arrange it myself. Sometimes I'm a bit lost on what I should do next because I don't have tasks or a clear schedule defined.
From books and articles I have found many things that might be helpful. Like having a good coding standard (there exists only a rough style guide which is somewhat outdated in my opinion), code inspections, TDD, unit testing, bug database... But in a small company it seems there are no resources or time for anything that's not essential. The fact that I work in the embedded domain seems to make things only more complicated.
I feel there's also a custom of cutting corners and doing quick hacks on short notice. This leads to unfinished and unprofessional products and bugs waiting to emerge at a later date. I would imagine they are also a pain to maintain. So, I'm about to inherit a challenging code base, doing new development that requires learning a lot of new things and I guess trying to build a process for it all at the same time. It might be rewarding in the end, but as not too experienced I'm not sure if I can pull it off.
In a small shop like this the environment is far from optimal for programming. There are many other things that need to be done occasionally, like customer support, answering the phone, signing for parcels, hardware testing, assembly, and whatever miscellaneous tasks might appear. So you get the idea about the resources. It's not all bad (sometimes it's enlightening to solve customer problems) and I believe it can be improved, but it's the other things that I'm really concerned about.
Is it possible to have a development process in a place like this?
Would it help to have some sort of management? What kind of?
Is it possible to make quality products with small resources?
How do I convince myself and others that the company which has worked successfully for decades needs to change? What would be essential?
Maybe there's someone working in a similar shop?
Use Source Control for EVERYTHING
Develop specifications and get signoff before starting - there will be resistance, but explain it's for their own good.
Unit tests! It hurts because you just want to get it done, but this will save you in the long run.
Use bug tracking - Bugzilla or FogBugz if you can afford it.
My advice is not to be extreme. From my experience, pure agile or pure traditional will not work. Before you use any process, know what problem it is meant to solve.
I personally use a variation of agile RUP. I do some up-front work, such as investigating the actual needs and doing a high-level design with possible extensions, and I ask the customer to sign off on some major high-level requirements.
If you work in a small group, detailed design or specification may not be worth the effort. Of course, if there are libraries that are shared by many, they will be worth the trouble.
Decide what to invest in up front depending on its risk (likelihood and impact).
Moreover, many SW best practices really are 'best', like version control and automated testing (personally I only use it as a way to detect regressions early, as I do not believe in TDD as the driving force of development). I suggest you read 'The Pragmatic Programmer'; it presents many of those techniques.
To answer your questions:
(1). Is it possible to have a development process in a place like this?
Yes, but as I say, tailor it to fit your organization.
(2). Would it help to have some sort of management? What kind of?
Management helps, but not a control freak. Plan what to do and when: integration, conflict resolution, deadlines for milestones. And roughly keep them on schedule (I particularly like Scrum's sprints).
(3). Is it possible to make quality products with small resources?
Definitely, as long as the size of the work, the time to develop, and the size of the team are in balance. That is, if your definition of quality is the same as mine. To me, quality means: it solves the problem it set out to solve, in an efficient and reliable fashion.
(4). How do I convince myself and others that the company which has worked successfully for decades needs to change? What would be essential?
Point out the problems. If there are none, why change? If you want to change, you should be able to identify the problem OR potential problems, and point them out.
Some big ones are:
Without any process, it is harder for new recruits to blend in, as they must learn how things are dealt with by observing others.
Without a process, it is harder to work under stress.
Without a schedule, it is hard to determine progress.
Without automated testing, it takes more time to identify problems and regressions.
Without version control, it will be harder to roll back mistakes, and dividing the work among team members will be a mess.
Just my thoughts.
You need to work with the owner and set up short-, medium- and long-term goals. You will want to let them know your progress, even if only through email.
You will need to enforce some order on your workday or you will never get anything done (those long term goals).
Divide your day up into chunks: when you can code, when you are working on hacks to keep it together, when you answer emails, etc.
Definitely set up a bug tracker. This can help keep your email clean. You can even set up an email address to forward bugs to, to be categorized later. This is good because the bug reporters will eventually tire of the bug tracker and want to just email you the bugs anyway.
edit
And as lod3n said, source control, but you are using that already right???!!?!
Been there, done that.
The book Planning Extreme Programming helped a lot. I used 3x5 cards stuck on a wall. This kept my boss informed of my progress, helped with estimates and planning, and kept me on track. The Planning Game gives good ammo if your boss's expectations are unrealistic.
Unit testing, as others have stated, helps even if you're a sole developer. I find the TDD style valuable.
lod3n is absolutely right about Source Control.
I've gone with XP-style iterations before; you may want to establish a backlog of items (even something simple like a spreadsheet or 3x5 cards) as well.
Avoid deathmarches. Stick to 40 hours in the sense of not working overtime 2 weeks in a row. Spend extra time outside of work learning new skills - not just technologies, but principles and best practices.
Have a bug tracking system, both for defects and new features. Don't rely on your memory.
Continuous integration and an automated build can help even a single developer.
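Even without a CI server, a lone developer can get most of that benefit from a small pre-commit script; here is a sketch assuming a Python project with pytest and flake8 installed (swap in whatever linter and test runner your stack uses):

```python
# check.py -- poor man's continuous integration for a one-person project:
# run the linter and the test suite, and stop at the first failure.
import subprocess
import sys

STEPS = [
    ["python", "-m", "flake8", "."],   # style and simple static checks
    ["python", "-m", "pytest", "-q"],  # unit and integration tests
]

for step in STEPS:
    print("running:", " ".join(step))
    if subprocess.run(step).returncode != 0:
        sys.exit(1)  # fail the "build" so the problem is fixed before committing
print("all checks passed")
```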
Can't emphasize the recommendation for source control enough.
I've decided I don't need unit tests because I can have automated functional/integration tests instead: with incremental development there's no need to test before integration.
As crazy as this sounds, I use Scrum just because I like the concepts of sprints and backlogs. It makes it easier to set realistic goals. Of course, the scrum master and the team are all you, but if you are working on outside projects where you might pick up an extra team member, the backlogs will make it easy to distribute work. Maybe I just like backlogs. With Scrum you will need to get someone to act as the product manager to talk about the features of the product. Version control is a must and should probably be implemented before even worrying about a software development process. I have worked in a company that started from 2 developers and went to 12 in a year. We started with no version control and low coding standards. The changes you will need to make will be gradual, so don't worry about rushing to do a 180. Set a goal to change one thing a month and find supporters of your changes to make things go smoothly.
As well as the recommendations of others, I'd say that if you are tight on resources but have more say over how things get done, you should make good use of off-the-shelf and open source products and libraries. This leverages the efforts of others, saves you time, ensures your code base doesn't become too esoteric, and adds to your skill set so you don't end up being an expert in something that's useless everywhere else.
First, let's make a distinction between a development process and best practices. Best practices like source control, defect tracking, unit testing, etc. are a given.
Then there is the actual development process. I would always recommend having a process, no matter how small or large the team is. The trick is finding the right process. You have a process now -- it is just an ad-hoc process that doesn't seem to be working out too well for you. Rarely can you take a textbook development process and apply it directly. What you need to do is tailor the process to your company's needs and culture. Look at as many development paradigms as you can and try to find something that is a good fit, then start molding it to your needs. You may have to try and fail with a number of different processes. Perhaps the Personal Software Process will be a good starting point, maybe an agile process, or a variant of RUP? You have a lot of options; start trying them out.
You are also going to have to work with the rest of your organization -- they need to be a part of the process. You may be the lone developer, but a development process involves more than the developer; it involves every person that has a say in or an impact on the product.
This may not be a specific answer, but my point is that you will need some kind of process. So start researching them and trying them out and molding them to your needs until you have something that works.

Does anyone still believe in the Capability Maturity Model for Software?

Ten years ago when I first encountered the CMM for software I was, I suppose like many, struck by how accurately it seemed to describe the chaotic "level one" state of software development in many businesses, particularly with its reference to reliance on heroes. It also seemed to provide realistic guidance for an organisation to progress up the levels improving their processes.
But while it seemed to provide a good model and realistic guidance for improvement, I never really witnessed an adherence to CMM having a significant positive impact on any organisation I have worked for, or with. I know of one large software consultancy that claims CMM level 5 - the highest level - when I can see first hand that their processes are as chaotic, and the quality of their software products as varied, as other, non-CMM businesses.
So I'm wondering, has anyone seen a real, tangible benefit from adherence to process improvement according to CMM?
And if you have seen improvement, do you think that the improvement was specifically attributable to CMM, or would an alternative approach (such as six-sigma) have been equally or more beneficial?
Does anyone still believe?
As an aside, for those who haven't yet seen it, check out this funny-because-it's-true parody.
At the heart of the matter lies this problem, neatly described by the CMM guidance itself...
“...Sound judgment is necessary to use the CMM correctly and with insight. Intelligence, experience and knowledge must shape an appropriate interpretation of the CMM in a specific environment. That interpretation should be based on the business needs and objectives of the organization and the projects. A rote, checklist-oriented application of the CMM has the potential to harm an organization rather than help it...”
From Page 14, section 1.6 of The Capability Maturity Model, Guidelines for Improving the Software Process by the Carnegie Mellon University Software Engineering Institute, ISBN 0-201-54664-7.
I found it to be a bloated documentation exercise that was used mainly as a vehicle for acquiring and maintaining contracts. Once we had the contract, it became an exercise in getting around the process.
As a developer, I got nothing out of it but lost MONTHS of my professional life fiddle-farting around with CMMI.
The same goes for Six Sigma, which I branded "Common Sense in a Box". I didn't need to be trained to figure out what the problem with a process was - it was generally quite evident.
For me, small teams and agile mechanisms work much better. Short cycles, lots of communication. That might not work in all environments, but it definitely works in mine.
Just my two cents.
For a typical CMM level 1 programming shop, making the effort to get to level 2 is worthwhile; this means that you need to think about your processes and write them down. Naturally, this will meet resistance from cowboy programmers who feel limited by standards, documentation, and test cases.
The effort to get from level 2 ("there is a process") to level 3 ("everyone has the same process") normally gets bogged down in inter-departmental warfare, so it's probably not worth starting.
If you see CMM run. And run fast.
CMM and CMMI both offer some benefits if your organization takes the lessons they try to teach to heart. The problem is that getting to the higher levels is very difficult and expensive, and the only time I have seen an organization go through the effort is because their customers won't let them bid on contracts until they are at a certain level.
This has the effect of the organization doing everything it can to "just get the number" without actually caring about improving its process.
The higher end? No. CMM-5 shops do not impress me.
The lower end? Yes. CMM-1 organizations scare me.
CMM can help a new/novice team measure themselves and do the self improvement thing.
CMMI isn't really about improving your software, it is about documenting what you have done. You can almost estimate a company's CMMI level by the weight of the documentation it produces.
Background: I have studied CMMI in my Software Engineering graduate program and have worked on a team that followed its guidelines.
My experience is that the CMM is so vague that it's very easy to fulfill. Also, when they come to certify you, they look at the project that your organization chooses. Where I used to work, this was the project with no real deadline, plenty of money, and lots of time to spend on every nook and cranny of process. Many of the other projects continued with little to no code/design review, sometimes without version control.
I think the emphasis on CMM certification is unfortunate. Companies know how to work the system, and do. Instead of focusing on real process improvement that meets their bottom line, they focus on getting a certification and working the system. I honestly think most organizations would be better off spending time on the former instead of wasting so much time on the latter.
Really what matters is having conscientious people who want to make good development decisions and know that they will need help making those decisions. There is no substitute for quality programmers who know that programming is an ongoing group activity where they are just as likely to make a mistake as anyone else.
I have been doing a lot of interviewing for small teams doing iterative development. Personally, if I see CMM on a resume it is a big red flag that signals interest in process over results.
All formal methods exist to sell books/training courses/certification, and for no other reason. That's why there are so many formal methods. Once you realise this, you are free :-)
Yourdon still believes. But he might also still believe the world is going to end with Y2K.
This is not something I would personally put a lot of faith in or want to be yoked with in the future. But often ours is not to reason why...
P.S. Though a bit off-topic, I would like to mention that faked CMMI certifications are very common as well as real certifications obtained through bribery.
CMM doesn't really speak to the quality of the software, but more towards the documentation and repeatability of the process. In other words, it is possible to have an orderly and repeatable development process, but still create crappy software. As long as the process is properly documented, it is possible to achieve CMM Level 5.
At the end of the day CMM is another tool that can be used or misused. If the end goal is to improve software quality, it is possible to use CMM to improve the development process and improve software quality. If achieving a certain CMM level is the goal, then most likely software quality will suffer.
The model is losing its credibility, first because companies adopt it not looking for a more mature software development process, but to be appraised at a CMMI level.
The other problem, the one that I think leads to the lost credibility, is that as a contractor you have no guarantee that the project your CMMI-appraised supplier is selling you will be developed using the model's practices. The CMMI label only states that the company once developed projects that were evaluated as adhering to a specific CMMI maturity level.
The problem is not just with CMMI but with the processes developed by the companies. CMMI does not describe the process itself, just what the process should do. You have the same problem with the PMBOK. Actually, the problem is not so much the PMBOK itself as the bad project managers who claim to follow the PMI's guidance.
At school, I was taught: CMM is a good Idea, but lacking certification (anyone can say they are level 5 / level 4) it ends up being a marketing tool for offshore shops. So, yeah, the idea is sound, but how do you prove adherence?
I used to. But now I find that CMM and CMMI don't really fit that well with agile approaches.
Oh sure you can squeeze things to get that square peg into the round hole, but when push comes to shove, you are still basing your approach on an ability to predict everything that is needed, and anticipating everything that will be encountered, when building a software system.
And we all know, how well that approach works in real life! (-:
cheers,
Rob
Agile is the next CMM and both are fragile. The field of process and quality consulting is a good business in any industry and like the engineering folks everyone needs new buzzwords to keep the money flowing.
CMM when it first came out of the SEI was a good concept based on solid academic work but it was soon picked up by the process consultants and is a worthless certification now, which is used by most CIOs to cover their ass (Nobody got fired for picking a CMM Level 5 company)
Agile is going to go down that route soon, and then we can be sure to see the next silver bullet on the horizon :)
When I worked on commercial flight software, we used CMM and as our processes improved our ability to accurately predict completion times improved. But this was a cumbersome process, other approaches should work just as well.
Smaller projects are less dependent on process for success. The key metric is the Hero to Bystander Ratio. Any project with an HTBR of less than 0.2 is in serious trouble.
There are quite a few good ideas that can readily be adapted and adopted by any organisation for their own good, but getting a badge is a pain due to the requirement for all kinds of redundant documentation.
The problem is that CMMi is not a process but just a guide for whatever process you might choose to have and that in itself invites half-baked ideas flowing around.
Another point is that migration is a real pain when you are starting, but it's the same as any other teething trouble, I guess.
The main issue with understanding the value of CMMi is understanding CMMi itself.
CMMi is a documented approach to Continuous Improvement for Software Production.
Understanding Continuous Improvement with SPC is difficult enough in manufacturing but add the intangible Software product and the difficulty is exponential.
I would recommend that anyone, or any organization, new to CMMI document their current process, then look at what outcomes (cost/benefit) can be measured independently of the process. That way, if any process, procedure or standard is changed, you can tell whether it yields a 'better' result. The prerequisite for this exercise is a documented, stable, repeatable process, since it is impossible to measure the benefit of any change within an ad-hoc environment: you are not comparing 'like for like'.
By focusing on the above concepts initially, the organization will begin to understand and embrace the essential value of the CMMi.
Legend has it that the US Department of Defense, which did a lot of contracting, found that many of its projects faced time and cost overruns, and even when they were delivered, the projects were not exactly what was ordered.
So they wanted a way to be sure that a contractor would be able to deliver on time, within budget, and close to what was required. Thus the Capability Maturity Model was born.
The thesis is that if things are written down, then they survive attrition. But just saying "write everything down" would not be enough; it must be checked that things are written down correctly. Among other things.
Throughout all this, it never crossed their minds to consider the cost of doing all this. Because from the point of view of the DoD, if it gave out a project for $1 million to get something in a year, and ended up paying $10 million over 10 years and not getting what it wanted, and if it instead had to pay $5 million for that same thing and get what it actually wanted in two years, it is still saving $5 million, not to mention that it is actually getting something.
So if you are contractor to US DoD or something like that, go ahead and get CMM, because it would be a requirement. But if you are competing with the 1000s of software development shops on elance, to get projects with limited budgets, limited time and so on... CMM is not a good choice.
That said, feel free to read the CMMI Dev pdf (v 1.3 at time of writing). It makes a lot of good points. It deconstructs the organisation very nicely. And if you see any points which make you go 'aha! i have this problem', then by all means use that wisdom to resolve your problem. In our case, one small change we made was to ensure that we make a list of all the people who are allowed to give us requirements. If there was more than one person who was allowed to give us requirements, then any requirement coming from one source was circulated to the others, and they had to say 'okay' before we added it to the backlog. This small change made a big difference in how much we worked and reworked.
In short look at the process areas and compare them to your pain areas, and take the suggestions given by CMM. The way you implement it is your own. And you can always implement it in a way that does not take too much time or cost too much money. But I guess the same applies even to the relevant ISO/IEC standards.

Resources