There's still not much information out there on real-world experiences with Xcode 7 and Swift 2.0 from a unit-testing and code-coverage perspective.
While there are plenty of tutorials and basic how-to guides available, I wonder what the experience and typical coverage stats are on the various iOS teams that have actually tried to achieve reasonable coverage for their released iOS/Swift apps. I specifically wonder about this:
1) While a code-coverage percentage doesn't represent the overall quality of the code base, is it being used as an essential metric on your team? If not, what other measurable way do you use to assess the quality of your code base?
2) For a somewhat more robust app, what is your current code-coverage percentage? (Just FYI, we have a hard time getting over 50% for our current code base.)
3) How do you test things like:
App life-cycle, AppDelegate methods
Any code related to push/local notifications, deep linking
Defensive programming practices, various peace-of-mind (hardly reproducible) safeguards, exception handling, etc.
Animations, transitions, rendering of custom controls (Core Graphics), etc.
Popups or alerts that may include additional logic
I understand some of the above is more a subject for actual UI tests, but it makes me wonder:
Is there a reasonable way to get the above tested from a unit-testing perspective? Should we even be trying to satisfy an arbitrary minimum code-coverage percentage with unit tests for the whole code base, or should we derive that percentage from what is reasonably achievable given the app's code base?
Is it reasonable to make the code base more inflexible in order to achieve higher coverage? (I'm not talking about a medical app where lives would be at stake here.)
Are there any good practices for testing all the things mentioned above, other than UI tests?
Looking forward to a fruitful discussion.
You do ask a very big and good question. Although your question includes:
I wonder what is the experience and typical coverage stats on different iOS teams ...
I think the issue is language/OS agnostic. Sure, some languages and platforms are more unit-testable than others, so some are more expensive to unit test (as opposed to other forms of automated/coded testing). I think you are searching for a cost/benefit equation to maximize productivity. Ah, the fun of software development processes.
To jump to the end and give you the quick sound-bite answer:
You should unit test all code that you want to work and that is appropriate for unit testing.
So now, why the all, and why the emphasis on unit testing ...
What is a unit test?
The language in the development community is corrupted, so please bear with me. Unit testing is just one type of automated testing. Others are automated acceptance tests, application tests, integration tests, and component tests. These all test different things and have different purposes.
However, when I hear unit testing two things pop into mind:
What is a unit test?
As part of TDD (Test Driven Development)?
TDD is about writing tests before writing code. It is a very low-level coding practice/process (XP, eXtreme Programming): you write a test in order to write a statement, then another test, and so on. It is very much a coding practice, but not an application/requirements practice, as it is about writing code that does what you intended, not what the product requirements are (oh gosh, I feel the points being lost).
Writing code and then unit testing it is ... in my experience ... fun and short-term team building, but not productive. Sure, some defects are found, but not many. TDD leads to better, "healthier" code.
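To make the red/green rhythm concrete, here is a minimal sketch in Swift/XCTest (all names here are hypothetical, purely for illustration): the test is written first and fails, then just enough production code is written to make it pass.

```swift
import XCTest

// Step 1: write the test first. It fails (or doesn't even compile)
// until the production code exists.
final class PriceCalculatorTests: XCTestCase {
    func testDiscountIsAppliedToTotal() {
        let calculator = PriceCalculator(discountRate: 0.1)
        XCTAssertEqual(calculator.total(forSubtotal: 100.0), 90.0, accuracy: 0.001)
    }
}

// Step 2: write just enough code to make the test pass,
// then repeat with the next test.
struct PriceCalculator {
    let discountRate: Double
    func total(forSubtotal subtotal: Double) -> Double {
        return subtotal * (1.0 - discountRate)
    }
}
```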
My point here is that unit testing is:
A subset of automated/coded testing.
Part of a coding process.
About code health (maintainability).
Not proof that your application works (sound of falling points).
Why all?
If your team delivers zero-defect software all the time without unit testing (ZDFD is real and achievable ... but that's a flat-earth discussion), then this is nonsense and you would not be asking any questions here.
The only valid reason for a team to include unit testing as part of its coding process is to improve productivity. If all team members commit to team productivity then the only issue is identifying which code profits from unit testing. This is the context of the all.
The easiest way to illustrate this, I think, is to list the types I do not unit test (a short sketch follows the list):
Factories - They only instantiate types.
Builders / wiring (IoC) - Same as factories - no domain logic.
Third-party libraries - We call third-party libraries as documented. If you want to test these, use integration/component tests.
Cyclomatic complexity of one - Every method of the type has a CC of 1; that is, no conditions. Unit tests will tell you nothing useful; peer review is more useful.
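For illustration, a minimal Swift sketch (hypothetical names) of the kind of type excluded above: every method has a cyclomatic complexity of one and no domain logic, so a unit test could only restate the constructor calls line by line.

```swift
struct OrderRepository {
    let connectionString: String
}

// A factory like this only instantiates types; there are no branches
// and no domain logic, so unit testing it tells you nothing useful.
struct RepositoryFactory {
    let connectionString: String
    func makeOrderRepository() -> OrderRepository {
        return OrderRepository(connectionString: connectionString)
    }
}
```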
The practical answer
My teams have expected 100% unit-test coverage on all new code that should be unit tested. This is achieved by attributing code that does not meet the unit-testing criteria. All code must go through code review, and the attributes must be specific to the "why" options listed above. -- Simple.
A long answer, perhaps not easy to digest, and perhaps not what people want to hear. But from long experience, I know it is the answer that best leads to profitability.
My answer is aimed at the unit-testing aspects of the question. As for defensive programming and other practices, TDD is a process that mitigates the need for them by making it harder to do the wrong thing. But build-system static code analysis tools may help you capture these issues before they get to peer review (they can fail a build on new issues). Look at tools like SonarQube, ReSharper, CppDepend, and NDepend (yes, language-dependent).
Related
I just learned Ruby on Rails, and to advance my skills I am trying to build a large web application. But to this day I am really confused about unit tests and functional tests (I don't know how to write them). My question is: for now, can I start building the app and test it using real data in the browser (as an actual user trying to interact with the app) instead of writing test code? (Eventually I will have an expert jump in and help out with writing the test code.) Thank you in advance.
There is one simple truth about unit testing... it is much easier to start unit testing early on in your project. Unit testing becomes more difficult to implement as your application grows more complex.
Check out the following link:
http://guides.rubyonrails.org/testing.html
You can find general information on the how/why of unit testing for RoR.
Thanks,
Tom
One of the benefits of writing unit tests while developing the application is that it helps keep your code at a high quality. It is often extremely difficult, if not impossible, to effectively apply unit tests to a poorly written code base. Even if you wouldn't consider the code to be of poor quality, it may not be of the standard necessary for someone to write unit tests against it.
If you are serious about writing unit tests, it is often considered acceptable practice to write a "walking skeleton" - the smallest amount of code possible to get the application into a runnable state - before writing a unit test. However, the longer you wait after that to write your first tests, the more likely you are to never have any tests at all.
I am looking at the SpecFlow examples, and its MVC sample contains several alternatives for testing:
Acceptance tests based on validating results generated by controllers;
Integration tests using MvcIntegrationTestFramework;
Automated acceptance tests using Selenium;
Manual acceptance tests when tester is prompted to manually validate results.
I must say I am quite impressed with how well the SpecFlow examples are written (and I managed to run them within minutes of downloading; I just had to configure a database and install the Selenium Remote Control server). Looking at the test alternatives, I can see that most of them complement each other rather than being alternatives. I can think of the following combinations of these tests:
Controllers are tested in TDD style rather than using SpecFlow (I believe Given/When/Then-style tests should be applied at a higher, end-to-end level); they should provide good code coverage for their respective components;
MvcIntegrationTestFramework is useful for running integration tests during development sessions; these tests are also part of daily builds;
Although Selenium-based tests are automated, they are slow, and are mainly to be run during QA sessions to quickly validate that there is no broken logic in pages and in the site workflow;
Manual acceptance tests, where the tester is prompted to confirm result validity, are mainly to verify page look and feel.
If you use SpecFlow, Cucumber or another BDD acceptance-test framework in your web development, can you please share your practices for choosing between the different test types?
Thanks in advance.
It's all behaviour.
Given a particular context, when an event occurs (within a particular scope), then some outcome should happen.
The scope can be a whole application, a part of a system, or a single class. Even a function behaves this way, with inputs as context and the output as outcome (you can use BDD for functional languages as well!).
I tend to use unit frameworks (NUnit, JUnit, RSpec, etc.) at the class or integration level, because the audience is technical. Sometimes I document the Given / When / Then in comments.
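As a sketch of that habit (Swift/XCTest here, with hypothetical names; the same shape works in NUnit or JUnit), the Given / When / Then can live as comments inside an ordinary unit test:

```swift
import XCTest

final class ShoppingCartTests: XCTestCase {
    func testRemovingTheLastItemEmptiesTheCart() {
        // Given a cart containing a single item
        var cart = ShoppingCart()
        cart.add(item: "book")

        // When that item is removed
        cart.remove(item: "book")

        // Then the cart is empty
        XCTAssertTrue(cart.isEmpty)
    }
}

struct ShoppingCart {
    private var items: [String] = []
    var isEmpty: Bool { return items.isEmpty }
    mutating func add(item: String) { items.append(item) }
    mutating func remove(item: String) {
        if let index = items.firstIndex(of: item) {
            items.remove(at: index)
        }
    }
}
```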
At a scenario level, I try to find out who actually wants to help read or write the scenarios. Even business stakeholders can read text containing a few dots and brackets, so the main reason for having a natural language framework like MSpec or JBehave is if they want to write scenarios themselves, or show them to people who will really be put off by the dots and brackets.
After that, I look at how the framework will play with the build system, and how we'll give the ability to read or write as appropriate to the interested stakeholders.
Here's an example I wrote to show the kind of thing you can do with scenarios using simple DSLs. This is just written in NUnit.
Here's an example in the same codebase showing Given, When, Then in class-level example comments.
I abstract the steps behind the scenarios, then I put screens or pages behind those, and in the screens and pages I call whatever automation framework I'm using - which could be Selenium, Watir, WebRat, Microsoft UI Automation, etc.
The example I provided is itself an automation tool, so the scenarios are demonstrating the behaviour of the automation tool through demonstrating the behaviour of a fake gui, just in case that gets confusing. Hope it helps anyway!
Since acceptance tests are a kind of functional test, the general goal is to test your application with them end-to-end. On the other hand, you might need to consider the efficiency (how much effort it takes to implement the test automation), maintainability, performance and reliability of the test automation. It is also important that the test automation fits easily into the development process, so that it supports a kind of "test first" approach (to support outside-in development).
So this is a trade-off that can be different for each situation (that's why we provided the alternatives).
I'm pretty sure that today the most widely fitting option is to test at the controller layer. (Maybe later, as UI and UI automation frameworks evolve, this will change.)
I've been looking into how to do unit testing and find that it is quite easy, but what I want to know is: in an ASP.NET MVC application, what is REALLY important to test, and which methods do you use?
I just can't find a clear answer about WHAT TO REALLY TEST when writing unit tests.
I just don't want to write unnecessary tests and lose development time on overkill tests.
You should unit test as much as possible of your application.
For every line of code you write, you need to verify that it works. If you don't unit test it, you need to test it in some other fashion. Even starting up the site and clicking around is a sort of testing.
When you compare unit testing with other sorts of testing (including running the site and manually using it), unit tests tend to give the best return on investment, because they are relatively easy to write and maintain, and can give you rapid feedback on whether you just introduced a regression bug or not.
I'm not saying that there's no overhead in writing unit tests - there is, but there's overhead in any sort of testing, and a big overhead in not testing at all (because regression bugs slip through quite easily).
It's still good practice to supplement unit tests with other types of tests, but a good unit test suite offers an excellent regression test suite.
Ron Jeffries says "Test everything that could possibly break."
Someone else - I think it was Kent Beck, but I can't find a reference - says, "Only test the code you want to work."
Either of these is a pretty good strategy.
I actually don't think anything needs to be tested in MVC itself. I think all your business logic, rules, etc. need testing, but the views and controllers?
The only real reason I can see to test a controller is for integration testing. If all your business logic is correct, then that should be a simple test that always returns true.
Controllers should really only get data from the view and pass data to it, so...
As for views, what sort of testing can be done there other than opening the view and seeing what it does?
When I write my projects there is next to no code in the controllers; I put all the grunt work in my business engine, which I have extensive tests for.
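The idea isn't specific to ASP.NET MVC; as a hedged sketch (in Swift, with hypothetical names), the controller only ferries data between the view and a plain "engine" type, and the extensive tests target the engine directly:

```swift
import Foundation
import XCTest

// All the grunt work lives in a plain type with no UI dependencies...
struct DiscountEngine {
    func discountedTotal(subtotal: Double, isLoyalCustomer: Bool) -> Double {
        return isLoyalCustomer ? subtotal * 0.9 : subtotal
    }
}

// ...so the controller has next to no code: it only passes data
// between the view and the engine, and needs little or no testing.
final class CheckoutController {
    private let engine = DiscountEngine()
    func displayTotal(subtotal: Double, isLoyalCustomer: Bool) -> String {
        let total = engine.discountedTotal(subtotal: subtotal,
                                           isLoyalCustomer: isLoyalCustomer)
        return String(format: "Total: %.2f", total)
    }
}

// The business rules are tested against the engine, not the controller.
final class DiscountEngineTests: XCTestCase {
    func testLoyalCustomerGetsTenPercentOff() {
        let engine = DiscountEngine()
        XCTAssertEqual(engine.discountedTotal(subtotal: 100, isLoyalCustomer: true),
                       90, accuracy: 0.001)
    }
}
```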
Unit testing is good for testing services and models. But when you need to test application functionality, the better choice is functional tests (e.g. Selenium).
I admit that I have almost no experience with unit testing. I tried DUnit a while ago but gave up because there were so many dependencies between the classes in my application.
It is a rather big Delphi application (about 1.5 million source lines) that we maintain as a team.
The testing for now is done by one person who uses the application before release and reports bugs. I have also set up some GUI tests in TestComplete 6, but they often fail because of changes in the application.
Bold for Delphi is used as the persistence framework against the database.
We all agree that unit testing is the way to go, and we plan to write a new application in .NET with ECO as the persistence framework.
I just don't know where to start with unit testing...
Any good books, URLs, best practices, etc.?
Well, the challenge in unit testing is not the testing itself, but writing testable code. If the code was written without testing in mind, then you'll probably have a really hard time.
Anyway, if you can refactor, do refactor to make it testable. Don't mix object creation with logic whenever possible (I don't know Delphi, but there might be a dependency injection framework to help with this; see the sketch at the end of this answer).
This blog has lots of good insight about testing. Check this article for instance (my first suggestion was based on it).
As for a suggestion, try testing the leaf nodes of your code first - those classes that don't depend on others. They should be easier to test, as they don't require mocks.
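As a small sketch of that refactoring (Swift here rather than Delphi, with hypothetical names): instead of a class constructing its own collaborator, the collaborator is injected behind a protocol (an interface), so a test can substitute a trivial stub.

```swift
// Before: construction mixed in with logic - the hidden dependency
// makes the class hard to test.
//
//     final class ReportBuilder {
//         private let store = ProductionDatabase()  // hard-wired
//     }

// After: the dependency arrives through the constructor, behind a protocol.
protocol OrderStore {
    func orderCount() -> Int
}

final class ReportBuilder {
    private let store: OrderStore
    init(store: OrderStore) { self.store = store }
    func summary() -> String { return "\(store.orderCount()) orders" }
}

// In a test, a trivial stub stands in for the real database:
struct StubOrderStore: OrderStore {
    func orderCount() -> Int { return 3 }
}
// ReportBuilder(store: StubOrderStore()).summary() == "3 orders"
```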
Writing unit tests for legacy code usually requires a lot of refactoring.
An excellent book that covers this is Michael Feathers' "Working Effectively with Legacy Code".
One additional suggestion: use a unit test coverage tool to indicate your progress in this work. I'm not sure about what the good coverage tools for Delphi code are though. I guess this would be a different question/topic.
One of the more popular approaches is to write the unit tests as you modify the code. All new code gets unit tests, and for any code you modify, you first write its test, verify it, modify the code, re-verify it, and then write or fix any tests needed due to your modifications.
One of the big advantages of having good unit test coverage is being able to verify that the changes you make don't inadvertently break something else. This approach allows you to do that, while focusing your efforts on your immediate needs.
The alternate approach I've employed is to develop my unit tests via Co-Ops :)
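As a sketch of that modify-with-tests workflow (Swift, hypothetical names): before touching a piece of legacy code, pin its current observed behaviour with a test, then modify it and re-run the test to verify nothing was inadvertently broken.

```swift
import XCTest

// Step 1: pin the current, observed behaviour of the legacy routine
// (even if that behaviour looks odd) before modifying it.
final class LegacyTaxTests: XCTestCase {
    func testCurrentRoundingBehaviourIsPreserved() {
        XCTAssertEqual(legacyTaxAmount(forNet: 19.99), 4.0, accuracy: 0.001)
    }
}

// Step 2: the routine can now be modified or refactored, and the test
// re-run to confirm the observed behaviour still holds.
func legacyTaxAmount(forNet net: Double) -> Double {
    return (net * 0.2).rounded()  // legacy code rounds to whole units
}
```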
When you work with legacy code, mock objects are really useful for building unit tests (a small sketch follows below).
Take a look at this question regarding Delphi and mocks: What is your favorite Delphi mocking library?
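When no mocking library is available, a hand-rolled mock works the same way in any language; here is a hedged Swift sketch (hypothetical names): the mock simply records the calls made to it so the test can assert on them.

```swift
import XCTest

protocol Mailer {
    func send(message: String, to address: String)
}

// A hand-rolled mock: it records calls instead of sending anything.
final class MockMailer: Mailer {
    private(set) var sentMessages: [(message: String, address: String)] = []
    func send(message: String, to address: String) {
        sentMessages.append((message: message, address: address))
    }
}

struct WelcomeService {
    let mailer: Mailer
    func register(email: String) {
        mailer.send(message: "Welcome!", to: email)
    }
}

final class WelcomeServiceTests: XCTestCase {
    func testNewUserReceivesWelcomeMail() {
        let mailer = MockMailer()
        let service = WelcomeService(mailer: mailer)
        service.register(email: "user@example.com")
        XCTAssertEqual(mailer.sentMessages.count, 1)
        XCTAssertEqual(mailer.sentMessages.first?.address, "user@example.com")
    }
}
```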
For .NET unit testing, read this: "The Art of Unit Testing: with Examples in .NET".
About best practices:
What you said is right: sometimes it's difficult to write unit tests because of the dependencies between classes...
So write unit tests just after, or just before ;-), the implementation of the classes. That way, if you find it difficult to write the tests, it may mean you have a design problem!
The software development team in my organization (which develops APIs - middleware) is gearing up to adopt at least one best practice at a time. The following are on the list:
Unit Testing (in its real sense),
Automated unit testing,
Test Driven Design & Development,
Static code analysis,
Continuous integration capabilities, etc..
Can someone please point me to a study that shows which "best" practices, when adopted, have a better ROI and improve software quality faster? Is there such a study out there?
This should help me prioritize the implementation of these practices (and support my claim).
"a study that shows which 'best' practices when adopted have a better ROI, and improves software quality faster"
Wouldn't that be great! If there were such a thing, we'd all be doing it, and you'd simply read about it in DDJ.
Since there isn't, you have to make a painful judgement.
There is no "do X for an ROI of 8%". Some of the techniques require a significant investment. Others can be started for free.
Unit Testing (in its real sense) - Free - ROI starts immediately.
Automated unit testing - not free - requires automation.
Test Driven Design & Development - Free - ROI starts immediately.
Static code analysis - requires tools.
Continuous integration capabilities - inexpensive, but not free
You can't know the ROI. So you can only prioritize on investment. Some things are easier for people to adopt than others. You have to factor in your team's willingness to embrace the technique.
Edit. Unit Testing is Free.
"time spend coding the test could have been taken to code the next feature on the list"
True, testing means developers do more work, but support does less work debugging. I think this is not a 1:1 trade. A little more time spent writing (and passing) formal unit tests dramatically reduces support costs.
"What about legacy code?"
The point is that free is a matter of managing cost. If you add unit tests to legacy code, the cost isn't free. So don't do that. Instead, add unit tests as part of maintenance, bug-fixing and new development -- then it's free.
"Traning is an issue"
In my experience, it's a matter of a few solid examples, and management demand for unit tests in addition to code. It doesn't require more than an all-hands meeting to explain that unit tests are required and here are the examples. Then it requires everyone report their status as "tests written/tests passed". You aren't 60% done, you're 232 out of 315 tests.
"it's only free on average if it works for a given project"
Always true, good point.
"require more time, time aren't free for the business"
You can either write bad code that barely works and requires a lot of support, or you can write good code that works and doesn't require a lot of support. I think that the time spent getting tests to actually pass reduces support, maintenance and debugging costs. In my experience, the value of unit tests for refactoring dramatically reduces the time to make architectural changes. It reduces the time to add features.
"I do not think either that it's ROI immediately"
Actually, one unit test has such a huge ROI that it's hard to characterize. The first test to pass becomes the one think that you can really trust. Having just one trustworthy piece of code is a time-saver because it's one less thing you have to spend a lot of time thinking about.
War Story
This week I had to finish a bulk data loader; it validates and loads the 30,000-row files we accept from customers. We have a nice library that we use for uploading some internally developed files. I wanted to use that module for the customer files, but the customer files are different enough that I could see the library module's API wasn't really suitable.
So I rewrote the API, reran the tests and checked the changes in. It was a significant API change. Much breakage. Much grepping of the source to find every reference and fix it.
After running the relevant tests, I checked it in. And then I reran what I thought was a not-closely-related test. Oops. It had a failure. It was testing something that wasn't part of the API, which had also broken. Fixed. Checked in again (an hour later).
Without basic unit testing, this would have broken in QA, required a bug report, and required debugging and rework. Look at the labor: 1 hour of a QA person's time to find and report the bug + 2 hours of developer time to reconstruct the QA scenario and locate the problem + 1 hour to determine what to fix.
With unit testing: 1 hour to realize that a test didn't pass, and fix the code.
Bottom Line. Did it take me 3 hours to write the test? No. But the project got three hours back for my investment in writing the test.
Are you looking for something like this?
The ROI of Software Process Improvement: A New 36-Month Case Study, by Capers Jones
Agile Practices with the Highest Return on Investment
You're assuming that the list you present constitutes a set of "best practices" (although I'd agree that it probably does, BTW).
Rather than try to cherry-pick one process change, why not examine your current practices?
Ask yourself this:
Where are you feeling the most pain? What might you change to reduce/eliminate it?
Repeat until pain-free.
You don't mention code reviews in your list. For our team, this is probably what gave us the greatest ROI (yes, the investment was steep, but the return was even greater). I know Code Complete (the original version at least) cited statistics on the efficiency of reviews in finding defects versus testing.
There are some references for ROI with respect to unit testing and TDD. See my response to this related question: Is there hard evidence of the ROI of unit testing?
There is such a thing as a "local optimum". You can read about it in Goldratt's book The Goal. It says that an innovation is of value only if it improves overall throughput. The decision to implement a new technology should be related to the critical paths inside projects. If a technology speeds up a process that is already fast enough, it only creates an unnecessary backlog of finished modules, which does not necessarily improve the overall speed of project development.
I wish I had a better answer than the other answers, but I don't, because what I think really pays off is not conventional at present. That is: in design, minimize redundancy. It is easy to say but takes experience.
In data, it means keeping the data normalized, and when it cannot be, handling it in a loose fashion that can tolerate some inconsistency, rather than relying on tightly bound notifications. If you do this, it simplifies the code a lot and reduces the need for unit tests.
In source code, it means that if some of your "input data" changes at a very slow rate, you could consider code generation as a way to simplify the source code and gain additional performance. If the source code is simpler, it is easier to review, and the need for testing it is reduced.
Not to be a grump, but I'm afraid, from the projects I've seen, there is a strong tendency to over-design, with way too many "layers of abstraction" whose correctness would not have to be questioned if they weren't there in the first place.
One practice at a time is not going to give the best ROI. The practices are not independent.