I am developing an iOS app and I am trying to do it with test-driven development. I've been reading a lot on the subject and one of the things I came across in a book is that tests are supposed to be fast, repeatable and reliable. As a consequence, when writing a test for networking code, the test should "simulate" the network, so that it can be executed regardless of network availability.
In my code I have a method that, given a URL, retrieves an image from the internet (using NSData's dataWithContentsOfURL:), scales it and stores it in the file system:
+ (void)getImage:(NSString *)imageUrl {
    UIImage *image = [UIImage imageWithData:
        [NSData dataWithContentsOfURL:[NSURL URLWithString:imageUrl]]];
    // Scale image
    // Save image in storage
}
My question is: what would be a good approach to test this method, so that the test passes when the code is correct and fails when it isn't, without taking into account possible network failures?
Currently in my tests I am ignoring the fact that the method needs a connection, and just expect the outcome to be correct. However, I can already see how this will become a problem as my test suite grows, since the tests already take a considerable amount of time to run. More importantly, I don't want my tests to fail just because there isn't a good network connection, as I believe that is not their purpose. I appreciate any suggestions.
Your intention to keep your tests fast, reliable and repeatable is a good one. Removing the dependency on the network is a great way to help achieve those goals.
Instead of being locked into the method as it is now, remember that the tool you are trying to use is called test-DRIVEN development. Tests aren't just a wrapper to make sure your code works as written; they're a way to discover how your code can be improved to make it more testable, reliable, reusable, etc.
To eliminate the network dependency, you could consider passing a URL (your parameter's name is already imageUrl) instead of a string. This way you can create a URL pointing to the local file system instead of to a location on the network.
I also notice that your method is really doing a lot of work. A name that describes what it's really doing might be loadImageFromURLAndScaleItAndSaveItToDisk:. Why not break it up into several smaller, more testable methods? Maybe one that loads an image from a URL, one that scales an image, and one that saves an image to the local file system. You could test each individual portion, and then write another unit test that tests your original method by comparing its results against the results of calling the smaller methods in order.
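A minimal Swift sketch of that decomposition (the names and the PNG format are illustrative, not from the question; the same shape works in Objective-C):

import UIKit

// Each step is small and testable on its own.
func loadImage(from url: URL) -> UIImage? {
    // A file:// URL works here too, so tests never need the network.
    guard let data = try? Data(contentsOf: url) else { return nil }
    return UIImage(data: data)
}

func scale(_ image: UIImage, by factor: CGFloat) -> UIImage {
    let size = CGSize(width: image.size.width * factor,
                      height: image.size.height * factor)
    return UIGraphicsImageRenderer(size: size).image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}

func saveImage(_ image: UIImage, to url: URL) throws {
    try image.pngData()?.write(to: url)
}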
I guess I would also say that keeping tests fast, reliable and repeatable is a good goal, but if you want to make a test that is 'slower' or 'not 100% reliable', that's OK. A test that provides value to you is better than not writing a test because you don't think it's good enough.
I know most of that is general, but I hope it helps.
Related
I am doing iOS UI testing with XCUITest.
Since the tests do not have direct access to the app, how do we set defaults in the app?
You can pass all the required data using launch arguments.
Please read the documentation:
https://developer.apple.com/documentation/xctest/xcuiapplication/1500477-launcharguments
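For example (the "-uiTesting" flag and "skipOnboarding" key are made-up names; note that arguments of the form "-key value" land in UserDefaults' argument domain automatically):

// In the UI test target:
let app = XCUIApplication()
app.launchArguments += ["-uiTesting", "-skipOnboarding", "YES"]
app.launch()

// In the app target:
if ProcessInfo.processInfo.arguments.contains("-uiTesting") {
    // configure the app for testing
}
let skipOnboarding = UserDefaults.standard.bool(forKey: "skipOnboarding")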
The other (and a bit slower) option is to use deep links.
This sounds much more complex than it is, but a technique that has worked for me is to set up an HTTP server in the testing suite that you can use to fetch mock data in your test code. I have had success with Embassy and Ambassador.
So you'd pass in a launch argument telling your app code to fetch from the local server. For the case of UserDefaults, a helper class that makes these specific requests to the local endpoint works well. This unfortunately means your app code has to do some setup for testing, but depending on your needs it could be a good compromise.
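A rough sketch of such a server, adapted from Embassy's README (check the current API before relying on it; the exact signatures may have changed across versions):

import Foundation
import Embassy

let loop = try SelectorEventLoop(selector: try KqueueSelector())
let server = DefaultHTTPServer(eventLoop: loop, port: 8080) { environ, startResponse, sendBody in
    // Serve a canned JSON payload for every request.
    startResponse("200 OK", [("Content-Type", "application/json")])
    sendBody(Data("{\"images\": []}".utf8))
    sendBody(Data())  // an empty chunk signals EOF
}
try server.start()
DispatchQueue.global().async { loop.runForever() }  // keep the test thread free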
Another possible solution to crossing the process boundary:
If you are not doing on device testing, you can access "SIMULATOR_SHARED_RESOURCES_DIRECTORY" and provide the data in a file for your test to consume.
On a real device this would be more difficult because you would need to use a shared Group container.
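A sketch of the simulator-only idea (the file name "fixtures.json" is made up):

import Foundation

// Both the test runner and the simulator app can see this directory.
let shared = ProcessInfo.processInfo.environment["SIMULATOR_SHARED_RESOURCES_DIRECTORY"]!
let fixtureURL = URL(fileURLWithPath: shared).appendingPathComponent("fixtures.json")

// Test side: write the data the app should consume.
try Data("{\"loggedIn\": true}".utf8).write(to: fixtureURL)

// App side: read it back at launch.
let fixtures = try? Data(contentsOf: fixtureURL)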
You probably can use "Application Data Package". It's when you save your app state into a container and then run tests with the saved environment.
There are many guides on how to do it; this is just one of them:
https://www.codementor.io/paulzabelin/xcode-app-data-suni6p4ma
It might be a more powerful and complicated tool than you need, but it opens up a big world of possibilities ;)
I am writing an app that makes plenty of network requests. As usual they are
async, i.e. the call of the request method returns immediately and the result
is delivered via a delegate method or in a closure after some delay.
Now on my registration screen I send a registration request to my backend and
want to verify that the success UI is shown when the request finishes.
What options are out there to wait for the request to finish, verify the
success UI, and only then leave the test method?
Also, are there any cleverer options than waiting for the request to finish?
Thanks in advance!
Trivial Approach
Apple implemented major improvements in Xcode 9 / iOS 11 that enable you to wait for the appearance of a UI element. You can use the following one-liner:
<#yourElement#>.waitForExistence(timeout: 5)
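For example, asserting on a success message after registration (the element and label text are placeholders):

let app = XCUIApplication()
let successLabel = app.staticTexts["Registration successful"]
XCTAssertTrue(successLabel.waitForExistence(timeout: 5))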
Advanced Approach
In general UI and unit tests (referred to as tests here) must run as fast as possible so the developer can run them often and does not get frustrated by the need to run a slow test suite multiple times a day. In some cases, there is the possibility that an (internal or security-related) app accesses an API that can only be accessed from certain networks / IP ranges / hosts. Also, most CI services offer pretty bad hardware and limited internet-connection speed.
For all of those reasons, it is recommended to implement tests in a way that they make no real network requests. Instead, they run with fake data, so-called fixtures. A clever developer builds this test suite in such a way that the source of the data can be switched with something as simple as a boolean property. Additionally, when the switch is set to fetch real backend data, the fixtures can be refreshed/recorded from the backend automatically. This way it is pretty easy to update the fake data and to quickly detect changes to the API.
But the main advantage of this approach is speed. Your tests will not make real network requests but instead run against local data, which makes them independent of:
server issues
connection speed
network restrictions
This way you can run your tests very fast and thus much more often - which is a good way of writing code ("Test Driven Development").
On the other hand, you won't detect server changes immediately anymore, since the fake data won't change when the backend data changes. But this is easily solved by refreshing your fixtures via the switch you implemented (because you are a smart developer), which makes this issue a story you can tell your children!
But wait, I forgot something! Why is this a replacement for the trivial approach above, you ask? Simple! Since you use local data, which is available immediately, you can call the completion handler immediately too. So there is no delay between making the request and verifying your success UI. This means you don't need to wait, which makes your tests even faster!
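Here is a rough sketch of that switch (all names are illustrative, not a definitive implementation):

import Foundation

protocol BackendClient {
    func fetch(_ path: String, completion: @escaping (Data) -> Void)
}

// Talks to the real backend; used when recording/refreshing fixtures.
struct LiveClient: BackendClient {
    let baseURL: URL
    func fetch(_ path: String, completion: @escaping (Data) -> Void) {
        URLSession.shared.dataTask(with: baseURL.appendingPathComponent(path)) { data, _, _ in
            completion(data ?? Data())
        }.resume()
    }
}

// Replays local fixtures; the completion handler fires immediately, no waiting.
struct FixtureClient: BackendClient {
    func fetch(_ path: String, completion: @escaping (Data) -> Void) {
        let url = Bundle.main.url(forResource: path, withExtension: "json")!
        completion(try! Data(contentsOf: url))
    }
}

// The "simple switch": one boolean decides where the data comes from.
let useFixtures = true
let client: BackendClient = useFixtures
    ? FixtureClient()
    : LiveClient(baseURL: URL(string: "https://api.example.com")!)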
I hope this helps some of my fellows out there. If you need more guidance regarding this topic don't hesitate and reply to this post.
Cya!
I was wondering how to use a golden master approach to start creating some tests for my MVC 4 application.
"Gold master testing refers to capturing the result of a process, and
then comparing future runs against the saved “gold master” (or known
good) version to discover unexpected changes." - #brynary
It's a large application with no tests, and it would be good to start development with a golden master to ensure that the changes we are making to increase the test coverage (and hopefully decrease the complexity in the long run) don't break the application.
I am thinking about capturing a day's worth of real-world traffic from the IIS logs and using that to create the golden master, however I am not sure of the easiest or best way to go about it. There is nothing out of the ordinary in the app: lots of controllers with post-backs, etc.
I am looking for a way to create a suitable golden master for a MVC 4 application hosted in IIS 7.5.
NOTES
To clarify something in regards to the comments: the "golden master" is a test you can run to verify the output of the application. It is like journalling your application and being able to replay that journal every time you make a change, to ensure you haven't broken anything.
When working with legacy code, it is almost impossible to understand
it and to write code that will surely exercise all the logical paths
through the code. For that kind of testing, we would need to
understand the code, but we do not yet. So we need to take another
approach.
Instead of trying to figure out what to test, we can test everything,
a lot of times, so that we end up with a huge amount of output, about
which we can almost certainly assume that it was produced by
exercising all parts of our legacy code. It is recommended to run the
code at least 10,000 (ten thousand) times. We will write a test to run
it twice as much and save the output.
Patkos Csaba - http://code.tutsplus.com/tutorials/refactoring-legacy-code-part-1-the-golden-master--cms-20331
My question is how do I go about doing this to a MVC application.
Regards
Basically you want to compare two large sets of results and control the variations; in practice, an integration test. I believe that real traffic can't give you exactly the control that you need.
Before making any change to the production code, you should do the following:
Create X number of random inputs, always using the same random seed, so you can generate the same set over and over again. You will probably want a few thousand random inputs.
Bombard the class or system under test with these random inputs.
Capture the outputs for each individual random input.
When you run it for the first time, record the outputs in a file (or database, etc.). From then on, you can start changing your code, run the test, and compare the execution output with the original output data you recorded. If they match, keep refactoring; otherwise, revert your change and you should be back to green.
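A sketch of that record-then-compare loop (Swift here for consistency with the rest of this page; the idea ports directly to C# for an MVC app, and systemUnderTest is a stand-in for your own code):

import XCTest

// Tiny deterministic generator so the "random" inputs are identical on every run.
struct SeededGenerator: RandomNumberGenerator {
    var state: UInt64
    mutating func next() -> UInt64 {
        state = state &* 6364136223846793005 &+ 1442695040888963407
        return state
    }
}

func systemUnderTest(_ input: Int) -> Int { input * 2 }  // placeholder

final class GoldenMasterTests: XCTestCase {
    func testAgainstGoldenMaster() throws {
        var rng = SeededGenerator(state: 42)
        let inputs = (0..<10_000).map { _ in Int.random(in: 0..<1_000, using: &rng) }
        let output = inputs.map { String(systemUnderTest($0)) }.joined(separator: "\n")

        let masterURL = URL(fileURLWithPath: "golden_master.txt")
        if FileManager.default.fileExists(atPath: masterURL.path) {
            // Playback: any difference from the recorded master is a regression.
            XCTAssertEqual(output, try String(contentsOf: masterURL, encoding: .utf8))
        } else {
            // First run: record the master.
            try output.write(to: masterURL, atomically: true, encoding: .utf8)
        }
    }
}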
This doesn't match your approach, though. Imagine a scenario in which a user purchases a certain product: you cannot determine the outcome of the transaction (insufficient credit, non-availability of the product, and so on), so you cannot trust your input.
However, what you need is a way to replicate that data automatically, and browser automation in this case cannot help you much.
You can try a different approach, something like the Lightweight Test Automation Framework, or else MvcIntegrationTestFramework, which are the most appropriate to your scenario.
So my application connects to a URL (via URLConnectionDelegate) and gathers data, which contains image URLs. It then connects to each and every image URL (again, via URLConnectionDelegate) and downloads each image.
Everything works perfectly, couldn't be happier.
But the problem is that I can't really track the networkActivityIndicator. There are like, 100 connections going off at once, so I don't know when or how to set the networkActivityIndicator to turn off once the last image is done loading.
Does anyone have any suggestions without me having to redo a bunch of code?
Thanks for the help guys
The typical solution is a singleton object that you call [NetworkMonitor increaseNetworkCount] and [NetworkMonitor decreaseNetworkCount] at the appropriate points.
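A Swift sketch of that singleton (the class is hypothetical; the method names mirror the answer, and funnelling everything through the main queue keeps the count thread-safe):

import UIKit

final class NetworkMonitor {
    static let shared = NetworkMonitor()
    private var count = 0

    func increaseNetworkCount() {
        DispatchQueue.main.async {
            self.count += 1
            UIApplication.shared.isNetworkActivityIndicatorVisible = true
        }
    }

    func decreaseNetworkCount() {
        DispatchQueue.main.async {
            self.count = max(0, self.count - 1)
            // Hide the spinner only when the last connection finishes.
            UIApplication.shared.isNetworkActivityIndicatorVisible = self.count > 0
        }
    }
}

Call increaseNetworkCount when each connection starts and decreaseNetworkCount from its finish and failure delegate callbacks.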
The nicer solution is a toolkit like MKNetworkKit, which will handle this and a bunch of similar things for you (like managing your download queue, since 100 simultaneous connections is actually very bad on iOS).
I develop an iOS app that uses a REST API. The iOS app requests data in worker threads and stores the parsed results in Core Data. All views use Core Data to visualize the information. The REST API changes rapidly and I have no real control over the interface.
I am looking for advice on how to perform integration tests for the app as easily as possible. Should I test against the API or against mock data? But how do you mock GET requests properly if you can create resources with POST or modify them with PUT?
What frameworks do you use for these kind of problems? I played with Frank, which looks nice but is complicated due to rapid UI changes in the iOS app. How would you test the "API request layer" in the app? Worker threads are NSOperations in a queue - everything is build asynchronously. Any recommendations?
I would strongly advise you to mock the server. Servers go down, the behavior changes, and if a test failure implies "maybe my code still works", you have a problem on your hands, because your test doesn't tell you whether or not the code is broken, which is the whole point.
As for how to mock the server, for a unit test that does this:
first_results = list_things()
delete_first_thing()
results_after_delete = list_things()
I have a mock data structure that looks like this:
{ list_things_request : [first_results, results_after_delete],
delete_thing_request: [delete_thing_response] }
It's keyed on your request, and the value is an array of responses for that request in the order that they were seen. Thus you can support repeatedly running the same request (like listing the things) and getting a different result. I use this format because in my situation it is possible for my API calls to run in a slightly different order than it did last time. If your tests are simpler, you might be able to get away with a simple list of request/response pairs.
I have a flag in my unit tests that indicate if I am in "record" mode (that is, talking to a real server and recording this data-structure to disk) or if I am in "playback" mode (talking to the datastructure). When I need to work with a test, I "record" the interactions with the server and then play them back.
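A hypothetical sketch of that record/playback stub (persistence of the recorded structure to disk is omitted, and the request-key format is up to you):

import Foundation

final class MockServer {
    enum Mode { case record, playback }
    var mode: Mode
    // Request key -> queue of responses, replayed in the order they were seen.
    private var responses: [String: [Data]] = [:]

    init(mode: Mode) { self.mode = mode }

    func response(for requestKey: String, live: () -> Data) -> Data {
        switch mode {
        case .record:
            let data = live()                                // hit the real server
            responses[requestKey, default: []].append(data)  // remember the answer
            return data
        case .playback:
            return responses[requestKey]!.removeFirst()      // replay in order
        }
    }
}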
I use the little-known SenTestCaseDidStartNotification to track which unit test is running and isolate my data files appropriately.
The other thing to keep in mind is that instability is the root of all evil. If you have code that does things with sets, or gets the current date, and such, this tends to change the requests and responses, which do not work in an offline scenario. So be careful with those.
(Since nobody has stepped in yet and given you a complete walkthrough) My humble advice: step back a bit, take out the magic of async, regard everything as sync (API calls, parsing, persistence), and isolate each step as a consumer/producer. After all, you don't want to unit-test NSURLConnection, or JSONKit, or whatever (they should already have been tested if you use them); you want to test YOUR code. Your code takes some input and produces output, unaware of the fact that the input was in fact output generated on a background thread somewhere. You can do the isolated tests all sync.
Can we agree on the fact that your Views don't care about how their model data was provided? If yes, well, test your View with mock objects.
Can we agree on the fact that your parser doesn't care about how the data was provided? If yes, well, test your parser with mock data.
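For example, a parser test needs nothing but fixture data (the model and JSON are invented here, and it's Swift for brevity even though this thread predates it):

import XCTest

struct User: Decodable {
    let id: Int
    let name: String
}

final class ParserTests: XCTestCase {
    func testParsesUser() throws {
        // Fixture data stands in for whatever the network layer would deliver.
        let json = Data(#"{"id": 1, "name": "Alice"}"#.utf8)
        let user = try JSONDecoder().decode(User.self, from: json)
        XCTAssertEqual(user.name, "Alice")
    }
}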
Network layer: the same applies as described above; in the end you'll get an NSDictionary of headers and some NSData or NSString of content. I don't think you want to unit-test NSURLConnection or any 3rd-party networking API you trust (asihttp, afnetworking, ...?), so in the end, what's left to be tested?
You can mock up URLs, request headers and POST data for each use-case you have, and setup test cases for expected responses.
In the end, IMHO, it's all about "normalizing" out async.
Take a look at Nocilla
For more info, check this other answer to a similar question