XCTest with Core Data - iOS

Do XCTests run in parallel (that is, are multiple tests allowed to run at the same moment?). Consider a single test file with multiple tests, and multiple test files with multiple tests each.
If Core Data is used in some of the components being tested, and tests can run in parallel, what is the correct way to use Core Data in the tests? For example, a test should start with a 'clean' data store, add objects as needed, and then be verified against the contents of the store. It sounds like if they all use the same managed object context/store they'll be pointing to the same data and thus be at risk of colliding with each other.

Tests are run serially on the main thread of the test runner process. However, nothing automatically guards against you starting asynchronous work that could extend into the execution of a future test case.
For example, a call to performBlock on an NSManagedObjectContext is not guaranteed to execute before the next test starts. This can be especially problematic if your tests trigger saves which propagate asynchronously to parent managed object contexts.
I've found it valuable to write easily testable code, which means injecting the managed object contexts or other dependencies into the code under test. That should allow you to build an independent Core Data stack for each test case rather than unexpectedly sharing global state in a single context. Then you just need to beware of overly permissive notification observers which don't bother to check the sender of an NSNotification (i.e. when observing NSManagedObjectContextDidSave notifications).
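A minimal sketch of that injection approach (the test class and "Thing" entity name are invented for illustration): each test case builds its own in-memory stack in setUp, so every test starts from an empty store and nothing is shared between tests.

```swift
import CoreData
import XCTest

final class ThingStoreTests: XCTestCase {
    var container: NSPersistentContainer!

    override func setUp() {
        super.setUp()
        // A brand-new in-memory store per test: no shared global state,
        // and every test starts with an empty database.
        container = NSPersistentContainer(name: "Model")
        let description = NSPersistentStoreDescription()
        description.type = NSInMemoryStoreType
        container.persistentStoreDescriptions = [description]
        container.loadPersistentStores { _, error in
            XCTAssertNil(error)
        }
    }

    func testStartsEmpty() throws {
        // Inject container.viewContext into the code under test instead of
        // letting it reach for some shared/global context.
        let request = NSFetchRequest<NSManagedObject>(entityName: "Thing")
        let count = try container.viewContext.count(for: request)
        XCTAssertEqual(count, 0)
    }
}
```

If the code under test observes NSManagedObjectContextDidSave, have the observer check that the notification's object is the injected context before reacting, so saves from another test's stack are ignored.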

Each XCTest method is run sequentially (one at a time).
To test Core Data I often create an in-memory persistent store; here you have a good snippet: code. Using this kind of MOC you always have a clean Core Data state.
Please also check this Ray Wenderlich tutorial.

1. Test cases are executed one by one, and test files as well.
2. Your Core Data managed object context should be created by importing your app into your test case file (@testable import product_name) and accessing the Core Data objects from the test case files. All the test cases will run independently.
So, as you mentioned, the test should start with a 'clean' data store, add objects as needed, and then be tested based on the contents of the store. Yes, this is the correct way; make sure the Core Data managed objects are created in the test files. Test cases can be run from the Test Navigator.


XCUITest: How to jump into app code? How to modify the state of the app under test?

Coming from an Android/Espresso background, I am still struggling with XCUITest and UI testing for iOS.
My question is about two related but distinct issues:
How to compile and link against sources of the app under test?
How to jump into methods of the app under test and modify its state at runtime?
To tackle these questions, we should first understand the differences between Xcode's "unit test targets" and "UI test targets".
XCUITests run in a completely separate process and cannot jump into methods of the app under test. Moreover, by default, XCUITests are not linked against any sources of the app under test.
In contrast, Xcode's unit tests are linked against the app sources. There is also the option to use "@testable imports". In theory, this means that unit tests can jump into arbitrary app code. However, unit tests do not run against the actual app. Instead, unit tests run against a stripped-down version of the iOS SDK without any UI.
Now there are different workarounds for these constraints:
Add some selected source files to the UI test target. This does not make it possible to call into the app, but at least it allows sharing selected code between the app and the UI tests.
Pass launch arguments via CommandLine.arguments from the UI tests to the app under test. This makes it possible to apply test-specific configurations to the app under test. However, those launch arguments need to be parsed and interpreted by the app, which pollutes the app with testing code. Moreover, launch arguments are only a non-interactive way to change the behavior of the app under test.
Implement a "debug UI" that is only accessible to XCUITest. Again, this has the drawback of polluting the app code.
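As a sketch of the launch-argument workaround (the flag names and the TestLaunchConfig helper are my own, not part of XCUITest): the UI test appends arguments before launch, and the app parses them once at startup.

```swift
// App-side helper. The UI test side would set the arguments before launch:
//   let app = XCUIApplication()
//   app.launchArguments += ["-uiTesting", "-fixture", "emptyStore"]
//   app.launch()

struct TestLaunchConfig {
    let isUITesting: Bool
    let fixtureName: String?

    init(arguments: [String]) {
        // Presence flag: are we launched by a UI test at all?
        isUITesting = arguments.contains("-uiTesting")
        // Optional value flag: which test fixture to load.
        if let i = arguments.firstIndex(of: "-fixture"), i + 1 < arguments.count {
            fixtureName = arguments[i + 1]
        } else {
            fixtureName = nil
        }
    }
}

// In the app, parse CommandLine.arguments once at startup:
let config = TestLaunchConfig(arguments: ["MyApp", "-uiTesting", "-fixture", "emptyStore"])
print(config.isUITesting)         // true
print(config.fixtureName ?? "-")  // emptyStore
```

Keeping the parsing in one small type limits the "pollution" the answer mentions to a single place checked at launch.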
This leads to my concluding questions:
Which alternative methods exist to make XCUITests more powerful/dynamic/flexible?
Can I compile and link UI tests against the entire app source and all pod dependencies, instead of only a few selected files?
Is it possible to gain the power of Android's instrumented tests + Espresso, where we can perform arbitrary state modifications on the app under test?
Why We Need This
In response to #theMikeSwan, I would like to clarify my stance on UI test architecture.
UI tests should not need to link to app code; they are designed to simulate a user tapping away inside your app. If you were to jump into the app code during these tests you would no longer be testing what your app does in the real world; you would be testing what it does when manipulated in a way no user ever could. UI tests should not have any need of any app code any more than a user does.
I agree that manipulating the app in such a way is an anti-pattern that should be only used in rare situations.
However, I have a very different stance on what should be possible.
In my view, the right approach for UI tests is not black-box testing but gray-box testing. Although we want UI tests to be as black-boxy as possible, there are situations where we want to dig deep into implementation details of the app under test.
Just to give you a few examples:
Extensibility: No UI testing framework can provide an API for each and every use case. Project requirements are different and there are times where we want to write our own function to modify the app state.
Internal state assertions: I want to be able to write custom assertions for the state of the app (assertions that do not only rely on the UI). In my current Android project, we had a notoriously broken subsystem. I asserted this subsystem with custom methods to guard against regression bugs.
Shared mock objects: In my current Android project, we have custom hardware that is not available for UI tests. We replaced this hardware with mock objects. We run assertions on those mock objects right from the UI tests. These assertions work seamlessly via shared memory. Moreover, I do not want to pollute the app code with all the mock implementations.
Keep test data outside: In my current Android project, we load test data from JUnit right into the app. With XCUITest's command line arguments, this would be way more limited.
Custom synchronization mechanisms: In my current Android project, we have wrapper classes around multithreading infrastructure to synchronize our UI tests with background tasks. This synchronization is hard to achieve without shared memory (e.g. Espresso IdlingResources).
Trivial code sharing: In my current iOS project, I share a simple definition file for the aforementioned launch arguments. This makes it possible to pass launch arguments in a type-safe way without duplicating string literals. Although this is a minor use case, it still shows that selective code sharing can be valuable.
For UI tests you shouldn't have to pollute your app code too much. You could use a single command line argument to indicate UI tests are running and use that to load up some test data, log in a test user, or pick a testing endpoint for network calls. With good architecture you will only need to make the adjustment once, when the app first launches, with the rest of your code oblivious that it is using test data (much like if you have a development environment and a production environment that you switch between for network calls).
This is exactly the thing that I am doing in my current iOS project, and this is exactly the thing that I want to avoid.
Although a good architecture can avoid too much havoc, it is still a pollution of the app code. Moreover, this does not solve any of the use cases that I highlighted above.
By proposing such a solution, you essentially admit that radical black-box testing is inferior to gray-box testing.
As in many parts of life, a differentiated view is better than a radical "use only the tools that we give you, you should not need to do this".
UI tests should not need to link to app code; they are designed to simulate a user tapping away inside your app. If you were to jump into the app code during these tests you would no longer be testing what your app does in the real world; you would be testing what it does when manipulated in a way no user ever could. UI tests should not have any need of any app code any more than a user does.
For unit tests and integration tests, of course, you use @testable import … to get access to any methods and properties that are not marked private or fileprivate. Anything marked private or fileprivate will still be inaccessible from test code, but everything else, including internal, will be accessible. These are the tests where you should intentionally throw data in that can't possibly occur in the real world to make sure your code can handle it. These tests should still not reach into a method and make any changes, or the test won't really be testing how the code behaves.
You can create as many unit test targets as you want in a project and you can use one or more of those targets to hold integration tests rather than unit tests. You can then specify which targets run at various times so that your slower integration tests don't run every time you test and slow you down.
The environment unit and integration tests run in actually has everything. You can create an instance of a view controller and call loadViewIfNeeded() to set up the entire view. You can then test for the existence of various outlets and trigger them to send actions (check out UIControl's sendActions(for:) method). Provided you have set up the necessary mocks, this will let you verify that when a user taps button A, a call gets sent to the proper method of thing B.
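A small sketch of that pattern (the view controller and its callback are invented for illustration, with a closure standing in for "thing B"):

```swift
import UIKit
import XCTest

final class GreetingViewController: UIViewController {
    let button = UIButton(type: .system)
    var onTap: (() -> Void)?   // stands in for the call to "thing B"

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(button)
        button.addTarget(self, action: #selector(tapped), for: .touchUpInside)
    }

    @objc private func tapped() { onTap?() }
}

final class GreetingViewControllerTests: XCTestCase {
    func testButtonSendsAction() {
        let vc = GreetingViewController()
        vc.loadViewIfNeeded()          // builds the full view hierarchy

        var called = false
        vc.onTap = { called = true }   // mock in place of the real collaborator

        // Simulate the user tap without a UI test:
        vc.button.sendActions(for: .touchUpInside)
        XCTAssertTrue(called)
    }
}
```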
For UI tests you shouldn't have to pollute your app code too much. You could use a single command line argument to indicate UI tests are running and use that to load up some test data, log in a test user, or pick a testing endpoint for network calls. With good architecture you will only need to make the adjustment once, when the app first launches, with the rest of your code oblivious that it is using test data (much like if you have a development environment and a production environment that you switch between for network calls).
If you want to learn more about testing Swift Paul Hudson has a very good book you can check out https://www.hackingwithswift.com/store/testing-swift. It has plenty of examples of the various kinds of tests and good advice on how to split them up.
Update based on your edits and comments:
It looks like what you really want is Integration Tests. These are easy to miss in the world of Xcode as they don't have their own kind of target to create. They use the Unit Test target but test multiple things working together.
Provided you haven't added private or fileprivate to any of your outlets you can create tests in a Unit Test target that makes sure the outlets exist and then inject text or trigger their actions as needed to simulate a user navigating through your app.
Normally this kind of testing would just go from one view controller to a second one to test that the right view controller gets created when an action happens but nothing says it can't go further.
You won't get images of the screen for a failed test like you do with UI tests, and if you use storyboards, make sure to instantiate your view controllers from the storyboard. Be sure that you are grabbing any navigation controllers and such that are required.
This methodology will allow you to act like you are navigating through the app while being able to manipulate whatever data you need as it goes into various methods.
If you have a method with 10 lines in it and you want to tweak the data between lines 7 and 8 you would need to have an external call to something mockable and make your changes there or use a breakpoint with a debugger command that makes the change. This breakpoint trick is very useful for debugging things but I don't think I would use it for tests since deleting the breakpoint would break the test.
I had to do this for a specific app. What we did was create a kind of debug menu accessible only to UI tests (using launch arguments to make it available) and displayed with a certain gesture (two taps with two fingers, in our case).
This debug menu is just a pop-up view appearing over all screens. In this view, we add buttons that let us update the state of the app.
You can then use XCUITest to display this menu and interact with its buttons.
I've come across this same problem, OP. Coming from the Android ecosystem and attempting to leverage its solutions for iOS will have you banging your head as to why Apple does things this way. It makes things difficult.
In our case we replicated a network mocking solution for iOS that allows us to control the app state using static response files. However, using a standalone proxy to do this makes running XCUITest difficult on physical devices. Foundation's URLSession provides features that allow you to do the same thing from inside the configured session objects (see URLProtocol and URLSessionConfiguration.protocolClasses).
Now, our UI tests have an IPC problem, since the runner app is in its own process. Previously the proxy and the UI tests lived in the same process, so it was easy to control the responses returned for certain requests.
This can be done using some odd bridges for interprocess communication, like CFMessagePort and some others (see NSHipster here).
Hope this helps.
How exactly does the unit test environment work?
The unit tests are in a bundle which is injected into the app.
Actually, several bundles are injected. The overarching XCTest bundle is a framework called XCTest.framework, and you can actually see it inside the built app.
Your tests are a bundle too, with an .xctest suffix, and you can see that in the built app as well.
Okay, let's say you ask for one or more tests to run.
The app is compiled and runs in the normal way on the simulator or the device: for example, if there is a root view controller hierarchy, it is assembled normally, with all launch-time events firing the way they usually do (for instance, viewDidLoad, viewDidAppear, etc.).
At last, the launch mechanism takes its hands off. The test runner is willing to wait quite a long time for this moment to arrive. When the test runner sees that this moment has arrived, it executes the test bundle's executable and the tests begin to run. The test code is able to see the main bundle code because it has imported the main bundle as a module, so it is linked to it.
When the tests are finished, the app is abruptly torn down.
So what about UI tests?
UI tests are completely different.
Nothing is injected into your app.
What runs initially is a special separate test runner app, whose bundle identifier is named after your test suite with Runner appended; for example, com.apple.test.MyUITests-Runner. (You may even be able to see the test runner launch.)
The test runner app in turn backgrounds itself and launches your app in its own special environment and drives it from the outside using the Accessibility framework to "tap" buttons and "see" the interface. It has no access to your app's code; it is completely outside your app, "looking" at its interface and only its interface by means of Accessibility.
We use GCDWebServer to communicate between the tests and the app under test.
The test asks the app under test to start this local server, and then the test can talk to the app using it. You can make requests to this server to fetch data from the app, as well as to tell the app to modify some behavior by providing data.
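A rough sketch of such a backchannel, assuming GCDWebServer is linked into the app target (the endpoint path and port are invented for illustration):

```swift
import GCDWebServer

// Started by the app under test, e.g. only when a "-uiTesting" launch
// argument is present. The UI test process then talks to it over plain HTTP.
func startTestServer(stateProvider: @escaping () -> [String: Any]) -> GCDWebServer {
    let server = GCDWebServer()
    server.addHandler(forMethod: "GET",
                      path: "/state",
                      request: GCDWebServerRequest.self) { _ in
        // Serialize some internal app state for the test to assert on.
        return GCDWebServerDataResponse(jsonObject: stateProvider())
    }
    _ = server.start(withPort: 8080, bonjourName: nil)
    return server
}
```

On the test side, a plain URLSession request to http://localhost:8080/state returns the JSON state; the test parses and asserts on it, which recovers the shared-memory-style assertions the question asks about.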

How to create snapshot of CoreData state?

Background story
I am developing a big iOS app. This app works under specific assumptions. The main one is that the app should work offline, with internal storage that is a snapshot of the last synchronized state of the data saved on the server. I decided to use Core Data to handle this storage. Every time the app launches I check if a WiFi connection is available and then try to synchronize the storage with the server. Synchronization can take about 3 minutes because of the size of the data.
The synchronization process consists of several stages and in each of them I:
fetch some data from the server (XML)
deserialize it
save it in Core Data
Problem
The synchronization process can be interrupted for several reasons (internet connection, server down, user leaving the application, etc.). This may leave the data out of sync.
Let's assume the synchronization process has 5 stages and it breaks after the third. That results in 3/5 of the data being updated in internal storage and the rest being out of sync. I can't allow that, because the data are strongly connected to each other (business logic).
Goal
I don't know if it is possible, but I'm thinking about implementing one solution. At the start of the synchronization process I would like to create a snapshot (some kind of copy) of the current state of Core Data and work on it during the synchronization process. When the synchronization process completes successfully, this snapshot would overwrite the current Core Data state. When the synchronization is interrupted, the snapshot can simply be discarded. My internal storage will be safe.
Questions
How to create a Core Data snapshot?
How to work with a Core Data snapshot?
How to overwrite the Core Data state with a snapshot?
Thanks in advance for any help. Code examples, if possible, will be appreciated.
EDIT 1
The size of the data is too big to handle with multiple Core Data contexts. During synchronization I save the current context multiple times to free memory. If I don't, the application crashes with a memory error.
I think it should be resolved with multiple NSPersistentStoreCoordinators, using for example this method: link. Unfortunately, I don't know how to implement this.
You should do exactly what you said. Just create a class (let's call it SyncBuffer) with methods load, sync and save.
The load method should read all entities from Core Data and store them in class variables.
The sync method should perform all the synchronisation using the class variables.
Finally, the save method should save all values from the class variables back to Core Data - here you can even remove all data from Core Data and save brand-new values from the SyncBuffer.
A Core Data stack is composed at its core of three components: a context (NSManagedObjectContext), a model (NSManagedObjectModel) and a store coordinator (NSPersistentStoreCoordinator/NSPersistentStore).
What you want is two different contexts that share the same model but use two different stores. The stores will be of the same type (i.e. an SQLite db) but use different source files.
At this page you can see some documentation about the stack:
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/CoreData/InitializingtheCoreDataStack.html#//apple_ref/doc/uid/TP40001075-CH4-SW1
The NSPersistentContainer is a convenience class to initialise the CoreData stack.
Take the example of the initialisation of an NSPersistentContainer from the link: you can have the exact same code to initialise it twice, with the only difference that the two NSPersistentContainers use different .sqlite files. For instance, you could have two properties in your app delegate, called managedObjectContextForUI and managedObjectContextForSyncing, that load different .sqlite files. In your program you then use the context from one store to show current data to the user, and the context backed by the other .sqlite file for sync operations. When the sync operations are finally done, you can swap the two files and, after clearing and reloading the NSPersistentContainer (this might be tricky, because you will want to invalidate and reload all managed objects: you are switching to an entirely new context), show the newly synced data to the user and start syncing again on a new .sqlite file.
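The two-container idea can be sketched roughly like this (assuming a model named "Model"; the store names and the helper function are illustrative, not from the answer):

```swift
import CoreData

// Build a container over the shared model but a caller-chosen .sqlite file.
func makeContainer(storeName: String) -> NSPersistentContainer {
    let container = NSPersistentContainer(name: "Model")
    let url = NSPersistentContainer.defaultDirectoryURL()
        .appendingPathComponent("\(storeName).sqlite")
    container.persistentStoreDescriptions = [NSPersistentStoreDescription(url: url)]
    container.loadPersistentStores { _, error in
        if let error = error { fatalError("Failed to load store: \(error)") }
    }
    return container
}

// Same model, two independent stores:
let uiContainer = makeContainer(storeName: "UI")
let syncContainer = makeContainer(storeName: "Syncing")
// Show uiContainer.viewContext to the user; run the sync against
// syncContainer.newBackgroundContext(). On success, swap the files and
// reload the containers before presenting the synced data.
```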
The way I understand the problem is that you wish to be able to download a large "object graph". It is, however, so large that it cannot be loaded into memory at once, so you would have to break it into chunks and then merge it locally into Core Data.
If that is the case, I think that's not trivial. I am not sure I can think of direct solution without understanding the object relations and even then it may be really overwhelming.
An overly simplistic solution may be to generate the sqlite file on the backend and then download it in chunks; it seems ugly, but it serves to separate the business logic from the sync, i.e. the sqlite file becomes the transport layer. So, I think the essence of the solution would be to find a way to physically represent the data you are syncing in a format that allows splitting it into chunks and that can afterwards be merged into a sqlite file (if you insist on using Core Data).
Please also note that, as far as I know, Amazon (https://aws.amazon.com/appsync/) and Realm (https://realm.io/blog/introducing-realm-mobile-platform/) provide background sync of your local database, but those are paid services and you would have to be careful not to be locked in (don't depend on their libs in your model layer; instead have a translation layer).

How can I access a core data store from an Xcode UI test?

We've got a suite of UI tests for our app written using KIF which I'd like to convert to use the new Xcode UI test framework.
Some of our current tests work like this:
Assert that there are no objects in a core data table
Do some stuff in the UI
Assert that there are some objects in the core data table
Cancel the thing we were doing in the UI
Assert that there are no leaked objects in the core data table
I need to look at the actual core data store, not just the UI, because the leaked rows wouldn't necessarily be shown in the UI. How can I access a core data store from an Xcode UI test?
The answer is that you can't, not without abusing signals. XCUITests are not intended to touch the metal; they are intended to exercise user facing behavior only.
The test that you describe sounds like a perfectly good candidate for a unit test, though!
UPDATE:
(based on comments from OP)
Well, as far as I can tell, you have four options:
You can create a backchannel that uses signal passing to break the separation between the XCUITest and the app's internals.
You can build functionality to mock the UI interactions in your unit tests so that you can validate against side effects.
You can add an assertion and then exercise it manually.
You can file a Radar asking for the functionality.
You can easily test against Core Data, but the test you described does not make much sense. If you are cancelling the UI action, then Core Data does not save to disk. When you mention "table", that suggests to me that you are looking on disk.
If you want to load an empty Core Data store, create some objects, verify they were created, delete them and confirm they were deleted; that can all be done in a UI test.
Start with no store on disk (or use an in-memory store)
Run your UI test
Do a fetch against Core Data and confirm objects are in memory
Perform your cancel code
Confirm the objects are removed from Core Data.
What part are you having an issue with?

Unit Testing with Kiwi, Core Data and Magical Record

I'm having issues using a 'fake' store for my Unit Tests.
I have installed Kiwi by adding its framework folder to my project and replacing Xcode's default test cases with Kiwi tests. These all run fine.
Since I'm using Core Data, I need to create a 'fake' store so I'm not playing with the real database. I used http://www.cimgf.com/2012/05/15/unit-testing-with-core-data/ as my basic guide for this.
However, since Xcode's default test implementation runs the tests after launching the app, my '[MagicalRecord setupCoreDataStackWithStoreNamed:@"Store.sqlite"]' is still fired inside the app delegate before any of the tests run.
By the time the tests try to use '[MagicalRecord setupCoreDataStackWithInMemoryStore]', the sqlite store is already set up, so the in-memory store doesn't get set up (AFAIK): the aforementioned setup method first checks whether a stack already exists and simply returns without executing anything if it does, so I end up with the sqlite database anyway.
As far as I can tell, this leaves me with the following options:
Put some environment variables or flags in for the test cases, and check for these in the app delegate, creating the appropriate store depending on this variable (i.e. tweaking my actual code for the sake of testing - not pretty, nor recommended by any practising TDD/BDDers).
Add managed context properties on all my controllers so I can manually specify the store to use (removing a great deal of the niceties of the MagicalRecord singleton access pattern).
Play (carefully) with my actual database (I'm not really willing to even contemplate this).
None of these seems to be a particularly good solution, so I'm hoping someone can see a better solution that I've stupidly overlooked.
Your tests should not be launching the app delegate. Try setting up your tests so that only the tests setup the in-memory core data store, as suggested in the article you reference.

Recommendations to test API request layer in iOS apps using NSOperations and Coredata

I develop an iOS app that uses a REST API. The iOS app requests data in worker threads and stores the parsed results in core data. All views use core data to visualize the information. The REST API changes rapidly and I have no real control over the interface.
I am looking for advice on how to perform integration tests for the app as easily as possible. Should I test against the API or against mock data? But how do you mock GET requests properly if you can create resources with POST or modify them with PUT?
What frameworks do you use for these kinds of problems? I played with Frank, which looks nice but is complicated due to rapid UI changes in the iOS app. How would you test the "API request layer" in the app? The worker threads are NSOperations in a queue - everything is built asynchronously. Any recommendations?
I would strongly advise you to mock the server. Servers go down, their behavior changes, and if a test failure implies "maybe my code still works", you have a problem on your hands, because your test doesn't tell you whether or not the code is broken, which is the whole point.
As for how to mock the server, for a unit test that does this:
first_results = list_things()
delete_first_thing()
results_after_delete = list_things()
I have a mock data structure that looks like this:
{ list_things_request : [first_results, results_after_delete],
delete_thing_request: [delete_thing_response] }
It's keyed on your request, and the value is an array of responses for that request in the order that they were seen. Thus you can support repeatedly running the same request (like listing the things) and getting a different result. I use this format because in my situation it is possible for my API calls to run in a slightly different order than it did last time. If your tests are simpler, you might be able to get away with a simple list of request/response pairs.
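The data structure described above might be sketched like this (the PlaybackServer name and string-typed requests/responses are simplifications of mine):

```swift
// Playback side of a record/playback mock server. Each request key maps to
// the ordered list of responses captured while recording; playback pops them
// in order, so repeating the same request can return different results.
final class PlaybackServer {
    private var responses: [String: [String]]

    init(recorded: [String: [String]]) {
        self.responses = recorded
    }

    // Returns the next recorded response for this request, or nil when the
    // test issues a request that was never recorded (or too many times).
    func respond(to request: String) -> String? {
        guard var queue = responses[request], !queue.isEmpty else { return nil }
        let next = queue.removeFirst()
        responses[request] = queue
        return next
    }
}

let server = PlaybackServer(recorded: [
    "list_things_request": ["[thing1, thing2]", "[thing2]"],
    "delete_thing_request": ["deleted"],
])
print(server.respond(to: "list_things_request") ?? "nil")  // [thing1, thing2]
print(server.respond(to: "delete_thing_request") ?? "nil") // deleted
print(server.respond(to: "list_things_request") ?? "nil")  // [thing2]
```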
I have a flag in my unit tests that indicates whether I am in "record" mode (that is, talking to a real server and recording this data structure to disk) or in "playback" mode (talking to the data structure). When I need to work with a test, I "record" the interactions with the server and then play them back.
I use the little-known SenTestCaseDidStartNotification to track which unit test is running and isolate my data files appropriately.
The other thing to keep in mind is that instability is the root of all evil. If you have code that does things with sets, or gets the current date, and such, this tends to change the requests and responses, which do not work in an offline scenario. So be careful with those.
(Since nobody has stepped in yet and given you a complete walkthrough.) My humble advice: step back a bit, take out the magic of async, regard everything as sync (API calls, parsing, persistence), and isolate each step as a consumer/producer. After all, you don't want to unit-test NSURLConnection, or JSONKit, or whatever (they should have been tested if you use them); you want to test YOUR code. Your code takes some input and produces output, unaware of the fact that the input was in fact output generated in a background thread somewhere. You can do the isolated tests all sync.
Can we agree on the fact that your Views don't care about how their model data was provided? If yes, well, test your View with mock objects.
Can we agree on the fact that your parser doesn't care about how the data was provided? If yes, well, test your parser with mock data.
Network layer: the same applies as described above; in the end you'll get an NSDictionary of headers and some NSData or NSString of content. I don't think you want to unit-test NSURLConnection or any third-party networking API you trust (asihttp, afnetworking, ...?), so in the end, what's to be tested?
You can mock up URLs, request headers and POST data for each use-case you have, and setup test cases for expected responses.
In the end, IMHO, it's all about "normalizing" out async.
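For example, the parser step can be tested completely synchronously with canned data, with no networking involved (the Thing model and the JSON below are invented for illustration):

```swift
import Foundation

// The parser doesn't care how the bytes arrived: test it with fixed data.
struct Thing: Codable, Equatable {
    let id: Int
    let name: String
}

func parseThings(from data: Data) throws -> [Thing] {
    try JSONDecoder().decode([Thing].self, from: data)
}

// Canned response standing in for whatever the network layer would deliver:
let json = #"[{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]"#.data(using: .utf8)!
let things = try! parseThings(from: json)
print(things.count) // 2
```

The same decoupling applies to the persistence step: hand it already-parsed model objects and assert on what it stores.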
Take a look at Nocilla
For more info, check this other answer to a similar question
