Swift - Getting around build errors in test-driven development?

I'm starting to learn test-driven development, and I'm working in Swift. I'm supposed to write a test that fails first, then write the code needed to make it pass. From my understanding, the test should run successfully, just fail. However, in Swift, when I try to write a test that, say, checks the value of an object's specific attribute, and that class doesn't yet have such an attribute (because I'm supposed to write the test before I add it to the class), I don't get a failing test but instead a build error when attempting to build and run the test. The error is that the test is trying to access an attribute that doesn't exist on the given object. Am I going about this the wrong way? Or are these build-breaking errors what I'm supposed to get when doing TDD in Swift? Thanks!

According to Uncle Bob's Three Rules of TDD:
You are not allowed to write any more of a unit test than is sufficient to fail; and *compilation failures are failures*.
(emphasis mine). So there is actually no need for "the test to successfully run" - a compilation error is a fine excuse to write code :)

TDD is a great idea, but don't forget to apply some common sense. In a case like this, treat the build error as if it were a test failure. At some point you have to create the class and the attribute to get the code to build. Then elaborate on your test to make it do something that fails, write the code that makes it pass, and continue.
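A minimal sketch of that cycle with XCTest (User and its name property are placeholder names, not from the question):

import XCTest

final class UserTests: XCTestCase {
    func testUserHasName() {
        // Red: this does not even compile until User exists; in Swift,
        // the compile error plays the role of the failing test.
        let user = User(name: "Ada")
        XCTAssertEqual(user.name, "Ada")
    }
}

// Green: write just enough production code to make the test build and pass.
struct User {
    let name: String
}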

Related

Understanding XCTestCase life cycle - How to perform setup BEFORE Xcode unit test launches the app?

I am new to unit tests in Xcode and Swift and have some trouble understanding the life cycle of XCTestCase.
How/where do I add setup code that is executed before the actual app is launched?
The problem is that the host app is launched before any of the test setup methods (class func setUp(), func setUp(), func setUpWithError()) are executed.
Is it even possible to run test code before the host app launches?
Details:
As described in a previous question, my app uses an SQLite database to persist some data. When the app launches, a database connection is created and data is read from the database.
To make tests consistent and repeatable I would like to use a fresh database with some well-defined data every time I run the tests. To achieve this I tried to override setUpWithError(), remove the existing db file, and move a file with pre-defined data into place instead.
Unfortunately this does not work, because setUpWithError() is executed only after the host app has launched. The same is true for all the other test setup methods.
Moving a fresh database file into place before running the tests is only an example. The problem is the same for all local data which should be in place before the host app launches to ensure repeatable tests.
An answer to my previous question included a UIApplication extension with an isTesting method which can be used to check whether a test is being performed. While I could use this in my app code to set up the test data, I would consider it a bad solution. I would like to keep the test code completely separate from the production code. Is this possible?
There are several approaches to setting up data before running a test case:
NSPrincipalClass
As described here, you can create a class whose init method is executed before any test runs. This helps with setting up dependencies used by many test cases. I don't think this is the way to go in your case.
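For illustration, a minimal principal class might look like this (TestSetup is a hypothetical name; it must be registered in the test bundle's Info.plist under the NSPrincipalClass key, module-qualified, e.g. MyAppTests.TestSetup):

import Foundation

final class TestSetup: NSObject {
    // XCTest instantiates the principal class once, before any test case runs.
    override init() {
        super.init()
        // Bundle-wide, one-time setup goes here, e.g. copying a fixture
        // database into place before anything else executes.
    }
}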
isTesting
Instead of setting up the code in your app target, you can also check for isTesting early in the didFinishLaunchingWithOptions method of your AppDelegate and simply return there. In this case the regular startup code is not executed, and you can run your custom database setup code in the setUp of your test class.
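A sketch of that early return, with an isTesting helper mirroring the extension mentioned in the question (detecting the test run via the XCTestConfigurationFilePath environment variable, which the XCTest runner injects, is one common approach):

import UIKit

extension UIApplication {
    // True when the process was launched by the XCTest runner.
    static var isTesting: Bool {
        ProcessInfo.processInfo.environment["XCTestConfigurationFilePath"] != nil
    }
}

class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        if UIApplication.isTesting {
            return true // skip the database connection and the rest of normal startup
        }
        // ... regular startup: open the database connection, read data, etc. ...
        return true
    }
}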
Dependency Injection
Right now you are doing integration testing, in my opinion. If you want proper unit tests, they should not operate against the database of the underlying app. Instead, create an extra (in-memory) database and inject it into the code you are testing. I think this is the way you should go. The benefits are:
Your app is not affected by your tests. Next time you start the app to manually test, your original data is still there.
Your unit tests are not affected by manual changes to the app.
Your unit tests are also independent of each other. Order of execution doesn't change the results when setting up the database fresh for each test case.
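A minimal sketch of such injection, assuming SQLite is used directly through the C API (the protocol and type names here are illustrative, not taken from the question's code):

import Foundation
import SQLite3

protocol DatabaseProviding {
    var connection: OpaquePointer? { get }
}

final class SQLiteDatabase: DatabaseProviding {
    private(set) var connection: OpaquePointer?

    // Pass a file path in production, or ":memory:" in unit tests so that
    // every test starts from an empty, isolated database.
    init(path: String) {
        sqlite3_open(path, &connection)
    }

    deinit {
        sqlite3_close(connection)
    }
}

// In the app:   let db = SQLiteDatabase(path: databaseFileURL.path)
// In the tests: let db = SQLiteDatabase(path: ":memory:")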
Is it even possible to run test code before the host app launches?
If you have to ask that, your tests are not unit tests. Unit tests test code, not the app. A test that requires the app to be running would be a UI test.
In fact, the best approach for unit tests is to put all your testable code into a framework and give the framework unit tests; that way, your tests run much faster because the app never has to launch at all.
It sounds to me like the problem lies deeper in your app's code: you have evidently not written your code in such a way as to be testable. So that would be your first move. Writing testable code, and writing unit tests, is an art; you have to separate out the "system under test", which should be your code alone, and make sure you are not testing anything that doesn't belong to you and whose workings are already known — like Core Data.
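For example, once the testable code lives in a framework, the unit test target imports that module directly and the host app never launches (CoreLogic and PriceCalculator are hypothetical names, just to illustrate the shape):

import XCTest
@testable import CoreLogic

final class PriceCalculatorTests: XCTestCase {
    func testTotalIncludesTax() {
        // Exercises framework code only; no app launch is involved.
        let calculator = PriceCalculator(taxRate: 0.2)
        XCTAssertEqual(calculator.total(net: 100), 120, accuracy: 0.001)
    }
}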

Running Geb + Spock tests headless

I have a number of Geb functional tests for a Grails application.
The tests work as expected when executed from the terminal or the IDE.
However, the tests need to be executed by Hudson, so they are run in headless mode using Xvfb.
The problem is that the tests keep failing, or behaving unexpectedly, returning errors like RequiredPageContentNotPresent and StaleElementReferenceException in places that don't make sense.
For example:
(at LicencePage is verified above, and the page isn't changed)
when:
addDocument(Data.Test_Doc_name,Data.Test_Doc_file)
sometimes throws
Failure: Add Actual Licence (HomePageSpec)
| geb.error.RequiredPageContentNotPresent: The required page content 'addDocument - SimplePageContent (owner: LicencePage, args: [Functional Test Doc, /var/lib/hudson/jobs/KB-Functional_Tests/workspace/app/../manual_test_data/so_v3/os_test_1], value: null)' is not present
at geb.content.TemplateDerivedPageContent.require(TemplateDerivedPageContent.groovy:61)
at geb.content.PageContentTemplate.create_closure1(PageContentTemplate.groovy:63)
at geb.content.PageContentTemplate.create(PageContentTemplate.groovy:82)
at geb.content.PageContentTemplate.get(PageContentTemplate.groovy:54)
at geb.content.NavigableSupport.getContent(NavigableSupport.groovy:45)
at geb.content.NavigableSupport.methodMissing(NavigableSupport.groovy:121)
at geb.Browser.methodMissing(Browser.groovy:194)
at geb.spock.GebSpec.methodMissing(GebSpec.groovy:51)
at HomePageSpec.Add Actual Licence (HomePageSpec.groovy:228)
The method addDocument() is defined on an 'abstract' page which LicencePage extends. In most cases like this, if I copy the method code directly into my Spec it works, although that ruins all the structure I have built into my test pages.
Does anyone have experience running Geb tests with Xvfb? Have you faced these issues?
All tests pass when executed locally, and this is not a data issue, as the DB is always cleared.
Also, without making any changes, the tests behave non-deterministically (on Hudson), so the above exception is not always thrown. Without any changes at all, the tests sometimes succeed and sometimes fail.
The description you gave seems to be the symptom of a flaky test suite. We faced this problem as well some time ago. A good starting point is this presentation (around min. 35) and the Geb documentation about waiting.
If you think it could have something to do with Xvfb (which I have no experience with), you could try using PhantomJS as the test runner and check whether the tests work correctly.

FitNesse - runner process not starting

I have a test in FitNesse which used to work, but when I got into work today the test did not start at all. As soon as I press Test I get the "0 errors 0 warnings..." text at the top of the test. Looking in the source control software, I cannot find any changes to the test, or to anything related to it. I have noticed that the runner process does not start when I run the test. Other tests seem to work fine, and I can copy the tables from the non-working test into another test and everything is fine. Any ideas on what could be wrong?
My standard answer in this situation is, "have you checked your classpath?"
I say this, as typically when this sort of thing happens, it is that the !path doesn't point to the stuff you need, whether it be FitNesse.jar or your own custom code.
Also, do you get an output page where you can check the classpath? I doubt it, but that can help diagnose classpath issues.
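For reference, a FitNesse page declares its classpath with !path lines like the following (the jar name and directory are illustrative):

!path lib/fitnesse-standalone.jar
!path build/classes

If either location is wrong or missing, the runner process cannot start, which matches the symptom described.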

How do I skip an eunit test?

I wonder how to mark a specific test in EUnit in a way that will force it to be ignored (i.e. compiled, but not executed) on the next test run. I'm asking this question in a TDD context, i.e. I'd like to refactor while in the green, but still have some test cases that I'll get to later.
I'd rather not comment out the test; that is a good way of forgetting about it. EUnit's test summary does have a skipped count, but I could not find any docs about that functionality.
You can temporarily remove the '_test' suffix from the test's name (or add another suffix, e.g. '_ignore'). It will compile but won't show up in the summary, as it will be treated like a regular function and thus be ignored by EUnit.
This is a workaround, of course; EUnit should support such functionality, but I'm afraid it doesn't.
EUnit's notion of "skipped" means that something prevented the test from running, such as a compilation failure, the node that was in charge of the test crashing, or the setup failing.
This concept is pretty deeply embedded in the code, so there's no simple way to get user-skipped tests.

Why is my functional test getting the meta tag http-equiv='refresh' and then quitting?

When I run a simple functional test to get (for example) the users/signIn page, I'm getting this:
<html><head><meta http-equiv="refresh" content="0;url=https://localhost/index.php/users/signIn"/></head></html>
and then the functional test just stops. It happens in other functional tests too, but not on every request. Other tests will run fine, then when it gets to a certain request in the test, it will get that response (with the requested URL in the content attribute), and stop.
Any ideas on why this might be happening?
These functional tests used to work, but I just got this project back from another development company and I have no idea where to start looking for the changes. Of course I can do diffs on the files with version control, but I don't know where to start. Thanks for any leads!
Argh, found it quicker than I thought.
The SSL filter was turned on, and needs to be disabled for the test environment. They had removed the test environment from app.yml.
test:
  disable_sslfilter: true
