I wonder how to mark a specific test in EUnit so that it will be ignored (i.e. compiled, but not executed) on the next test run. I'm asking this in a TDD context: I'd like to refactor while in the green, but still have some test cases that I'll get to later.
I'd rather not comment out the test; that is a good way of forgetting about it. EUnit's test summary does have a skipped count, but I could not find any docs about that functionality.
You can temporarily remove the '_test' suffix from the test's name (or replace it with another, e.g. '_ignore'). The function will still compile, but it won't show up in the summary, since EUnit treats it like a regular function and therefore ignores it.
This is a workaround, of course; EUnit should support such functionality, but I'm afraid it doesn't.
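A minimal sketch of the rename trick, assuming a hypothetical parser module under test (the module, function, and test names here are made up):

-module(parser_tests).
-include_lib("eunit/include/eunit.hrl").

%% Exported only to silence the "unused function" warning on the parked test.
-export([parses_list_ignore/0]).

%% Picked up by EUnit because of the _test suffix
%% (parser:parse/1 is a stand-in for your code under test):
parses_empty_input_test() ->
    ?assertEqual([], parser:parse("")).

%% Renamed so EUnit treats it as a plain function and skips it;
%% it still has to compile, so it can't silently rot:
parses_list_ignore() ->
    ?assertEqual([a], parser:parse("a")).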
EUnit's notion of "skipped" means that something prevented the test from running, such as a compilation failure, the node that was in charge of the test crashing, or the setup failing.
This concept is pretty deeply embedded in the code, so there's no simple way to get user-skipped tests.
I have a Lua project with Lua files spread across multiple directories under the same root folder, with some dependencies between them.
Occasionally I run into issues where a table constructed at load time raises a nil error because it references a table that has not yet been initialised, like:
Customer =
{
    Type = CustomerTypes.Friendly
}
This raises a nil error for CustomerTypes, as CustomerTypes.lua has not yet been loaded.
My current solution is simply to have a global function call in these Lua files that loads the dependency scripts.
What I would like to do is pre-process my Lua files to find all the dependencies, and at run time load them in the right order, without needing function calls or special syntax in the Lua files themselves (i.e. the pre-processor works the dependencies out procedurally).
Is this something which can realistically be achieved? Are there other solutions out there? (I've come across some, but I'm not sure whether they're worth pursuing.)
As usual with Lua, there are about 230891239122 ways to solve this. I'll name 3 off the top of my head, but I bet I could illustrate at least 101 of them and publish a coffee table book.
First of all, it must be said that the notion of 'dependencies' here is strictly up to your application; Lua has no sense of it. So this isn't about overcoming a deficiency in Lua: it's simply you creating a scripting environment in your application that makes you comfortable, and that's what Lua is all about.
Now, it seems to me you've jumped to the conclusion that preprocessing is required to solve the given problem. I don't think that's warranted. I feel somewhat comfortable saying a more conventional approach would be to install an __index metamethod on the globals table (__index, not __newindex, since the problem is reading a global that doesn't exist yet) which handles the "CustomerTypes doesn't exist yet" situation by referencing a list of scripts scanned out of the filesystem initially, finding one called CustomerTypes.lua, and running it.
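A minimal sketch of that approach, assuming scripts live in a flat scripts/ directory and each global table is defined in a file of the same name (CustomerTypes -> scripts/CustomerTypes.lua); the directory layout and names are assumptions:

setmetatable(_G, {
    __index = function(globals, name)
        -- Look for a script named after the missing global.
        local chunk = loadfile("scripts/" .. name .. ".lua")
        if not chunk then
            return nil -- no such script; behave like an ordinary missing global
        end
        chunk() -- running the script is expected to define the global
        return rawget(globals, name)
    end
})

-- Now Customer.lua can say Type = CustomerTypes.Friendly before
-- CustomerTypes.lua has been run; the metamethod loads it on first access.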
But maybe you have some good reason for wanting it done strictly as preprocessing. In that case, I would start by considering a 'dependency' to be any name that matches a script found in your scripts filesystem. Then scan each script for the names of dependencies using the list you just built, and prepend a load(dependency) command to each of those scripts.
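Roughly like this, reusing the assumed scripts/ layout from above (the name-matching is deliberately crude, and the load order is a naive dependency-first walk with no real cycle handling):

local names = { "CustomerTypes", "Customer" } -- assumed scanned from scripts/

-- For each script, record which other script names appear in its source.
local deps = {}
for _, name in ipairs(names) do
    local f = assert(io.open("scripts/" .. name .. ".lua"))
    local src = f:read("*a")
    f:close()
    deps[name] = {}
    for _, other in ipairs(names) do
        -- crude match: the other script's name appearing as a whole word
        if other ~= name and src:find("%f[%w]" .. other .. "%f[%W]") then
            table.insert(deps[name], other)
        end
    end
end

-- Load each script only after the scripts it depends on.
local loaded = {}
local function load_with_deps(name)
    if loaded[name] then return end
    loaded[name] = true
    for _, dep in ipairs(deps[name]) do load_with_deps(dep) end
    dofile("scripts/" .. name .. ".lua")
end
for _, name in ipairs(names) do load_with_deps(name) end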
Since the concepts of "runtime" and "preprocessing" are somewhat ambiguous in this context, you might also mean script-compile-time. You could use the LuaMacros token filters system to write a macro which replaces CustomerTypes with require("CustomerTypes.lua"), or something to that effect, after having discovered that CustomerTypes is a legal dependency name.
I'm starting to learn how to do test-driven development, and I'm working in Swift. I'm supposed to write a test which should fail, then write the code needed to get it to pass. From my understanding, the test should run successfully, just fail. However, in Swift, when I try to write a test that, say, checks the value of an object's specific attribute, and the class doesn't yet have such an attribute (because I'm supposed to write the test before I add it to the class), I don't get a failing test but a build error when attempting to build and run the test: the test is trying to access an attribute that doesn't exist on the given object. Am I going about this the wrong way? Or are these build-breaking errors what I'm supposed to get when doing TDD in Swift? Thanks!
According to Uncle Bob's 3 Rules of TDD:
You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
(emphasis mine). So there is actually no need for "the test to successfully run"; a compilation error is a fine excuse to write code :)
TDD is a great idea, but don't forget to apply some common sense. In a case like this, treat the build error as if it were a test failure. At some point you have to create the class and the attribute to get the code to build. Then elaborate on your test to make it do something that fails, write the code that makes it pass, and continue.
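For instance, in this hypothetical XCTest case (the Customer class and its name property are made-up names), the first "red" is the compile error; the class below it is just enough production code to build and pass:

import XCTest

final class CustomerTests: XCTestCase {
    func testNewCustomerHasEmptyName() {
        // This line won't even compile until Customer and name exist;
        // that compile error is the first "failing test".
        let customer = Customer()
        XCTAssertEqual(customer.name, "")
    }
}

// Written afterwards: the minimal production code that makes it build and pass.
class Customer {
    let name = ""
}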
I have a number of Geb functional tests for a Grails application.
The tests work as expected when executed from the terminal or an IDE.
However, the tests need to be executed by Hudson, so they are run in headless mode using Xvfb.
The problem is that the tests keep failing, or behaving unexpectedly, with errors like RequiredPageContentNotPresent and StaleElementReferenceException in places that don't make sense.
For example:
(the at check for LicencePage is verified above, and the page hasn't changed)
when:
addDocument(Data.Test_Doc_name,Data.Test_Doc_file)
sometimes throws
Failure: Add Actual Licence (HomePageSpec)
| geb.error.RequiredPageContentNotPresent: The required page content 'addDocument - SimplePageContent (owner: LicencePage, args: [Functional Test Doc, /var/lib/hudson/jobs/KB-Functional_Tests/workspace/app/../manual_test_data/so_v3/os_test_1], value: null)' is not present
at geb.content.TemplateDerivedPageContent.require(TemplateDerivedPageContent.groovy:61)
at geb.content.PageContentTemplate.create_closure1(PageContentTemplate.groovy:63)
at geb.content.PageContentTemplate.create(PageContentTemplate.groovy:82)
at geb.content.PageContentTemplate.get(PageContentTemplate.groovy:54)
at geb.content.NavigableSupport.getContent(NavigableSupport.groovy:45)
at geb.content.NavigableSupport.methodMissing(NavigableSupport.groovy:121)
at geb.Browser.methodMissing(Browser.groovy:194)
at geb.spock.GebSpec.methodMissing(GebSpec.groovy:51)
at HomePageSpec.Add Actual Licence (HomePageSpec.groovy:228)
The method addDocument() is defined on an 'abstract' page, which LicencePage extends. In most cases like this, if I copy the method's code directly into my Spec it works, but that ruins the structure I have built on my test pages.
Does anyone have experience running Geb tests with Xvfb? Have you faced these issues?
All tests pass when executed locally, and this is not a data issue, as the DB is always cleared.
Also, without making any changes, the tests behave non-deterministically on Hudson, so the above exception is not always thrown. With no changes at all, the tests sometimes pass and sometimes fail.
The description you gave seems to be the symptom of a flaky test suite. We were facing this problem as well some time ago. A good starting point is this presentation (around min. 35) and the Geb documentation about waiting.
If you think it could have something to do with Xvfb (which I have no experience with), you could try using PhantomJS as the test runner and check whether the tests work correctly there.
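In practice, most of these flaky RequiredPageContentNotPresent failures go away once the content definitions are told to wait. A sketch against a hypothetical LicencePage (the selector and names are made up):

import geb.Page

class LicencePage extends Page {
    static content = {
        // wait: true makes Geb retry the lookup using the default waiting
        // profile instead of failing the moment the element isn't there yet.
        addDocumentButton(wait: true) { $("#addDocument") }
    }
}

// Inside a spec, an explicit wait achieves the same for one-off checks:
// waitFor { addDocumentButton.displayed }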
I have a test in FitNesse which used to work, but when I got into work today the test did not start at all. As soon as I press Test, I get the "0 errors 0 warnings..." text at the top of the test. Looking in the source control software, I cannot find any changes to the test or to anything related to it. I have noticed that the runner process does not start when I run the test. Other tests seem to work fine, and I can copy the tables from the non-working test into another test and everything is fine. Any ideas on what could be wrong?
My standard answer in this situation is, "have you checked your classpath?"
I say this because, typically, when this sort of thing happens, the !path doesn't point to the stuff you need, whether it be FitNesse.jar or your own custom code.
Also, do you get an output page where you can check the classpath? I doubt it, but that can help diagnose classpath issues.
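For reference, the classpath is set with !path lines on the test or suite page; these particular paths are only an illustration:

!path lib/fitnesse-standalone.jar
!path build/classes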
I'm attempting to run a suite of FitNesse tests; however, I keep getting the following error message.
Testing was interupted and results are incomplete.
Test Pages: 0 right, 0 wrong, 0 ignored, 0 exceptions Assertions: 0 right, 0 wrong, 0 ignored, 0 exceptions
The two pages run fine by themselves; however, when the links are included on a suite page, they don't appear to be detected.
Has anyone come across this before?
When you include a page in another page, only the text of the page is included, not its attributes. Pages marked as Suite do not execute as tests unless they are also marked as Test. If you mark your Suite page with the Test attribute as well, your included tests may run (though this depends a lot on what is in those test pages, of course!)
I'm also guessing that the two pages you are "linking" into the test are not underneath the suite page (i.e. SuitePage.TestPageOne, a.k.a. a sub-wiki) but are elsewhere in the wiki (e.g. SomeOtherPage.TestPageOne). If that is the case, you may want to move your test pages so they sit directly underneath your suite page. This is what is referred to as a sub-wiki. You can find out more about them here.
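A hypothetical layout that a suite run picks up automatically (page names are made up):

MySuitePage                   (Suite attribute set)
    MySuitePage.TestPageOne   (Test attribute set; collected when the suite runs)
    MySuitePage.TestPageTwo   (Test attribute set; collected when the suite runs)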
Hope That Helps
Another possibility is that the "Prune" checkbox in the page properties got checked. It has a very similar impact. Prune is intended to allow turning off sections of the suite so they don't run.
Interestingly enough, prune has no effect on child trees or nodes if they are run directly.