In my web test, I use a loop to repeat the previous request when the response does not contain the expected context parameter param01.
When I run a load test, the first loop iteration sometimes fails because the context parameter param01 cannot be found and an extraction rule error is thrown; the second iteration then succeeds because the context parameter is found and the extraction rule passes.
I want to suppress the extraction rule error that occurs in the failed iteration (loop 1) and have the test pass. Please help me with this.
Many extraction rules have a Required property of Boolean type. Set it to False so that the rule (and hence the test) does not fail when the expected data is not available.
Be aware that the Extract Regular Expression extraction rule has a Required property, but it appears to be handled wrongly in some Visual Studio versions; see this fault report. See also this discussion of the issue, plus a workaround that calls the real extraction rule and then writes true to the Success field.
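In a coded web test, that workaround could look roughly like this (a sketch, assuming the rule's Extract method can be overridden; the class name is illustrative):

using Microsoft.VisualStudio.TestTools.WebTesting;
using Microsoft.VisualStudio.TestTools.WebTesting.Rules;

// Illustrative wrapper: run the real rule, then force success so a
// missing value cannot fail the request.
public class LenientExtractRegularExpression : ExtractRegularExpression
{
    public override void Extract(object sender, ExtractionEventArgs e)
    {
        base.Extract(sender, e);  // may report failure when nothing matches
        e.Success = true;         // suppress the extraction rule error
    }
}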
I have a test suite which implements the init_per_suite and end_per_suite functions.
When I run the suite it produces some HTML output showing the results of the test cases (pass, fail, etc.).
But in the log, init_per_suite and end_per_suite are also counted as test cases, and their run results are shown. Is there a way to avoid this? I guess there might be a flag in Erlang Common Test that can be used to disable it.
No, you can't disable it. Besides, it may be important information whether init_per_suite/end_per_suite succeeds or fails.
Also, you can see that init_per_suite/end_per_suite are not included in the overall numbering of test cases in the resulting HTML. Maybe that will help if you want to parse the HTML output. You can also sort cases by their numbers, so the unnumbered cases end up at the top/bottom.
What is the best way to manually make a test case fail in Erlang common test?
I am using something like this:
ok = nok, % fail as soon as possible
to raise a badmatch exception and make the case fail.
I wonder if there are other (better) ways to achieve this?
ct:fail/1 and ct:fail/2 seem to be there for that reason.
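For example (a sketch; some_module:some_call/0 and the expected value are illustrative):

my_case(_Config) ->
    Expected = 42,
    case some_module:some_call() of
        Expected -> ok;
        Actual -> ct:fail("expected ~p, got ~p", [Expected, Actual])
    end.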
No, that's a standard, perfectly good way of doing it. An alternative is to not just fail but to generate a more relevant error. For example, if the function foo_test() returns ok when successful, then an alternative is to write:
ok = foo_test()
to both test and generate an error. It is still a badmatch error, but it is easier to see what went wrong.
I sometimes use the error function:
error(incorrect_foo)
That way I can easily distinguish different causes for failure within the same test case. For example, I might have error(incorrect_bar) somewhere in the same function.
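A sketch of the pattern (check_foo/0 and check_bar/0 are illustrative helpers):

my_case(_Config) ->
    case check_foo() of
        ok -> ok;
        _ -> error(incorrect_foo)
    end,
    case check_bar() of
        ok -> ok;
        _ -> error(incorrect_bar)
    end.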
From time to time I run into an issue where Grails integration tests whose names end in "IntegrationTests" fail with exceptions showing that GORM methods have not been added to the domain classes. After renaming those tests to "*IntegrationTest" (no "s" at the end) they work fine.
A short example:
import org.junit.Test

class MyIntegrationTests {
    @Test
    void myTest() {
        assert MyDomainClass.count() == 0
    }
}
This will fail with the following exception:
Failure: myTest(de.myproject.MyIntegrationTests)
groovy.lang.MissingMethodException: No signature of method: de.myproject.MyDomainClass.count() is applicable for argument types: () values: []
Possible solutions: count(), ident(), print(java.io.PrintWriter), print(java.lang.Object), getCount(), wait()
at de.myproject.MyIntegrationTests.myTest(MyIntegrationTests.groovy:9)
After renaming MyIntegrationTests to MyIntegrationTest the test passes.
Is there some kind of magic happening based on the test's name? All I found in the Grails documentation is: "Tests can also use the suffix of Test instead of Tests." Any ideas?
I eventually found the cause of the different behaviour of "*Test" and "*Tests" myself: the different suffixes change the order in which the tests are run. To make things worse, the exact order is platform-dependent. Thus my tests ran locally (OS X) in a different order than on my CI machine (Linux), and thereby produced different results.
Why the exception occurs in some orderings is a totally different problem, though, which I haven't figured out (yet).
This should work as you had it originally, as long as the file is in the integration test folder. Are you sure you didn't have it in the unit test folder and then move it into the integration folder on the rename? Or possibly you're using IntelliJ and you did a "JUnit" test run rather than a "Grails" one?
The error you're getting seems to imply that Grails didn't start up when running your test.
Your test won't be executed if it does not have the suffix Tests.
Copied from the Grails documentation (http://grails.org/doc/latest/guide/testing.html):
The default class name suffix is Tests but as of Grails 1.2.2, the
suffix of Test is also supported.
I have to ask this because the only thing I can see is that if the assertion fails, the app crashes. Is that the reason to use NSAssert? Or what else is the benefit of it? And is it right to put an NSAssert above any assumption I make in code, like a function that should never receive a -1 as a parameter, but might receive a -0.9 or -1.1?
Assert is to make sure a value is what it's supposed to be. If an assertion fails, something went wrong, so the app quits. One reason to use assert is if you have a function that will misbehave or create very bad side effects when one of its parameters is not exactly some value (or within a range of values); you can put an assert to make sure that value is what you expect, and if it's not, something is really wrong, so the app quits. Assert can be very useful for debugging/unit testing, and also, when you provide frameworks, to stop users from doing "evil" things.
I can't really speak to NSAssert, but I imagine that it works similarly to C's assert().
assert() is used to enforce a semantic contract in your code. What does that mean, you ask?
Well, it's like you said: if you have a function that should never receive a -1, you can have assert() enforce that:
#include <assert.h>

void gimme_positive_ints(int i) {
    assert(i > 0);
}
And now you'll see something like this in the error log (or STDERR):
Assertion i > 0 failed: file example.c, line 4
So not only does it safeguard against potentially bad inputs, it logs them in a useful, standard way.
Oh, and at least in C, assert() was a macro, so you could redefine it as a no-op in your release code. I don't know if that's the case with NSAssert (or even assert() these days), but it was pretty useful to compile those checks out.
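For the record, in standard C the switch is the NDEBUG macro: define it before including <assert.h> and assert() compiles to nothing. A minimal illustration:

/* Compile with -DNDEBUG (or define it here) to turn asserts into no-ops. */
#define NDEBUG
#include <assert.h>

int main(void) {
    assert(0);  /* compiled away under NDEBUG; the program exits normally */
    return 0;
}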
NSAssert gives you more than just crashing the app: it tells you the class, method, and line where the assertion failed. All assertions can also be easily disabled using NS_BLOCK_ASSERTIONS, making NSAssert more suitable for debugging. Raising an NSException, on the other hand, only crashes the app; it does not report the location of the exception, nor can it be disabled as simply. See the difference in the images below.
The app crashes because an assertion also raises an exception, as the NSAssert documentation states:
When invoked, an assertion handler prints an error message that
includes the method and class names (or the function name). It then
raises an NSInternalInconsistencyException exception.
NSAssert: [screenshot of the assertion failure log, including class, method, and line number]
NSException: [screenshot of the exception crash log, without that location information]
Apart from what everyone said above, the default behaviour of NSAssert() (unlike C’s assert()) is to throw an exception, which you can catch and handle. For instance, Xcode does this.
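A minimal sketch of catching it (using NSCAssert, the plain-C-function variant of NSAssert; doing this outside of tooling is rarely a good idea):

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        @try {
            NSCAssert(NO, @"forced failure");  // raises NSInternalInconsistencyException
        }
        @catch (NSException *exception) {
            NSLog(@"caught %@: %@", exception.name, exception.reason);
        }
    }
    return 0;
}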
Just to clarify, as somebody mentioned but did not fully explain: the reason for having and using asserts, instead of writing custom code (ifs that raise an exception on bad data, for instance), is that asserts SHOULD be disabled in production applications.
While developing and debugging, asserts are enabled so you can catch errors; the program halts when an assert evaluates to false.
But when compiling for production, the compiler omits the assertion code and actually MAKES YOUR PROGRAM RUN FASTER. By then, hopefully, you have fixed all the bugs.
In case your program still has bugs while in production (when assertions are disabled and the program "skips over" the assertions), your program will probably end up crashing at some other point.
From NSAssert's help: "Assertions are disabled if the preprocessor macro NS_BLOCK_ASSERTIONS is defined."
So, just put the macro in your distribution target [only].
NSAssert (and its stdlib equivalent assert) is there to detect programming errors during development. You should never have an assertion fail in a production (released) application. So you might assert that you never pass a negative number to a method that requires a positive argument. If the assertion ever fails during testing, you have a bug. If, however, the value that's passed is entered by the user, you need to do proper validation of the input rather than relying on the assertion in production (you can set a #define for release builds that disables NSAssert*).
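A sketch of that split (the Account class and its methods are purely illustrative):

#import <Foundation/Foundation.h>

@interface Account : NSObject
- (void)deposit:(double)amount;  // contract: amount must be positive
- (BOOL)depositFromUserInput:(NSString *)text;
@end

@implementation Account
- (void)deposit:(double)amount {
    // A non-positive amount here is a programming error, so assert:
    NSAssert(amount > 0, @"deposit: requires a positive amount, got %f", amount);
    // ... update the balance ...
}

- (BOOL)depositFromUserInput:(NSString *)text {
    double amount = [text doubleValue];
    if (amount <= 0) {
        // User input gets real validation that survives release builds.
        return NO;
    }
    [self deposit:amount];
    return YES;
}
@end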
Assertions are commonly used to enforce the intended use of a particular method or piece of logic. Say you were writing a method that calculates the sum of two greater-than-zero integers. To make sure the method is always used as intended, you would put an assert that tests that condition.
Short answer: They enforce that your code is used only as intended.
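That sum example might look like this (a hypothetical helper; NSCAssert is the variant of NSAssert for plain C functions):

#import <Foundation/Foundation.h>

// Hypothetical helper; both operands must be greater than zero.
static NSInteger SumOfPositiveInts(NSInteger a, NSInteger b) {
    NSCAssert(a > 0 && b > 0, @"SumOfPositiveInts requires positive arguments");
    return a + b;
}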
It's worth pointing out that, aside from run-time checking, assertions are an important facility when you design your code by contract.
More info on the subject of assertion and design by contract can be found below:
Assertion (software development)
Design by contract
Programming With Assertions
Design by Contract, by Example [Paperback]
To fully answer the question: the point of any type of assert is to aid debugging. It is more valuable to catch errors at their source than to catch them later in the debugger when they cause crashes.
For example, you may pass a value to a function that expects values in a certain range. The function may store the value for later use, and the application crashes only when the value is eventually used. The call stack seen in this scenario would not show the source of the bad value. It's better to catch the bad value as it comes in, to find out who is passing it and why.
NSAssert makes the app crash when the condition is not met. If the condition is met, the following statements execute. Look at the example below:
I just created a small app to test what NSAssert does:
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    [self testingFunction:2];
}

- (void)testingFunction:(int)anNum {
    // If anNum < 2, the assertion fails and the app crashes;
    // the NSLog below never runs, so its string never appears
    // in Xcode's console.
    NSAssert(anNum >= 2, @"the number you entered is less than 2");

    // If anNum >= 2, the app does not crash and this statement runs:
    NSLog(@"This statement executes when anNum >= 2");
}
With the call above (anNum = 2) the app does not crash. The test cases are:
anNum >= 2: the app does not crash, and you see the log string "This statement executes when anNum >= 2" in the output console.
anNum < 2: the app crashes, and you do not see the log string.