Customizing JUnit assertion failure error messages - combined with SLF4J logging - Ant

I am new to JUnit and Selenium. Currently, we are doing a POC in our project to evaluate the tools' suitability.
One of the key requirements for us is user-friendly messages in case of assertion failures and verification failures. The objective is to have those analyzed by manual testers, so they need to stay intuitive.
I am trying out SLF4J for logging user-friendly INFO messages.
The challenge I am facing is this: when assertions fail, how do I pass a custom message instead of the default JUnit message?
For example, I want to get rid of the following default assertion failure message
expected "Health A-Z" to match glob "Health A-ZZ" (had transformed the glob into regexp "Health A-ZZ"
and frame it as
The title of the Menu Item doesn't match the expected value "Health A-ZZ". The actual value seen is "Health A-Z"
Question 1 - Is there a way I can override the default message with what I want?
Question 2 - Can I pass variable parameters to the message and make it dynamic?
Sample Code:
try {
    assertEquals("Health A-ZZ", selenium.getText("//div[@id='topnav']/ul/li[2]/a"));
    logger.info("SUCCESS: Menu Item {} title verification Passed. EXPECTED : A-Z and Actual: {}", mainCounter, selenium.getText("//div[@id='topnav']/ul/li[2]/a"));
}
catch (AssertionError e) {
    logger.error("FAILURE: Menu Item {} title verification Failed. EXPECTED : A-ZZ and Actual: {}", mainCounter, selenium.getText("//div[@id='topnav']/ul/li[2]/a"));
}
I get the assertion failure message printed twice: once from the error message in the catch block above, and once as the default JUnit assertion failure message like the one above that I wanted to get rid of.
Thanks in Advance.
Regards,
Prakash

You can use the JUnit assert overload that takes a message:
assertEquals(message, expected, actual)
This will print the String given as the message in case of failure.
You can also pass a String variable as the message.
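For example, a minimal sketch of a dynamic message built with String.format (the locator and expected text are taken from the sample code in the question):

String locator = "//div[@id='topnav']/ul/li[2]/a";
String expected = "Health A-ZZ";
String actual = selenium.getText(locator);

// Build the user-friendly text up front; JUnit shows it if the assertion fails.
String message = String.format(
        "The title of the Menu Item doesn't match the expected value \"%s\". The actual value seen is \"%s\"",
        expected, actual);

assertEquals(message, expected, actual);

Note that JUnit 4 prepends the custom message to its own expected:<...> but was:<...> details rather than replacing them entirely.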

Related

OPA unit test failing - how do I output the response variable?

I am new to OPA and I am writing an OPA unit test case.
test_valid_type {
    response = evaluate with input as valid_type
    response == "approved"
}
It fails at response == "approved". I want to see the value of the response variable - how do I output it?
Try the trace built-in function provided by OPA for debugging.
https://www.openpolicyagent.org/docs/latest/policy-reference/#debugging
This will let you print output during evaluation.
In your example you can add trace(response), which will print the response output.
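A minimal sketch of that, reusing the test from the question (sprintf is only used to label the value; trace output typically only appears when the tests are run in verbose mode, e.g. opa test -v):

test_valid_type {
    response = evaluate with input as valid_type
    trace(sprintf("response: %v", [response]))
    response == "approved"
}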
After reading a lot of documentation I finally found it.
trace(variable)
prints the content of the variable.

WithNewWindow() returns MultipleCompilationErrorsException in Geb

I am getting a weird error in my Geb functional tests.
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
Spec expression: 1: expecting '}', found 'assert' @ line 1, column 71.
} ) { at(JobOfferDetailPage) assert des
My test looks like this: I click on a link which opens a new window with the details of the job offer. Then I want to assert some text on the new page using the Page Object pattern.
Test:
withNewWindow({ quickShowOption.click() }) { //TODO fix me
    at(JobOfferDetailPage)
    assert description.text() == 'some text'
    assert requirements.text() == 'some text'
    assert advatages.text() == 'some text.'
    assert categories.text() == 'some text'
    assert locality.text() == 'some text'
}
Page:
class JobOfferDetailPage extends Page {
    static at = { $('#contactLabel').text() == 'Contact' }
    static content = {
        description { $('#jobOfferDescription') }
        requirements { $('#jobOfferRequirements') }
        advatages { $('#jobOfferAdvantages') }
        jobOfferType { $('#jobOfferType') }
        categories { $('#categories') }
        locality { $('#locality') }
        startDate { $('#startDate') }
        requiredLanguages { $('#requiredLanguages') }
    }
}
I get the compilation error after my conditions are asserted. If I make a typo in the asserted text then the test fails normally, but if the assertions pass, it fails with this weird error.
Thank you @Erdi.
I use Spock and Geb version 0.13.1 and Selenium version 2.51.0.
If one were to believe this comment in one of Geb's own tests, which was nota bene written by me some time ago, this indeed seems to be some sort of bug in Spock. What is interesting is that I just now moved that statement to an expect block and it works as long as the last statement in the second closure passed to withNewWindow() evaluates to true. This makes me think that it is an issue with an old version of Spock and/or Groovy. Which versions of the aforementioned tools are you using?
One possible workaround would be to move your statement from an expect/then block to one that is not asserting (given or when), as shown in the test I linked to.
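A minimal sketch of that workaround, wrapping the call from the question in a Spock feature method (the method name and the noExceptionThrown() check are illustrative additions):

def "job offer details are shown in a new window"() {
    when: // not an asserting block, so the closure's last statement is not implicitly asserted
    withNewWindow({ quickShowOption.click() }) {
        at(JobOfferDetailPage)
        assert description.text() == 'some text'
        assert requirements.text() == 'some text'
    }

    then:
    noExceptionThrown()
}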

iOS Testing: Is there a way to skip tests?

I don't want to execute certain tests if the feature is currently disabled. Is there a way to "skip" a test (and to get appropriate feedback on console)?
Something like this:
func testSomething() {
    if !isEnabled(feature: Feature) {
        skip("Test skipped, feature \(feature.name) is currently disabled.")
    }
    // actual test code with assertions here, but not run if skip above called.
}
You can disable XCTests run by Xcode by right-clicking on the test symbol in the gutter on the left of the editor.
You'll get a menu where you can select the "Disable" option.
Right-clicking again will allow you to re-enable it. Also, as stated in user @sethf's answer, you'll see entries for currently disabled tests in your .xcscheme file.
As a final note, I'd recommend against disabling a test and committing the disabling code in your xcscheme. Tests are meant to fail, not be silenced because they're inconvenient.
Another possible solution, which I found in an article: prefix your skipped tests with something like "skipped_".
Benefits:
Xcode will not treat them as tests
You can easily find them using search
You can make them tests again by replacing "skipped_" with ""
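For illustration (the method name is made up), a renamed method no longer starts with "test", so the XCTest runner simply ignores it:

// Not discovered by XCTest because the name does not start with "test".
func skipped_testSomething() {
    // original assertions stay here until the test is re-enabled
    XCTAssertTrue(true)
}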
Beginning with Xcode 11.4, you'll be able to use XCTSkipUnless(_:_:file:line:).
The release notes read,
XCTest now supports dynamically skipping tests based on runtime conditions, such as only executing some tests when running on certain device types or when a remote server is accessible. When a test is skipped, Xcode displays it differently in the Test Navigator and Test Report, and highlights the line of code where the skip occurred along with an optional user description. Information about skipped tests is also included in the .xcresult for programmatic access.
To skip a test, call one of the new XCTSkip* functions from within a test method or setUp(). For example:
func test_canAuthenticate() throws {
    try XCTSkipIf(AuthManager.canAccessServer == false, "Can't access server")
    // Perform test…
}
The XCTSkipUnless(_:_:file:line:) API is similar to XCTSkipIf(_:_:file:line:) but skips if the provided expression is false instead of true, and the XCTSkip API can be used to skip unconditionally. (13696693)
I've found a way to do this by modifying my UI test .xcscheme file and adding a section called SkippedTests under TestableReference, then adding individual Test tags with an Identifier attribute containing the name of your class and test method. Something like:
<SkippedTests>
    <Test Identifier="ClassName/testMethodName" />
</SkippedTests>
Hope this helps
From Xcode 11.4+, you can use XCTSkipIf() or XCTSkipUnless().
try XCTSkipIf(skip condition, "message")
try XCTSkipUnless(non-skip condition, "message")
https://developer.apple.com/documentation/xctest/methods_for_skipping_tests#overview
This is what test schemes are meant to do.
You can have different schemes targeting different testing situations or needs.
For example, you may want to create a scheme that runs all your tests (full regression scheme), or you may want to select a handful of them to do a quick smoke test on your app when small changes are made.
This way, you can select different schemes according to how much testing you need to do.
Just go to
Product >> Scheme
It's not that universal, but you can override invokeTest in XCTestCase and avoid calling super where necessary. I'm not sure about the appropriate feedback in console though.
For instance the following fragment makes the test run only on iOS Simulator with iPhone 7 Plus/iPad Pro 9.7"/iOS 11.4:
class XXXTests: XCTestCase {
    let supportedModelsAndRuntimeVersions: [(String, String)] = [
        ("iPhone9,2", "11.4"),
        ("iPad6,4", "11.4")
    ]

    override func invokeTest() {
        let environment = ProcessInfo().environment
        guard let model = environment["SIMULATOR_MODEL_IDENTIFIER"], let version = environment["SIMULATOR_RUNTIME_VERSION"] else {
            return
        }
        guard supportedModelsAndRuntimeVersions.contains(where: { $0 == (model, version) }) else {
            return
        }
        super.invokeTest()
    }
}
If you use Xcode 11 and a test plan, you can tweak your configuration to skip or allow specific tests. An Xcode test plan is a JSON format after all.
By default, all tests are enabled; you can skip a list of tests or a whole test file.
"testTargets" : [
{
"skippedTests" : [
"SkippedFileTests", // skip the whole file
"FileTests\/testSkipped()" // skip one test in a file
]
...
Conversely, you can also skip all tests by default and enable only a few.
"testTargets" : [
{
"selectedTests" : [
"AllowedFileTests", // enable the whole file
"FileTests\/testAllowed()" // enable only a test in a file
]
...
I'm not sure if you can combine both configurations, though. The logic flips based on the "Automatically includes new tests" option.
Unfortunately, there is no built-in test case skipping. A test case either passes or fails.
That means you will have to add that functionality yourself - you can add a function to XCTestCase (e.g. XCTestCase.skip) via a category/extension that prints the information to the console. However, you will have to put a return after calling it to prevent the other asserts from running.
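A minimal sketch of that approach (the skip(_:) helper, the FeatureTests class and the featureEnabled flag are illustrative names, not part of XCTest):

import XCTest

extension XCTestCase {
    // Illustrative helper: only logs the skip; the caller must still return early.
    func skip(_ message: String) {
        print("SKIPPED \(name): \(message)")
    }
}

class FeatureTests: XCTestCase {
    let featureEnabled = false

    func testSomething() {
        if !featureEnabled {
            skip("Feature is currently disabled.")
            return // prevents the remaining assertions from running
        }
        // actual test code with assertions would go here
    }
}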
While there are answers that cover almost the same logic, if you don't want an extra file to manage conditions, you can mark your test function with throws and then use XCTSkip with a descriptive message explaining why it is skipped. Note that a clear message is important: it lets you read the reason in the Report Navigator and understand why a test was skipped without having to open the related XCTestCase.
Example:
func test_whenInilizedWithAllPropertiesGraphQLQueryVariableDict_areSetCorrectly() throws {
    // Skip intentionally so that we can remember to handle this.
    throw XCTSkip("This method should be implemented to test equality of NSMutableDictoinary with heterogenious items.")
}
Official iOS documentation
https://developer.apple.com/documentation/xctest/methods_for_skipping_tests
Use XCTSkipIf() or XCTSkipUnless() when you have a Boolean condition that you can use to evaluate when to skip tests.
Throw an XCTSkip error when you have other circumstances that result in skipped tests. For example:
func testSomethingNew() throws {
    guard #available(macOS <#VersionNumber#>, *) else {
        throw XCTSkip("Required API is not available for this test.")
    }
    // perform test using <#VersionNumber#> APIs...
}
There is no built-in test case skipping. You can use an if-else block (nested if needed) and run or print your desired output.

Stubbing stripe subscription error

I'm trying to test my payment process and I'm stuck on the problem of stubbing the subscription. This is the error message I get:
Double "Stripe::Customer" received unexpected message :[] with ("subscription")
This is the relevant part of the code for stubbing the subscription:
@subscription = double('Stripe::Subscription')
@subscription.stub(:id) { 1234 }
@customer.stub(:subscription) { [@subscription] }
When I try the payment with the test card it works, but I want to have an automated test in place in case something changes that could impact the payments.
Edit:
Per mcfinnigan's suggestion I changed the last bit of code to:
@customer.stub(:[]).with(:subscription).and_return { [@subscription] }
And now I get this error :
Double "Stripe::Customer" received :[] with unexpected arguments
expected: (:subscription)
got: ("subscription")
Please stub a default value first if message might be received with other args as well.
You're not stubbing the right thing - your error indicates that something is attempting to call the method [] (i.e. array or hash dereferencing) on your double @customer.
Check your code and see whether you are sending [] to a customer object anywhere.
Are you positive the last line should not be
@customer.stub(:[]).with(:subscription).and_return { [@subscription] }
instead?
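As a follow-up sketch (an assumption based on the second error in the question, not part of the original answer): the code under test apparently calls the double with a String key, so the stub would need to match "subscription" rather than :subscription:

@customer.stub(:[]).with("subscription").and_return([@subscription])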

Output a text file from Ranorex to include just a pass/fail result and a number

I am trying to get Ranorex to output a text file which will look like the following:
Pass
74
The pass/fail result will be obtained based on whether the test run has passed or failed. The number will be hardcoded, so all I need to do is store it in a variable and include it in the output.
I would have thought it would be simple, but I'm struggling to get any help from Ranorex. I thought I might be able to use the reporting function, change the output file type and alter the report structure, but that didn't work either.
Although I am used to Ranorex and writing my own user code, I am new to adapting it in this way.
All my user code is written in C#
Can anyone offer any assistance?
Thanks!
Edit: So I've now managed to get Ranorex to output a text file and I can put any text into it, including a string stored in a variable.
However, I'm struggling to store the pass/fail result of my test in a string that I can output.
I've discovered a way to do this; however, it relies on the following:
The user code must be in a separate test
This separate test must exist in a sibling test case to the one your main test is in
Both this test case and the one containing your main test must be part of a parent test case
For example:
Parent TC
.....-AddUser TC
.........-MAIN TEST
.....-AddUser FailCheck
.........-USER CODE
You can then set your AddUser TC to 'Continue with sibling on fail'
The user code is as follows:
public static void Output()
{
    string result = "";
    ITestCase iCase = TestSuite.Current.GetTestCase("Add_User_Test"); // The name of your Test Case

    if (iCase.Status == Ranorex.Core.Reporting.ActivityStatus.Failed) {
        result = "Failed";
    }
    if (iCase.Status == Ranorex.Core.Reporting.ActivityStatus.Success) {
        result = "Passed";
    }

    int testrunID = 79;

    using (StreamWriter writer = new StreamWriter("testresult.txt"))
    {
        writer.WriteLine(testrunID);
        writer.WriteLine(result);
    }
}
This will take the testrunID (specific to each test case) and the result of the test and output it to a text file.
The idea is then to read the file with a custom Java application I've developed and push the data into a test case management program such as QA Complete, which can mark tests as Passed/Failed automatically.
You can run the test suite directly using the TestSuiteRunner.Run() method. This allows you to look at its return value directly and output pass or failure based on that value.
http://www.ranorex.com/Documentation/Ranorex/html/M_Ranorex_Core_Testing_TestSuiteRunner_Run.htm
// The braces mark placeholders for your test suite class and command-line arguments.
if (TestSuiteRunner.Run(typeof({testSuiteclass}), {Command Line Arguments}) == 0)
{
    File.WriteAllText("testresult.txt", "success");
}
else
{
    File.WriteAllText("testresult.txt", "failure");
}
