Ranorex custom re-run component

Since Ranorex does not provide re-run functionality out of the box, I have to write my own. Before I start, I just want to ask for advice from people who've done it, or about possible existing solutions on the market.
The goal is:
At the end of the run, re-run the failed test cases.
Requirements:
The number of re-run iterations should be configurable
If data binding is used, the re-run should include only the data-binding iterations that failed

I would use the Ranorex command line argument possibilities to achieve this. The main thing would be to structure the suite so that each test case can be run separately.
During the run I would log the failed test cases to a text file, a database, or any other store that you can later read the data from (you can even parse them from the XML result if you want to).
From that data you'll just insert the test case name as a command line argument while running the suite again, as sketched below:
testSuite.exe /testcase:TestCaseName
or
testSuite.exe /tc:TestCaseName
The full command line args reference can be found here:
https://www.ranorex.com/help/latest/lesson-4-ranorex-test-suite
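A minimal sketch of such a driver in Python, assuming the failed test case names were logged one per line to failedTests.txt (the file name and format are this example's choice, not a Ranorex convention):

# rerun_failed.py - re-run each previously failed test case.
import subprocess

with open("failedTests.txt") as f:
    failed = [line.strip() for line in f if line.strip()]

for test_case in failed:
    # /tc is the short form of /testcase (see the reference above)
    subprocess.run(["testSuite.exe", "/tc:" + test_case])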

Possible solutions:
1a. Based on the report XML: parse the report and collect info about all failed test cases.
Cons:
Parsing will be tricky
or:
1b. Or create the list of failed test cases at runtime: if a failure occurs, then during tear-down add that iteration to the re-run list (could be a file or a DB table).
Using for example:
// name of the test case that just failed
string testCaseName = TestCaseNode.Current.Name;
// index of the data row that was active when the failure occurred
int testCaseIndex = TestSuite.Current.GetTestCase(testCaseName).DataContext.CurrentRowIndex;
then:
2a. Based on the list, run the executable with parameters, looping through each record.
like this:
testSuite.exe /tc:testCaseName /tcdr:testCaseIndex
or:
2b. Or generate a new test suite file (.rxtst) and recompile the solution to create an updated executable.
and the last part:
3a. At the end, repeat the process from a script in CI that runs the executable, until failedTestCases == 0 or currentRerunIterations reaches expectedRerunIterations (see the sketch after this list)
or:
3b. Or wrap the whole test suite in a re-run test module, do the same failedTestCases/currentRerunIterations check there, and run Ranorex from that test module
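A minimal sketch of 2a + 3a as a CI-side Python script. The failedTests.txt name, the name;rowIndex line format, and the exact /tcdr value syntax are assumptions to be checked against the command line reference; the suite itself is expected to rewrite the file on every run, as in option 1b:

# rerun_controller.py - repeat the run until no failures remain
# or the configured number of re-run iterations is reached.
import os
import subprocess

EXPECTED_RERUN_ITERATIONS = 3  # requirement: configurable

def read_failed():
    # one "testCaseName;dataRowIndex" entry per line
    if not os.path.exists("failedTests.txt"):
        return []
    with open("failedTests.txt") as f:
        return [line.strip() for line in f if line.strip()]

current_rerun_iterations = 0
failed_test_cases = read_failed()

while failed_test_cases and current_rerun_iterations < EXPECTED_RERUN_ITERATIONS:
    os.remove("failedTests.txt")  # the re-run logs any remaining failures
    for entry in failed_test_cases:
        test_case_name, data_row_index = entry.split(";")
        subprocess.run(["testSuite.exe",
                        "/tc:" + test_case_name,
                        "/tcdr:" + data_row_index])
    current_rerun_iterations += 1
    failed_test_cases = read_failed()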
Please let me know what you think about it.

Related

How to set up a reusable Geb test script (to be used by other test scripts)

So I have just created a Geb script that tests the creation of a report. Let's call this Script A.
I have other test cases I need to run that depend on that report being created, but I still want Script A to be a standalone test. We will call the subsequent script Script B.
Furthermore, Script A generates a pair of numbers that will be needed in subsequent scripts (to verify that data got recorded accurately).
Is there a way I can set up Geb such that Script B executes Script A and is able to pull those 2 numbers from Script A to be used in Script B?
In summary, there will be a number of scripts that are dependent on the actions of Script A (which is itself a test). I want to be able to modularize Script A so that it can be executed from other scripts. What would be the best way to do this?
For reuse, and to avoid repeating yourself, I would put the report creation into a separate method in a new class such as ReportGenerator. This would generate the report given a set of parameters (if required) and return the report figures for use in whatever test you like.
You could then call that in any spec you want, with no reliance on other specs.

TFS build custom conditions for running a task - check if specific previous task has failed

TFS build allows you to specify conditions for running a task: reference.
The condition I would like to define is: a specific task [addressed by name or other means] has failed.
This is similar to 'Only when a previous task has failed', but I want to specify which previous task that is.
Looking at the examples, I don't see any condition that addresses a specific task's outcome, only the entire build status.
Is it possible? Any workaround to achieve this?
It doesn't seem like there's an out-of-the-box solution for this requirement, but I can come up with an (ugly :)) workaround.
Suppose your specific task (the one whose status you examine) is called A. The goal is to run another build task (let's say B) only in case A fails.
You can do the following:
Define a custom build variable, call it task.A.status, and set it to success
Create another build task, e.g. C, and schedule it right after A; condition it to run only if A fails - there's a standard condition for that
Task C should do only one thing - set the task.A.status build variable to 'failure' (like this, if we are talking PowerShell: Write-Host "##vso[task.setvariable variable=task.A.status]failure")
Finally, task B is scheduled some time after C and is conditioned to run in case task.A.status equals 'failure', like this: eq(variables['task.A.status'], 'failure')
I might be incorrect in syntax details, but you should get the general idea. Hope it helps.

Machine parseable error messages

(From https://groups.google.com/d/msg/bazel-discuss/cIBIP-Oyzzw/caesbhdEAAAJ)
What is the recommended way for rules to export information about failures such that downstream tools can include them in UIs?
Example use case:
I ran bazel test //my:target, and one of the actions for //my:target fails because there is an unknown variable "usrname" in my/target.foo at line 7, column 10. The checker would also like to report that "username" is a valid variable and that this is a possible misspelling, and thus wants to suggest adding an "e" character.
One way I have thought of doing this is to have my action produce a separate file, //my:target.errors, in a separate output group, and have it write machine-parseable data there in addition to the human-readable data on stdout.
I can then find all of these files and parse the data in them in downstream tools, along the lines of the sketch below.
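For illustration, a downstream collector might look like this in Python, assuming each .errors file holds one JSON object per line (the path pattern and field names are hypothetical, not a Bazel convention):

# collect_errors.py - gather machine-parseable diagnostics from *.errors files.
import glob
import json

for path in glob.glob("bazel-out/**/*.errors", recursive=True):
    with open(path) as f:
        for line in f:
            # e.g. {"file": "my/target.foo", "line": 7, "column": 10,
            #       "message": "unknown variable 'usrname'", "suggestion": "username"}
            error = json.loads(line)
            print("%s:%s:%s: %s" % (error["file"], error["line"],
                                    error["column"], error["message"]))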
Is there any prior work on this, or does everything just try to parse the human readable output?
I recommend running the error checkers as extra actions.
I don't think Bazel currently has hooks for custom error handlers like you describe. Please consider opening a feature request: https://github.com/bazelbuild/bazel/issues/new

Ant: how to always execute a task at the end of every run (regardless of target)

Is there a way to define a task in Ant that always gets executed at the end of every run? This SO question provides a way to do so at the start of every run, before any other targets have been executed, but I am looking at the opposite case.
My use case is to echo a message warning the user if a certain condition was discovered during the run, but I want to make sure it's echoed at the very end so it gets noticed.
Use a BuildListener, e.g. the exec-listener, which provides a task container for each build result (BUILD SUCCESSFUL | BUILD FAILED) where you can put all your needed tasks; see:
https://stackoverflow.com/a/6391165/130683
for details.
It's an interesting situation. Normally, I would say you can't do this in an automated way. You could wrap Ant in some shell script to do this, but Ant itself really isn't a full-fledged programming language.
The only thing I can think of is to add an <ant> call at the end of each task to echo out what you want. You could set it up so that if a variable isn't present, the echo won't happen. Of course, this means calling the same target a dozen or so times just to get that final <echo>.
I checked through AntXtras and Ant-Contrib for possible methods, but couldn't find any.
Sorry.
Wrap your calls in the sequential container.
http://ant.apache.org/manual/Tasks/sequential.html

How do I "map" certain return value of a script to "yellow" status in Jenkins?

In Jenkins there is the possibility to create a free-style project which can contain a script execution. The build fails (becomes red) when the return code of the script is not 0.
Is there a possibility to make it "yellow" instead?
(Yellow usually indicates a successful build with failed tests.)
The system runs on Linux.
Give the Log Parser Plugin a try. That should do the trick for you.
One slightly hacky way to do it is to alter the job to publish test results and supply fake results.
I've got a job that publishes the test results from a file called "results.xml". The last step in my build script checks the return value of the build, copies either "results-good.xml" or "results-unstable.xml" to "results.xml", and then returns zero.
Thus, if the script fails on one of the early steps, the build is red. But if the build succeeds, it's green or yellow based on the return code it would have returned without this hack.
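A minimal sketch of that last build-script step in Python (the results file names come from the answer above; the test command itself is a placeholder):

# publish_results.py - map the test command's return code to green/yellow.
import shutil
import subprocess
import sys

# placeholder for the real test command whose return code we want to map
result = subprocess.run(["./run_tests.sh"])

if result.returncode == 0:
    shutil.copy("results-good.xml", "results.xml")      # all good -> green
else:
    shutil.copy("results-unstable.xml", "results.xml")  # failures -> yellow

sys.exit(0)  # never fail this step; the published results decide the color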
