How can I abort the whole test set's execution from within a script?
I have a library which, if it encounters certain circumstances, comes to the conclusion that further test execution does not make any sense. The "hardest" abort I know is ExitTest, but it only aborts the current test's execution, not the whole test set.
I understand I could map this to test dependencies in the test set, but those should be used only to model business-driven dependencies between tests and to coordinate parallel test execution, as opposed to the global abort I am looking for, which can happen anytime, in any test (i.e. deep, deep in library code). I certainly don't want to make every test depend on its predecessor's passed/failed status just for this. And it would also lead to other "branches" of the dependency tree being executed anyway.
So how can I abort the complete test set execution programmatically?
Well, you could set a flag value such as EXIT before calling ExitTest, and either return this flag to the calling function or driver script/function. If that's not possible, you could write the flag value into a temporary file and make your driver script read this file before it moves on to the next test.
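As a minimal sketch of that flag-file approach, assuming the tests are launched by a VBScript driver via the QTP/UFT automation object model rather than from the ALM Test Lab (the file path and test paths are illustrative):

' In the library, raise the flag just before aborting the current test:
Dim fso
Set fso = CreateObject("Scripting.FileSystemObject")
fso.CreateTextFile("C:\Temp\abort_testset.flg", True).Close
ExitTest

' In the driver script, check the flag before starting each test:
Dim fso, qtApp, testPath
Set fso = CreateObject("Scripting.FileSystemObject")
If fso.FileExists("C:\Temp\abort_testset.flg") Then fso.DeleteFile("C:\Temp\abort_testset.flg") ' clear stale flag
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Launch
qtApp.Visible = True
For Each testPath In Array("C:\Tests\Test1", "C:\Tests\Test2")
    If fso.FileExists("C:\Temp\abort_testset.flg") Then Exit For ' global abort requested
    qtApp.Open testPath
    qtApp.Test.Run
Next
qtApp.Quit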
So I have just created a Geb script that tests the creation of a report. Let's call this Script A.
I have other test cases I need to run that depend on the previous report being created, but I still want Script A to be a standalone test. We will call the subsequent script Script B.
Furthermore, Script A generates a pair of numbers that will be needed in subsequent scripts (to verify data got recorded accurately).
Is there a way I can set up Geb such that Script B executes Script A and is able to pull those two numbers from Script A to be used in Script B?
In summary, there will be a number of scripts that depend on the actions of Script A (which is itself a test). I want to be able to modularize Script A so that it can be executed from other scripts. What would be the best way to do this?
For reuse and to avoid repeating yourself, I would put the report creation into a separate method in a new class such as ReportGenerator; this would generate the report given a set of parameters (if required) and return the report figures for use in whatever test you like.
You could then call that in any spec you want, with no reliance on other specs.
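A minimal sketch of that idea, assuming Geb with Spock; the class and method names (ReportGenerator, ReportFigures, createReport) are illustrative, not an existing Geb API:

import geb.Browser
import geb.spock.GebSpec

// Holds the pair of numbers the report-creation flow produces.
class ReportFigures {
    BigDecimal first
    BigDecimal second
}

class ReportGenerator {
    // Drives the browser through the report-creation flow (what Script A
    // did) and returns the figures later specs need for verification.
    static ReportFigures createReport(Browser browser) {
        // ... navigate pages and fill in forms here ...
        new ReportFigures(first: 0.0G, second: 0.0G) // placeholder values
    }
}

class ScriptBSpec extends GebSpec {
    def "verifies the data recorded by the generated report"() {
        given: "a freshly created report and its figures"
        ReportFigures figures = ReportGenerator.createReport(browser)

        expect: "the recorded values can be checked against the figures"
        figures.first != null && figures.second != null
    }
}

Script A's own spec can call the same method, so the creation flow stays tested in one place while Script B and friends reuse it without depending on Script A's execution order.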
For our end-to-end tests, we want to set up a distributed testing environment. That means we want a Docker hub container that distributes the tests of a test suite on a first-come, first-served basis to its Docker worker containers.
How can we achieve that in Robot Framework? For a better example of what we want to implement, here is a short illustration:
Thank you very much!
Following up on A.Kootstra's comment.
Pabot allows us to run parallel execution of suites.
Pabot will split test execution from suite files and not from individual test level.
In the general case you can't count on tests that haven't been designed to be executed in parallel to work out of the box when executing in parallel. For example, if the tests manipulate or use the same data, you might get yourself in trouble (one test suite logs in to the system while another logs the same session out, etc.). PabotLib can help you solve these problems of concurrency.
Example:
test.robot
*** Settings ***
Library    pabot.PabotLib

*** Test Cases ***
Testing PabotLib
    Acquire Lock    MyLock
    Log    This part is critical section
    Release Lock    MyLock
    ${valuesetname}=    Acquire Value Set
    ${host}=    Get Value From Set    host
    ${username}=    Get Value From Set    username
    ${password}=    Get Value From Set    password
    Log    Do something with the values (for example access host with username and password)
    Release Value Set
    Log    After value set release others can obtain the variable values
valueset.dat
[Server1]
HOST=123.123.123.123
USERNAME=user1
PASSWORD=password1
[Server2]
HOST=121.121.121.121
USERNAME=user2
PASSWORD=password2
pabot call
pabot --pabotlib --resourcefile valueset.dat test.robot
You can find more info here: https://github.com/mkorpela/pabot
TFS build allows you to specify conditions for running a task: reference.
The condition I would like to define is: a specific task [addressed by name or other means] has failed.
This is similar to Only when a previous task has failed, but I want to specify which previous task that is.
Looking at the examples, I don't see any condition that addresses a specific task's outcome, only the entire build status.
Is this possible? Any workaround to achieve it?
It doesn't seem like there's an out-of-the-box solution for this requirement, but I can come up with (an ugly :)) workaround.
Suppose your specific task (the one whose status you examine) is called A. The goal is to call another build task (let's say B) only in case A fails.
You can do the following:
Define a custom build variable, call it task.A.status, and set it to success
Create another build task, e.g. C, and schedule it right after A; condition it to run only if A fails - there's a standard condition for that
Task C should do only one thing: set the task.A.status build variable to 'failure' (like this, if we are talking PowerShell: Write-Host "##vso[task.setvariable variable=task.A.status]failure")
Finally, task B is scheduled sometime after C and is conditioned to run in case task.A.status equals failure, like this: eq(variables['task.A.status'], 'failure')
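Pulled together, the same idea looks roughly like this in YAML pipeline syntax (a sketch only; the script steps and display names are placeholders, and classic UI-based builds configure the equivalent conditions per task in the control options):

variables:
  task.A.status: 'success'

steps:
- script: ./run-task-a.sh              # task A, the one whose outcome we track
  displayName: A

- powershell: |
    Write-Host "##vso[task.setvariable variable=task.A.status]failure"
  displayName: C
  condition: failed()                  # standard "only when a previous task has failed"

- script: ./run-task-b.sh              # task B, reacts specifically to A's failure
  displayName: B
  condition: eq(variables['task.A.status'], 'failure')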
I might be incorrect in syntax details, but you should get the general idea. Hope it helps.
Is there a way to define a task in Ant that always gets executed at the end of every run? This SO question provides a way to do so at the start of every run, before any other targets have been executed, but I am looking for the opposite.
My use case is to echo a message warning the user if a certain condition was discovered during the run, but I want to make sure it's echoed at the very end so it gets noticed.
Use a BuildListener, e.g. the exec-listener, which provides a task container for each build result (BUILD SUCCESSFUL | BUILD FAILED) where you can put all the tasks you need; see https://stackoverflow.com/a/6391165/130683 for details.
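If you would rather roll your own than use the exec-listener from that answer, Ant's plain BuildListener interface can do the same job; here is a minimal sketch in Java (the class name, property name, and message are illustrative):

import org.apache.tools.ant.BuildEvent;
import org.apache.tools.ant.BuildListener;

public class EchoOnFinishListener implements BuildListener {
    // Called once at the very end of the run, whether the build
    // succeeded or failed, so the warning is always the last output.
    public void buildFinished(BuildEvent event) {
        String flag = event.getProject().getProperty("warn.condition");
        if (flag != null) {
            System.out.println("WARNING: condition detected during the run: " + flag);
        }
    }

    // The remaining BuildListener callbacks are not needed here.
    public void buildStarted(BuildEvent event) {}
    public void targetStarted(BuildEvent event) {}
    public void targetFinished(BuildEvent event) {}
    public void taskStarted(BuildEvent event) {}
    public void taskFinished(BuildEvent event) {}
    public void messageLogged(BuildEvent event) {}
}

Compile it onto Ant's classpath and register it with ant -listener EchoOnFinishListener; any target that discovers the condition just sets the warn.condition property.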
It's an interesting situation. Normally, I would say you can't do this in an automated way. You could wrap Ant in some shell script to do this, but Ant itself really isn't a full-fledged programming language.
The only thing I can think of is to add an <antcall> at the end of each target to echo out what you want. You could set it up so that if a property isn't set, the echo won't happen. Of course, this means calling the same target a dozen or so times just to get that final <echo>.
I checked through AntXtras and Ant-Contrib for possible methods, but couldn't find any.
Sorry.
Wrap your calls in the sequential container.
http://ant.apache.org/manual/Tasks/sequential.html
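For illustration, a sketch of what that wrapper could look like (target names and the message are made up); note that <sequential> simply runs its children in order, so the final <echo> is only reached when the earlier calls succeed:

<target name="run-all">
    <sequential>
        <antcall target="compile"/>
        <antcall target="test"/>
        <!-- placed last so it is the final thing the user sees -->
        <echo message="NOTE: check the warnings reported above."/>
    </sequential>
</target>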
In Jenkins it is possible to create a freestyle project which can contain a script execution step. The build fails (becomes red) when the return code of the script is not 0.
Is there a possibility to make it "yellow" instead?
(Yellow usually indicates successful build with failed tests)
The system runs on Linux.
Give the Log Parser Plugin a try. That should do the trick for you.
One slightly hacky way to do it is to alter the job to publish test results and supply fake results.
I've got a job that is publishing the test results from a file called "results.xml". The last step in my build script checks the return value of the build, copies either "results-good.xml" or "results-unstable.xml" to "results.xml" and then returns a zero.
Thus if the script fails on one of the early steps, the build is red. But if the build succeeds, it's green or yellow based on the return code it would have returned without this hack.
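That last step could look something like this, assuming a POSIX shell build script (the test command is a placeholder; the file names follow the description above):

# Run the real tests; capture success or failure instead of letting
# a non-zero exit code turn the build red.
if ./run-tests.sh; then
    cp results-good.xml results.xml      # all good: publishing these keeps the build green
else
    cp results-unstable.xml results.xml  # contains a failed test case: publisher turns the build yellow
fi
exit 0                                   # always exit 0 so only the test publisher decides the color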