For our end-to-end tests, we want to set up a distributed testing environment. That means we want a Docker hub container that distributes the tests of a test suite on a first-come, first-served basis to its Docker worker containers.
How can we achieve that with Robot Framework? To make clearer what we want to implement, here is a short illustration:
Thank you very much!
Following up on @A.Kootstra's comment.
Pabot allows us to run suites in parallel.
Pabot splits test execution at the suite-file level, not at the
individual-test level.
In general, you can't count on tests that weren't designed for parallel
execution to work out of the box when run in parallel.
For example, if the tests manipulate or use the same data, you might
get yourself into trouble (one test suite logs in to the system while
another logs the same session out, etc.). PabotLib can help you solve
these concurrency problems.
Example:
test.robot

*** Settings ***
Library    pabot.PabotLib

*** Test Cases ***
Testing PabotLib
    Acquire Lock    MyLock
    Log    This part is critical section
    Release Lock    MyLock
    ${valuesetname}=    Acquire Value Set
    ${host}=    Get Value From Set    host
    ${username}=    Get Value From Set    username
    ${password}=    Get Value From Set    password
    Log    Do something with the values (for example access host with username and password)
    Release Value Set
    Log    After value set release others can obtain the variable values
valueset.dat
[Server1]
HOST=123.123.123.123
USERNAME=user1
PASSWORD=password1
[Server2]
HOST=121.121.121.121
USERNAME=user2
PASSWORD=password2
pabot call
pabot --pabotlib --resourcefile valueset.dat test.robot
You can find more info at https://github.com/mkorpela/pabot
I apologize for duplicating my post from the SonarSource forum, but sometimes this gets a different audience.
We’re using SonarQube 7.9.2.
Our Jenkins builds use the pipeline steps “withSonarQubeEnv” and “waitForQualityGate”, and in between we use “mvn sonar:sonar” to run the scan. At the end of the latter, it prints the task id that “waitForQualityGate” is going to wait for. It also shows that task id in the results of that step.
What WebApi call(s) can I perform in between “mvn sonar:sonar” and “waitForQualityGate” that will let me store into a variable the task id that is going to be polled for? I know the project key at that point. I’ve inspected all of the environment variables in scope at that point.
I know how to find the WebApi documentation, and I’ve scanned through what I think are the relevant operations, but I can’t figure out which operation I need for this particular “task”.
When you run mvn sonar:sonar, a report-task.txt file is created in the .scannerwork folder of the workspace.
You will find the ceTaskUrl and ceTaskId in report-task.txt. You can then use that ceTaskUrl to get the analysisId.
Once you have the analysisId, you can use the API below to get the quality gate status:
https://localhost:9000/sonarqube/api/qualitygates/project_status?analysisId=$ANALYSIS_ID
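A minimal sketch of the whole chain in shell (assumes curl and jq are available and that SONAR_URL and SONAR_TOKEN are set for your instance; note the analysisId only appears once the background task has finished, so a real script would poll):

# Pull the ceTaskId out of the scanner's report file.
CE_TASK_ID=$(sed -n 's/^ceTaskId=//p' .scannerwork/report-task.txt)

# Ask the Compute Engine for the task; once its status is SUCCESS it carries the analysisId.
ANALYSIS_ID=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/ce/task?id=$CE_TASK_ID" | jq -r '.task.analysisId')

# Quality gate status for that analysis.
curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" | jq -r '.projectStatus.status'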
A script generates a properties file in the workspace in an Execute shell block in the Build section. This file is available in the workspace after the script runs, and in case of a failed build the properties file is injected (via a Conditional steps (multiple) block in the Build section). My Jenkins job sends an e-mail (Editable Email Notification block) in case of a failed build, and the mail should contain the variable from the properties file, but the block doesn't see this variable. FYI: this block can use other environment variables.
I have cross-checked the properties file and it contains the required variable in every case.
Properties file in work-space:
Environment variable injection from properties file:
This Steps to run if condition is met block contains several other actions and they work fine, which means execution can reach this block.
Editable Email Notification block in Post-build:
If I check the Environment Variables option in a build, I can see the variable:
But when I get the mail, it doesn't contain the variable:
Any idea how I can solve this or what I should change?
NOTE: The variable is unique and not really related to Gerrit, so I cannot use another variable that comes from Gerrit. Just the name of the variable is a little tricky.
I have found the answer to my question. Jenkins (or the plugin) has a limitation: it cannot handle the Failure state. If the previous Execute shell block fails, execution never reaches the Conditional steps (multiple) block.
On the other hand, I have found a "workaround" for this problem.
Step 1
You need to exit the Execute shell block with a specific return code, e.g. 111 (see the sketch after these steps).
Step 2
You need to set the Exit code to set build unstable field to your specific exit code. (You can find this field in the advanced options of the Execute shell block, as shown in the picture below.)
Step 3
Set the Conditional steps (multiple) block to handle the Unstable state. With this solution, execution is able to reach the Conditional steps (multiple) block.
Step 4
Create an Execute shell block inside the Conditional steps (multiple) block, after you have prepared everything you want to happen when the job fails. After this block, your job status changes from Unstable to Failed.
With this solution you can handle the failed job, and in the end you get a genuinely failed job (not an unstable one).
Not the most elegant solution, but it works.
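A sketch of the two shell blocks (run_tests.sh is a made-up placeholder for whatever your build actually runs):

# (1) Build section, Execute shell block: map any failure to exit code 111,
#     the value entered in the "Exit code to set build unstable" field, so the
#     build goes Unstable instead of Failed and the conditional block still runs.
./run_tests.sh || exit 111

# (2) Final Execute shell block inside Conditional steps (multiple): after the
#     failure handling has run, fail for real so the job ends up Failed.
exit 1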
I have an Apache Beam job running on Google Cloud Dataflow, and as part of its initialization it needs to run some basic sanity/availability checks on services, pub/sub subscriptions, GCS blobs, etc. It's a streaming pipeline intended to run ad infinitum that processes hundreds of thousands of pub/sub messages.
Currently it needs a whole heap of required, variable parameters: which Google Cloud project it needs to run in, which bucket and directory prefix it's going to be storing files in, which pub/sub subscriptions it needs to read from, and so on. It does some work with these parameters before pipeline.run is called - validation, string splitting, and the like. In its current form, in order to start a job we've been passing these parameters to a PipelineOptionsFactory and issuing a new compile every single time, but it seems like there should be a better way. I've set up the parameters to be ValueProvider objects, but because they're being called outside of pipeline.run, Maven complains at compile time that ValueProvider.get() is being called outside of a runtime context (which, yes, it is).
I've tried using NestedValueProviders as in the Google "Creating Templates" document, but my IDE complains if I try to use NestedValueProvider.of to return a string as shown in the document. The only way I've been able to get NestedValueProviders to compile is as follows:
NestedValueProvider<String, String> pid = NestedValueProvider.of(
    pipelineOptions.getDataflowProjectId(),
    (SerializableFunction<String, String>) s -> s
);
(String pid = NestedValueProvider.of(...) results in the following error: "incompatible types: no instance(s) of type variable(s) T,X exist so that org.apache.beam.sdk.options.ValueProvider.NestedValueProvider conforms to java.lang.String")
I have the following in my pipelineOptions:
ValueProvider<String> getDataflowProjectId();
void setDataflowProjectId(ValueProvider<String> value);
Because of the volume of messages we're going to be processing, adding these checks at the front of the pipeline for every message that comes through isn't really practical; we'll hit daily account administrative limits on some of these calls pretty quickly.
Are templates the right approach for what I want to do? How do I go about actually productionizing this? Should (can?) I compile with maven into a jar, then just run the jar on a local dev/qa/prod box with my parameters and just not bother with ValueProviders at all? Or is it possible to provide a default to a ValueProvider and override it as part of the options passed to the template?
Any advice on how to proceed would be most appreciated. Thanks!
The way templates are currently implemented, there is no point at which to perform initialization/validation that happens after template creation but before pipeline start.
All of the existing validation executes during template creation. If the validation detects that the values aren't available (because they are ValueProviders), the validation is skipped.
In some cases it is possible to approximate validation by adding runtime checks, either as part of the initial splitting of a custom source or in the @Setup method of a DoFn. In the latter case, the @Setup method will run once for each instance of the DoFn that is created. If the pipeline is batch, after 4 failures for a specific instance it will fail the pipeline.
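A minimal sketch of that @Setup approach (the validation body is illustrative; the point is that ValueProvider.get() is legal here because @Setup runs at execution time):

import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.transforms.DoFn;

class ValidatingFn extends DoFn<String, String> {
  private final ValueProvider<String> projectId;

  ValidatingFn(ValueProvider<String> projectId) {
    this.projectId = projectId;
  }

  @Setup
  public void setup() {
    // Execution time, so get() is allowed here.
    String pid = projectId.get();
    if (pid == null || pid.isEmpty()) {
      // Throwing fails the bundle; in batch, repeated failures of the same
      // instance eventually fail the pipeline, as described above.
      throw new IllegalStateException("dataflowProjectId is not set");
    }
    // Hypothetical: ping the services/subscriptions you depend on here.
  }

  @ProcessElement
  public void processElement(ProcessContext c) {
    c.output(c.element());
  }
}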
Another option for productionizing pipelines is to build the JAR that runs the pipeline, and have a production process that runs that JAR to initiate the pipeline.
Regarding the compile error you received -- the NestedValueProvider returns a ValueProvider -- it isn't possible to get a String out of that. You could, however, put the validation code into the SerializableFunction that is run within the NestedValueProvider.
Although I believe this will currently re-run the validation every time the value is accessed, it wouldn't be unreasonable to have the NestedValueProvider cache the translated value.
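For example, a sketch of validation folded into the translation function (this goes in your pipeline-construction code; the error message is illustrative):

import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.options.ValueProvider.NestedValueProvider;
import org.apache.beam.sdk.transforms.SerializableFunction;

// Still a ValueProvider<String>, but the value is validated whenever it is
// actually read at runtime.
ValueProvider<String> pid = NestedValueProvider.of(
    pipelineOptions.getDataflowProjectId(),
    (SerializableFunction<String, String>) s -> {
      if (s == null || s.isEmpty()) {
        throw new IllegalArgumentException("dataflowProjectId must be set");
      }
      return s;
    });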
How can I abort the whole test set's execution from within a script?
I have a library which, if it encounters certain circumstances, comes to the conclusion that further test execution does not make any sense. The "hardest" abort I know is ExitTest, but it only aborts the current test's execution, not the whole test set.
I understand I could map this to test dependencies in the test set, but those should be used only to model business-driven dependencies between tests and to coordinate parallel test execution, as opposed to the global abort I am looking for, which can happen anytime, in any test (i.e. deep, deep in library code). I certainly don't want to make all tests depend on their predecessor tests' passed/failed status just for this. It would also lead to other "branches" of the dependency tree being executed anyway.
So how can I abort the complete test set execution programmatically?
Well, you could set a flag value to EXIT before calling ExitTest, and either return this flag to the calling function or driver script/function. If that's not possible, you could write the flag value into a temporary file and make your driver script read this file before it moves on to the next test set.
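A minimal sketch of the flag-file variant in VBScript (the file path and names are made up for illustration):

' Library code: drop an abort flag, then end the current test.
Const FLAG_FILE = "C:\Temp\abort_testset.flag"

Sub AbortWholeTestSet(reason)
    Dim fso, ts
    Set fso = CreateObject("Scripting.FileSystemObject")
    Set ts = fso.CreateTextFile(FLAG_FILE, True)
    ts.WriteLine "EXIT: " & reason
    ts.Close
    ExitTest  ' hardest abort available: ends the current test only
End Sub

' Driver script: check the flag before launching the next test.
Dim fso
Set fso = CreateObject("Scripting.FileSystemObject")
If fso.FileExists(FLAG_FILE) Then
    ' Skip or stop the remaining tests in the set here.
End If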
I have created a suffix for any test data I use in my test cases. For instance, when adding an account you must enter a name for it; I use the static text "test" and then add the suffix to the end of it. This is what I do for all fields.
However, I need to check that this data has been saved correctly and is displayed correctly during these test cases, so I need to somehow make Robot Framework remember the suffix I've generated. So far in my test cases I've just been using Set Variable after generating the data with Faker, but obviously this is contained within the keyword. How could I make this generated data accessible for the duration of my testing session (until all tests in the folder are finished)?
My code at the minute:
*** Test Cases ***
Valid Login
    Open Browser To Login Page
    ${num}=    Random Int
    ${suffix}=    Set Variable    ${num}
    Input Text    username    ${suffix}
To reiterate, I then want to check this ${suffix} value in another test case
Thanks in advance!
Edit:
Test suite file
*** Test Cases ***
Valid Login
    Open Browser To Login Page
    Fill field    a11y-username    ${MYNUM}
Resource file
*** Variables ***
${MYNUM}

*** Keywords ***
Suite Setup
    ${MYNUM}=    Random Int
    Set Suite Variable    ${MYNUM}
There is the Set Suite Variable keyword for that. I would create a Suite Setup (in an init file such as __init__.robot if you have several files in your folder for your different tests) that creates this ${suffix} and makes it available to all tests in the suite:
${SUFFIX}=    Random Int
Set Suite Variable    ${SUFFIX}
Note: I use capital letters to show that the variable has a bigger scope than just local.
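Put together, a minimal sketch of the directory init file (Random Int comes from the question's Faker library; children=True is my assumption for the case where the folder contains several suite files, so the variable is pushed down into each of them):

__init__.robot

*** Settings ***
Suite Setup    Create Suffix

*** Keywords ***
Create Suffix
    ${SUFFIX}=    Random Int
    # children=True makes the variable visible in every child suite of the folder
    Set Suite Variable    ${SUFFIX}    children=True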