Parallel runs with WDIO + Appium on Android

I have a framework (WDIO + Appium) for Android tests.
I want to use maxInstances and run the tests in parallel. How can I run some tests from suite X in one instance and other tests from the same suite X in another instance?
I've tried:
suites: {
    X: [
        ['test1', 'test2'],
        'test3',
        'test4'
    ]
}
and:
suites: {
    X: [
        ['test1', 'test2'],
        ['test3', 'test4']
    ]
}
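For context, the surrounding wdio.conf.js looks roughly like this; it is only a sketch, and the spec paths, capabilities, and service names are placeholders for my real setup:
// wdio.conf.js (sketch; paths and capabilities are placeholders)
exports.config = {
    // how many workers WDIO may spawn at once
    maxInstances: 4,
    specs: ['./test/specs/**/*.js'],
    suites: {
        X: [
            // intended to keep test1 and test2 in a single worker...
            ['./test/specs/test1.js', './test/specs/test2.js'],
            // ...while these run in workers of their own
            './test/specs/test3.js',
            './test/specs/test4.js'
        ]
    },
    capabilities: [{
        platformName: 'Android',
        maxInstances: 4
    }],
    services: ['appium']
};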
In the end, what I am trying to achieve is, for a given suite, to run certain tests in a single instance and other tests in another instance (or spread across several). I tried what the WDIO documentation describes, but it doesn't work with multiple instances for the same suite.
Is there a possibility to do so?

Related

Specify Taurus test as a Blazemeter Functional test

How do I tell Taurus that my (Postman/Newman) test is a Blazemeter Functional test, and not a Performance test? Below is my bzt.yaml I created with the help of https://gettaurus.org/docs/Postman/.
execution:
- executor: newman
  iterations: 1
  scenario: functional/simple
scenarios:
  functional/simple:
    script: my.postman_collection.json
reporting:
- module: blazemeter
modules:
  blazemeter:
    request-logging-limit: 20240
    public-report: false
    report-name: my-postman-collection
    test: newmantrials
    project: test
  final-stats:
    summary-labels: true
I run it using the Taurus Docker image:
docker run --rm -t -v `pwd`:/bzt-configs -v `pwd`/artifacts:/tmp/artifacts blazemeter/taurus:1.14.0 bzt.yaml -o modules.blazemeter.token="${token}"
When I log into the Blazemeter UI, I see that it's listed under the "Performance" tab, and looks like a performance test. I would like it to run as a Functional test to get more details on the request and response payloads.
I do not believe it's possible at the moment, because BlazeMeter functional tests are presently geared toward either straight API functional tests or GUI (Selenium) functional tests.
The problem is that from BlazeMeter's side, the file validator is failing to correctly identify the Postman/Newman JSON file (despite the YAML file referencing it properly). I reported this to the BlazeMeter R&D team fairly recently, so it's being looked into.
In the meantime though, I don't expect this to work in BlazeMeter; it likely won't correctly identify your Newman script unless you run it as a Performance test in the interim.
(Sorry for the bad news on this one -- Hopefully it'll get sorted soon!)
Feel free to bring this up with BlazeMeter support at support@blazemeter.com as well.

How to allow CI to continue after it finishes an e2e test

I have this test running on CI, but after it finishes it stalls the whole process. How can I stop it, or allow the process to continue to the Deploy step?
[1] Audit Log Viewer - Grid
[1] ✓ should not show extra columns when moving from "asset/alert/cases" to other application
[1]
[1] Executed 1 of 20 specs INCOMPLETE (19 SKIPPED) in 20 secs.
[1] [12:14:11] I/launcher - 0 instance(s) of WebDriver still running
[1] [12:14:11] I/launcher - chrome #01 passed
[1] ng e2e -s false -pc proxy.conf.json exited with code 0
Stack:
1. Angular 5.x
2. Typescript
3. Angular-Cli
Answer: I was using concurrently to run /dist/app.js && npm run e2e.
I separated the calls and killed the /dist/app.js process after the tests finished running.
That way I can invoke npm run e2e without the process stalling.
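A rough shell sketch of that separation (the paths and script names are assumptions based on the ones above; adapt them to your CI step):
# start the built app in the background and remember its PID
node dist/app.js &
APP_PID=$!

# run the e2e suite against the running app
npm run e2e
E2E_EXIT=$?

# stop the app so the pipeline can move on to Deploy
kill $APP_PID

# propagate the test result to CI
exit $E2E_EXIT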

How to run Jenkins execute-shell commands in different terminals

I am running my protractor test through jenkins and my package.json looks like this:
{
  "name": "ProtractorTest",
  "version": "0.0.1",
  "main": "conf.js",
  "scripts": {
    "setup": "npm install && node node_modules/protractor/bin/webdriver-manager update",
    "e2e-start": "node node_modules/protractor/bin/webdriver-manager start",
    "test": "protractor protractor.conf.js"
  }
}
I am trying to run the scripts using the Jenkins "Execute shell" build step like below:
Jenkins screenshot
but this tries to run setup, e2e-start, and test one after the other. Since "e2e-start" starts the Selenium server, I see "INFO - Selenium Server is up and running..." in the console, and then npm run test is never run. I think "npm run test" needs to run in a different terminal: when running manually, we start the server in one terminal and run the tests in another. So, how can I achieve this using Jenkins?
Try using the below-mentioned options in the config file; then Protractor will take care of starting and stopping the server itself, and you won't need to start and stop webdriver-manager:
directConnect: true
seleniumAddress: 'http://127.0.0.1:4444/wd/hub',
Alternatively, you can specify the Selenium server jar:
seleniumServerJar: '../utils/selenium-server-standalone-2.53.1.jar',
seleniumPort: 4444,
Another approach would be to use the Jenkins Selenium plugin.
For PhantomJS it's easy.
Download the Selenium server jar and save it in a folder.
In the Protractor conf.js file, comment out directConnect and seleniumAddress.
Include these in your config:
seleniumServerJar: 'location of the jar',
seleniumPort: 4444,
These settings will start the Selenium server before the tests start and shut it down gracefully when the tests finish. You won't need to bother with starting the Selenium server explicitly.
Note: use selenium-server-standalone-2.53.1.jar if you are using Java 7.
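Putting that together, a minimal conf.js sketch of the jar-based approach could look like this (the jar location and spec pattern are placeholders):
// conf.js (sketch of the selenium-server-jar approach)
exports.config = {
  // let Protractor start and stop the standalone server itself
  seleniumServerJar: './utils/selenium-server-standalone-2.53.1.jar',
  seleniumPort: 4444,

  // keep these commented out so Protractor manages the server
  // directConnect: true,
  // seleniumAddress: 'http://127.0.0.1:4444/wd/hub',

  specs: ['./e2e/**/*.spec.js'],
  capabilities: {
    browserName: 'chrome'
  }
};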

Initiate Rascal tests from shell for CI purposes

If I want to start all tests within a module, I simply write:
> import Example;
> :test
and all of the test bool functions run. However, I want to start them using the Rascal .jar for CI purposes. Is there any flag that I can use? For example:
$ rascal.jar TestsModule --test
Or is there any alternative solution so I can run my Rascal tests for CI purposes?
At the moment, not yet from the command line. For the (almost finished) compiler backend we now have --rascalTests <modules> but for now that won't solve your problem.
If your CI supports JUnit-style tests, we have added a JUnit-compatible layer around the Rascal tests. For example, this class runs all the tests in the Rascal modules in the lang::rascal::tests::basic package.
Another way to make it work is to pipe commands into the rascal-shell:
$ echo -e "import IO;\n println(\"Hello World\");\n:quit\n" | java -jar rascal-shell-unstable.jar
Version: 0.8.0.201604121603
rascal>import IO;
ok
rascal> println("Hello World");
Hello World
ok
rascal>:quit
Quiting REPL
$
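The same piping trick should work for running tests, along these lines (the module name here is just an example):
$ echo -e "import TestsModule;\n:test\n:quit\n" | java -jar rascal-shell-unstable.jar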

How to know which tests were run together in parallel?

My question in general is this: when I run automated tests in parallel, how can I know which of them were run at the same time?
My issue is this: I'm running tests in parallel (4 tests at a time), but I have no idea which of them are executing together (at the same time) and which are waiting.
After my tests are executed there are some failures, and it could be that the cause of a failure is another test that was executing at the same time as the failed test. So I want to know which tests were executing together, so I can run them together again to debug the failed test.
Technologies I use: NUnit 3.0, C#, Selenium WebDriver, Jenkins
I'd be glad to hear any possible solution (it doesn't matter if it's hard to apply).
For debug purposes, you can run the NUnit console with the --trace=Verbose option.
This will write a log to the directory you are executing in, which will look a little like this:
14:56:13.972 Info [13] TestWorker: Worker#4 executing TestsSuite1
14:56:13.973 Debug [13] WorkItemDispatcher: Directly executing TestA
14:56:13.976 Debug [12] WorkItemDispatcher: Directly executing TestB
14:56:13.976 Debug [16] WorkItemDispatcher: Directly executing TestC
14:56:13.980 Debug [12] WorkItemDispatcher: Directly executing TestD
14:56:13.982 Debug [16] WorkItemDispatcher: Directly executing TestE(False,False,False,0)
14:56:13.989 Debug [16] WorkItemDispatcher: Directly executing TestE(False,False,True,0)
I believe the numbers in the square brackets identify different threads, so here, you can see that workers 12 and 16 were running TestC and TestD simultaneously.
Of course, it's never as simple as just seeing which tests were running concurrently, but this may help narrow issues down.
The simplest approach is to disable parallelism selectively until you find the combination that causes the problem.
Disable all parallelism by running with --workers=0. If the same tests fail, then you don't have a parallelism problem.
Selectively rerun the failed tests together, using the --test or --where option to select them. Edit the fixtures to enable/disable parallelism until you hit the combination that does the job.
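For the fixture-editing part, NUnit 3 controls parallelism with attributes; a minimal sketch (the fixture and test names below are illustrative, not from the question):
using NUnit.Framework;

// cap the number of simultaneous workers for the whole assembly
[assembly: LevelOfParallelism(4)]

// this fixture is allowed to run in parallel with other fixtures
[TestFixture, Parallelizable(ParallelScope.Self)]
public class SuspectedVictimTests
{
    [Test]
    public void FailingTest() { /* the test that fails when run in parallel */ }
}

// temporarily force this fixture to run alone while narrowing down the culprit
[TestFixture, NonParallelizable]
public class SuspectedCulpritTests
{
    [Test]
    public void InterferingTest() { /* a test suspected of interfering */ }
}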
