Playwright: Possible to specify either number of workers or the browser for 1 test only?

I have my suite set up to run against 4 browsers (2 desktop and 2 mobile) using 3 workers. I have a test that I need to either not run in parallel or limit to running in just one desktop browser.
Is that possible?
The reason I need to do this is that the test triggers an event that can take a few seconds to run; while it is running, nobody else can start that event, so if 3 tests are running at once only the first one can pass. Also, on mobile you can't trigger the event, so I skip parts of the test if isMobile is true.
Ideally, I'd like to just run this test in Firefox and leave the rest of the suite running in 3 workers with 4 browsers.
I looked at test.describe.configure({ mode: 'serial' }); but that doesn't seem to make a difference. I still see
Running 4 tests using 3 workers
when I try just running this single spec file.

To change the parallelism you have to look at the workers config. You can define it either in playwright.config.js or on the command line.
What we do at work is to separate commands based on directory or custom annotations.
{
  "name": "parallel",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test:parallel": "npx playwright test tests/parallel/",
    "test:sequence": "npx playwright test tests/sequence/ --workers=1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "@playwright/test": "^1.29.2"
  }
}
This way you can configure only subsets of tests that run in parallel whereas others run sequentially.
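If you prefer the config-file route mentioned above, a minimal playwright.config.ts sketch could look like the following (the project names and devices are only illustrative; adjust them to your own setup):

// playwright.config.ts
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  // Overall parallelism; can still be overridden per run with --workers=1
  workers: 3,
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'mobile-chrome', use: { ...devices['Pixel 5'] } },
    { name: 'mobile-safari', use: { ...devices['iPhone 13'] } },
  ],
});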
And funnily, I also recorded a video on the topic not too long ago.

There are actually multiple ways to conditionally skip tests based on a condition like browserName being Firefox; which one you choose depends on your goal, setup, and/or criteria:
Project level:
Separating out those tests into separate directories, and specifying directory in your projects
Separating them into their own files, and telling projects to match or ignore them
Tagging tests within files, using grep to tell projects which tests to run or grepInvert to exclude them (see the sketch after this list)
File level using annotations with test.skip
Conditionally skipping per test, whether within the test or in the beforeEach
Conditionally skipping all tests in a file or within a describe group, even containing single tests
Probably others
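For example, here is a rough sketch of the tagging approach; the @firefox-only tag and the project names are just placeholders:

// playwright.config.ts (only the relevant parts shown)
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    // Firefox runs everything, including tests tagged @firefox-only.
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    // Other projects exclude anything tagged @firefox-only.
    {
      name: 'chromium',
      grepInvert: /@firefox-only/,
      use: { ...devices['Desktop Chrome'] },
    },
  ],
});

The test itself would then carry the tag in its title, e.g. test('triggers the event @firefox-only', ...), since grep and grepInvert match against the full test title.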
Based on what I understand to be your ideal setup, and having just the one test within one file, I would probably recommend the whole-file skip, so that not only would any beforeEach not run, but I imagine any custom fixture setup required by the test wouldn't run either. So something like the below before your test:
test.skip(({ browserName }) => browserName !== 'firefox');
And on the chance that one of your mobile browsers is also Firefox, you may need the isMobile check too. (Or actually, since isMobile is apparently not supported by Firefox, you could probably check the hasTouch option instead.)
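Putting those together, a sketch of the file-level skip (using hasTouch as the stand-in for "mobile", as discussed above; the test itself is just a placeholder) might look like:

import { test } from '@playwright/test';

// Skip every test in this file unless we're on desktop (non-touch) Firefox.
test.skip(
  ({ browserName, hasTouch }) => browserName !== 'firefox' || hasTouch,
  'Runs only on desktop Firefox',
);

test('triggers the slow event', async ({ page }) => {
  // ... the event-triggering steps go here
});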
Of course, choose whichever approach works best for you and your tests!
Hope that helps!

Related

Bazel clean only a subset of the cached rules

I am currently developing in a monorepo that has a pretty large workspace file.
Right now, I am noticing that one of my testing rules is not getting its dependency rules re-built when I update one of my tests. Here is an example of this:
load("#npm//#grafana/toolkit:index.bzl", "grafana_toolkit")
load("#build_bazel_rules_nodejs//:index.bzl", "copy_to_bin")
APPLICATION_DEPS = glob(
[
# My updated test file is included in this glob
"src/**/*",
],
) + [
"my-config-files.json"
]
RULE_DEPS = [
"#npm//#grafana/data",
"#npm//#grafana/ui",
"#npm//emotion",
"#npm//fs-extra",
]
copy_to_bin(
name = "bin_files",
srcs = APPLICATION_DEPS,
)
grafana_toolkit(
name = "test",
args = [
"plugin:test",
],
chdir = package_name(),
data = RULE_DEPS + [
":bin_files",
],
)
I then have a test file called, say, something.test.ts. I run bazel run :test, my test fails, and I see the problem and fix it. The problem is that the next time I run my test, I can see from the output that it's still failing because it's running the old test instead of the new one.
The Problem
The way I normally fix this sort of stale-file issue is by running bazel clean. The problem is that bazel clean cleans EVERYTHING, and that makes re-running all the build steps take pretty damn long. I'm wondering if there is a way to clean only a subset of the cache (maybe only the output of my bin_files rule, for example). That way, rather than starting all over again, I only rebuild what I want to rebuild.
I've actually found that a pretty quick and easy way to do what I was originally asking is to go to the bazel-bin directory and delete the output of whichever rule I want to re-run. So in this case, I could delete the bin_files output in my bazel-bin directory and then run my bin_files rule again.
With that being said, I think @ahumsky might be right in that if you need to do this, it's more likely a bug with something else. In my case, I was running a build version of my rule instead of a test version of my rule, so cleaning a subset of my cache didn't really have anything to do with my original problem.

How to setup a reusable Geb test script (to be used by other test scripts)

So I have just created a Geb script that tests the creation of a report. Let's call this Script A.
I have other test cases I need to run that depend on the previous report being created, but I still want Script A to be a standalone test. We will call the subsequent script Script B.
Furthermore, Script A generates a pair of numbers that will be needed in subsequent scripts (to verify the data got recorded accurately).
Is there a way I can set up Geb such that Script B executes Script A and is able to pull those 2 numbers from Script A to be used in Script B?
In summary, there will be a number of scripts that are dependent on the actions of Script A (which is itself a test). I want to be able to modularize Script A so that it can be executed from other scripts. What would be the best way to do this?
For reuse, and to avoid repeating yourself, I would put the report creation into a separate method in a new class such as ReportGenerator; this would generate the report given a set of parameters (if required) and return the report figures for use in whatever test you like.
You could then call that in any spec you want, with no reliance on other specs.

How do I get Visual Studio Load Test to show one test per user?

I am tasked with creating load tests for a web application. I'm using Visual Studio 2017's Web Performance and Load Testing tool. I created the project, created a Web Test script, then created the Load Test scenario. What we want is to test running the same script with various concurrent user counts (10, 20, 30, etc.).
Everything runs fine, but there is one small issue. No matter how many concurrent users I set up to run the test, the result page shows only one test was run. While it is true that I only ran one test, it was run N times, where N is the number of concurrent users (I have it set up so that each user runs it once and then stops). I'd like for the final report to reflect this.
The only reason I expect this is possible is that we have a report from an old test that someone ran, which shows 40 Total tests for 40 users, and another result showed 30 Total tests for 30 users. They somehow got it to show one test per user. Unfortunately, all I have is screenshots of the result page; I don't have access to the actual tests or settings (it's a long story, but they are gone, and so is the person who made them). So now I'm basically stuck trying to reverse engineer how they did it.
Here are my settings for the load test:
On-premise
Test iterations: 1
Think times: Normal distribution
Load Pattern: Constant Load: 40 Users (I change this for different loads)
Test Mix Model: I thought this would be it, but I have tried all 4 of these, and they all just show 1 test run.
Test Mix: I add my test here. I thought about trying to add the same test 40 times for 40 users, but it only lets you add it once.
Network Mix: LAN
Browser Mix: IE11
Counter Sets: Nothing (default)
Does anyone have any idea how to make it do what I am trying to do?
The "Test iterations" field in the "Run settings" gives the total number of tests to be executed, provided the "Use test iterations" is set true, otherwise the test runs for the time given by the "Run duration". Setting "Test iterations" to 1 (i.e. one) means that the test will be executed once.
The "Test iterations" gives the total number of tests to be executed. If you want 30 virtual users and want each to execute 4 tests then you would need to set 30*4 = 120 iterations.
The "Scenario" properties include a "Maximum test iterations", this should be left at zero so it does not conflict with the values in the "run settings".
To ensure that each simulated user only executes one test, set the "Percentage of new users" property in the "Scenario" to 100. See "The Effect of Percentage of New Users" section of this page.

Running various ruby scripts from browser

I have created a set of tools that do not necessarily meet the requirements to be called a "web app"…
Those tools are usually run by me from the command line, but sometimes a command needs to be run without my appearance at the crime scene. One option would be to just ssh to the box and run the command remotely, a second would be to cron it, and a third would be to use Rails to run the scripts from controllers or something like that. Is it possible to use some sort of Rack application?
e.g.:
require 'rack'

app = Proc.new do |env|
  ['200', {'Content-Type' => 'text/html'}, ['Hello world!']]
end

Rack::Handler::WEBrick.run app
Suggestions?
Update: Those are Ruby scripts (i.e. something like update_data.rb, generate_report.rb, etc.). I would like to have, say, a grid of icons, each of which would open a form where the user has to copy text from one field to another to eliminate "misfires" with time-intensive tasks, hit "EXECUTE", and see the output in, say, a read-only text area.

Getting test results from Eunit in Erlang

I am working with Erlang and EUnit to do unit testing, and I would like to write a test runner to automate the running of my unit tests. The problem is that eunit:test/1 seems to only return "error" or "ok", not a list of the tests that ran and whether each passed or failed.
So is there a way to run tests and get back some form of a data structure of what tests ran and their pass/fail state?
If you are using rebar you don't have to implement your own runner. You can simply run:
rebar eunit
Rebar will compile and run all tests in the test directory (as well as the EUnit tests inside your modules). Furthermore, rebar allows you to set the same options in rebar.config as in the shell:
{eunit_opts, [verbose, {report,{eunit_surefire,[{dir,"."}]}}]}.
You can use these options also in the shell:
> eunit:test([foo], [verbose, {report,{eunit_surefire,[{dir,"."}]}}]).
See also documentation for verbose option and structured report.
An alternative would be to use Common Test instead of EUnit. Common Test comes with a runner (the ct_run command) and gives you more flexibility in your test setup, but it is also a little more complex to use. Common Test is weaker on available macros but produces very comprehensible HTML reports.
There is no easy or documented way, but there are currently two ways you can do this. One is to pass the 'event_log' option when you run the tests:
eunit:test(my_module, [event_log])
(this is undocumented and was really only meant for debugging). The resulting file "eunit-events.log" is a text file that can be read by Erlang using file:consult(Filename).
The more powerful way (and not really all that difficult) is to implement a custom event listener and give it as an option to eunit:
eunit:test(my_module, [{report, my_listener_module}])
This isn't documented yet, but it ought to be. A listener module implements the eunit_listener behaviour (see src/eunit_listener.erl). There are only five callback functions to implement. Look at src/eunit_tty.erl and src/eunit_surefire.erl for examples.
I've just pushed to GitHub a very trivial listener which stores the EUnit results in a DETS table. This can be useful if you need to further process the data, since it is stored as Erlang terms in the DETS table.
https://github.com/prof3ta/eunit_terms
Example of usage:
> eunit:test([fact_test], [{report,{eunit_terms,[]}}]).
All 3 tests passed.
ok
> {ok, Ref} = dets:open_file(results).
{ok,#Ref<0.0.0.114>}
> dets:lookup(Ref, testsuite).
[{testsuite,<<"module 'fact_test'">>,8,<<>>,3,0,0,0,
[{testcase,{fact_test,fact_zero_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_neg_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_pos_test,0,0},[],ok,0,<<>>}]}]
Hope this helps.
