I have a Bazel project with lots of unit tests, a subset of which requires GPU support.
I would like to map the unit tests that have GPU requirements to the remote strategy but keep all other unit tests sandboxed, running locally. Since all tests share the same mnemonic (TestRunner), I am not sure how to do that using the strategy or strategy_regexp flags.
Is this use case supported by Bazel? Or do you have to either map all or no tests to remote execution?
Kind regards!
We had a similar problem (a different flavour, though: have some //some/test/target run locally while the others run as remote/worker/sandboxed) and ended up with the following:
test --strategy=remote,worker,sandboxed # force sandbox: no local
test --strategy_regexp=//some/test/target=local # allow local
test --worker_sandboxing=true
test --incompatible_legacy_local_fallback=false
Unfortunately, this regex can become quite big, so consider some clever naming if you have a bunch of those "exceptional" tests, e.g. give all of them a _special_tag suffix and write a regex like
test --strategy_regexp=//.*_special_tag=local # allow local for *_special_tag tests
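In a BUILD file the naming convention could then look something like this (target and file names here are made up for illustration; any test rule works the same way):

cc_test(
    name = "render_gpu_special_tag",  # matched by the regexp above, so it runs locally
    srcs = ["render_gpu_test.cc"],
)

cc_test(
    name = "parser_test",             # not matched, so it runs remote/worker/sandboxed
    srcs = ["parser_test.cc"],
)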
What can help with debugging is to run
bazel aquery //some/test/target 2>/dev/null
to see which actions that regexp actually matches.
I would like some advice on the following question about Allure: I use Jenkins + pytest to run my tests. The same tests run on several virtual machines, which differ in operating system (different Linux distributions) and test environment. After running the tests, I want to combine the results from all the machines into one report. Here the question arises: if I put all the reports in one directory and generate a report, the results from different machines are treated as reruns of the same test and merged into one. How can I get around this, so that they are not merged and it is possible to tell which result came from which machine? Thanks.
I have solved this by overriding the names of the tests/suites.
This means writing a bit of code: work with the before listeners, where you can get the current test name and override it. Set the test name to OS + browser or something else unique.
When you combine the reports, the results will then be unique and properly displayed.
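With allure-pytest, one possible way to do this is a small autouse fixture in conftest.py that prefixes every test title with the hostname. This is just a sketch, assuming allure-pytest is installed; the fixture name is made up, and the prefix can be anything that identifies the machine/environment:

# conftest.py -- sketch, assumes allure-pytest is installed
import socket

import allure
import pytest

@pytest.fixture(autouse=True)
def unique_allure_title(request):
    # Prefix the Allure title with the hostname so results from different
    # machines are not merged as retries of the same test.
    allure.dynamic.title(f'[{socket.gethostname()}] {request.node.name}')
    yield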
I ran into a similar issue with behave where Allure was treating each parallel build as a retry of the first build. I realize this isn't the same as pytest, but perhaps it'll help.
I was inspired by the previous answer and started experimenting. By changing the scenario name(s) within the feature, I was able to make Allure recognize each parallel build as separate tests. I accomplished this by adding a before_feature method to my environment.py file that simply added the hostname to each scenario name within that feature:
import socket

def before_feature(context, feature):
    # Prefix scenario names with the hostname so Allure treats results from different machines as separate tests
    for scenario in feature.scenarios:
        scenario.name = f'[{socket.gethostname()}] {scenario.name}'
Originally, I tried to directly change scenario.name in before_scenario but that seemed to have no effect in Allure.
Is there any possibility to make HTTP requests in a Starlark build rule, or via some executable invoked by ctx.actions.run?
I know it can be done with bazel test (inside test runners), but can it be done in the build phase? I know this goes against network sandboxing (but let's say we turn it off).
You can set execution_requirements on the action to include requires-network (see the sketch after the notes below).
Some notes:
Network requests are only possible within actions; they can't be made from Starlark itself
Bazel won't know to rerun actions that depend on network requests if the remote information has changed. There would need to be a way to make an action always run, which hasn't been decided on: https://github.com/bazelbuild/bazel/issues/3041
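A minimal sketch of what that might look like in a rule implementation (the rule, attribute, and fetcher tool names here are hypothetical; the fetcher would be some curl/wget wrapper you provide yourself):

def _http_fetch_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name + ".json")
    ctx.actions.run(
        outputs = [out],
        executable = ctx.executable._fetcher,
        arguments = [ctx.attr.url, out.path],
        # Opt this action out of network sandboxing
        execution_requirements = {"requires-network": ""},
    )
    return [DefaultInfo(files = depset([out]))]

http_fetch = rule(
    implementation = _http_fetch_impl,
    attrs = {
        "url": attr.string(mandatory = True),
        # Hypothetical wrapper script target that performs the request
        "_fetcher": attr.label(
            default = "//tools:fetcher",
            executable = True,
            cfg = "exec",
        ),
    },
)

Keep in mind the second note above: Bazel will happily cache the action's output, so the request is only re-made when the action's inputs change.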
I would like all my scenarios to run, but I'd like to tag some scenarios so they are only excluded when running in certain environments. For example, when a scenario has no tags I want it to run in all environments, but if I tag it with #dev I want it to be excluded from all non-dev environments.
Is there a way to use scope binding to achieve this, or is it better implemented with execution flags on the test runner?
Other than flags passed to the test runner, I was thinking maybe a scenario hook would be possible, but I'm not sure how to implement the exclude condition, because once the scenario has started I can't find a way to abort it.
When using xUnit, Tags are translated into Traits.
With them, you can filter which scenarios you want to execute.
This runs all tests with the #dev tag:
xunit.console.exe ... -trait "Category=dev"
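To run everything except those scenarios instead (closer to the exclusion you describe), the console runner also accepts -notrait, e.g.:
xunit.console.exe ... -notrait "Category=dev"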
Brendan Connolly wrote a nice blog post about xUnit traits: http://www.brendanconnolly.net/organizing-tests-with-xunit-traits/
About aborting an already started scenario: This is not possible.
As far as I know, JMeter allows you to send multiple POST requests with different parameters (e.g. { "value": "value1"}, {"value": "value2"}, ...). However, I'm more comfortable using a terminal-based interface similar to ab or siege. Basically, I need to load test a server, simulating the case in which some requests are not previously cached.
Are there alternatives to JMeter for Linux that are able to use different parameters for a POST request?
UPDATE
As far as I can tell, JMeter requires the creation of a test plan (jmx file) in order to run via the command line. Unfortunately, this test plan needs to be built using the GUI, which is precisely what I want to avoid.
UPDATE 2
I will use JMeter because it offers dynamic parameters for POST requests and most alternatives depend on JMeter. However, if anyone knows of a standalone library that works exclusively from the terminal (similar to ab), please let me know.
You can use JMeter in terminal mode; it's called non-GUI mode.
To parameterize the requests, just use a CSV Data Set Config to load variables (varName, for example) per thread, then reference them with ${varName}.
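For example (a minimal sketch; the file name and values are made up), a params.csv read by a CSV Data Set Config with variableNames set to varName could simply contain one value per line:

value1
value2
value3

Each thread then picks the next line, and ${varName} resolves to that value in the request.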
See:
http://jmeter.apache.org/usermanual/get-started.html#non_gui
http://jmeter.apache.org/usermanual/component_reference.html#CSV_Data_Set_Config
And for a nice report at the end:
http://jmeter.apache.org/usermanual/generating-dashboard.html
If you don't want to use the GUI even for building the test plan, then look at:
https://github.com/flood-io/ruby-jmeter
It allows you to generate the JMX from a DSL file.
Examples here:
https://github.com/flood-io/ruby-jmeter/tree/master/examples
DSL here:
https://github.com/flood-io/ruby-jmeter/blob/master/lib/ruby-jmeter/DSL.md
# Only needed when running from a ruby-jmeter checkout; with the gem installed, the require alone is enough
$LOAD_PATH.unshift(File.join(File.dirname(__FILE__), '..', 'lib'))
require 'ruby-jmeter'

test do
  # Load one value of myParam per thread from the CSV file
  csv_data_set_config name: 'MyCsv', filename: '/path to file', variableNames: 'myParam'
  threads count: 10 do
    visit name: 'Qwant Search', url: 'https://lite.qwant.com/?q=flood.io&t=web&p=${myParam}'
  end
end.jmx(file: "path to your output plan")
Save the file as ruby-jmeter-csv.rb.
You can then generate the plan with:
ruby ruby-jmeter-csv.rb
And run it in non-GUI mode.
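The non-GUI run would then look something like this (file names are placeholders; -e and -o additionally generate the HTML dashboard mentioned above):
jmeter -n -t your-output-plan.jmx -l results.jtl -e -o report/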
In fact, the JMeter GUI should be used for test development and debugging only; when it comes to running the load test, it is recommended to run JMeter in command-line mode, or via the Ant task or Maven plugin. There are also a couple of "geekier" alternatives, i.e.:
JMeter .jmx scripts are basically XML files, so you can use your favourite text editor to create or amend them
You can use the JMeter API to create and kick off JMeter tests from Java code
If you're still looking for an alternative, here are a few free and open-source load testing tools:
Grinder - you can write scripts in Jython
Gatling - you can write scripts in Scala-based DSL
Tsung - exists for Linux and Unix-based platforms only, Erlang-based. Scripts are XML files.
Taurus - an automation framework which supports all of the aforementioned tools (and some more); it is Python-based, and its configuration files use a simple YAML syntax.
See Open Source Load Testing Tools: Which One Should You Use? for more information on the above tools and a comparison of them with JMeter.
Is there a way to mark a fitnesse test such that it will not be run as part of a suite, but it can still be run manually?
We have our FitNesse tests running as part of our continuous integration, so new tests that are not yet implemented cause the build to fail. We'd like a way to allow our testers and BAs to be able to add new tests that will fail while still continuing to validate the existing tests as part of continuous integration.
Any suggestions?
The best way to do this is with suite tags. You can mark tests with a tag from the properties page, and then you can either filter for them or filter to exclude them.
In this case I would exclude with a "NotOnCI" tag, then add the following argument to the URL:
ExcludeSuiteFilter=NotOnCI
As a full URL this might then look like:
http://localhost:8080/FrontPage?test&ExcludeSuiteFilter=NotOnCI
You can select multiple tags by separating them with commas, but they act as "or", not "and".
Check the FitNesse user guide for more details. http://fitnesse.org/FitNesse.UserGuide.TestSuites.TagsAndFilters
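For example (the second tag name here is just an illustration), ExcludeSuiteFilter=NotOnCI,Slow would skip pages tagged with either NotOnCI or Slow.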
Would it make sense to have multiple Suites, one for regression tests that should always pass, and another one for the tests that are not yet implemented?
Testers and BAs can add tests/suites to the latter suite and the CI server only runs tests in the former suite.
Once a developer believes they have implemented the behavior, they can move the test/suite relating to that functionality to the 'regression' suite so that it will be checked in continuous integration.
This might make the status of a test/suite a bit more explicit/obvious than just having a tag. It would also provide a clear handover from development to test/BA to indicate the implementation is finished.
If you just want a test/suite not to run during an overall run of a suite that contains it, you could also just tick 'Skip (Recursive)' on the properties page of that test/suite (below 'Page Type').