Combine Allure reports from several machines into one without retry - Jenkins

I'd like some advice on the following Allure question: I use Jenkins + pytest to run my tests. The same tests run on several virtual machines that differ in operating system (different Linux distributions) and test environment. After the run I want to combine the results from all the machines into one report. Here the problem arises: if I put all the results in one directory and generate a report, the results from different machines are treated as retries of the same test and merged into one entry. How can I get around this, so that the results are not merged and I can still tell which result came from which machine? Thanks.

I solved this by overriding the names of the tests/suites.
This means writing a little code: hook into the "before" listeners, where you can get the current test name and override it. Set the test name to include the OS + browser or something else unique to the machine.
When you then combine the reports, the results are unique and displayed properly.
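For a pytest setup like the one in the question, one way to do the renaming is an autouse fixture in conftest.py that prefixes every reported test title with the machine's hostname. This is only a minimal sketch: it assumes the allure-pytest plugin and its allure.dynamic.title helper, and whether the retry/history grouping follows the overridden title can depend on the Allure version in use.

import socket

import allure
import pytest

@pytest.fixture(autouse=True)
def tag_test_with_host(request):
    # Prefix the reported test title with this machine's hostname so results
    # from different machines are not collapsed into retries of one test.
    allure.dynamic.title(f"[{socket.gethostname()}] {request.node.name}")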

I ran into a similar issue with behave where Allure was treating each parallel build as a retry of the first build. I realize this isn't the same as pytest, but perhaps it'll help.
I was inspired by the previous answer and started experimenting. By changing the scenario name(s) within the feature, I was able to make Allure recognize each parallel build as separate tests. I accomplished this by adding a before_feature method to my environment.py file that simply added the hostname to each scenario name within that feature:
import socket

def before_feature(context, feature):
    for scenario in feature.scenarios:
        # Prefix each scenario name with the hostname so machines stay distinct.
        scenario.name = f'[{socket.gethostname()}] {scenario.name}'
Originally, I tried to directly change scenario.name in before_scenario but that seemed to have no effect in Allure.

Related

How to skip already-run steps in kubeflow pipeline?

I'm building an ML pipeline in Kubeflow and I have a question. Is there anything out of the box that allows me to configure my pipeline, such that a step is not rerun if its output exists? I've thought of ways to do this manually (either checking for existing outputs as I'm compiling the pipeline, or having an initial step that returns a list of steps to run, or manually configuring which steps to run as an input parameter) but I cannot find a native way of handling this.
The common use case for me would be to rerun the model step without rerunning any of the data pre-processing, but without having to maintain a separate "model development" pipeline that differs from the more general production one that includes the pre-processing step. Or perhaps I'm iterating on an evaluation phase and don't even need retraining, but I would still like to use the same pipeline. Right now, colleagues are working around this with several pipelines that each start at a different step.
I'm coming at it from a map-reduce perspective, where this is trivial: the framework automatically detects which outputs are present and doesn't rebuild them by default, but easily gives you the option to rebuild some or all of them. Maybe this is biasing the way I work with Kubeflow?
Any help appreciated!
Ok, I thought I'd put on here what I've found to solve this.
As of September 2019, this is not a feature of Kubeflow (according to people working on it), but there is a caching feature in the works that should not rerun any steps whose outputs exist.
In the meantime I implemented it manually, via a pipeline parameter (first_step_to_run in the snippet below) indicating the step from which everything needs to be rerun. Something like this:
with dsl.Condition(first_step_to_run == "prep"):
create_ops(StartingStep.prep)
with dsl.Condition(first_step_to_run == "train"):
create_ops(StartingStep.train)
with dsl.Condition(first_step_to_run == "evaluate"):
create_ops(StartingStep.evaluate)
with a create_ops method that understands what order to create steps in and chains them appropriately (we actually have seven steps so I really wanted to avoid copy/pasting all over).
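As a rough illustration of that idea (not the original code): a create_ops-style helper can walk an ordered list of step builders, starting from the requested step, and chain each op onto the previous one. This is only a sketch assuming the KFP v1 dsl.ContainerOp API; the step names, images, and the StartingStep enum are made up for the example.

from enum import IntEnum
from kfp import dsl

class StartingStep(IntEnum):
    prep = 0
    train = 1
    evaluate = 2

def create_ops(first_step):
    # Ordered step builders; each entry creates one ContainerOp when called.
    steps = [
        ("prep", lambda: dsl.ContainerOp(name="prep", image="registry.example.com/prep:latest")),
        ("train", lambda: dsl.ContainerOp(name="train", image="registry.example.com/train:latest")),
        ("evaluate", lambda: dsl.ContainerOp(name="evaluate", image="registry.example.com/evaluate:latest")),
    ]
    previous = None
    # Only build the ops from the requested starting step onwards,
    # chaining each one after the previous so they still run in order.
    for _name, build in steps[int(first_step):]:
        op = build()
        if previous is not None:
            op.after(previous)
        previous = op

Each dsl.Condition branch above then only builds the tail of the pipeline from the chosen step onwards.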

Jenkins plugin for triggering build from Dashboard with input argument?

We have a project with more than 100k regression tests, so we'd like the option to manually trigger a run and check whether specific test suite(s) have been broken by a particular change, instead of running all the tests.
I have a build project configured so that everything is in place; it only expects a parameter value to be appended, i.e. -DwildcardSuites=<Suite Name Pattern>.
Is there a Jenkins plugin that would let us enter input text that is appended to the Maven 'Goals and options' and corresponds to the specific suite pattern we'd like to run?
Extensible Choice? https://wiki.jenkins-ci.org/display/JENKINS/Extensible+Choice+Parameter+plugin
There are a few similar plugins; they tend to have one or more of "Choice", "Active" and "Dynamic" in the name.

How to show quantitative results in Jenkins

The Jenkins test results screen shows only pass/fail results.
I would like to show quantitative results (numbers, percentages, time durations, etc.) parsed from logs,
e.g. memory usage, run time of specific methods, and so on.
What is the best way to do so?
Thanks
Good question.
I imagine you're looking to extend your Jenkins instance with plugins that report more information about the tests you've run. The Performance Plugin seems relevant, but it requires some experience with JMeter (a Java-based performance measurement tool): you run a JMeter task every time your build runs, and the plugin reads and displays its output:
https://wiki.jenkins-ci.org/display/JENKINS/Performance+Plugin
The readme on the plugin's detail page explains how to set up a project to run JMeter (see 'Configuring a project to run jmeter performance test' near the bottom).
Another way to do something similar, though not as directly tied to a specific Jenkins build, is to run resource monitors (like Cacti or collectd) on the machines running the tests and analyse those results post-build, but again, that's outside the Jenkins context.
HTH.

Configure Grid Extras (Groupon) with Jenkins in order to run Cucumber tests

I've been struggling with something for quite a while and can't find a solution.
I have a test project (Cucumber, Maven). I configured Jenkins to pull the project from GitHub, then build and execute the code (Selenium test scripts) on a Jenkins slave, and that works perfectly. I added a few more slaves, tagged them, and I can execute the same job in parallel (the same test cases on different machines).
My next step is to use Grid Extras (https://github.com/groupon/Selenium-Grid-Extras) in order to get some nice features like video recording, browser updating, Selenium updates, etc.
Now, I know that in order to use the grid I need to address it from my code and also define the desired capabilities (browser, OS, etc.).
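Something like the following is what I mean by addressing the grid from the code (just a minimal sketch in the Selenium 3-style Python bindings to keep it short; my real project is Java/Cucumber, and the hub URL and capabilities here are placeholders):

from selenium import webdriver

# Placeholder address; in the real setup this points at the Grid Extras hub.
HUB_URL = "http://my-grid-hub:4444/wd/hub"

driver = webdriver.Remote(
    command_executor=HUB_URL,
    desired_capabilities={
        "browserName": "chrome",  # which browser the node should provide
        "platform": "LINUX",      # which OS the node should be running
    },
)
driver.get("http://example.com")
driver.quit()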
Currently, when I run the same job twice, the second request is queued until the first one ends; yet if I run the same code from two developers' machines, it runs on two different nodes and the grid handles both requests.
I'm not sure what's wrong with my Jenkins configuration or my grid hub configuration; I've checked them again and again and everything looks good :-)
So I guess I'm missing something.
Any advice/direction/idea would be highly appreciated.
Thanks
Ronen

Revert snapshot before or after each test method in a TFS build-deploy-test workflow?

I have a scenario that requires me to revert to a clean snapshot before or after each Coded UI test method is executed. I have researched using the TFS Lab Management API (see http://blogs.microsoft.co.il/shair/2011/12/22/tfs-api-part-42-getting-started-with-lab-management-api/) to revert to a specific snapshot as part of the TestInitialize and/or TestCleanup method, but I can only get this to work when executed locally. When executed on a remote machine I get errors authenticating to the TFS service.
My other option is to somehow add a 'foreach test in testrun' loop to the build process template (LabDefaultTemplate.11.xaml). I have identified the area where I think this would fit best, but cannot find any documentation on running a loop over each test.
Is this possible, or is there a built-in method to accomplish this that I have overlooked?
To do what you propose, you should switch to Release Management and create a separate test run for each of your groupings, in your case each test. You can use RM to orchestrate looping through each of your runs and executing them.
http://nakedalm.com/execute-tests-release-management-visual-studio-2013/
However, running a UI test should not break your application; I would suggest that either your tests are far too long, or there is some flaw in the design of your application.
