Currently I'm developing browser tests, but sometimes they are flaky, so I want to add retries to those tests. I don't know if there is an easy way to add retries to a project with Wallaby and ExUnit via configuration, or if there is a library out there for it. Thanks
I want to run my UI tests in such a way that if any given test within a test suite were to fail, the entire test run is stopped. I understand there is an option to continue after failure, but this only applies to a single test case.
I plan on running the UI tests on a CI server such as TeamCity, and naturally I want the build process to fail if any of the tests fail, without having to wait for the remaining tests to complete. Does anyone know if this is possible, or will I need to somehow parse the output of each test case to detect a failure and stop at that point, possibly with some exit code?
Thanks.
I really recommend using fastlane for this. You can configure your test runs there, as fastlane has a flag called stop_after_first_error that does exactly what you want!
You can find it here:
https://github.com/fastlane/fastlane
I don't think there's any way to do this natively. You might want to reconsider how you're writing UI tests, because you shouldn't have to do this. That being said, here's something you can do; it might be a bit of a hack.
Create a boolean flag like lastTestPassed. At the end of each test, once its assertions have passed, set the flag to true. Then, in the setUp method, check whether lastTestPassed is true. If it is, set it back to false; if it isn't, don't allow the test to run (for example, by calling XCTFail()).
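A rough sketch of what that could look like in Swift with XCTest (the class and test names are made up, and this relies on the tests within one class running in order):

```swift
import XCTest

class OrderedUITests: XCTestCase {

    // Static so the value survives across test methods
    // (XCTest creates a fresh instance for each test).
    static var lastTestPassed = true

    override func setUp() {
        super.setUp()
        continueAfterFailure = false

        // If the previous test never flipped the flag back, fail fast so this
        // test (and, applied to every test, the rest of the class) won't run.
        if !OrderedUITests.lastTestPassed {
            XCTFail("A previous test failed; aborting the remaining tests.")
            return
        }

        // Reset; each test must set this back to true as its last step.
        OrderedUITests.lastTestPassed = false
    }

    func testSomething() {
        // ... drive the UI and make assertions ...
        OrderedUITests.lastTestPassed = true  // only reached if nothing failed above
    }
}
```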
Again, this is a bit of a hack, but it might work until you can refactor how your UI tests are written.
I've been struggling with something for quite a while and can't find a solution.
I have a test project (Cucumber, Maven). I configured Jenkins to pull the project from GitHub, build it, and execute the code (a Selenium test script) on a Jenkins slave, and that works perfectly. I added a few more slaves, tagged them, and I'm now able to execute the same job in parallel (the same test cases on different machines).
My next step is to use Selenium Grid Extras (https://github.com/groupon/Selenium-Grid-Extras) in order to get some cool features like video recording, browser updating, Selenium updates, etc.
Now, I know that in order to use the grid I need to address it from my code and also define the desired capabilities (browser, OS, etc.), as in the sketch below.
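For reference, a minimal sketch of addressing a grid hub with Selenium's RemoteWebDriver and DesiredCapabilities; the hub URL and target page are placeholders:

```java
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridExample {
    public static void main(String[] args) throws Exception {
        // The hub address is illustrative; use your own hub's host and port.
        URL hubUrl = new URL("http://grid-hub.example.com:4444/wd/hub");

        // Ask the hub for a Firefox node; the hub matches these
        // capabilities against the nodes registered with it.
        DesiredCapabilities capabilities = DesiredCapabilities.firefox();

        WebDriver driver = new RemoteWebDriver(hubUrl, capabilities);
        try {
            driver.get("http://example.com");
            System.out.println(driver.getTitle());
        } finally {
            driver.quit();  // free the node for the next queued request
        }
    }
}
```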
Currently, when I run the same job twice, the second request is queued until the first one ends; yet if I run the same code from two developer machines, it runs on two different nodes and the grid handles both requests.
I'm not sure what is wrong with my Jenkins configuration or my grid hub configuration; I've checked them again and again and everything looks good :-)
So I guess I'm missing something.
Any advice/direction/idea would be highly appreciated.
Thanks
Ronen
I have a scenario that requires me to revert to a clean snapshot before or after each Coded UI test method is executed. I have researched using the TFS Lab Management API (see http://blogs.microsoft.co.il/shair/2011/12/22/tfs-api-part-42-getting-started-with-lab-management-api/) to revert to a specific snapshot as part of the TestInitialize and/or TestCleanup method, but I can only get this to work when executed locally. When executed on a remote machine I get errors authenticating to the TFS service.
My other option is to somehow add a 'foreach test in testrun' loop to the build process template (LabDefaultTemplate.11.xaml). I have identified the area where I think this would fit best, but cannot find any documentation on running a loop over each test.
Is this possible, or is there a built-in way to accomplish this that I have overlooked?
To do what you propose, you should switch to Release Management and create a separate test run for each of your groupings, in your case each test. You can use RM to orchestrate looping through each of your runs and executing them.
http://nakedalm.com/execute-tests-release-management-visual-studio-2013/
However, running a UI test should not break your application, so I would suggest that either your tests are far too long or there is some flaw in the design of your application.
My Jenkins job runs many tests that create log files. In case of failure, I want to look at the log of the failed test. I'd rather use the Jenkins web server for this, and ideally have a link in the email it sends me.
Is there any plugin that can do it? Or maybe another way?
You provide few details in your question, so it is impossible to give specific advice. At a general level, this is already possible: when your test framework creates JUnit XML files with the test results, the test output can be included between the <failure> and </failure> tags. Test frameworks usually take care of this automatically, so perhaps you are not using a test framework and are generating the XML result files by hand?
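For illustration, a minimal JUnit-style result file could look like this (suite, class, and test names are made up); Jenkins then shows the text inside <failure> on the failed test's result page:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="MySuite" tests="1" failures="1">
  <testcase classname="MySuite" name="testSomething">
    <failure message="expected 2 but was 3">
      ... captured log output of the failed test goes here ...
    </failure>
  </testcase>
</testsuite>
```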
I recommend you adopt some test framework. It is usually well worth the effort.
Is there a way to mark a FitNesse test such that it will not be run as part of a suite, but can still be run manually?
We have our FitNesse tests running as part of our continuous integration, so new tests that are not yet implemented cause the build to fail. We'd like a way to allow our testers and BAs to be able to add new tests that will fail while still continuing to validate the existing tests as part of continuous integration.
Any suggestions?
The best way to do this is with suite tags. You can mark tests with a tag from the properties page, and you can then either filter for that tag or filter to exclude it.
In this case I would exclude tests with a "NotOnCI" tag, then add the following argument to the URL:
ExcludeSuiteFilter=NotOnCI
As a full URL, this might look like:
http://localhost:8080/FrontPage?suite&ExcludeSuiteFilter=NotOnCI
You can select multiple tags by separating them with commas (for example, ExcludeSuiteFilter=NotOnCI,Slow), but they act as an "or", not an "and".
Check the FitNesse user guide for more details. http://fitnesse.org/FitNesse.UserGuide.TestSuites.TagsAndFilters
Would it make sense to have multiple suites: one for regression tests that should always pass, and another for tests that are not yet implemented?
Testers and BAs can add tests/suites to the latter suite and the CI server only runs tests in the former suite.
Once a developer believes they have implemented the behavior, they can move the test/suite relating to that functionality to the 'regression' suite so that it will be checked in continuous integration.
This might make the status of a test/suite a bit more explicit/obvious than just having a tag. It would also provide a clear handover from development to test/BA to indicate the implementation is finished.
If you just want a test/suite not to run during an overall run of a suite that contains it, you can also simply tick 'Skip (Recursive)' on the properties page of that test/suite (below 'Page Type').