I have a Google Test based test suite. The tests manipulate the filesystem and do other things that I don't want left behind in case of a test crash, and they also don't play nicely with running in parallel, so I want to run each test case in a new container. I am currently using CTest (a.k.a. CMake test) to run the gtest binary, but I am not very attached to either of these, so if the best option is some other tool, I can accept that.
Can anyone suggest a way to automate this? Right now I am adding each individual test case to CTest manually, with a call to docker run as part of the test command, but this is brittle and time-consuming. Maybe I am doing this wrong?
You can run your GTest runner with --gtest_list_tests to list all tests.
You can then loop through this list and call your GTest runner with --gtest_filter set to the name of a specific test.
The format of the list is a bit awkward to parse, though, so you need some shell scripting to extract the actual test names.
Check the exit code of the GTest runner to know whether the test succeeded or failed.
I do not know if this integrates well with CTest.
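For illustration, here is a minimal shell sketch of that loop. The image name my-test-image and the binary path /tests/runner are placeholders, and the parsing only handles plain suites (value-/type-parameterised suites add extra annotations to the listing):

    #!/bin/sh
    # Run every Google Test case in its own container.
    suite=""
    ./runner --gtest_list_tests | while read -r line; do
        case "$line" in
            *.)                              # suite line, e.g. "MySuite."
                suite="$line"
                ;;
            *)
                [ -n "$suite" ] || continue  # skip any preamble before the first suite
                test="${line%% *}"           # test line; drop trailing "# ..." comments
                docker run --rm my-test-image \
                    /tests/runner --gtest_filter="${suite}${test}" \
                    || echo "FAILED: ${suite}${test}"
                ;;
        esac
    done

The same loop could instead emit add_test() lines for CMake, which would keep CTest as the runner while still generating the test list automatically.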
I'm testing a React Native app with Detox (and Jest) and I'd like to have several e2e files with different purposes (e.g. login, filling a form, and so on) and run them in a specific order (the login e2e file should go first). Running them in random order wouldn't do the job.
The goal is to avoid having one huge file.
Note: I'm running the tests on iOS simulator.
Short answer: you can't, but keep reading.
Jest's conceptual model is that each test file is a unit and is totally isolated from the others. It makes things easier to reason about and it allows parallelisation. If your tests need to be run in a specific order then they are logically a single unit so need to be specified in a single test file.
However that doesn't preclude you from splitting your tests into several files. You can have one file which Jest recognises (e.g. full-suite.e2e.js) and have that file include several others (e.g. login.js, forms.js, etc.). That way Jest runs everything as one file, in your specified order, yet you can organise your individual tests in a way that makes logical sense to you.
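For example (the file names here are just illustrative):

    // full-suite.e2e.js -- the only file Jest picks up; it pulls in the
    // individual suites in the order they must run.
    require('./login');   // must run first
    require('./forms');

    // login.js -- an ordinary describe block; it registers when required.
    describe('login', () => {
      it('logs the user in', async () => {
        // ... Detox steps ...
      });
    });

Since require() is synchronous, the describe blocks register in exactly the order of the require calls.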
I want to run a shell script between two XCUITests. But since the package is created and installed on the device or simulator, how can this be achieved? Is there a way to execute a shell script on the host machine that has the device connected (either a simulator or a real iPhone) between the tests?
You should probably set up client-server communication between the device and the host machine in order to do things like this.
This exact approach has already been implemented in
https://github.com/Subito-it/SBTUITestTunnelHost
Another option is to move the shell code entirely into the test code. For example, if you use a shell script to communicate with a remote server, you could consider doing that from the device instead.
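A rough sketch of the device side, assuming a small helper HTTP server you run yourself on the Mac (the port and endpoint are made up; on a simulator the host is reachable as localhost, while a real device needs the Mac's IP):

    // In the UI test target: ask the helper server on the host to run a
    // script and block until it reports completion.
    import XCTest

    func runHostScript(named name: String) {
        let url = URL(string: "http://localhost:8080/run?script=\(name)")!
        let done = XCTestExpectation(description: "host script finished")
        URLSession.shared.dataTask(with: url) { _, response, _ in
            // The helper is assumed to reply 200 once the script exits.
            XCTAssertEqual((response as? HTTPURLResponse)?.statusCode, 200)
            done.fulfill()
        }.resume()
        _ = XCTWaiter().wait(for: [done], timeout: 60)
    }

SBTUITestTunnelHost packages up exactly this kind of plumbing so you don't have to maintain it yourself.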
I think you can try inserting a script inside the test plan.
From Xcode:
Go to the Product menu
Press Test Plan
Manage Test Plans
Then, from the right side, select Test, and insert your script inside the pre-actions and/or post-actions.
PS: I'm not sure if you can do that for selected tests. Maybe the scripts will run before/after each test.
We recently added Coded UI Tests to our solution. The tests complete successfully when run through Test Explorer, but when the code is checked in and a build is triggered, all the CUITs fail (error message is below).
I have gone to each of the links in the error message. The first one details how to set up a test agent to run the tests. We don't really need the tests on all our environments, as we are setting up a lab environment to run them, and adding the test agent would require submitting a ticket, which would probably take months to get a response, not to mention walking the person through what needs to be done.
The second link is dead and I'm pretty sure I don't want to change the build to be interactive.
I am hoping there is an easy way to change the build definition (which I have access to) so that it ignores all Coded UI Tests but still runs the Unit Tests. Is this possible? Is there an easier way of going about this? Each set of tests has its own project file: all Coded UI Tests in one project and all Unit Tests in another.
Thanks in advance.
Here is the error that is on all of the coded UI tests:
Microsoft.VisualStudio.TestTools.UITest.Extension.UITestException: Microsoft.VisualStudio.TestTools.UITest.Extension.UITestException: To run tests that interact with the desktop, you must set up the test agent to run as an interactive process. For more information, see "How to: Set Up Your Test Agent to Run Tests That Interact with the Desktop" (http://go.microsoft.com/fwlink/?LinkId=255012)
If you are running the tests as part of your team build, you must also set up the build agent to run as an interactive process. For more information, see "How to: Configure and Run Scheduled Tests After Building Your Application" (http://go.microsoft.com/fwlink/?LinkId=254735).
One thing you can do is to group all the coded UI tests into a particular test category and exclude running those in the build. We do this with our "integration" tests and only choose to run our true unit tests for speed reasons.
See the link below for how to use test categories:
https://msdn.microsoft.com/en-us/library/dd286683.aspx
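For instance, with the MSTest attributes (the class and category names here are your choice):

    // Tag the Coded UI tests with a category...
    [CodedUITest]
    public class LoginUiTests
    {
        [TestMethod, TestCategory("CodedUI")]
        public void CanLogIn() { /* ... */ }
    }

The build can then be told to skip that category, e.g. with a test case filter such as TestCategory!=CodedUI.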
Also see the link below for how to edit your build definition to run only the tests you want:
Excluding tests from tfs build
You can use some naming patterns and combine that with the Test Case Filter option in the execute-tests step:
https://jessehouwing.net/xaml-build-staged-execution-of-unit-tests/
Something like:

    FullyQualifiedName ~ .Ui.

This would require the UI tests to have .Ui in their namespace.
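On the command line, the equivalent filter would look something like this (the assembly name is illustrative):

    vstest.console.exe MyTests.dll /TestCaseFilter:"FullyQualifiedName~.Ui."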
We use Jenkins and PHPUnit in our development. For a long time I have wanted to start using executable requirements in our team. An architect/team leader/someone who defines requirements writes tests before the actual code; the actual code is then written by another team member. The executable-requirements tests are therefore committed to the repository before the actual code exists, Jenkins rebuilds the project and rightfully fails, and the project remains in a failed state until the new code is written, which defeats the XP rule of keeping the project in a good state at all times.
Is there any way to tell PHPUnit that such-and-such tests should not be run under Jenkins, while still being easy for any dev to execute locally? Tweaking phpunit.xml is not really desirable: local changes to the tests are better, as they are easier to keep track of.
We tried markTestIncomplete() and markTestSkipped(), but they don't really do what we need: the executable-requirements tests really are complete and should not be skipped, and using these functions prevents easy execution of such tests during development.
The best approach in our case would be a PHPUnit option like --do-not-run-requirements, used by the PHPUnit run that Jenkins executes. On a dev machine this option would not be used, and the actual executable-requirements tests would have an #executableRequirements meta-comment at the beginning (removed only after the actual code is created and tested). The issue is that PHPUnit does not have such functionality.
Maybe there is a better way to achieve executable requirements without "false" failures in Jenkins?
With PHPUnit, tests can be filtered for execution. Either annotate the tests that should not be executed in a given environment with the @group annotation and then use --exclude-group <name-of-group> (or the <group> element of PHPUnit's XML configuration file), or use the --filter <pattern> command-line option. Both approaches are covered in the documentation.
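A minimal sketch (the group name, class and test are made up):

    <?php
    use PHPUnit\Framework\TestCase;

    class WithdrawalRequirementsTest extends TestCase
    {
        /**
         * @group requirements
         */
        public function testWithdrawalIsRejectedWhenBalanceIsTooLow(): void
        {
            // executable requirement, written before the production code
        }
    }

Jenkins would then run phpunit --exclude-group requirements, while developers run phpunit as usual (or phpunit --group requirements to run only these tests).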
For long time I wanted to start to use Test Driven Development in our team. I don't see any problem with writing tests before actual code.
This is not TDD.
To quote from Wikipedia:
first the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code to pass that test, ...
Notice the test case in the singular.
Having said that, you are quite welcome to define your own development methodology whereby one developer writes tests in the plural, commits them to version control, and another developer writes code to satisfy the tests.
The solution to your dilemma is to commit the tests to a branch and have the other developer work in that branch. Once all the tests are passing, merge with trunk; Jenkins will then see the whole lot and give its opinion on whether the tests pass.
Just don't call it TDD.
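A minimal sketch of that flow, with illustrative branch and file names:

    # Requirement author:
    git checkout -b executable-requirements
    git add tests/WithdrawalRequirementsTest.php
    git commit -m "Add executable requirements for withdrawals"
    git push -u origin executable-requirements

    # Implementer works on that branch until the tests pass, then:
    git checkout master
    git merge executable-requirements   # Jenkins now sees tests and code together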
I imagine it would not be very straightforward in practice to write tests without any basic framework in place. Hence, the 'minimum amount of code to pass the test' approach you suggested is not a bad idea.
Not necessarily a TDD approach
Who writes the tests? If someone who works with requirements or a QA member writes the tests, you could probably simply write empty tests (so they don't fail). This approach makes sure that the developer covers all the cases the other person has thought about. Example test methods would be public void testThatObjectUnderTestReturnsXWhenACondition and public void testThatObjectUnderTestReturnsZWhenBCondition. (I like long descriptive names so there is no confusion about what I am thinking, or you can use comments to describe your tests.) The devs can write the code and finish the tests, or let someone else finish the tests later. Another way of stating this is to write executable requirements. See Cucumber/Steak/JBehave as executable-requirements tools.
Having said the above, we need to differentiate between whether you are trying to write executable requirements or unit/integration/acceptance tests.
If you want to write executable requirements, anyone can write them, and they could be empty to stop them from failing. The devs will then fill them in and make sure the requirements are covered. My opinion is to let the devs handle unit/integration/acceptance tests using TDD (actual TDD), and not to separate the responsibility of writing the code from writing the appropriate unit/integration/acceptance tests for it.
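In PHPUnit terms, such an empty-but-named test could look like this (the names are illustrative):

    <?php
    use PHPUnit\Framework\TestCase;

    class ObjectUnderTestTest extends TestCase
    {
        public function testThatObjectUnderTestReturnsXWhenACondition(): void
        {
            // Intentionally empty: the requirement is captured in the name,
            // and the developer fills in the body along with the production code.
            $this->addToAssertionCount(1); // keeps PHPUnit from flagging it as risky
        }
    }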
Robot Framework is a keyword-based testing framework. I have to test a remote server, so I need to do some prerequisite steps like:
i) copy the artifact to the remote machine
ii) start the application server on the remote machine
iii) run the tests on the remote server
Before Robot Framework we did this using an Ant script.
I can run only the test script with Robot. But can we do all of these tasks using Robot scripting, and if yes, what is the advantage of this?
Yes, you could do this all with Robot. You can write a keyword in Python that does all of those steps, and then call that keyword in the Suite Setup of a test suite.
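A small sketch of that shape (the keyword and file names are made up):

    # deploy.py -- a Python keyword library
    def deploy_and_start_server():
        """Copy the artifact to the remote machine and start the app server."""
        # ... scp/ssh logic, or calls into other tools ...

    *** Settings ***
    Library        deploy.py
    Suite Setup    Deploy And Start Server

Robot Framework turns the Python function deploy_and_start_server into the keyword Deploy And Start Server automatically.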
I'm not sure what the advantages would be. What you're trying to do are two conceptually different tasks: one is deployment and one is testing. I don't see any advantage in combining them. One distinct disadvantage is that you then can't run your tests against an already deployed system. Though, I guess your keyword could be smart enough to first check if the application has been deployed, and only deploy it if it hasn't.
One advantage is that you have one less tool in your toolchain, which might reduce the complexity of your system as a whole. That means people can run your tests without first having installed ant (unless your system also needs to be built with ant).
If you are asking why you would use Robot Framework instead of writing a script to do the testing, the answer is that the framework provides all the metrics and reports you would otherwise have to script yourself.
Choosing a framework makes your entire QA process easier to manage and saves you the effort of writing code for the parts that are common to the QA process, so you can focus on writing code to test your product.
Furthermore, since there is an ecosystem around the framework, you can probably find existing code to do just about everything you may need, and get answers on how to do something instead of having to change your script.
Yes, you can do this with Robot fairly easily.
The first two can be done easily with SSHLibrary, and the third one depends. Do you mean for the Robot Framework test case to be run locally on the other server? That can indeed be done, with configuration files defining which server to run the test case on.
Here are the keywords you can use from Robot Framework's SSHLibrary.
To copy the artifact to the remote machine:
Open Connection
Login or Login With Private Key
Put Directory or Put File
To start the application server on the remote machine:
Execute Command
To run the tests on the remote machine (assuming the setup is already there):
Execute Command (use pybot path_to_test_file)
You may experience connection losses, but once the tests are triggered they will keep running on the remote machine.
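Put together, a suite setup using those keywords might look like this (the host, credentials and paths are placeholders):

    *** Settings ***
    Library          SSHLibrary
    Suite Setup      Deploy And Start Server

    *** Keywords ***
    Deploy And Start Server
        Open Connection    10.0.0.5
        Login              user    password
        Put File           build/app.tar.gz    /opt/app/
        Execute Command    /opt/app/start_server.sh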