XCUITests run shell scripts between tests - ios

I want to run a shell script between two XCUITests. But since the test package is built and installed on the device or simulator, how can this be achieved? Is there a way to execute a shell script on the host machine that the device is connected to (either a Simulator or a real iPhone) between the tests?

You should probably set up client-server communication between the device and the host machine in order to do things like this.
This exact approach has already been implemented in
https://github.com/Subito-it/SBTUITestTunnelHost
Another option is to move the shell code entirely into the test code. For example, if you use a shell script to communicate with a remote server, you should consider doing that from the device instead.
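For a rough idea of what the client-server setup looks like without pulling in SBTUITestTunnelHost, here is a minimal sketch of the host side (the port, the script name, and the use of netcat are all assumptions for illustration, not the library's actual mechanism):

# minimal sketch, run on the Mac that hosts the Simulator / has the iPhone attached
# between_tests.sh and port 8085 are placeholders
while true; do
  # answer one request, then run the shell script between tests
  printf 'HTTP/1.1 200 OK\r\nContent-Length: 0\r\nConnection: close\r\n\r\n' | nc -l 8085 > /dev/null
  ./between_tests.sh
done

The test side then only needs to fire a plain HTTP request at the host's address on that port (localhost works from the Simulator; on a real iPhone you would use the Mac's LAN IP) whenever it wants the script to run.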

I think you can try inserting a script inside the test plan.
From Xcode:
Go to the Product menu
Choose Test Plan
Manage Test Plans
Then, from the right side, select Test, and insert your script inside the pre-actions and/or post-actions.
PS: I'm not sure if you can do that for selected tests only. Maybe the scripts will run before/after each test.
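For illustration, the pre-action or post-action body is just a shell script run on the host machine, so it could simply call whatever script you need around the test run (the path below is a placeholder, and ${SRCROOT} is only available if you set 'Provide build settings from' to your app target in the action editor):

# runs on the host machine around the Test action, not on the device
"${SRCROOT}/scripts/between_tests.sh"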

Related

Run each test in a new container

I have a Google Test-based test suite. Since the tests manipulate the filesystem and do other things that I don't want left behind in case of a test crash, besides just not playing nicely with running tests in parallel, I want to run each test case in a new container. I am currently using CTest (aka CMake test) to run the gtest binary, but I am not very attached to either of these, so if the best option is some other tool, I can accept that.
Can anyone suggest a way to automate this? Right now I am adding each individual test case manually to CTest with a call to docker run as part of the test command, but it is brittle and time consuming. Maybe I am doing this wrong?
You can run your GTest runner with --gtest_list_tests to list all tests.
You can then loop through this list and call your GTest runner with --gtest_filter set to the name of a specific test.
The format of the list is a bit awkward to parse, though, so you will need some shell scripting skills to get the actual test names.
Check the exit code of the GTest runner to know whether the test succeeded or failed.
I do not know if this integrates well with CTest.
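As a hedged sketch of how that loop could look (the image name my-tests, the binary path /tests/runner, and the list parsing are assumptions, and the --gtest_list_tests output format can vary between googletest versions):

#!/bin/sh
# turn the "Suite." / "  TestName" listing into "Suite.TestName",
# then run each test in its own throwaway container and check the exit code
./runner --gtest_list_tests | awk '
  /^[^ ]/ && $1 ~ /\.$/ { suite = $1 }      # un-indented lines name the test suite
  /^ /                  { print suite $1 }  # indented lines name the individual tests
' | while read -r test; do
  docker run --rm my-tests /tests/runner --gtest_filter="$test" \
    || echo "FAILED: $test"
done

If you want to keep CTest in the picture, the same generated names could each be registered with their own add_test() call, with docker run as the command, instead of looping in a shell script.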

Getting the name of the development machine at compile time?

I'm building an iOS app that communicates with a server. We have a test / staging server, a production server and each dev has a local instance of the server for development.
I've added some simple logic which configures the address of the server depending on whether we're running a TestFlight build, an App Store build or a debug build (for development). For the development build, the app tries to hit localhost, which is all well and good if we're running on the Simulator, but not so great if we're running on device.
I'm aware of ngrok, which is a possible solution, but since the exposed URL is partially randomly generated (for the free version at least), it's not a great fit. I was thinking that a workable approach for development could be to check the name of the development machine at compile time and insert this value. But I'm not sure how to achieve this, if it's possible at all. I remember doing compile time variable filtering using ant / maven and environment property files back in my Java days, but I'm wondering if there's a fairly straightforward way to achieve this in Xcode.
Can anybody shed any light on this?
So I carried on digging, and went with the following solution. Elements of this have been touched upon in numerous other posts here.
I added a new header file called HostNameMacroHeader.h to my project.
I added a 'Run Script' phase to my build, before the 'Compile Sources' phase. The script contains the following:
echo "//***AUTOGENERATED FILE***" > ${SRCROOT}/MyAppName/HostNameMacroHeader.h
echo "#define BUILD_HOST_NAME #\"`hostname`\"" >> ${SRCROOT}/MyAppName/HostNameMacroHeader.h
Then in my implementation, where I want to use the server address, I use the generated BUILD_HOST_NAME macro.
It's a somewhat hacky solution, but it does the job for now. Suggestions and cleaner versions are welcome.
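As a hedged refinement of the same idea (not part of the original answer), the script can skip rewriting the header when the hostname has not changed, which avoids dirtying the file and triggering needless rebuilds:

# regenerate HostNameMacroHeader.h only when the hostname actually changes
HEADER="${SRCROOT}/MyAppName/HostNameMacroHeader.h"
NEW_CONTENT="//***AUTOGENERATED FILE***
#define BUILD_HOST_NAME @\"$(hostname)\""
if [ ! -f "$HEADER" ] || [ "$(cat "$HEADER")" != "$NEW_CONTENT" ]; then
  printf '%s\n' "$NEW_CONTENT" > "$HEADER"
fi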

Android Test Sharding with Spoon

I am using Spoon and Espresso to automate UI/functional instrumentation tests on our Android app.
I would like to know if there is a way to distribute the instrumentation tests across multiple connected devices and/or emulators so that I can reduce the test execution time.
Ex: I have, say, 300 tests that take 15 mins to run on 1 emulator. Is there a way I can add more emulators (say 4), distribute 75 tests to each emulator and reduce the test execution time?
Appreciate your inputs on this.
What you are looking for is called auto-sharding. You have to call the spoon-runner with --shard and add the serials from all connected devices with -serial. You can find the serials with adb devices.
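Putting that together, a hedged sketch of the invocation might look like this (spoon-runner.jar and the APK paths are placeholders, and the exact flag spellings should be checked against the runner's usage output for your Spoon version):

# collect the serials of all connected devices and emulators
SERIALS=$(adb devices | awk 'NR > 1 && $2 == "device" { print $1 }')
CMD="java -jar spoon-runner.jar --apk app.apk --test-apk app-androidTest.apk --shard"
for S in $SERIALS; do
  CMD="$CMD -serial $S"     # one -serial entry per device, as described above
done
eval "$CMD"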
You can choose more than one device in the device chooser dialog. Hold the Shift or Ctrl key when clicking.
Another solution is to use Gradle. On the right side of Android Studio choose Gradle, then verification, and finally connectedAndroidTest. This gives you the same effect as running in the console:
./gradlew connectedAndroidTest or gradlew.bat connectedAndroidTest
This runs all test cases on all available devices (physical and emulators). To choose exactly which test classes run, you should create tasks in build.gradle.
Learning the basics of the Groovy programming language will help you write Gradle task scripts more effectively. Here's an example of a task written in Groovy: Run gradle task X after "connectedAndroidTest" task is successful
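If all you need is to pick a subset of test classes, a lighter-weight alternative to writing a custom Gradle task is to pass instrumentation runner arguments on the command line (the class name below is a placeholder):

# run only one test class on all connected devices via the standard AndroidJUnitRunner argument
./gradlew connectedAndroidTest \
  -Pandroid.testInstrumentationRunnerArguments.class=com.example.app.SomeUiTest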
You may also want to learn about Continuous Integration and tools like Jenkins or Travis, which you can configure to run specific test cases on every commit.
As an example please take a look at this build log of my Android project: https://travis-ci.org/piotrek1543/WeatherforPoznan/builds/126944044
and here is a configuration of Travis: https://github.com/piotrek1543/WeatherforPoznan/blob/master/.travis.yml
Have any more questions? Please feel free to ask.
Hope it helps.

How to run MTM tests on multiple product builds?

We have MTM tests running on Release build of our product (Desktop Application).
Now we want the same tests to run on two product builds: Beta and Release.
When a test run is initiated from MTM (or tcm), we need a way to pass a 'value' to the test run telling it which version/build of the product it needs to test. This 'value' will then be read in the test method and correct decision will be taken while the tests are executing (like installation path, test results file updates etc).
Is there any way to achieve this in TFS or MTM?
Consider using Test Settings.
If you start automated tests from MTM, you can specify the Test Settings to use when running those tests.
In the "Advanced" part of the Test Settings you can specify scripts to run on your environment before running the tests.
Create two scripts, one for the Release and one for the Beta version. These scripts could create a file with particular content, set an environment variable, or do something else that can then be checked by your test when it's running.
Create two Test Settings, one for the Release and one for the Beta version, and set up the appropriate script to run for each Test Settings.
Use one of these Test Settings when starting the tests.
This way you can pass information to your test.
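For illustration only, each such script could be as small as writing a marker file that the test method then reads (shown here as a shell sketch with placeholder paths; on a Windows test environment it would typically be a .bat or PowerShell script doing the same thing):

# Beta variant of the pre-test script; the Release variant would write RELEASE instead
echo "BETA" > /tmp/product_build_type.txt

The test method then reads that file (or the environment variable) and picks the matching installation path and results handling.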
We also faced a similar problem in our project. We decided to modify the build definition template to take the product build type (Beta, RTM, or Release) as an input parameter. Using this value during the TFS build, we can either update the TFS build name to reflect the product build type or create a file (XML) as part of the TFS build process to contain this type detail.
See here for more detail on how to add Arguments and Parameters to build definition: http://www.ewaldhofman.nl/post/2010/04/27/Customize-Team-Build-2010-e28093-Part-2-Add-arguments-and-variables.aspx
Please take a look at the link below to see if it can be used to suit your needs.
http://blogs.infosupport.com/switching-browser-in-codedui-or-selenium-tests-based-on-mtm-configuration/
One question: are you using the Build-Deploy-Test flow to install the product on the environment, or are you doing it some other way?
When you select a set of automated tests to run and pick the build from the drop-down list, this tells MTM which drop folder to look in. So if your configuration is code, as it should be, then you can set this up to be automatic.
It is not possible to pass additional variables when you start a test run in MTM.
You could set up your tests to run from the Release Management tool instead. You would then be able to configure the environment however you like based on passed-in variables.
http://nakedalm.com/execute-tests-release-management-visual-studio-2013/

Robot Framework use cases

Robot Framework is a keyword-based testing framework. I have to test a remote server, so
I need to do some prerequisite steps:
i) copy the artifact to the remote machine
ii) start the application server on the remote machine
iii) run the tests on the remote server
Before Robot Framework we did this using an Ant script.
I can only run the test script with Robot. But can we do all of these tasks using Robot scripting, and if yes, what is the advantage of this?
Yes, you could do all of this with Robot. You can write a keyword in Python that does all of those steps. You could then call that keyword in the suite setup of a test suite.
I'm not sure what the advantages would be. What you're trying to do are two conceptually different tasks: one is deployment and one is testing. I don't see any advantage in combining them. One distinct disadvantage is that you then can't run your tests against an already deployed system. Though, I guess your keyword could be smart enough to first check if the application has been deployed, and only deploy it if it hasn't.
One advantage is that you have one less tool in your toolchain, which might reduce the complexity of your system as a whole. That means people can run your tests without first having installed ant (unless your system also needs to be built with ant).
If you are asking why you would use Robot Framework instead of writing a script to do the testing, the answer is that the framework provides all the metrics and reports you would otherwise have to script for yourself.
Choosing a framework makes your entire QA process easier to manage and saves you the effort of writing code for the parts that are common to the QA process, so you can focus on writing code that tests your product.
Furthermore, since there is an ecosystem around the framework, you can probably find existing code to do just about everything you may need, and get answers on how to do something instead of having to work it out in your own script.
Yes, you can do this with robot, decently easily.
The first two can be done easily with SSHLibrary, and the third one depends. Do you mean for the Robot Framework test case to be run locally on the other server? That can indeed be done with configuration files to define what server to run the test case on.
Here are the keywords you can use from Robot Framework's SSHLibrary.
To copy an artifact to the remote machine:
Open Connection
Login or Login With Private Key
Put Directory or Put File
To start the application server on the remote machine:
Execute Command
To run the tests on the remote machine (assuming the setup is already there):
Execute Command (use pybot path_to_test_file)
You may experience connection losses, but once the tests are triggered they will keep running on the remote machine.
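For comparison, here is a hedged sketch of the same three steps done with plain scp/ssh from the host (host name and paths are placeholders), which is roughly what those keywords do for you:

# 1. copy the artifact to the remote machine
scp -r build/artifact user@remote-server:/opt/app/
# 2. start the application server on the remote machine
ssh user@remote-server '/opt/app/start_server.sh'
# 3. run the Robot tests on the remote machine
ssh user@remote-server 'pybot /opt/app/tests/suite.robot'

Doing it through SSHLibrary instead keeps the whole flow inside the suite setup, so it is logged and reported together with the tests.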
