Is there a UI Exerciser for iOS like Monkey on Android?

I'm looking for a development tool that will allow me to send randomly generated user inputs (touches, hardkeys, gestures) to an iOS device (not the simulator), like Monkey on Android.

The UI Automation instrument in Instruments allows you to script interaction with your user interface, taking screenshots or testing for valid responses along the way. These testing scripts are written in JavaScript, which lets you run fairly complex tests.
The tests I've run have always been directed, but I don't see a reason why you couldn't use something like a random() function to trigger randomly placed touch events, etc. From this, you could build your own custom Monkey-like tool for hammering on your application. Even better, you could run other instruments at the same time as this one to identify potential memory leaks or CPU hotspots.
I show how UI Automation works as part of the Testing session in my course on iTunes U, for which my notes can be viewed here.


XCUITest: How to jump into app code? How to modify the state of the app under test?

Coming from an Android/Espresso background, I am still struggling with XCUITest and UI testing for iOS.
My question is about two related but distinct issues:
How to compile and link against sources of the app under test?
How to jump into methods of the app under test and modify its state at runtime?
To tackle these questions, we should first understand the differences between Xcode's "unit test targets" and "UI test targets".
XCUITests run in a completely separate process and cannot jump into methods of the app under test. Moreover, by default, XCUITests are not linked against any sources of the app under test.
In contrast, Xcode's unit tests are linked against the app sources. There is also the option to use "@testable imports". In theory, this means that unit tests can jump into arbitrary app code. However, unit tests do not run against the actual app. Instead, they run against a stripped-down version of the iOS SDK without any UI.
Now there are different workarounds for these constraints:
Add some selected source files to the UI test target. This does not make it possible to call into the app, but it at least allows sharing selected code between the app and the UI tests.
Pass launch arguments via CommandLine.arguments from the UI tests to the app under test. This makes it possible to apply test-specific configurations to the app under test. However, those launch arguments need to be parsed and interpreted by the app, which pollutes the app with testing code. Moreover, launch arguments are only a non-interactive way to change the behavior of the app under test.
Implement a "debug UI" that is only accessible to XCUITest. Again, this has the drawback of polluting the app code.
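To make the launch-argument workaround concrete, here is a minimal sketch; the flag names, the "Welcome" label, and the helper function are made up for illustration:

```swift
import XCTest

// UI-test side (XCUITest target): pass flags to the app at launch.
final class LoginUITests: XCTestCase {
    func testLoginWithStubbedUser() {
        let app = XCUIApplication()
        app.launchArguments += ["-uiTesting", "-stubbedUser"]
        app.launch()
        XCTAssertTrue(app.staticTexts["Welcome"].waitForExistence(timeout: 5))
    }
}

// App side (e.g. called from the AppDelegate): parse the flag and configure
// the app. This branch is exactly the "pollution" the question refers to.
func configureForUITestsIfNeeded() {
    if ProcessInfo.processInfo.arguments.contains("-uiTesting") {
        // e.g. point networking at a stub server, pre-authenticate a user, ...
    }
}
```

Note that this channel is one-way and only available at launch time, which is the non-interactivity limitation described above.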
This leads to my concluding questions:
Which alternative methods exist to make XCUITests more powerful/dynamic/flexible?
Can I compile and link UI tests against the entire app source and all pod dependencies, instead of only a few selected files?
Is it possible to gain the power of Android's instrumented tests + Espresso, where we can perform arbitrary state modifications on the app under test?
Why We Need This
In response to #theMikeSwan, I would like to clarify my stance on UI test architecture.
UI tests should not need to link to app code; they are designed to simulate a user tapping away inside your app. If you were to jump into the app code during these tests, you would no longer be testing what your app does in the real world; you would be testing what it does when manipulated in a way no user ever could. UI tests should not have any need of any app code, any more than a user does.
I agree that manipulating the app in such a way is an anti-pattern that should only be used in rare situations.
However, I have a very different stance on what should be possible.
In my view, the right approach for UI tests is not black-box testing but gray-box testing. Although we want UI tests to be as black-boxy as possible, there are situations where we want to dig deep into implementation details of the app under test.
Just to give you a few examples:
Extensibility: No UI testing framework can provide an API for each and every use case. Project requirements differ, and there are times when we want to write our own functions to modify the app state.
Internal state assertions: I want to be able to write custom assertions for the state of the app (assertions that do not rely solely on the UI). In my current Android project, we had a notoriously broken subsystem. I wrote custom assertions against this subsystem to guard against regression bugs.
Shared mock objects: In my current Android project, we have custom hardware that is not available for UI tests. We replaced this hardware with mock objects. We run assertions on those mock objects right from the UI tests. These assertions work seamlessly via shared memory. Moreover, I do not want to pollute the app code with all the mock implementations.
Keep test data outside: In my current Android project, we load test data from JUnit right into the app. With XCUITest's command-line arguments, this would be far more limited.
Custom synchronization mechanisms: In my current Android project, we have wrapper classes around multithreading infrastructure to synchronize our UI tests with background tasks. This synchronization is hard to achieve without shared memory (e.g. Espresso IdlingResources).
Trivial code sharing: In my current iOS project, I share a simple definition file for the aforementioned launch arguments. This makes it possible to pass launch arguments in a type-safe way without duplicating string literals. Although this is a minor use case, it still shows that selected code sharing can be valuable.
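Such a shared definition file might look like the following sketch; the enum and its cases are hypothetical:

```swift
// LaunchArguments.swift -- one file added to BOTH the app target and the
// UI-test target, so each flag's string literal lives in exactly one place.
enum LaunchArgument: String {
    case uiTesting      = "-uiTesting"
    case useMockBackend = "-useMockBackend"
}

// UI-test side:
//   app.launchArguments.append(LaunchArgument.useMockBackend.rawValue)
//
// App side:
//   ProcessInfo.processInfo.arguments.contains(LaunchArgument.useMockBackend.rawValue)
```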
For UI tests you shouldn't have to pollute your app code too much. You could use a single command-line argument to indicate UI tests are running and use that to load up some test data, log in a test user, or pick a testing endpoint for network calls. With good architecture you will only need to make the adjustment once, when the app first launches, with the rest of your code oblivious that it is using test data (much like if you have a development environment and a production environment that you switch between for network calls).
This is exactly the thing that I am doing in my current iOS project, and this is exactly the thing that I want to avoid.
Although a good architecture can avoid too much havoc, it is still a pollution of the app code. Moreover, this does not solve any of the use cases that I highlighted above.
By proposing such a solution, you essentially admit that radical black-box testing is inferior to gray-box testing.
As in many parts of life, a differentiated view is better than a radical "use only the tools that we give you, you should not need to do this".
UI tests should not need to link to app code; they are designed to simulate a user tapping away inside your app. If you were to jump into the app code during these tests, you would no longer be testing what your app does in the real world; you would be testing what it does when manipulated in a way no user ever could. UI tests should not have any need of any app code, any more than a user does.
For unit tests and integration tests, of course, you use @testable import … to get access to any methods and properties that are not marked private or fileprivate. Anything marked private or fileprivate will still be inaccessible from test code, but everything else, including internal, will be accessible. These are the tests where you should intentionally throw in data that can't possibly occur in the real world to make sure your code can handle it. These tests should still not reach into a method and make changes, or the test won't really be testing how the code behaves.
You can create as many unit test targets as you want in a project and you can use one or more of those targets to hold integration tests rather than unit tests. You can then specify which targets run at various times so that your slower integration tests don't run every time you test and slow you down.
The environment unit and integration tests run in actually has everything. You can create an instance of a view controller and call loadViewIfNeeded() to have the entire view set up. You can then test for the existence of various outlets and trigger them to send actions (check out UIControl's sendActions(for:) method). Provided you have set up the necessary mocks, this will let you verify that when a user taps button A, a call gets sent to the proper method of thing B.
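A minimal sketch of such a test; LoginViewController, MockAuthService, and the storyboard identifiers are hypothetical names standing in for your own types:

```swift
import UIKit
import XCTest
@testable import MyApp  // substitute the app's actual module name

final class LoginViewControllerTests: XCTestCase {
    func testTappingLoginCallsService() {
        // Instantiate from the storyboard so the outlets get connected.
        let storyboard = UIStoryboard(name: "Main", bundle: nil)
        let sut = storyboard.instantiateViewController(
            withIdentifier: "Login") as! LoginViewController

        // Inject a mock before the view loads.
        let service = MockAuthService()
        sut.authService = service

        // Load the entire view hierarchy, as described above.
        sut.loadViewIfNeeded()

        XCTAssertNotNil(sut.loginButton, "outlet should be connected")

        // Simulate the user tap by firing the control's actions.
        sut.loginButton.sendActions(for: .touchUpInside)

        XCTAssertTrue(service.loginWasCalled)
    }
}
```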
For UI tests you shouldn't have to pollute your app code too much. You could use a single command-line argument to indicate UI tests are running and use that to load up some test data, log in a test user, or pick a testing endpoint for network calls. With good architecture you will only need to make the adjustment once, when the app first launches, with the rest of your code oblivious that it is using test data (much like if you have a development environment and a production environment that you switch between for network calls).
If you want to learn more about testing Swift Paul Hudson has a very good book you can check out https://www.hackingwithswift.com/store/testing-swift. It has plenty of examples of the various kinds of tests and good advice on how to split them up.
Update based on your edits and comments:
It looks like what you really want is Integration Tests. These are easy to miss in the world of Xcode as they don't have their own kind of target to create. They use the Unit Test target but test multiple things working together.
Provided you haven't added private or fileprivate to any of your outlets, you can create tests in a Unit Test target that make sure the outlets exist and then inject text or trigger their actions as needed to simulate a user navigating through your app.
Normally this kind of testing would just go from one view controller to a second one to test that the right view controller gets created when an action happens but nothing says it can't go further.
You won't get images of the screen for a failed test like you do with UI Tests, and if you use storyboards, make sure to instantiate your view controllers from the storyboard. Be sure that you are grabbing any navigation controllers and such that are required.
This methodology will allow you to act like you are navigating through the app while being able to manipulate whatever data you need as it goes into various methods.
If you have a method with 10 lines in it and you want to tweak the data between lines 7 and 8, you would need to have an external call to something mockable and make your changes there, or use a breakpoint with a debugger command that makes the change. This breakpoint trick is very useful for debugging, but I wouldn't use it for tests, since deleting the breakpoint would break the test.
I had to do this for a specific app. What we did was create a kind of debug menu, accessible only to UI tests (using launch arguments to make it available) and displayed using a certain gesture (two taps with two fingers, in our case).
This debug menu is just a pop-up view appearing over all screens. In this view, we add buttons that allow us to update the state of the app.
You can then use XCUITest to display this menu and interact with buttons.
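A sketch of how such a gated debug menu might be wired up; the flag name and the installer class are hypothetical:

```swift
import UIKit

// App side: install a hidden debug menu only when the UI tests request it.
final class DebugMenuInstaller: NSObject {
    static let shared = DebugMenuInstaller()

    func installIfRequested(on window: UIWindow) {
        guard ProcessInfo.processInfo.arguments.contains("-uiTestDebugMenu") else { return }
        let recognizer = UITapGestureRecognizer(target: self, action: #selector(showMenu))
        recognizer.numberOfTapsRequired = 2      // two taps...
        recognizer.numberOfTouchesRequired = 2   // ...with two fingers
        window.addGestureRecognizer(recognizer)
    }

    @objc private func showMenu() {
        // Present a pop-up view over the current screen with buttons that
        // mutate app state, e.g. "log in as test user", "reset database".
    }
}
```

On the test side, XCUIElement offers twoFingerTap(); whether two quick calls register as the double tap depends on timing, so the trigger gesture may need tuning.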
I've come across this same problem, OP. Coming from the Android ecosystem, attempting to carry its solutions over to iOS will have you banging your head wondering why Apple does things this way. It makes things difficult.
In our case, we replicated a network-mocking solution for iOS that allows us to control the app state using static response files. However, using a standalone proxy for this makes running XCUITest difficult on physical devices. Foundation's URLSession provides features that let you do the same thing from inside the configured session objects (see URLSessionConfiguration.protocolClasses).
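A minimal sketch of that URLProtocol approach; the class name and its stubs dictionary are illustrative, not the poster's actual implementation:

```swift
import Foundation

// A URLProtocol subclass that serves canned responses, keyed here by URL path.
final class MockURLProtocol: URLProtocol {
    static var stubs: [String: Data] = [:]   // path -> canned response body

    override class func canInit(with request: URLRequest) -> Bool {
        guard let path = request.url?.path else { return false }
        return stubs[path] != nil
    }

    override class func canonicalRequest(for request: URLRequest) -> URLRequest {
        request
    }

    override func startLoading() {
        guard let url = request.url,
              let data = MockURLProtocol.stubs[url.path],
              let response = HTTPURLResponse(url: url, statusCode: 200,
                                             httpVersion: nil, headerFields: nil)
        else { return }
        client?.urlProtocol(self, didReceive: response, cacheStoragePolicy: .notAllowed)
        client?.urlProtocol(self, didLoad: data)
        client?.urlProtocolDidFinishLoading(self)
    }

    override func stopLoading() {}
}

// Register it on the session configuration the app uses:
let configuration = URLSessionConfiguration.default
configuration.protocolClasses = [MockURLProtocol.self] + (configuration.protocolClasses ?? [])
let session = URLSession(configuration: configuration)
```

Because the stub lives in the app process, the stubs dictionary must be populated from within the app (e.g. driven by launch arguments); the test runner cannot write to it directly.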
Now our UI tests have an IPC problem, since the runner app is in its own process. Previously, the proxy and the UI tests lived in the same process, so it was easy to control the responses returned for certain requests.
This can be done using some odd bridges for interprocess communication, like CFMessagePort and some others (see NSHipster).
Hope this helps.
How exactly does the unit test environment work?
The unit tests are in a bundle which is injected into the app.
Actually, several bundles are injected. The overarching XCTest bundle is a framework called XCTest.framework, and you can actually see it inside the built app.
Your tests are a bundle too, with an .xctest suffix, and you can see that in the built app as well.
Okay, let's say you ask for one or more tests to run.
The app is compiled and runs in the normal way on the simulator or the device: for example, if there is a root view controller hierarchy, it is assembled normally, with all launch-time events firing the way they usually do (for instance, viewDidLoad, viewDidAppear, etc.).
At last, the launch mechanism takes its hands off. The test runner is willing to wait quite a long time for this moment to arrive. When the test runner sees that this moment has arrived, it executes the test bundle's executable and the tests begin to run. The test code is able to see the main bundle code because it has imported the main bundle as a module, so it is linked to it.
When the tests are finished, the app is abruptly torn down.
So what about UI tests?
UI tests are completely different.
Nothing is injected into your app.
What runs initially is a special separate test runner app, whose bundle identifier is named after your test suite with Runner appended; for example, com.apple.test.MyUITests-Runner. (You may even be able to see the test runner launch.)
The test runner app in turn backgrounds itself and launches your app in its own special environment and drives it from the outside using the Accessibility framework to "tap" buttons and "see" the interface. It has no access to your app's code; it is completely outside your app, "looking" at its interface and only its interface by means of Accessibility.
We use GCDWebServer to communicate between the test and the app under test.
The test asks the app under test to start this local server, and then the test can talk to the app through it. You can make requests to this server to fetch data from the app, as well as to tell the app to modify its behavior by providing data.
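A rough sketch of what this might look like with GCDWebServer; the paths, port, and the AppState singleton are hypothetical:

```swift
import GCDWebServer

// App side, guarded behind a UI-testing launch argument: start a local HTTP
// server that the test process can query over localhost.
let server = GCDWebServer()

// Let the test read internal app state...
server.addHandler(forMethod: "GET", path: "/state",
                  request: GCDWebServerRequest.self) { _ in
    GCDWebServerDataResponse(jsonObject: ["loggedIn": AppState.shared.isLoggedIn])
}

// ...and ask the app to modify its behavior.
server.addHandler(forMethod: "POST", path: "/reset",
                  request: GCDWebServerRequest.self) { _ in
    AppState.shared.reset()
    return GCDWebServerResponse(statusCode: 200)
}

server.start(withPort: 8080, bonjourName: nil)

// Test side: plain URLSession requests to http://localhost:8080/state, etc.
```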

Can you turn only one page into an app in meteor?

I have just tried to run
meteor run ios
That command emulates my application as an app. But there is just one page that would be interesting to have as an app. Can you control this in some way?
I don't think this is possible. The whole app gets exported regardless of platform, hence the universal/isomorphic-apps concept. And the universal-app concept is one that I'm starting to find fault with. That said, there is a better middle ground.
We'll call it pseudo-universal apps. (Probably a horrible name, but whatever :D)
Essentially, the concept is that you have three codebases, one for each device (web/iOS/Android), but share many of the same modules via something like npm, or potentially some other way of sharing code.
Then you can focus on the UI for each device and its strengths and weaknesses, but keep all the important logic you've built.
Check out the following:
https://voice.kadira.io/say-no-to-isomorphic-apps-b7b7c419c634#.3bn5ovts1
https://forums.meteor.com/t/say-no-to-universal-apps/16813/7
Hope this helps!
You can check whether the client code is executed on iOS or not, and change the app accordingly:
if(navigator.userAgent.match(/(iPad|iPhone|iPod)/g)) {
// Disable the links, and redirect to which page you want
}
But Justin's answer is right: a new platform usually needs more than just some tweaks. A quickly developed app has very low value for the user.

Is there any way or tool to check whether a particular iOS app is checking Jailbreak detection?

I'm just learning iPhone security out of curiosity. This is a complete beginner question.
I've seen the posts on Stack Overflow,
How do I detect that an iOS app is running on a jailbroken phone?
How to detect that the app is running on a jailbroken device?
Those answers provide information on whether the app is running on a jailbroken device or not. But I need to check whether the app performs jailbreak detection (not from a programmer's point of view, but from a pentester's point of view). Are there any tools or methods?
I'd achieve this by downloading Flex 2. With this tool, you can view all of the variables, functions, and procedures that are in an app.
Go to the Patches tab, press the '+' symbol, and locate the app to create a patch for. Then process the app by tapping it - don't worry about adding a patch name.
Next, when you're inside the processed app, you need to press "Add units". This will allow you to add overrides so that you can change what functions return, and so on.
Anyway, from here, you need to select a class to look inside. The jailbreak-detection functions and variables are always stored in the executable. Tap the app name again at the top of all the classes, under the 'executable' tab. Then just search.
Just search for "jailbreak" or "jailbroken"; if the app is running checks, this will return functions and vars related to them. I have yet to see an app that runs this check with a different function name that does not include "jailbreak" or "jailbroken".
If you'd like, I can show you how to override this detection.
I suggest you try the app "Highway Rider": it has detection, you can easily see and override it, and you can get the startup warning to go away if you want!

Is there any way to automate scanning of barcode from appium?

I need my app to scan barcodes automatically. I have the barcodes and I have the app; how can I make the app read physical barcodes using automation in Appium?
Manually, I can scan the code by pointing the camera at a barcode.
I don't know how to do this while executing a test suite.
I had the idea of placing the mobile device on a stand or tripod and placing the barcode in front of it.
But the problem is that this way we can test only one barcode. I want to run about 100-200 barcodes and see that app performance does not decrease. Can anyone suggest some ways?
This is a very interesting case. If you really want to test your app scanning barcodes through the camera, then instead of looking for a solution through Appium, I think you have to look for a solution that exactly matches your manual process.
You can click the scan button using Appium (I assume); for example, you can write a script to click this button every 10 seconds.
The challenge is to point the camera at the next barcode as soon as the first scan is complete. Possible solution: I believe all the barcodes can be captured as image files on a PC. Copy these barcode images into a PPT (or use any other program) so that the images can be displayed automatically, one by one.
Put your device in front of this PC, as you are already planning to use a tripod stand. Focus it on the screen (you may need to do all these adjustments the first time). Run your script. Do some trial runs. Synchronize the process with correct timeouts. I think this should be feasible, though it's really not the best way to automate this scenario.
I haven't tested it, but this blog post may be your answer: http://www.mobileqazone.com/profiles/blogs/simulating-camera-in-android-emulator. If not, you can try to bypass the camera by creating an API to upload an image to your server instead of reading it from the camera. I think the impact on your QA will not change dramatically (besides, it's very easy and fast to check that part manually).
We have an app that scans plenty of items such as barcodes, plus tracking the dimensions of objects, through the camera.
I read the idea of synchronizing images into a slideshow, which is absolutely hilarious. The way I do it is with my own Node server app, using WebSockets to toggle images via HTTP requests. When this app is hosted on a laptop/iPad positioned exactly in front of the AUT, the test has full control over which barcode is shown in a particular time frame.
No synchronization is required at all, and it does the job.
It is a modified version of https://github.com/JangoSteve/websockets-demo

Visual/look (not functional!) unit testing in iOS

I'm developing some custom UI controls, and I'd like to unit test them visually; that is, the driving test data should simply be user interaction (taps, swipes, etc.), and passing the test means the correct thing is rendered on the device.
I've looked at some existing UI test environments (e.g. UIAutomation scripts), but they solve a different problem -- they generate a sequence of UI events and test against app states they cause. That's one level up from what I'm talking about -- they assume the UI controls work as designed, and don't take the look of the UI into account at all.
I'd like to generate a user interaction as a series of taps and swipes, record the result on the device, and compare the result against a desired output (an image or video).
Is there anything out there which does this?
