In Calabash you can implement a backdoor, i.e., a way to call a method in the app delegate during test case execution. This makes it possible to simulate non-UI/external events, such as a Bluetooth device appearing or disappearing, in the simulator.
(I understand the backdoor concept is possible because the Calabash server is linked into the executable. Appium executes tests against an untouched executable.)
Is a similar concept available in Appium? If not, what are the alternatives?
(I couldn't find anything in the Appium documentation.)
I have written a library called Insider to get a backdoor mechanism into my Appium setup.
It includes the following features:
- Send a message to the app from a remote script;
- Send a message to the app from a remote script and wait for a response;
- Send local notifications through NSNotificationCenter with given parameters from a remote script;
- Get device system state information (CPU, memory, IP address, etc.) while running tests;
- Manage files / directories in the application sandbox (Documents, Library, tmp) while running tests.
GitHub: https://github.com/alexmx/Insider
API reference: http://alexmx.github.io/Insider/
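To illustrate the remote-script side of such a backdoor in general terms: the test script just sends a structured message to a listener embedded in the app. The sketch below assumes a hypothetical HTTP listener on port 8081; it is not Insider's documented API (see the API reference above for that):

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape, for illustration only;
# Insider's real transport and routes are in the API reference above.
url = "http://127.0.0.1:8081/message"
payload = json.dumps({"type": "simulateEvent",
                      "params": {"device": "bt-headset"}}).encode()

req = urllib.request.Request(url, data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())  # the app's reply, if it sends one
```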
We support this in a framework called Illuminator (full disclosure: I am the author), via an RPC channel called the automation bridge.
When Instruments runs, we use the performTaskWithPathArgumentsTimeout function to call our script, which relays the supplied arguments to the running app (on the simulator or on hardware). Any data returned from the app is relayed back to UIAutomation.
I just finished a project to do exactly this. In my case I needed to simulate some Bluetooth events. My apps are for Android and iOS, and they are written in C# using the Xamarin framework. The backdoor is accessed with Python or Robot Framework, but in the end it is just an MQTT broker, so it would be possible to implement in other languages. I am sharing it here because others might find it useful:
https://github.com/sebnil/appium-mqtt-backdoor
It is not really a pure Appium solution, more of a workaround; I found no robust way to implement a backdoor in Appium that works for both Android and iOS. But in principle it works just the same when it comes to writing test cases in Python or Robot Framework.
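To give an idea of what a test-side call looks like, here is a minimal sketch using the paho-mqtt Python client; the broker host and topic name are assumptions for illustration, not the ones my repo uses:

```python
import json
import paho.mqtt.client as mqtt

# Connect to the broker the app is also connected to
# (host, port, and topic are placeholders).
client = mqtt.Client()  # paho-mqtt 1.x style client
client.connect("test-broker.local", 1883)

# Publish a backdoor command; the app subscribes to this topic and
# reacts, e.g. by simulating a Bluetooth device appearing.
payload = json.dumps({"command": "simulate_bt_device", "state": "connected"})
client.publish("testapp/backdoor", payload)
client.disconnect()
```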
I had a very similar problem when developing iOS and Android apps in Xamarin and testing them with Appium: I found no robust way to implement a backdoor that would work the same on both Android and iOS.
You can call a backdoor in Android applications if you use the Espresso driver:
https://appiumpro.com/editions/51
I went with the solution of adding the backdoor to the test framework itself. So instead of implementing the backdoor in Appium, I implemented it in Python and Robot Framework (both of which can be used to control Appium). The backdoor itself is implemented via an MQTT broker. More information here:
https://sebastiannilsson.com/blog/appium-mqtt-backdoor/
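For commands that need an answer from the app, the same idea extends to a request/response round trip over two topics; again, the broker host and topics below are placeholders:

```python
import json
import queue
import paho.mqtt.client as mqtt

responses = queue.Queue()

def on_message(client, userdata, msg):
    # Any message on the response topic is handed to the waiting test.
    responses.put(json.loads(msg.payload))

client = mqtt.Client()  # paho-mqtt 1.x style client
client.on_message = on_message
client.connect("test-broker.local", 1883)
client.subscribe("testapp/backdoor/response")
client.loop_start()

# Ask the app something, then block until it answers (or time out,
# which fails the test).
client.publish("testapp/backdoor/request",
               json.dumps({"command": "get_app_state"}))
print(responses.get(timeout=5))

client.loop_stop()
client.disconnect()
```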
I'm currently working on a project that relies heavily on Appium / Selenium for automation. These frameworks are great for starting out, but the robustness isn't quite there, and they require a lot of extra hardware and software to run the automation: macOS, Xcode, adb, Appium, Selenium, and a USB or Wi-Fi connection (the latter is what we currently have to use). There are just a lot of dependencies in this automation stack, and it would be nice to have a cleaner, more reliable, and more scalable solution.
So I'm wondering: does anyone know a way to run automation for iOS and Android via a REST API, using a server that lives directly on the device, allowing us to communicate with the device like curl -POST <device_ip>:<port>/session/{sessionId}/openApp?
Think of the WebDriverAgent that Facebook built, but instead of being built with xcodebuild, the agent just lives on the device. Essentially, when you build that framework it starts up the server I'm describing, but it's reliant on Xcode, and I would ultimately like to remove Xcode from the picture. I see so many issues today with both the WDA server and Xcode, especially with new versions, and now that the WebDriverAgent repository has been archived by Facebook.
Can't we just create an app that acts as the WebDriver server, running at all times, using the same logic as today: start a session, find elements by ID, click on them, and move on? This would also remove the need to run Appium on your computer and rely on it to proxy commands to the WDA server on iOS.
I know Android is a much simpler picture; I'm currently a little more focused on how to solve this for iOS.
I would appreciate any insight into this question, and if anyone has suggestions on Appium, iOS automation, Android automation, or other relevant points, please send me your feedback.
We do run our automation using real devices!
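To make the idea concrete, here is roughly the interaction I imagine, sketched in Python against a hypothetical on-device server speaking the WebDriver wire protocol (the routes mirror what WebDriverAgent exposes today; the IP, port, capability names, and response shapes are assumptions):

```python
import requests

base = "http://192.168.1.50:8100"  # hypothetical on-device server

# Start a session (W3C WebDriver-style payload).
session = requests.post(f"{base}/session", json={
    "capabilities": {"alwaysMatch": {"bundleId": "com.example.myapp"}}
}).json()
session_id = session["sessionId"]  # response shape may vary by protocol version

# Find an element and click it, all over plain HTTP, no desktop tooling.
element = requests.post(f"{base}/session/{session_id}/element", json={
    "using": "accessibility id", "value": "LoginButton"
}).json()
element_id = element["value"]["ELEMENT"]

requests.post(f"{base}/session/{session_id}/element/{element_id}/click")
```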
I have used Appium / Selenium accessed via a REST API. In my opinion it is easier than building your own.
One solution: we are currently working on a project using Flutter, which still needs Xcode to sign the dependencies. You will also need to make sure libimobiledevice and ideviceinstaller are installed, and lastly modify Flutter. We call the real device via the REST API, and we are currently scaling it and monitoring the performance.
Another viable alternative I can think of: doing it through XCTest would be easier than Appium. Just provide the endpoint in your app and create a wrapper in XCTest; call the REST API and run the test in XCTest. It is more stable and faster in the long run.
But for the bulk of projects, I am still using Appium to test iOS while we evaluate this solution.
If you're open to using commercial tools, you can consider xcuitrunner. It installs a recent version of Appium's fork of WebDriverAgent on your iOS device, launches it (and keeps it alive), and returns an HTTP endpoint which you can use to interact directly with the WebDriverAgent.
It helps remove a couple of dependencies (such as Xcode) which may be difficult to manage. You'll still need a code-signing certificate and provisioning profile from Apple, though.
I see that AWS Device Farm supports Calabash for running Ruby tests; meanwhile, I am unable to find any documentation regarding an Appium + Ruby combination for web app automation.
AWS Device Farm still provides only a server-side way of executing tests, and it's limited to Java/Python support in combination with Appium.
If you already have a set of tests in Ruby, the best option is to look for device clouds with client-side execution (there are many of them today); then you won't be restricted in the language you code your tests in, and just set the host/capabilities of the cloud service.
Thanks for reaching out. We do not support the Appium/Ruby combination on the public fleet as of today. We do have plans to support more frameworks on the public fleet in 2018.
However, we recently launched a feature called Direct Device Access, where customer tests run on the client side. You can use Direct Device Access to run your Appium/Ruby tests on Device Farm. However, this feature is currently only available on private devices.
To learn more about Direct Device Access and private devices, please see the links below:
http://docs.aws.amazon.com/devicefarm/latest/developerguide/direct-device-access.html
https://aws.amazon.com/device-farm/pricing/#privateDevices
You could get a private device pool in Device Farm. Then you could run any testing framework you want.
https://aws.amazon.com/about-aws/whats-new/2017/10/amazon-device-farm-launches-direct-device-access-for-private-devices/
I have explored a couple of tools, such as Appium and KIF; with these tools we need to own the app (to enable the Automation instrument) in order to automate it. I also tried .ipa files available on the internet (the Gmail email client) on the iOS simulator, without any success yet.
I have a requirement to automate the default iOS Email app. Is there any tool or approach for doing this?
Yup, Apple has it pretty locked down for apps that are not yours. For that situation, you can try Sikuli, which takes a computer-vision approach to automation (Sikuli uses OpenCV under the hood).
If you also want to automate an app on a real device, you can use Sikuli combined with a camera and Tapster, a robot for interacting with a device. (Disclosure: I started the Appium and Tapster projects.)
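For a flavor of what this looks like, here is a minimal SikuliX sketch (SikuliX scripts are written in Python syntax via Jython); the .png files are reference screenshots you capture yourself, so the names below are placeholders:

```python
# SikuliX drives the UI purely by image recognition, so it does not
# care who owns the app under test. Capture the .png images below
# from the screen with the SikuliX IDE first.

wait("email_icon.png", 10)    # wait up to 10 s for the app icon
click("email_icon.png")       # tap it
wait("compose_button.png", 10)
click("compose_button.png")
type("Hello from Sikuli")     # types into the focused field
```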
I am working on an iOS application for which automated testing must be done at the OS level (i.e., the script can open any application). I have searched a lot, and all I found is that we are only allowed to automate within our own application. So my question is: how is EggPlant able to automate tests at the OS level (it can open the Contacts and Phone applications through scripts)?
Note: this will be an in-house enterprise application, not destined for the Apple App Store.
Mobile test automation tools use vendor-provided APIs to interact with the operating system. For iOS, that's UI Automation.
You can only automate what is within the binary of your application. For instance, if your app uses a web view you can access the internet, but you cannot exit your application from the automation test and open Safari. EggPlant doesn't operate on the binary of your application; it "looks" at the screen and executes gestures or processes based on what it sees.
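For the curious, the core of that "looking" step can be illustrated with a few lines of OpenCV template matching in Python; the file names and threshold below are placeholders, not EggPlant's actual implementation:

```python
import cv2

# Load a full-screen capture and the button image we want to locate.
screen = cv2.imread("screenshot.png")
template = cv2.imread("contacts_icon.png")

# Slide the template over the screen and score each position.
result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.9:  # confidence threshold (tune per use case)
    h, w = template.shape[:2]
    center = (max_loc[0] + w // 2, max_loc[1] + h // 2)
    print("Tap at", center)  # a driver would inject a tap here
```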
We were exploring various test suites for mobile automated testing and ran into a company called Perfecto Mobile. One of the features that blew me away was that they are able to (without jailbreaking) effectively perform a "remote desktop" on a physical iPad.
So the iPad's screen is mirrored within a web application, which can register touch / swipe events and replay them on the device. The only relevant technical detail I have is that all of this is done using commands sent over the USB cable.
I'm really curious how this is implemented, and about the relevant private APIs, if any.
Thanks,
Teja
I'm not familiar with PerfectoMobile, but I can give you a few pointers on how this can be accomplished:
For the mirroring, one way would be to use AirPlay. The APIs are pretty well documented, but not for what we're talking about, which would require some serious reverse engineering; it's definitely possible, though, and these guys have done it. A different approach would be to run a background app that periodically takes snapshots of the main screen and sends them over a socket connection to a client. You could implement this as a VNC server, and to incorporate the remote view in a web app you could use noVNC. As for the USB connection: in the case of a background app talking to a client over TCP, you could do a port forward.
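To sketch the snapshot-over-socket idea, the desktop side could look something like the Python below; the length-prefixed JPEG framing is invented for illustration, and the on-device background app would implement the other end:

```python
import socket
import struct

# Connect to the background app on the device (over USB this would be
# a forwarded port, e.g. via usbmuxd).
sock = socket.create_connection(("127.0.0.1", 9000))

def recv_exact(n):
    """Read exactly n bytes from the stream."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("device closed the stream")
        buf += chunk
    return buf

frame_no = 0
while True:
    # Each frame: a 4-byte big-endian length, then a JPEG screenshot.
    (length,) = struct.unpack(">I", recv_exact(4))
    jpeg = recv_exact(length)
    with open(f"frame_{frame_no:05d}.jpg", "wb") as f:
        f.write(jpeg)  # a real client would render this instead
    frame_no += 1
```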
To actually perform on the device the touch events sent from your remote viewer, most people have been using the GSEvent group of functions from the GraphicsServices private framework, without needing to jailbreak the device. Again, a background app would receive an instruction such as "tap there" over a socket, instantiate the GSEvent, and inject it so it gets processed in the run loop of the frontmost app.
These few possibilities, at least, have been implemented successfully in different iOS apps up to iOS 6.1 (iOS 7 is a different animal). You won't find any such app in the App Store, since Apple clearly prohibits the use of private frameworks in third-party apps; instead, people deploy them in-house using enterprise and ad-hoc provisioning profiles. On Android, however, there's VMLite available in the Play Store.
If you're looking to share the screen from iOS / Android, check out skreen.me. They have sample apps you can try out, and they also provide libraries for mobile app integration.