We are migrating from UIAutomation to XCUITest and used the captureScreenWithName() API to programmatically generate screenshots.
What is the replacement in XCUITest?
(I know that screenshots are taken automatically on error, but we have a special test which runs forever in a loop and evaluates QA click/tap commands over the network, similar to the appium-xcuitest-driver: https://github.com/appium/appium-xcuitest-driver)
Do I need to rip out private headers (XCAXClient_iOS.h) like the Appium folks did in order to get a screenshot API?
Edit
I used the actual code line for the accepted solution from
https://github.com/fastlane/fastlane/blob/master/snapshot/lib/assets/SnapshotHelper.swift and it's just this for iOS:
XCUIDevice.sharedDevice().orientation = .Unknown
or in Objective-C:
[XCUIDevice sharedDevice].orientation = UIDeviceOrientationUnknown;
I use a process on the host to look up all Screenshot_*.png files in the "Logs/Test/Attachments" directory before the call, and then after the call find the new shot as the new file added to this directory.
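For illustration, a rough host-side sketch of that directory diff (macOS Swift; the DerivedData path is a placeholder you would fill in for your own test run):

import Foundation

// collect the names of all Screenshot_*.png attachments currently on disk
func screenshotNames(in dir: String) -> Set<String> {
    let names = (try? FileManager.default.contentsOfDirectory(atPath: dir)) ?? []
    return Set(names.filter { $0.hasPrefix("Screenshot") && $0.hasSuffix(".png") })
}

let attachments = "/path/to/DerivedData/.../Logs/Test/Attachments"  // placeholder path
let before = screenshotNames(in: attachments)
// ... trigger the screenshot via the rotation trick in the test here ...
let newShots = screenshotNames(in: attachments).subtracting(before)  // the fresh file(s)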
Gestures (taps, swipes, scrolls, ...) cause screenshots, and screenshots are also often taken while locating elements or evaluating expectations.
Fastlane's snapshot tool uses a rotation to an unknown orientation to trigger a screenshot event (which has no effect on the app): https://github.com/fastlane/fastlane/tree/master/snapshot - you can use this if you want control over when some screenshots are taken.
I'm working on an automated test using Appium with Robot Framework on an Android device, with a scheduled run on Jenkins. My test flow is entering some data in page A and submitting, then switching to page B to check the result, then switching back to page A to enter new data. I repeat this loop around 10+ times. Everything works fine for the first 4-5 rounds, but after that this error shows up:
StaleElementReferenceException: Message: Cached element 'By.xpath:
//android.widget.TextView[#text='Limit']' do not exists in DOM anymore
The TextView is in page A. I watched the robot and saw that the TextView showed up, but the robot did not see it. I tried restarting the device but the problem was not solved. I searched the internet and found some people facing the same issue, but they use different programming languages like Java or Python. I have no idea what to do next.
Development Tools:
Appium version: 1.21.0
Robot Framework version: 4.1.2 (Python 3.10.0 on win32)
First, I do not use Robot Framework, but the code should be similar, according to this: https://robocorp.com/docs/languages-and-frameworks/robot-framework/try-except-finally-exception-catching-and-handling.
Second, I'm not sure this is the best way to get around it. I think there is something you can do with the expected conditions class to solve this in a cleaner way, but I'm not familiar enough with it to show/tell you (a sketch of that approach follows the code below). Instead, what I've done is something like this:
from selenium.common.exceptions import StaleElementReferenceException
from selenium.webdriver.common.by import By

while some_limiting_factor:
    try:
        # logic for submitting page A, assertions for page B
        ...
    except StaleElementReferenceException:
        # the cached reference went stale, so locate the element again
        element = driver.find_element(By.XPATH, "//android.widget.TextView[@text='Limit']")
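For completeness, here is a sketch of the expected-conditions idea mentioned above (my own assumption of how it would fit this flow; the 10-second timeout is an arbitrary example):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)  # poll for up to 10 seconds
# until() re-runs the locator on every poll, so no stale cached reference is reused
element = wait.until(
    EC.presence_of_element_located((By.XPATH, "//android.widget.TextView[@text='Limit']"))
)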
As much as I want to cache elements in Appium, it seems that the service itself does NOT want you to, at least not in my experience. Getting fresh elements every time seems to ensure a "slow but steady" test. Hopefully someone can show me the deep Appium secrets one day.
I wish to develop a unit test runner extension for VSCode. The extension should display discovered tests grouped into an expandable hierarchy, annotate run status, display output and errors for each test, provide run/debug commands at different levels, and of course show the red/green bar.
Roughly separating this into "model" and "view", I plan to implement the model in the extension process and the view as an HTML preview based on a TextDocumentContentProvider. (Is there a better approach?)
Now, the model and the view should communicate with each other. I want to implement the view as a single-page application. The view will send commands to the model, and the model will send events to the view (or the view will poll the model for events). The view will update itself according to received events.
My question is, what communication technique should I use? Can an HTML page inside the HTML preview access VSCode/Atom/Electron/Node APIs? Can I share object instances, or do some lightweight IPC? So far I haven't figured it out.
I've found that I can invoke VSCode commands or refresh the entire page when the user clicks a link with href set to a specific scheme (command:// or the one I registered for my TextDocumentContentProvider).
I did succeed in opening an HTTP listener (http.createServer) in the extension process and communicating through XMLHttpRequest on the HTML preview side, but that looks to me like heavy overkill.
I wonder if there are more appropriate ways to do this?
Almenon is referring to the currently proposed Webview API that was released in version 1.21 (Feb 2018). For the time being, this appears to be a much better approach for HTML previews. But in order to use the API, there are special instructions. From the release notes:
These APIs are still proposed, so in order to use it, you must opt into it by adding a "enableProposedApi": true to package.json and you'll have to copy the vscode.proposed.d.ts into your extension project.
What isn't clarified (and probably should be) is how to add the downloaded declaration file to a project. One way to do it is to place the file in $/node_modules/vscode, next to vscode.d.ts, which is generated during postinstall. Then add the following line to the top of vscode.d.ts:
/// <reference path="vscode.proposed.d.ts" />
That will link the type declaration files. To make this part of the installation process, write a build task to do it and then call it in the vscode:postinstall script in package.json.
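For example, a hypothetical package.json fragment (scripts/copy-proposed-dts.js stands in for whatever build task copies the declaration file and prepends the reference line; it is not a real script name):

{
  "scripts": {
    "vscode:postinstall": "node ./node_modules/vscode/bin/install && node ./scripts/copy-proposed-dts.js"
  }
}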
VSCode has a new API that makes this easier. You can find the new API here:
https://github.com/Microsoft/vscode/issues/43713
To try the new API:
Add "enableProposedApi": true to your package.json
Manually download vscode.proposed.d.ts and add it to your project: https://raw.githubusercontent.com/Microsoft/vscode/master/src/vs/vscode.proposed.d.ts
Run your extension with the latest VS Code Insiders build.
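To illustrate the command/event flow the question asks about, here is a rough sketch of webview messaging (this follows the shape the API eventually stabilized on; the proposed version may differ in detail):

import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
    const panel = vscode.window.createWebviewPanel(
        'testRunner', 'Test Runner', vscode.ViewColumn.One,
        { enableScripts: true }  // let the single-page app in the webview run scripts
    );
    panel.webview.html = `<!DOCTYPE html><html><body>
        <button onclick="vscode.postMessage({ command: 'runAll' })">Run all</button>
        <script>const vscode = acquireVsCodeApi();</script>
        </body></html>`;

    // view -> model: commands sent from the page
    panel.webview.onDidReceiveMessage(msg => {
        if (msg.command === 'runAll') {
            // kick off the test run here
        }
    }, undefined, context.subscriptions);

    // model -> view: push events instead of having the view poll
    panel.webview.postMessage({ event: 'statusChanged', passed: 12, failed: 1 });
}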
I need to add my custom view to the in-call view. I have a jailbroken device with iOS 9.3.2. I've installed Theos on my MacBook and MobileSubstrate on the device. And now I don't know what I need to do next.
I found that I have to modify InCallService.app, but I cannot find the class needed for the tweak.
Also, I don't understand how I can write logs. I tried to use NSLog(@"aaa") and %log(@"aaa") but I cannot find the file with the logs.
Thank you.
If you want to add something to the app, modifying the .app isn't the easiest way. If you have MobileSubstrate installed, you can hook a method from the Phone application, and using basic iOS paradigms like MVC you can find the views you need to modify and go from there. If you need the header files, you can either dump them yourself with class-dump-z or see if these are still valid.
Logging data is also quite easy with Ryan Petrich's deviceconsole
Just run the command deviceconsole --process <your hooked process' name> in your console after installing deviceconsole onto your Mac, and anything in your code using %log(); will show up in the console.
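For example, a minimal Logos sketch (the class and method here are hypothetical placeholders; the real names have to come from the dumped InCallService/Phone headers):

// Tweak.xm -- hypothetical class name, for illustration only
%hook PHInCallRootViewController
- (void)viewDidLoad {
    %orig;  // keep the original behavior
    NSLog(@"in-call view loaded");  // shows up in deviceconsole
    // add your custom view here, e.g. [self.view addSubview:myCustomView];
}
%end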
Is there a way to view the iPhone console logs without having a Mac?
It used to be possible using the iPhone Configuration Utility, but that does not seem to be available any longer.
I saw a tool called iTools, but it seems to require a 32-bit version of iTunes, which is also not available any more.
Given an iPhone device + Windows + Linux, is there any workaround/tool to view the iPhone console logs?
Realizing it is over half a year ago you asked this, but since it does not have an accepted answer yet:
I ran into this very same issue over and over and got fed up with it, so I decided to have a go at writing a script that displays console messages in HTML. That way you can view everything in the webpage itself, without resorting to a console replacement or a tedious remote debugger (for which you do indeed require a Mac), and without having to modify each console call in existing code.
The key lies in 'replacing' the four main functions in window.console: log, warn, error and trace. This is done by redefining each method, adding your own code to it, and calling the original method at the end. Jakub Fiala wrote the basic script for that, on which I built the rest: https://gist.github.com/jakubfiala/8fe3461ab6508f46003d
I dubbed it 'MobileConsole'. It is quite unobtrusive and will 'catch' all console.log (or .warn, .error or .trace) events, and even bind to window.onerror.
I have created a separate page for this script with an elaborate explanation of how it works, including a demo, over here.
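For illustration, a minimal sketch of that wrapping idea (not the actual MobileConsole script):

// mirror console output into a <pre> on the page, then call the original method
(function () {
  var out = document.createElement('pre');
  document.body.appendChild(out);
  ['log', 'warn', 'error', 'trace'].forEach(function (name) {
    var original = console[name];
    console[name] = function () {
      out.textContent += '[' + name + '] ' + Array.prototype.slice.call(arguments).join(' ') + '\n';
      original.apply(console, arguments);  // normal logging still works
    };
  });
  // catch uncaught errors as well
  window.onerror = function (msg, src, line) {
    console.error(msg + ' (' + src + ':' + line + ')');
  };
})();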
Download this from the App Store onto the iPhone; you can then view logs directly on the phone:
https://itunes.apple.com/us/app/console/id317676250?mt=8
Please note this is an old app: it will crash when launched, but on reopening it will show you the device logs.
If that fails, here is a link to the iPhone Configuration Utility for Windows:
http://download.cnet.com/iPhone-Configuration-Utility-for-Windows/3000-20432_4-10969175.html
Is there any way to simulate touch events in ADL? If not, how do you properly debug an application that relies heavily on touch events?
Using a device seems to be the best way, though this also appears to overlap a previous SO question that describes the same issue in the standard Android emulator: Is there any way to test multi-touch on the Android Emulator?
Also, with regard to code testing, you can still write unit tests to test your objects/methods and verify they have the appropriate input and output. If you're so inclined, you could even have your tests dispatch events from UI components using code like this:
// in your code
import flash.ui.Multitouch;
import flash.ui.MultitouchInputMode;
import flash.events.GestureEvent;

Multitouch.inputMode = MultitouchInputMode.GESTURE;
someComponent.addEventListener(GestureEvent.GESTURE_TWO_FINGER_TAP, someHandler);

// and in your test
someComponent.dispatchEvent(new GestureEvent(GestureEvent.GESTURE_TWO_FINGER_TAP));
// verify the appropriate change occurred, after a timeout or something of that nature
and you should be able to get the appropriate reaction to the event.
more on gesture events here:
http://help.adobe.com/en_US/as3/dev/WS1ca064e08d7aa93023c59dfc1257b16a3d6-7ffd.html
more on multi-touch/gestures here as well:
http://www.adobe.com/devnet/flash/articles/multitouch_gestures.html
You can create a multitouch app, run it on your mobile, and send touches from the device over WiFi.
This is how I'm testing this.
But you can also write an emulator that reads MouseEvents from the stage and dispatches TouchEvents, as in the sketch below.
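Something like this rough sketch (single-touch only; a real emulator would also map MOUSE_MOVE/MOUSE_UP to TOUCH_MOVE/TOUCH_END):

import flash.events.MouseEvent;
import flash.events.TouchEvent;

// re-dispatch mouse presses as primary-touch TouchEvents so touch handlers fire in ADL
stage.addEventListener(MouseEvent.MOUSE_DOWN, function (e:MouseEvent):void {
    stage.dispatchEvent(new TouchEvent(TouchEvent.TOUCH_BEGIN, true, false,
        0, true, e.stageX, e.stageY));
});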