While using perf-annotate, I can hit Enter to jump to the target assembly position that a jmp/jne/je instruction points to. How can I jump back to where I came from? I'm using perf 4.15.18.
And(/^I click OK button in popup$/) do
  # Relative coordinates (percentages): no tap is performed
  # Appium::TouchAction.new.tap(x: 0.64, y: 0.57, count: 1).perform
  # Absolute coordinates: the tap works
  Appium::TouchAction.new.tap(x: 270, y: 506, count: 1).perform
end

And(/^I click Allow button in popup$/) do
  # Relative coordinates (percentages): no tap is performed
  # Appium::TouchAction.new.tap(x: 0.64, y: 0.57, count: 1).perform
  # Absolute coordinates: the tap works
  Appium::TouchAction.new.tap(x: 270, y: 506, count: 1).perform
end
Given the code above (I work with Appium 1.9.1, Ruby 2.3.7, and Cucumber to automate an iOS app): if I pass relative coordinates (percentages), Appium doesn't perform any taps, but if I comment out the relative-coordinate lines and use the absolute-coordinate lines instead, all taps work. The strangest thing is that if I use relative coordinates in the first line and absolute coordinates in the second, it performs the first tap but not the second.
My goal is to use relative coordinates everywhere, so the tests are usable on devices with any screen resolution. Please advise if there are any known solutions for using relative coordinates (or if I'm doing something wrong).
After going through your code snippet, I assume you are dealing with an alert pop-up on an iOS device.
In iOS, with the Appium Java client, I can handle such a pop-up using the traditional driver.switchTo().alert();.
Here driver refers to an IOSDriver.
I'm sure there must be an equivalent to this in Ruby as well.
Try using the Alert class to accept the alerts instead of tapping on coordinates.
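A minimal sketch of the Ruby equivalent, assuming the appium_lib driver instance is available as driver (the alert methods come from selenium-webdriver's Ruby bindings, which appium_lib builds on):

And(/^I accept the popup$/) do
  # Switch to the native alert and accept it instead of tapping coordinates.
  driver.switch_to.alert.accept  # use .dismiss to reject the alert instead
end

Since no coordinates are involved, this also sidesteps the screen-resolution problem from the question.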
As you can see in the above snippet, when I try to locate a value (e.g. '16' in this case) or scroll to select any other value, I am unable to select or scroll in this window. Is it possible to select a value using Robot Framework with AppiumLibrary? Suggestions are most welcome.
One approach you can follow is this (see the sketch below):
First, get the position of the element that is currently visible (in your case, 16).
If you want to scroll down, tap above 16 by subtracting some pixels from the location you got in the first step, then verify whether the element you want is highlighted.
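Since the Appium examples earlier on this page use the Ruby bindings, here is a hedged sketch of that two-step idea in Ruby; the accessibility-id locator and the 30-pixel offset are assumptions you would tune for your picker. The same pattern maps onto AppiumLibrary keywords in Robot Framework (get the element's location, then click at an offset):

# Step 1: find the currently visible value and get its on-screen location.
el  = driver.find_element(:accessibility_id, '16')  # assumed locator
loc = el.location

# Step 2: tap slightly above the value to nudge the picker by one step,
# then re-check which value is highlighted.
Appium::TouchAction.new.tap(x: loc.x, y: loc.y - 30, count: 1).perform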
I have implemented the iPhone-AR-Toolkit app found at this link:
https://github.com/a1phanumeric/iPhone-AR-Toolkit
I want to re-plot the radar points every time the device starts moving, i.e. every time the current lat/long values change, the existing set of points should change their position on the radar. But the points on the radar do not move. Can anyone tell me what I have to change to implement this?
You can try calonso's ios-arkit project, which is similar to yours since both are forked from the same repository: https://github.com/calonso/ios-arkit
My application performs several rendering operations on the first frame (I am using Metal, although I think the same applies to GLES). For example, it renders to targets that are used in subsequent frames but not updated after that. I am trying to debug some of the draw calls from these rendering operations, and I would like to use the 'GPU Capture Frame' functionality to do so. I have used it in the past for on-demand GPU frame debugging, and it is very useful.
Unfortunately, I can't seem to find a way to capture the first frame. For example, the option is unavailable when stopped in the debugger (at a breakpoint set before the first frame). Xcode behaviors also don't seem to allow capturing a frame as soon as debugging starts. There also doesn't appear to be any API for performing GPU captures, either in the Metal APIs or on CAMetalLayer.
Has anybody done this successfully?
I've come across this again, and figured it out properly now. I'll add this as a separate answer, since it's a completely different approach from my other answer.
First, some background. There are three components to capturing a GPU frame:
Telling Xcode that you want to capture a GPU frame. In typical documented use, you do this manually by clicking the GPU Frame Capture "camera" button in Xcode.
Indicating the start of the next frame to capture. Normally, this occurs at the next occurrence of MTLCommandBuffer presentDrawable:, which is invoked to present the framebuffer to the underlying view.
Indicating the end of the frame being captured. Normally, this occurs at the next-but-one occurrence of MTLCommandBuffer presentDrawable:.
In capturing the first frame, or activity before the first frame, only the third of these is available, so we need an alternate way to perform the first two items:
To tell Xcode to begin capturing a frame, add a breakpoint in Xcode at a line in your code somewhere before the point at which you want to start capturing a frame. Right-click the breakpoint, select Edit Breakpoint... from the pop-up menu, and add a Capture GPU Frame action to the breakpoint.
To indicate the start of the frame to capture, before the first occurrence of MTLCommandBuffer presentDrawable:, you can use the MTLCommandQueue insertDebugCaptureBoundary method. For example, you could invoke this method as soon as you instantiate the MTLCommandQueue, to immediately begin capturing everything submitted to the queue (see the sketch below). Make sure the breakpoint in item 1 will be triggered before the point this code is invoked.
To indicate the end of the captured frame, you can either rely on the first normal occurrence of MTLCommandBuffer presentDrawable:, or you can add a second invocation of MTLCommandQueue insertDebugCaptureBoundary.
Finally, the MTLCommandQueue insertDebugCaptureBoundary method does not actually cause the frame to be captured. It just marks a boundary point, so you can leave it in your code for future debugging use. Wrap it in a DEBUG compilation conditional if you want it gone from production code.
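For concreteness, a minimal Objective-C sketch of item 2, where the capture boundary is inserted the moment the queue is created; the variable names are placeholders:

#import <Metal/Metal.h>

id<MTLDevice> device = MTLCreateSystemDefaultDevice();
id<MTLCommandQueue> queue = [device newCommandQueue];
#if DEBUG
// Mark a boundary immediately, so everything submitted to this queue before
// the first presentDrawable: falls inside the captured "frame". This only
// takes effect once Xcode has been told to capture (e.g. via the breakpoint
// action in item 1).
[queue insertDebugCaptureBoundary];
#endif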
Try...
[myMTLCommandEncoder insertDebugSignpost:@"com.apple.GPUTools.event.debug-frame"];
To be honest, I haven't tried it myself, but it's analogous to the
glInsertEventMarkerEXT(0, "com.apple.GPUTools.event.debug-frame")
call documented for OpenGL ES, and there is some mention on the web of it working for Metal.
To be specific about my case: I use Metal for parallel compute, and the GPU Capture Frame button is always greyed out. There are two ways I have found so far that work.
In iOS 11:
you can use [[MTLCaptureManager sharedCaptureManager] startCaptureWithDevice:m_Device] to capture frames, so you can profile compute shader performance (note that MTLCaptureManager is obtained through its sharedCaptureManager class method, not alloc/init).
Lower than iOS 11 (MTLCaptureManager and MTLCaptureScope are new in iOS 11.0):
you can use a breakpoint and edit its action to Capture GPU Frame, as described in the answer above.
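A minimal Objective-C sketch of the iOS 11 path, where device, queue, and the encoded work are placeholders for the real compute pipeline (startCaptureWithDevice: and stopCapture are the iOS 11-era API; newer SDKs deprecate them in favor of startCaptureWithDescriptor:error:):

#import <Metal/Metal.h>

// Assumed to exist elsewhere: id<MTLDevice> device; id<MTLCommandQueue> queue;
MTLCaptureManager *capture = [MTLCaptureManager sharedCaptureManager];
[capture startCaptureWithDevice:device];   // begin programmatic capture

id<MTLCommandBuffer> commandBuffer = [queue commandBuffer];
// ... encode the compute pass you want to profile ...
[commandBuffer commit];
[commandBuffer waitUntilCompleted];

[capture stopCapture];                     // end capture; Xcode opens the trace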
Is there a way to change the position of the window that pops up when cv::imshow is called?
For me, the window seems to appear partially off-screen, so I have to drag it around before I can see the entire image. It's very annoying to have to do this every single time.
I had a look at the reference manual -- it seems you have control over what goes into the title of the window, but I can't see anything relating to window position.
Oh, and the behavior is the same if I use the old C interface (cvShowImage).
Any ideas?
As of OpenCV 2.1 this is possible also in C++ API using the moveWindow function:
cv::moveWindow(const std::string& winName, int x, int y)
For example:
cv::namedWindow("WindowName");
cv::moveWindow("WindowName", 10, 50);
Using the C++ API it is not possible at the moment. You can use the C API instead: cvMoveWindow().
UPDATE: it is now possible in C++ with cv::moveWindow().