How to perform a UI test gesture in VoiceOver (accessibility) mode? - Appium

Example: when VoiceOver is turned on, the expected behaviour of performing a swipe right is that the next element is highlighted. But this gesture swipes to the next screen instead (which is the expected behaviour in normal mode).
springboardApp.swipeRight()
I had seen a few methods in the private @protocol XCTestManager_ManagerInterface, such as:
- (void)_XCT_performAccessibilityAction:(int)arg1 onElement:(XCAccessibilityElement *)arg2 withValue:(id)arg3 reply:(void (^)(NSError *))arg4;
But since its implementation is private, I have not been able to figure out how to use it.
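For what it's worth, here is a purely speculative sketch of how that private method would be invoked if you could get hold of an object conforming to the protocol. Everything below depends on private, undocumented behaviour; obtaining the proxy (the testManagerProxy parameter) is exactly the unsolved part, so that name is a placeholder, not a real API:

@class XCAccessibilityElement;

// Redeclaration of the private protocol, copied from the signature above.
@protocol XCTestManager_ManagerInterface
- (void)_XCT_performAccessibilityAction:(int)action
                              onElement:(XCAccessibilityElement *)element
                              withValue:(id)value
                                  reply:(void (^)(NSError *))reply;
@end

// testManagerProxy is a hypothetical placeholder for the private proxy object.
static void PerformAccessibilityAction(id<XCTestManager_ManagerInterface> testManagerProxy,
                                       XCAccessibilityElement *element,
                                       int action) {
    [testManagerProxy _XCT_performAccessibilityAction:action
                                            onElement:element
                                            withValue:nil
                                                reply:^(NSError *error) {
        if (error) NSLog(@"Accessibility action failed: %@", error);
    }];
}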

Related

UISearchBar shows wrong UIMenuItems due to use of UIMenuController elsewhere in app

I have MyRootViewController which presents MyModalViewController modally on a button press. MyModalViewController contains a UISearchBar and I want this to display the usual [Cut|Copy|Paste|Select All] text options on a long press in its text field.
MyRootViewController presents custom UIMenuItems (via UIMenuController) on a long press, hence overrides - (BOOL)canBecomeFirstResponder and - (BOOL)canPerformAction:(SEL)action withSender:(id)sender and implements - (void)duplicate:(id)sender and - (void)delete:(id)sender.
The problem is that when the user long presses inside the UISearchBar in MyModalViewController, it is MyRootViewController that is asked which UIMenuItems to display, so irrelevant menu items appear instead of the usual [Cut|Copy|Paste|Select All] options for a text field.
My understanding is that this is happening because MyRootViewController is still in the responder chain, even though it is not currently visible.
The best solution I've come up with so far is to subclass UISearchBar and override - (BOOL)canPerformAction:(SEL)action withSender:(id)sender, returning YES for Cut, Copy, Paste, or Select All. This stops iOS from looking further along the responder chain. But this feels like a hack - I shouldn't have to resort to it just to make a UISearchBar behave consistently with the rest of the system, simply because I'm using UIMenuController elsewhere.
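For reference, the subclass workaround described above might look something like this (MySearchBar is a name made up for the sketch):

@interface MySearchBar : UISearchBar
@end

@implementation MySearchBar
// Force the standard editing actions, stopping UIKit from walking
// further along the responder chain to MyRootViewController.
- (BOOL)canPerformAction:(SEL)action withSender:(id)sender {
    return action == @selector(cut:)
        || action == @selector(copy:)
        || action == @selector(paste:)
        || action == @selector(selectAll:);
}
@end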
Does anyone know of a more technically correct solution to this problem?
You can set the menuItems property of [UIMenuController sharedMenuController] only on the long press, and set it back to nil whenever UIMenuControllerWillHideMenuNotification is posted.
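A sketch of that approach in MyRootViewController; the handleLongPress: handler and the registration in viewDidLoad are assumptions about where your long-press logic lives:

- (void)viewDidLoad {
    [super viewDidLoad];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(menuWillHide:)
                                                 name:UIMenuControllerWillHideMenuNotification
                                               object:nil];
}

- (void)handleLongPress:(UILongPressGestureRecognizer *)gesture {
    if (gesture.state != UIGestureRecognizerStateBegan) return;
    // Install the custom items only while our own menu is up.
    UIMenuController *menu = [UIMenuController sharedMenuController];
    menu.menuItems = @[[[UIMenuItem alloc] initWithTitle:@"Duplicate"
                                                  action:@selector(duplicate:)],
                       [[UIMenuItem alloc] initWithTitle:@"Delete"
                                                  action:@selector(delete:)]];
    [self becomeFirstResponder];
    CGPoint p = [gesture locationInView:self.view];
    [menu setTargetRect:CGRectMake(p.x, p.y, 0, 0) inView:self.view];
    [menu setMenuVisible:YES animated:YES];
}

- (void)menuWillHide:(NSNotification *)note {
    // Clear the items so the search bar's text field shows the
    // standard Cut/Copy/Paste/Select All menu.
    [UIMenuController sharedMenuController].menuItems = nil;
}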

Prevent reordering of elements in gestureRecognizers array

I'm experiencing a bug in my app that causes gestures I previously added to a UITextField via addGestureRecognizer: to stop working. Essentially, I add a tap and a long-press gesture recognizer to the UITextField (which already has 7 gesture recognizers applied by iOS). When logging self.textField.gestureRecognizers, it shows the existing 7 gestures and then the two I added at the end of the array. The gestures work just as I expected.
However, when I present a modal view controller and then dismiss it, my two gestures stop working on the text field. I'm not sure exactly why, but the view does disappear and the text field resigns first responder (the keyboard is always up when the modal VC is presented), which may be related. I discovered that the gestures aren't removed from the text field, but the order of the gestures in the array has changed: my custom gestures are now located at indexes 0 and 1 instead of 7 and 8. I believe the 7 default gestures are conflicting with/overriding my custom ones (I assume later placement in the array overrides earlier placement), which explains why they stop working even though they're still applied.
My questions are:
- Do you know why it is reordering the elements in self.textField.gestureRecognizers?
- How do I prevent that from occurring to ensure my custom gestures always work, without breaking the default gestures for UITextField?
My current solution is to add the two gestures the first time, store the resulting array of nine gestures, and then in viewDidAppear reset the gestureRecognizers array (yes, it is settable) to my stored array. This guarantees the array will be the 7 built-in gestures followed by my two custom gestures, in that order. But I discovered my gestures were then overriding the default gestures (the ones that bring up the popup to Cut, Copy, etc.), so I have to reset the array back to the default 7 after my custom gesture fires (which is fine - I only need to trigger the action a single time after recognizing my custom gesture, and I store the original gestures in a property as well). But this doesn't feel like the best solution. I'd prefer to find the cause and address it, or approach the situation differently, instead of duct-taping the code together.
My first solution was to always add my two gestures in viewDidAppear.
viewDidAppear: is called when your view controller's view first appears, but it is also called again later when the presented view controller is dismissed.
Thus you are adding the gesture recognizers twice.
The simplest solution is to use a BOOL instance variable (we call this a "flag") which you set to YES the first time and test afterwards:
if (!self.addedGestures) {
    self.addedGestures = YES;
    // ... add them! ...
}
Now you will only add them once.
(On the other hand it might be argued that if you care about the order of the gesture recognizers in the array you are already doing something wrong. Use delegate methods to resolve conflicts between gesture recognizers - that's what they are for.)
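A minimal sketch of that advice, assuming the view controller has been set as the delegate of the two custom recognizers:

// UIGestureRecognizerDelegate: let the custom recognizers run alongside
// UITextField's built-in ones, regardless of their order in the array.
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
    shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)other {
    return YES;
}

There are also per-recognizer dependencies, e.g. requireGestureRecognizerToFail:, if one recognizer should only fire when another has failed.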

How to emulate the timing of highlighting the background image on a UIButton

I've created a custom button because I wanted to be able to compose it of multiple different images. It actually subclasses UIControl instead of UIButton. This led to the issue of highlighting the images while it was being tapped.
So, I followed the advice in this question by creating a category on UIImage to emulate the highlighting of a standard UIButton: How to implement highlighting on UIImage like UIButton does when tapped?
I'm triggering the image tinting based on the UIControlEventTouchDown, UIControlEventTouchUpInside, and UIControlEventTouchUpOutside events.
This mostly works, except the timing is a bit off. With a standard UIButton, no matter how long or short the user taps, the highlighting always happens; but with my implementation, if the user taps very quickly (which is most of the time), the highlighting doesn't happen.
I'm assuming this may be because the screen isn't getting re-drawn between the time the user taps down and up, but I'm not totally sure.
What I've tried:
- Calling setNeedsDisplay right after swapping out the image - doesn't help.
- Overriding touchesBegan and touchesEnded and putting the image-swapping code there - doesn't help.
- Executing the image-swapping code asynchronously inside a dispatch_async call - doesn't help.
At this point, the only other thing I can think of is to set up a timer that manually fires off the image change after a slight delay if it detects that the user hasn't pressed down for longer than a certain time period.
This feels wrong, and I'm wondering if there's a better way to achieve this. Is there some other event I should be overriding?
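For what it's worth, here is a sketch of the delay idea, enforcing a minimum highlight duration with dispatch_after. The 0.1 s constant, the highlightStart property (a CFTimeInterval), and the setImagesHighlighted: helper are all assumptions standing in for the control's own code:

#import <QuartzCore/QuartzCore.h> // for CACurrentMediaTime()

static const NSTimeInterval kMinimumHighlightDuration = 0.1;

- (void)touchDown {
    self.highlightStart = CACurrentMediaTime();
    [self setImagesHighlighted:YES]; // hypothetical helper that swaps in the tinted images
}

- (void)touchUp {
    NSTimeInterval remaining = kMinimumHighlightDuration
                             - (CACurrentMediaTime() - self.highlightStart);
    if (remaining > 0) {
        // The tap was too quick: keep the highlight visible a little longer.
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW,
                                     (int64_t)(remaining * NSEC_PER_SEC)),
                       dispatch_get_main_queue(), ^{
            [self setImagesHighlighted:NO];
        });
    } else {
        [self setImagesHighlighted:NO];
    }
}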

Update loop and buttons in cocos2d

I have a simple lunar lander game.
I compute positions and everything by integration - e.g. each turn I take vectors and combine them and then apply resulting vector to my lander.
Here comes the question, I have a button that I want to use for thrust.
How do I check whether it is on during the update method? I guess I will have some BOOL flag that gets set to YES when the button is pressed, but when do I set it to NO?
Some practical implementation would be great.
I use cocos2d-iphone and iOS.
Well, the pseudo code goes as follows:
We shall not use Buttons (aka CCMenuItem), since they provide callbacks only on touch up events. We want touch down, touch exit/entered, touch ended.
In your CCScene that you are displaying, either add a new child that is a subclass of CCLayer or even use one of the CCLayers already present in the CCScene.
In the init of your CCLayer subclass, set isTouchEnabled to YES.
Implement the usual methods:
- (void)ccTouchesBegan:...
- (void)ccTouchesMoved:...
- (void)ccTouchesEnded:...
- (void)ccTouchesCancelled:...
Finally, do your magic in these methods:
- Get the touch location.
- Check using CGRectContainsPoint whether the touch is within the thrust area.
- And so on, and so forth...
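Putting it together, a sketch of the flag approach in a CCLayer subclass; thrustRect, gravity, thrustVector, and the integration step are placeholders for your own objects:

// Assumed ivars: BOOL thrusting; CGRect thrustRect; CGPoint gravity, thrustVector;
- (id)init {
    if ((self = [super init])) {
        self.isTouchEnabled = YES;
        [self scheduleUpdate]; // cocos2d calls -update: every frame
    }
    return self;
}

- (void)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [self convertTouchToNodeSpace:[touches anyObject]];
    if (CGRectContainsPoint(thrustRect, p)) thrusting = YES;
}

- (void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // Turn thrust off if the finger slides off the button area.
    CGPoint p = [self convertTouchToNodeSpace:[touches anyObject]];
    thrusting = CGRectContainsPoint(thrustRect, p);
}

- (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    thrusting = NO; // this is where the flag goes back to NO
}

- (void)ccTouchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    thrusting = NO;
}

- (void)update:(ccTime)dt {
    CGPoint accel = thrusting ? ccpAdd(gravity, thrustVector) : gravity;
    // ... integrate accel into the lander's velocity and position ...
}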

Custom UIGestureRecognizer Not Working As Expected

I have a UITableView which I present in a UIPopoverController. The table view presents a list of elements that can be dragged and dropped onto the main view.
When the user begins a pan gesture that is principally vertical at the outset, I want the UITableView to scroll as usual. When it's not principally vertical at the outset, I want the application to interpret this as a drag-and-drop action.
My unfortunately lengthy journey down this path has compelled me to create a custom UIGestureRecognizer. In an attempt to get the basics right, I left this custom recognizer as an empty implementation at first, one that merely calls the super version of each of the five methods Apple says should be overridden:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)reset;
This results in nothing happening, i.e. the custom gesture's action method is never called, and the table view scrolls as usual.
For my next experiment, I set the gesture's state to UIGestureRecognizerStateBegan in the touchesBegan method.
This caused the gesture's action method to fire, making the gesture appear to behave just like the standard UIPanGestureRecognizer. This obviously suggested I was responsible for managing the gesture's state.
Next up, I set the gesture's state to UIGestureRecognizerStateChanged in the touchesMoved method. Everything still fine.
Now, instead, I tried setting the gesture's state to UIGestureRecognizerStateFailed in the touchesMoved method. I was expecting this to terminate the gesture and restore the flow of events to the table view, but it didn't. All it did was stop firing the gesture's action method.
Lastly, I set the gesture's state to UIGestureRecognizerStateFailed in the touchesBegan method, immediately after I had set it to UIGestureRecognizerStateBegan.
This causes the gesture to fire its action method exactly once, then pass all subsequent events to the table view.
So... sorry for such a long question... but why, if I cause the gesture to fail in the touchesBegan method (after first setting the state to UIGestureRecognizerStateBegan), does it redirect events to the table view as expected, but if I try the same technique in touchesMoved (the only place I can detect that a move is principally vertical), this redirection doesn't occur?
Sorry for making this more complicated than it actually was. After much reading and testing, I've finally figured out how to do this.
First, creating the custom UIGestureRecognizer was one of the proper solutions to this issue, but when I made my first test of the empty custom recognizer, I made a rookie mistake: I forgot to call [super touches...:touches withEvent:event] in each of the methods I overrode. This caused nothing to happen, so I set the state of the recognizer to UIGestureRecognizerStateBegan in touchesBegan, which did result in the action method being called once, thus convincing me I had to explicitly manage states - which is only partially true.
In truth, if you create an empty custom recognizer and call the appropriate super method in each method you override, your program will behave as expected. In this case, the action method will get called throughout the dragging motion. If, in touchesMoved, you set the recognizer's state to UIGestureRecognizerStateFailed, the events will bubble up to the superview (in this case a UITableView), also as expected.
The mistake I made, and one I think others might make, is thinking there is a direct correlation between setting the gesture's state and the chronology of the standard methods you override when subclassing a gesture recognizer (i.e. touchesBegan, touchesMoved, etc.). There isn't - at least, it's not an exact mapping. You're better off letting the base behavior work as is and only intervening where necessary. So, in my case, once I determined the user's drag was principally vertical, which I could only do in touchesMoved, I set the gesture recognizer's state to UIGestureRecognizerStateFailed in that method. This took the recognizer out of the picture and automatically forwarded a full set of events to the encompassing view.
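A condensed sketch of that final recognizer; DragDropGestureRecognizer is a made-up name, and the vertical test is the simplest possible one:

#import <UIKit/UIGestureRecognizerSubclass.h> // required to assign self.state

@interface DragDropGestureRecognizer : UIGestureRecognizer
@end

@implementation DragDropGestureRecognizer {
    CGPoint _startPoint;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    _startPoint = [[touches anyObject] locationInView:self.view];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesMoved:touches withEvent:event];
    if (self.state == UIGestureRecognizerStateFailed) return;

    CGPoint p = [[touches anyObject] locationInView:self.view];
    CGFloat dx = fabs(p.x - _startPoint.x);
    CGFloat dy = fabs(p.y - _startPoint.y);

    if (self.state == UIGestureRecognizerStatePossible) {
        if (dy > dx) {
            // Principally vertical: fail, and the table view scrolls as usual.
            self.state = UIGestureRecognizerStateFailed;
        } else {
            self.state = UIGestureRecognizerStateBegan; // start the drag-and-drop
        }
    } else {
        self.state = UIGestureRecognizerStateChanged;
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesEnded:touches withEvent:event];
    if (self.state == UIGestureRecognizerStateBegan ||
        self.state == UIGestureRecognizerStateChanged) {
        self.state = UIGestureRecognizerStateEnded;
    } else if (self.state == UIGestureRecognizerStatePossible) {
        self.state = UIGestureRecognizerStateFailed;
    }
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesCancelled:touches withEvent:event];
    self.state = (self.state == UIGestureRecognizerStatePossible)
               ? UIGestureRecognizerStateFailed
               : UIGestureRecognizerStateCancelled;
}

@end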
For the sake of brevity, I've left out a ton of other things I learned through this exercise, but I'd like to point out that, of the six or seven books I consulted on the subject, Matt Neuburg's Programming iOS 4 provided the best explanation of this subject by far. I hope that referral is allowed on this site - I am in no way affiliated with the author or publisher, just grateful for an excellent explanation!
That probably happens because responders expect to see an entire touch from beginning to end, not just part of one. Often, -touchesBegan:... sets up some state that's then modified in -touchesMoved..., and it really wouldn't make sense for a view to get a -touchesMoved... without having previously received -touchesBegan.... There's even a note in the documentation that says, in part:
All views that process touches, including your own, expect (or should expect) to receive a full touch-event stream. If you prevent a UIKit responder object from receiving touches for a certain phase of an event, the resulting behavior may be undefined and probably undesirable.