Xcode 7 UI Tests: how to test screen edge pan gestures? - ios

I have the following gesture recognizer in my app. I've looked at the Xcode 7 UI testing APIs and see that they offer swipe up/down/left/right, but no pan or edge pan gestures.
How can one test or initiate screen edge pan gesture for UITesting purposes?
UIScreenEdgePanGestureRecognizer *leftEdgeGesture = [[UIScreenEdgePanGestureRecognizer alloc] initWithTarget:self action:@selector(show)];
leftEdgeGesture.edges = UIRectEdgeLeft;
leftEdgeGesture.delegate = self;
[self.view addGestureRecognizer:leftEdgeGesture];

I spent a while trying to figure this out, navigating around the element hierarchy. Lots and lots of googling and finding nothing.
I gave up twice, then figured it out.
We just need two coordinates on the main app screen, then drag from one to the other.
Works a treat!
XCUIApplication *app = [[XCUIApplication alloc] init];
[app launch];
// Set a coordinate near the left-edge, we have to use normalized coords
// so you set using percentages, 1% in on the left, 15% down from the top
XCUICoordinate *coord1 = [app coordinateWithNormalizedOffset:CGVectorMake(0.01, 0.15)];
// Then second coordinate 40 points to the right
XCUICoordinate *coord2 = [coord1 coordinateWithOffset:CGVectorMake(40, 0)];
// Perform a drag from coord1 to coord2
// Simulating swipe in from left edge
[coord1 pressForDuration:0.5f thenDragToCoordinate:coord2];
Hopefully this will help everyone else who has been struggling to simulate an edge swipe.

Piggybacking on Chris' awesome answer, here's the Swift version:
func pressCoordinate(x xCoordinate: Double, y yCoordinate: Double, app2call: XCUIApplication = XCUIApplication()) {
    let normalized = app2call.coordinate(withNormalizedOffset: CGVector(dx: 0.01, dy: 0.5))
    let coordinate = normalized.withOffset(CGVector(dx: xCoordinate, dy: yCoordinate))
    normalized.press(forDuration: 0.5, thenDragTo: coordinate)
}
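For example, assuming the app under test is already launched, you could trigger a left-edge swipe like this (the 40-point drag distance is just an illustrative value):
let app = XCUIApplication()
app.launch()
// Drag 40 points to the right from just inside the left edge, halfway down the screen
pressCoordinate(x: 40, y: 0, app2call: app)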

Related

Is it possible to simulate two finger swipe in iOS UI Testing?

I want to perform a two-finger swipe in my UI test. I am using the XCUITest framework. I tried all the pinch and rotate methods, but it seems there is no built-in support for this functionality.
I don't know if it will work, but you can try to simulate two different fingers dragging the screen simultaneously. Something like this:
XCUIApplication *app = [[XCUIApplication alloc] init];
[app launch];
// Set a coordinate near the left-edge, we have to use normalized coords
// so you set using percentages, 1% in on the left, 15% down from the top
XCUICoordinate *coord1 = [app coordinateWithNormalizedOffset:CGVectorMake(0.01, 0.15)];
// Then second coordinate 40 points to the right
XCUICoordinate *coord2 = [coord1 coordinateWithOffset:CGVectorMake(40, 0)];
// Third coordinate 100 points down from the first
XCUICoordinate *coord3 = [coord1 coordinateWithOffset:CGVectorMake(0, 100)];
// Last one is 100 points down from the second
XCUICoordinate *coord4 = [coord2 coordinateWithOffset:CGVectorMake(0, 100)];
// Perform a drag from coord1 to coord3
[coord1 pressForDuration:0.5f thenDragToCoordinate:coord3];
// Perform a drag from coord2 to coord4
[coord2 pressForDuration:0.5f thenDragToCoordinate:coord4];

Scroll the cells using UI Testing

Is there a method like
- (void)scrollByDeltaX:(CGFloat)deltaX deltaY:(CGFloat)deltaY;
for iOS?
I think the above method is only for OSX.
I would like to scroll my tableview according to the deltavalues provided.
Thanks in advance.
On iOS, you can use XCUIElement.press(forDuration:thenDragTo:) if you want to move in terms of elements (a short element-level sketch follows after the code below).
To move in terms of relative co-ordinates, you can get the XCUICoordinate of an element, and then use XCUICoordinate.press(forDuration:thenDragTo:).
let table = XCUIApplication().tables.element(boundBy:0)
// Get the coordinate for the bottom of the table view
let tableBottom = table.coordinate(withNormalizedOffset:CGVector(dx: 0.5, dy: 1.0))
// Scroll from tableBottom to new coordinate
let scrollVector = CGVector(dx: 0.0, dy: -30.0) // Use whatever vector you like
tableBottom.press(forDuration: 0.5, thenDragTo: tableBottom.withOffset(scrollVector))
Or in Objective-C:
XCUIApplication *app = [[XCUIApplication alloc] init];
XCUIElement *table = [app.tables elementBoundByIndex: 0];
// Get the coordinate for the bottom of the table view
XCUICoordinate *tableBottom = [table coordinateWithNormalizedOffset:CGVectorMake(0.5, 1.0)];
// Scroll from tableBottom to new coordinate
CGVector scrollVector = CGVectorMake(0.0, -30.0); // Use whatever vector you like
[tableBottom pressForDuration:0.5 thenDragToCoordinate:[tableBottom coordinateWithOffset:scrollVector]];
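For the element-to-element form mentioned at the top of this answer, here's a minimal Swift sketch; the table and cell indices are just illustrative:
let app = XCUIApplication()
app.launch()
let table = app.tables.element(boundBy: 0)
// Drag the first visible cell onto the last visible cell
let firstCell = table.cells.element(boundBy: 0)
let lastCell = table.cells.element(boundBy: table.cells.count - 1)
firstCell.press(forDuration: 0.5, thenDragTo: lastCell)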
Oletha's answer was exactly what I was looking for, but there are a couple of minor mistakes in the Objective-C example. Since the edit was rejected, I'll include it here as a reply for anyone else who comes along:
XCUIApplication *app = [[XCUIApplication alloc] init];
XCUIElement *table = [app.tables elementBoundByIndex: 0];
// Get the coordinate for the bottom of the table view
XCUICoordinate *tableBottom = [table coordinateWithNormalizedOffset:CGVectorMake(0.5, 1.0)];
// Scroll from tableBottom to new coordinate
CGVector scrollVector = CGVectorMake(0.0, -30.0); // Use whatever vector you like
[tableBottom pressForDuration:0.5 thenDragToCoordinate:[tableBottom coordinateWithOffset:scrollVector]];
Here's a Swift 4 version that worked for me. Hope it helps someone in the future.
let topCoordinate = XCUIApplication().statusBars.firstMatch.coordinate(withNormalizedOffset: .zero)
let myElement = XCUIApplication().staticTexts["NameOfTextLabelInCell"].coordinate(withNormalizedOffset: .zero)
// drag from element to top of screen (status bar)
myElement.press(forDuration: 0.1, thenDragTo: topCoordinate)

Why am I unable to detect when a UIView I push off screen using UIKit Dynamics is no longer visible?

I'm using UIKit Dynamics to push a UIView off screen, similar to how Tweetbot performs it in their image overlay.
I use a UIPanGestureRecognizer, and when the gesture ends, if the velocity exceeds a threshold the view is pushed offscreen.
[self.animator removeBehavior:self.panAttachmentBehavior];
CGPoint velocity = [panGestureRecognizer velocityInView:self.view];
if (fabs(velocity.y) > 100) {
    self.pushBehavior = [[UIPushBehavior alloc] initWithItems:@[self.scrollView] mode:UIPushBehaviorModeInstantaneous];
    [self.pushBehavior setTargetOffsetFromCenter:centerOffset forItem:self.scrollView];
    self.pushBehavior.active = YES;
    self.pushBehavior.action = ^{
        CGPoint lowestPoint = CGPointMake(CGRectGetMinX(self.imageView.bounds), CGRectGetMaxY(self.imageView.bounds));
        CGPoint convertedPoint = [self.imageView convertPoint:lowestPoint toView:self.view];
        if (!CGRectIntersectsRect(self.view.bounds, self.imageView.frame)) {
            NSLog(@"outside");
        }
    };
    CGFloat area = CGRectGetWidth(self.scrollView.bounds) * CGRectGetHeight(self.scrollView.bounds);
    CGFloat UIKitNewtonScaling = 5000000.0;
    CGFloat scaling = area / UIKitNewtonScaling;
    CGVector pushDirection = CGVectorMake(velocity.x * scaling, velocity.y * scaling);
    self.pushBehavior.pushDirection = pushDirection;
    [self.animator addBehavior:self.pushBehavior];
}
I'm having an immense amount of trouble detecting when my view actually completely disappears from the screen.
My view is set up rather simply. It's a UIScrollView with a UIImageView within it. Both are just within a UIViewController. I move the UIScrollView with the pan gesture, but want to detect when the image view is off screen.
In the action block I can monitor the view as it moves, and I've tried two methods:
1. Each time the action block is called, find the lowest point in y for the image view. Convert that to the view controller's reference point, and I was just trying to see when the y value of the converted point was less than 0 (negative) for when I "threw" the view upward. (This means the lowest point in the view has crossed into negative y values for the view controller's reference point, which is above the visible area of the view controller.)
This worked okay, except the x value I gave to lowestPoint really messes everything up. If I choose the minimum x, that is, the furthest to the left, it will only tell me when the bottom-left corner of the UIView has gone off screen. Often the view is rotating, depending on where the user pushes from, so the bottom-right corner may go off screen after the left one, making the check fire too early. If I choose the middle x, it will only tell me when the middle of the bottom edge has gone off, and so on. I can't seem to figure out how to tell it "just get me the absolute lowest y value" (see the sketch after this question).
2. I tried CGRectIntersectsRect as shown in the code above, and it never says it's outside, even seconds after it went shooting outside of any visible area.
What am I doing wrong? How should I be detecting it no longer being visible?
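As an aside, the "absolute lowest y" from the first approach can be computed by converting all four corners of the image view into the superview's coordinate space and taking the largest y; convert(_:to:) honours the view's current transform, so rotation is handled. A minimal Swift sketch, with names mirroring the question (an illustration only, not the accepted fix below):
// Largest y among the view's four corners, expressed in the parent's coordinates
func lowestPointY(of view: UIView, in parent: UIView) -> CGFloat {
    let b = view.bounds
    let corners = [CGPoint(x: b.minX, y: b.minY),
                   CGPoint(x: b.maxX, y: b.minY),
                   CGPoint(x: b.minX, y: b.maxY),
                   CGPoint(x: b.maxX, y: b.maxY)]
    return corners.map { view.convert($0, to: parent) }.map { $0.y }.max() ?? 0
}
// In the push behaviour's action block, for an upward throw:
// if lowestPointY(of: imageView, in: view) < 0 { print("off screen") }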
If you take a look at the UIDynamicItem protocol properties, you can see they are center, bounds and transform. So UIDynamicAnimator actually modifies only these three properties. I'm not really sure what happens with the frame during the Dynamics animations, but from my experience I can tell its value inside the action block is not always reliable. Maybe it's because the frame is actually being calculated by CALayer based on center, transform and bounds, as described in this excellent blog post.
But you can certainly make use of center and bounds in the action block. The following code worked for me in a case similar to yours:
CGPoint parentCenter = CGPointMake(CGRectGetMidX(self.view.bounds), CGRectGetMidY(self.view.bounds));
self.pushBehavior.action = ^{
    CGFloat dx = self.imageView.center.x - parentCenter.x;
    CGFloat dy = self.imageView.center.y - parentCenter.y;
    CGFloat distance = sqrtf(dx * dx + dy * dy);
    if (distance > MIN(parentCenter.y + CGRectGetHeight(self.imageView.bounds), parentCenter.x + CGRectGetWidth(self.imageView.bounds))) {
        NSLog(@"Off screen!");
    }
};

New foursquare venue detail map

I really love the way foursquare designed the venue detail view, especially the map with the venue location in the "header" of the view... How was it done? The details are obviously in some UIScrollView (maybe a UITableView?) and behind it (in the header) there is a map, so when you scroll up the map is being uncovered as the scroll view bounces... Does anyone have an idea how to do this?
Here's the way I managed to reproduce it:
You need a UIViewController with a UIScrollView as its view. Then, the content of the UIView you add to your scroll view should look like this:
- The frame of the MKMapView has a negative y position. In this case, we can only see 100 points of the map in the default state (before dragging).
- You need to disable zooming and scrolling on your MKMapView instance.
Then, the trick is to move the centerCoordinate of the MKMapView down when you drag down, and adjust its center position.
For that, we compute how much latitude one point represents, so that we know how far the center coordinate of the map should be moved when the view is dragged by x points on the screen:
- (void)viewDidLoad {
    [super viewDidLoad];
    UIScrollView *scrollView = (UIScrollView *)self.view;
    [scrollView addSubview:contentView];
    scrollView.contentSize = contentView.frame.size;
    scrollView.delegate = self;
    center = CLLocationCoordinate2DMake(43.6010, 7.0774);
    mapView.region = MKCoordinateRegionMakeWithDistance(center, 1000, 1000);
    mapView.centerCoordinate = center;
    // We compute how much latitude one point represents,
    // so that we know how much the center coordinate of the map
    // should be moved when being dragged.
    CLLocationCoordinate2D referencePosition = [mapView convertPoint:CGPointMake(0, 0) toCoordinateFromView:mapView];
    CLLocationCoordinate2D referencePosition2 = [mapView convertPoint:CGPointMake(0, 100) toCoordinateFromView:mapView];
    deltaLatFor1px = (referencePosition2.latitude - referencePosition.latitude) / 100;
}
Once those properties are initialized, we need to implement the UIScrollViewDelegate behavior. When we drag, we convert the move, expressed in points, into a latitude delta. Then we move the center of the map by half of this value.
- (void)scrollViewDidScroll:(UIScrollView *)theScrollView {
    CGFloat y = theScrollView.contentOffset.y;
    // Did we drag down?
    if (y < 0) {
        // We moved y points down; how much latitude is that?
        double deltaLat = y * deltaLatFor1px;
        // Move the center coordinate accordingly
        CLLocationCoordinate2D newCenter = CLLocationCoordinate2DMake(center.latitude - deltaLat / 2, center.longitude);
        mapView.centerCoordinate = newCenter;
    }
}
You get the same behavior as the foursquare app (but better: in the foursquare app the map's recentering tends to jump; here, changing the center is done smoothly).
The example above is nice. If you need more help, I think they're using something very similar to RBParallaxTableViewController. https://github.com/Rheeseyb/RBParallaxTableViewController
It's essentially the same effect that Path uses for its header photo.
Yonel's answer is nice, but I found a problem, as I have a pin at the center of the map. Because of the negative y, the pin is hidden under my UINavigationBar.
So I didn't use the negative y, and instead I adjust my mapView.frame according to the scroll offset.
My mapView is 320 x 160
_mapView.frame = CGRectMake(0, 160, 320, -160+y);
Hope this helps someone.

Panning UIView after RotationGesture Causes view to collapse

I'm going to try to describe with words something that might only be describable with video.
I have created a simple iOS app with a storyboard containing a single image view. I have added two gesture recognizers: a UIPanGestureRecognizer and a UIRotationGestureRecognizer along with their corresponding IBActions.
When I first start the application in the simulator, the image view pans correctly. The image view also rotates correctly. After a rotation, however, any subsequent pan fails. When I try to pan after a rotation, regardless of the direction of the pan, the image rapidly scales to zero and disappears, i.e., it collapses or implodes to a point that disappears.
The gesture recognizers are created using the following code. myImageView is set up as an IBOutlet UIImageView.
UIPanGestureRecognizer *panRec = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(processPan:)];
[myImageView addGestureRecognizer:panRec];
UIRotationGestureRecognizer *rotRec = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(processRotation:)];
[myImageView addGestureRecognizer:rotRec];
I've written the associated actions as best I know how. They are basically slight modifications of the methods I found in the iOS documentation. These are shown below.
- (IBAction)processPan:(UIPanGestureRecognizer *)sender
{
    if (sender.state == UIGestureRecognizerStateChanged)
    {
        CGPoint translation = [sender translationInView:self.view];
        CGRect newFrame = myImageView.frame;
        newFrame.origin.x += translation.x;
        newFrame.origin.y += translation.y;
        myImageView.frame = newFrame;
        [sender setTranslation:CGPointMake(0, 0) inView:self.view];
    }
}
- (IBAction)processRotation:(UIRotationGestureRecognizer *)sender
{
    if (sender.state == UIGestureRecognizerStateChanged)
    {
        myImageView.transform = CGAffineTransformRotate(myImageView.transform, sender.rotation);
        [sender setRotation:0];
    }
}
So what am I missing? I am new at this, so hopefully my ignorance will be tolerated.
I am running Xcode version 4.2.1 on OS X version 10.7.3 on a MacBook if that helps. Thank you so much for taking the time to read my question. Stack Overflow is an unbelievable resource!
-Dave
Well, I don't know if I've come up with a solution or if I've come up with a kludge. Basically, the pan code wasn't working for me. Any time the view was rotated or scaled, the panning code would seriously distort or collapse the view being translated. I stared at transform matrices and frame coordinate systems until I just about went blind.
The translation code I listed in my first post was basically copied from Listing 3-2, "Handling pinch, pan, and double-tap gestures" from the Gesture Recognizers section out of Apple's Event Handling Guide for iOS, so I figured it would do the trick for me. Well, I ended up writing my own code for it using the UIImageView center and not messing with the frame at all. Here is what worked for me.
CGPoint translation = [sender translationInView:self.superview];
CGPoint newCenter = CGPointMake(self.myImageView.center.x + translation.x, self.myImageView.center.y + translation.y);
[self.myImageView setCenter:newCenter];
[sender setTranslation:CGPointMake(0, 0) inView:self.superview];
I used the superview as a reference for the translation in case it was rotated. It seems to work now.
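For context, UIKit documents a view's frame as undefined when its transform is not the identity, which is why adjusting the frame collapses the rotated view while moving the center works. A Swift sketch of the full pan handler using the same center-based idea (assuming the handler lives in the view controller and myImageView is an outlet; this is an illustration, not the original code):
@IBAction func processPan(_ sender: UIPanGestureRecognizer) {
    guard sender.state == .changed else { return }
    // Translate in the controller's view so the image view's own rotation
    // does not distort the movement
    let translation = sender.translation(in: view)
    myImageView.center = CGPoint(x: myImageView.center.x + translation.x,
                                 y: myImageView.center.y + translation.y)
    sender.setTranslation(.zero, in: view)
}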
This effort probably reveals something about how my understanding of frames isn't correct. If someone can tell me how to correct my understanding, I'd appreciate it.
-Dave
