Gesture-driven animation between frames - iOS

I am trying to create an interactive animation between a thumbnail frame and a fullscreen frame. Panning upwards should cause the frame to grow in both dimensions (until it reaches fullscreen), while panning downwards does the opposite. Very similar to how YouTube animates its video player.
I tried using a UIPanGestureRecognizer and a POPSpringAnimation that activates when the recognizer's state is UIGestureRecognizerStateEnded, like so:
- (void)didPan:(UIPanGestureRecognizer *)recognizer
{
    CGPoint point = [recognizer translationInView:self.view.superview];
    self.view.center = CGPointMake(self.view.center.x, self.view.center.y + point.y);
    [recognizer setTranslation:CGPointZero inView:self.view.superview];

    if (recognizer.state == UIGestureRecognizerStateEnded)
    {
        CGPoint velocity = [recognizer velocityInView:self.view.superview];
        // Initiate POPSpringAnimation using velocity and target frame
        // (either fullscreen or thumbnail)
    }
}
The effect of this is that the view's center gets updated while panning, but its size only starts changing once the pan ends. I would rather not resize the frame manually in didPan:, since I don't know at what ratio to do it.
How can I create a panning-driven animation that simply goes from frame A to frame B?
This framework does almost the same thing, but the animation is not interactive. Same thing goes for this answer.

I suggest using a UIViewPropertyAnimator, a class added in iOS 10. It lets you pretty easily pause, reverse, or scrub animations back and forth.
I have a demo project on GitHub (written in Swift, which might be harder for you to follow, since you're using Objective-C):
https://github.com/DuncanMC/UIViewPropertyAnimator-test
The class is actually pretty easy to use. You should be able to figure it out from the docs, and the sample project could at least serve as a roadmap of the APIs to use even if you can't follow the code line by line.
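In Objective-C the basic pattern looks roughly like this. This is only a sketch: playerView, fullscreenFrame, the animator property, and the 300-point pan distance are stand-ins for your own names and values.

@property (nonatomic, strong) UIViewPropertyAnimator *animator; // hypothetical property

- (void)didPan:(UIPanGestureRecognizer *)recognizer
{
    switch (recognizer.state) {
        case UIGestureRecognizerStateBegan: {
            // Create a paused animator that describes the frame A -> frame B change.
            self.animator = [[UIViewPropertyAnimator alloc] initWithDuration:0.4
                                                                 dampingRatio:0.8
                                                                   animations:^{
                self.playerView.frame = self.fullscreenFrame;
            }];
            [self.animator pauseAnimation];
            break;
        }
        case UIGestureRecognizerStateChanged: {
            // Scrub: map upward pan distance to a 0..1 fraction of the animation.
            CGFloat dy = -[recognizer translationInView:self.view].y;
            self.animator.fractionComplete = MAX(0.0, MIN(1.0, dy / 300.0)); // 300 pt = full transition (arbitrary)
            break;
        }
        case UIGestureRecognizerStateEnded: {
            // Finish or rewind depending on how far the scrub got.
            BOOL finish = self.animator.fractionComplete > 0.5;
            self.animator.reversed = !finish;
            [self.animator continueAnimationWithTimingParameters:nil durationFactor:0.0];
            break;
        }
        default:
            break;
    }
}

The key point is that the frame change is described once, up front, and the pan gesture only drives fractionComplete; you never have to compute intermediate frames yourself.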

Related

Do I have to use a UIPanGestureRecognizer instead of a UISwipeGestureRecognizer if I want to track the movement?

I'm trying to implement a paging interaction for one of my views. I thought I would just use UISwipeGestureRecognizers. But through trial and error as well as examination of the documentation it appears that
A swipe is a discrete gesture, and thus the associated action message
is sent only once per gesture.
So while I could trigger the page, I wouldn't be able to hook up animation that occurred during the drag.
Is my only alternative to use a UIPanGestureRecognizer and reimplement the basic filtering/calculations of the swipe?
Update/Redux
In hindsight, what I really should have been asking is how to implement a "flick" gesture. If you're not going to roll your own subclass (I may bite that off in a bit), you use a UIPanGestureRecognizer, as @Lyndsey's answer indicates. What I was looking for after that (in the comments) was how to do the flick part, where the momentum of the flick contributes to the decision of whether to carry the motion of the flick through or snap back to the original presentation.
UIScrollView has behavior like that, and it's tempting to mine its implementation for details on how to decelerate the momentum in a consistent way, but alas the decelerationRate supplied by UIScrollView is a "per iteration" value (according to some). I beat my head against how to properly apply the default value of 0.998 to the end velocity of my pan.
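For what it's worth, one interpretation I've seen (I can't vouch that this is exactly what UIScrollView does) treats decelerationRate as a per-millisecond multiplier on the velocity, which yields a closed-form projection of how far the content will coast:

// Sketch under the assumption that decelerationRate multiplies the velocity
// once per millisecond; velocity is in points/second, result is in points.
static CGFloat ProjectedDistance(CGFloat velocity, CGFloat decelerationRate)
{
    // Geometric series: (v/1000) * (d + d^2 + d^3 + ...) = (v/1000) * d / (1 - d)
    return (velocity / 1000.0) * decelerationRate / (1.0 - decelerationRate);
}

With the default rate of 0.998, a release velocity of 1000 points/second projects to roughly 500 points of travel.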
In the end, I used code pulled from sites about "flick" computation and did something like this in my gesture handler:
...
else if (pan.state == UIGestureRecognizerStateEnded) {
    CGFloat v = [pan velocityInView:self.view].x;
    CGFloat a = -4000.0; // deceleration of 4000 points/s^2, recommended on gamedev
    CGFloat decelDisplacement = -(v * v) / (2 * a); // physics 101
    // how far have we come plus how far will momentum carry us?
    CGFloat totalDisplacement = ABS(translation) + decelDisplacement;
    // if that is (or will be) half way across our view, finish the transition
    if (totalDisplacement >= self.view.bounds.size.width / 2) {
        // how much time would we need to carry the remainder across the view
        // at the current velocity, given the existing displacement? (capped)
        CGFloat travelTime = MIN(0.4, (self.view.bounds.size.width - ABS(translation)) * 2 / ABS(v));
        [UIView animateWithDuration:travelTime delay:0.0 options:UIViewAnimationOptionCurveEaseOut animations:^{
            // target/end animation positions
        } completion:^(BOOL finished) {
            if (finished) {
                // any final state change
            }
        }];
    }
    else { // put everything back the way it was
        ...
    }
}
Yes, use a UIPanGestureRecognizer if you want the specific speed, angle, changes, etc. of the "swipe" to trigger your animations. A UISwipeGestureRecognizer is indeed a single discrete gesture; similar to a UITapGestureRecognizer, it triggers a single action message upon recognition.
As in physics, the UIPanGestureRecognizer's "velocity" indicates both the speed and direction of the pan gesture. Here are the docs for the velocityInView: method, which will help you calculate the horizontal and vertical components of the changing pan gesture in points per second.
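For example (a minimal sketch; the 500 points/second threshold is arbitrary), you could read the velocity when the pan ends and decide whether to treat it as a flick:

- (void)didPan:(UIPanGestureRecognizer *)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateEnded) {
        CGPoint v = [recognizer velocityInView:self.view];  // points per second
        CGFloat speed = hypot(v.x, v.y);                    // magnitude of the velocity vector
        CGFloat angle = atan2(v.y, v.x);                    // direction in radians
        BOOL isFlick = speed > 500.0;                       // arbitrary threshold
        NSLog(@"speed %.0f pt/s, angle %.2f rad, flick: %d", speed, angle, isFlick);
    }
}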

Rotation and Setting center of UIView in animation block doesn't work sometimes

What I want to do is rotate and bounce my UIView simultaneously. So currently I am using this code in my UIView animation block.
[view setTransform:CGAffineTransformRotate(CGAffineTransformIdentity, -M_PI*2)];
CGPoint center = CGPointMake( view.center.x - x , view.center.y - y);
[view setCenter:center];
With this, the rotation happens fine, but the bounce does not happen, i.e., no movement occurs at all. I need guidance on what I am doing wrong here or whether I am missing something.
Use the transform to do both things by applying a rotation and a translation (be careful of the order you apply them in). When you apply a transform to a view you shouldn't then try to change its frame (or center).
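For example, something along these lines inside the animation block (a sketch only; the angle and the dx/dy offsets are placeholders for your own values):

[UIView animateWithDuration:0.5 animations:^{
    // One transform that rotates the view and then offsets it by (dx, dy).
    // Because the rotation is composed first, the offset is expressed in the
    // superview's (unrotated) coordinate space.
    CGAffineTransform t = CGAffineTransformMakeTranslation(dx, dy);
    t = CGAffineTransformRotate(t, -M_PI_2);
    view.transform = t;
    // With a non-identity transform in place, avoid touching view.frame afterwards.
}];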

How do I map the x co-ordinate of a pan gesture to the rotation of cube?

I'm developing an iPhone app where the main views are presented to the user on the surface of a cube. Users switch views by rotating the cube with a pan gesture.
To achieve this I am using the GKLCubeController class from this GitHub project.
In terms of adding views to a cube and rotating, it works fine. However the angular rotation of the cube doesn't map correctly to the current x position of the finger as it pans across the screen.
The problem is that the cube rotation lags behind the finger movement by about ½ second, making the cube feel ‘heavy’, as illustrated in this short screencast.
The code handling the rotation is shown below:
-(void)panHandler:(UIPanGestureRecognizer *)panner {
    CGPoint translatedPoint = [panner translationInView:self.view.window];
    CGFloat halfWidth = self.view.bounds.size.width / 2.0;

    // save our starting points
    if ([panner state] == UIGestureRecognizerStateBegan) {
        startingX = translatedPoint.x;
        if (!transformLayer) {
            transformLayer = [[CATransformLayer alloc] init];
            transformLayer.frame = self.view.layer.bounds;
            for (UIView *viewToTranslate in views) {
                [viewToTranslate removeFromSuperview];
                [transformLayer addSublayer:viewToTranslate.layer];
            }
            // add in this new layer
            [self.view.layer addSublayer:transformLayer];
        }
    } else if ([panner state] == UIGestureRecognizerStateEnded) {
        ...
    } else {
        // instantly adjust our transformation layer
        CATransform3D transform = CATransform3DIdentity;
        transform.m34 = kPerspective;
        double percentageOfWidth = (translatedPoint.x - startingX) / self.view.frame.size.width;
        transform = CATransform3DTranslate(transform, 0, 0, -halfWidth);
        double adjustmentAngle = percentageOfWidth * M_PI_2 + startingAngle;
        transform = CATransform3DRotate(transform, adjustmentAngle, 0, 1, 0);
        transform = CATransform3DTranslate(transform, 0, 0, halfWidth);
        transformLayer.transform = transform;
        finishingAngle = adjustmentAngle;
    }
}
I have a suspicion the problem has something to do with the conversion of the CGPoint.x returned by UIPanGestureRecognizer's translationInView: into a rotation angle. Can anyone confirm whether this is the case, and suggest what the correct maths should be for mapping the touch position x to the rotation of a cube, such that the cube edge tracks the finger motion as it pans across the screen?
There are two issues here:
The major performance issue is the way this class performs the transform of the sides of the cube. It gives each side of the cube a complicated transform, and then, as you drag the cube around, it takes the relevant sides of the cube, adds them to a CATransformLayer, and performs a complicated transform on that layer (thus, when you look at the individual sides of the cube, you're doing a transform of a transform).
I pulled out that CATransformLayer logic, and updated the transform for the individual sides, and it was dramatically more responsive.
By the way, you might still want to employ something like this CATransformLayer logic when you animate letting go of the rotated cube, as that's an excellent way of synchronizing the animation of the individual sides of the cube (otherwise you get some separation between the sides of the cube during the animation). But while dragging, there's too much of a performance hit.
As you continue to refine this, there may be other optimizations that can be done, but my testing suggests that getting rid of the transform-of-a-transform made a huge difference to performance.
And, by the way, make sure to test this on a device, not the simulator, as the simulator's graphics performance is very different from that of the device.
A minor factor that might contribute a slight initial delay in responsiveness may be the inherent delay in UIPanGestureRecognizer (which looks for a certain amount of movement before recognizing the gesture as a pan, so that other gestures such as taps and the like can trigger if appropriate). It's a modest delay and a very small part of your performance problem, but for the quickest of response times, you might not want to use the UIPanGestureRecognizer. Either subclass your own, or use a UILongPressGestureRecognizer with a minimumPressDuration of 0.0, and you can get instantaneous response to the gesture.
You'll see this respond more quickly to movement (but it's also a gesture that doesn't play well with others: if you have tap gestures or the like inside the view, they won't be triggered).
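A minimal sketch of that substitution (handleDrag: is a hypothetical handler name, and startingX is assumed to be the same ivar the existing panHandler: uses):

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Fires on touch-down instead of waiting for the pan recognizer's movement threshold.
    UILongPressGestureRecognizer *press =
        [[UILongPressGestureRecognizer alloc] initWithTarget:self
                                                      action:@selector(handleDrag:)];
    press.minimumPressDuration = 0.0;       // begin immediately, no waiting
    press.allowableMovement = CGFLOAT_MAX;  // don't fail just because the finger moves
    [self.view addGestureRecognizer:press];
}

- (void)handleDrag:(UILongPressGestureRecognizer *)gesture
{
    // A long-press recognizer has no translationInView:, so track the touch
    // location yourself and compute deltas between callbacks.
    CGPoint location = [gesture locationInView:self.view];
    if (gesture.state == UIGestureRecognizerStateBegan) {
        startingX = location.x;             // same bookkeeping as panHandler:
    } else if (gesture.state == UIGestureRecognizerStateChanged) {
        // rotate the cube using (location.x - startingX), as panHandler: does
    }
}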

Panning UIView after RotationGesture Causes view to collapse

I'm going to try to describe with words something that might only be describable with video.
I have created a simple iOS app with a storyboard containing a single image view. I have added two gesture recognizers: a UIPanGestureRecognizer and a UIRotationGestureRecognizer along with their corresponding IBActions.
When I first start the application in the simulator, the image view pans correctly. The image view also rotates correctly. After a rotation, however, any subsequent pan fails. When I try to pan after a rotation, regardless of the direction of the pan, the image rapidly scales to zero and disappears, i.e., it collapses or implodes to a point.
The gesture recognizers are created using the following code. myImageView is set up as an IBOutlet UIImageView.
UIPanGestureRecognizer *panRec = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(processPan:)];
[myImageView addGestureRecognizer:panRec];

UIRotationGestureRecognizer *rotRec = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(processRotation:)];
[myImageView addGestureRecognizer:rotRec];
I've written the associated actions as best I know how. They are basically slight modifications of the methods I found in the iOS documentation. These are shown below.
-(IBAction)processPan:(UIPanGestureRecognizer *)sender
{
    if (sender.state == UIGestureRecognizerStateChanged)
    {
        CGPoint translation = [sender translationInView:self.view];
        CGRect newFrame = myImageView.frame;
        newFrame.origin.x += translation.x;
        newFrame.origin.y += translation.y;
        myImageView.frame = newFrame;
        [sender setTranslation:CGPointMake(0, 0) inView:self.view];
    }
}

-(IBAction)processRotation:(UIRotationGestureRecognizer *)sender
{
    if (sender.state == UIGestureRecognizerStateChanged)
    {
        myImageView.transform = CGAffineTransformRotate(myImageView.transform, sender.rotation);
        [sender setRotation:0];
    }
}
So what am I missing? I am new at this, so hopefully my ignorance will be tolerated.
I am running Xcode version 4.2.1 on OS X version 10.7.3 on a MacBook if that helps. Thank you so much for taking the time to read my question. Stack Overflow is an unbelievable resource!
-Dave
Well, I don't know if I've come up with a solution or if I've come up with a kludge. Basically, the pan code wasn't working for me. Any time the view was rotated or scaled, the panning code would seriously distort or collapse the view being translated. I stared at transform matrices and frame coordinate systems until I just about went blind.
The translation code I listed in my first post was basically copied from Listing 3-2, "Handling pinch, pan, and double-tap gestures" from the Gesture Recognizers section out of Apple's Event Handling Guide for iOS, so I figured it would do the trick for me. Well, I ended up writing my own code for it using the UIImageView center and not messing with the frame at all. Here is what worked for me.
CGPoint translation = [sender translationInView:self.superview];
CGPoint newCenter = CGPointMake(self.myImageView.center.x + translation.x, self.myImageView.center.y + translation.y);
[self.myImageView setCenter:newCenter];
[sender setTranslation:CGPointMake(0, 0) inView:self.superview];
I used the superview as a reference for the translation in case it was rotated. It seems to work now.
This effort probably reveals that my understanding of frames isn't correct. If someone can tell me how to correct my understanding, I'd appreciate it.
-Dave

iOS drag, flick, or fling a UIView

I was wondering how I would go about flicking or flinging a UIView, as in the http://www.cardflick.co/ or https://squareup.com/cardcase demo videos.
I know how to drag items, but how do you give them gravity/inertia? Is this handled by iOS?
The kind of effects that you are describing, simulating a kind of gravity/inertia, can be produced by means of ease-out (start fast, end slow) and ease-in (start slow, end fast) timing functions.
Support for easing out and easing in is available in iOS, so I don't think you need any external library or much hard work (although, as you can imagine, your effect will need a lot of fine-tuning).
This will animate the translation of an object to a given position with an ease-out effect:
[UIView animateWithDuration:2.0
                      delay:0
                    options:UIViewAnimationOptionCurveEaseOut
                 animations:^{
                     self.image.center = finalPosition;
                 }
                 completion:NULL];
If you handle your gesture through a UIPanGestureRecognizer, the gesture recognizer will provide you with two important pieces of information for calculating the final position: velocity and translation, which represent respectively how fast and how far the object was moved.
You can install a pan gesture recognizer in your view (this would be the object you would like to animate, I guess) like this:
UIPanGestureRecognizer *panGestureRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanFrom:)];
[yourView addGestureRecognizer:panGestureRecognizer];
[panGestureRecognizer release];
And then handle the animation in your handler:
- (void)handlePanFrom:(UIPanGestureRecognizer *)recognizer {
    CGPoint translation = [recognizer translationInView:recognizer.view];
    CGPoint velocity = [recognizer velocityInView:recognizer.view];

    if (recognizer.state == UIGestureRecognizerStateBegan) {
    } else if (recognizer.state == UIGestureRecognizerStateChanged) {
        <track the movement>
    } else if (recognizer.state == UIGestureRecognizerStateEnded) {
        <animate to final position>
    }
}
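In the UIGestureRecognizerStateEnded branch, one way to fill in the final animation (a sketch only; finalPosition and the 0.4-second cap are placeholders you would compute from the translation and your layout) is to derive the duration from the remaining distance and the release velocity:

// Inside the UIGestureRecognizerStateEnded branch:
CGFloat remaining = fabs(finalPosition.x - recognizer.view.center.x);
NSTimeInterval duration = MIN(0.4, remaining / MAX(fabs(velocity.x), 1.0));

[UIView animateWithDuration:duration
                      delay:0
                    options:UIViewAnimationOptionCurveEaseOut
                 animations:^{
                     recognizer.view.center = finalPosition;
                 }
                 completion:NULL];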
iOS doesn't have any built-in API for gravity/inertia. Your best options are:
Simulate the effect with animations: if you are flicking an object toward a target (it doesn't need to bounce off things), you could probably get a pretty good result just by sending it to the target and tweaking the animation timing curve.
Use a physics library. Chipmunk is a high quality one and there is a lot of iOS support available. You set up objects, assign them weights, describe their shapes, and then the library code will give you updated (x,y) coordinates so you can update your objects on screen.
There's nothing built into iOS that does this for you, but you could easily implement it yourself. I'd start by creating a gesture recognizer that recognizes the "flick" gesture that you want to use, possibly something that looks for constant direction and increase in velocity. When you recognize such a gesture, you just have to animate the affected view's position appropriately.
