I am fairly new to iOS and I am trying to figure out how to use a pinch gesture to zoom with inertia and overshoot (I am not sure "overshoot" is the right word here; in German it would be called "Überschwingen").
Basically, it should have a min and max scale (in my case 1.0 to 4.0) within which you can zoom. When the gesture finishes, it should take the gesture's velocity and ease out with an animation, letting the view overshoot or undershoot those limits and then spring back to the min or max, as if under tension.
I have the gesture recognizer running and have managed to make it respect my minimum and maximum scale (using examples from Stack Overflow). This is what I have so far:
- (void)handle_pinch:(UIPinchGestureRecognizer *)recognizer
{
    if ([recognizer state] == UIGestureRecognizerStateBegan) {
        previousScale = 1.0;
        lastPoint = [recognizer locationInView:[recognizer view]];
    }
    if ([recognizer state] == UIGestureRecognizerStateChanged) {
        CGFloat currentScale = [[[recognizer view].layer valueForKeyPath:@"transform.scale"] floatValue];
        // Constants to adjust the max/min values of zoom
        const CGFloat kMaxScale = 4.0;
        const CGFloat kMinScale = 1.0;
        CGFloat newScale = 1 - (previousScale - [recognizer scale]); // incremental scale factor since the last callback
        newScale = MIN(newScale, kMaxScale / currentScale);
        newScale = MAX(newScale, kMinScale / currentScale);
        scale = newScale;
        CGAffineTransform transform = CGAffineTransformScale([[recognizer view] transform], newScale, newScale);
        [recognizer view].transform = transform;

        CGPoint point = [recognizer locationInView:[recognizer view]];
        CGAffineTransform transformTranslate = CGAffineTransformTranslate([[recognizer view] transform], point.x - lastPoint.x, point.y - lastPoint.y);
        [recognizer view].transform = transformTranslate;

        NSLog(@"Transformed");
    }
}
But I do not know how I can add the animation here. Thanks for any help!
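For reference, one way to get the spring-back-with-overshoot effect is to animate in the UIGestureRecognizerStateEnded branch of the same handler. This is only a sketch: it assumes the min/max clamping above is relaxed while the gesture is active (otherwise there is nothing to spring back from), treats the duration, damping and velocity values as placeholders to tune, and only handles the scale, not the pan translation:

if ([recognizer state] == UIGestureRecognizerStateEnded) {
    const CGFloat kMaxScale = 4.0;
    const CGFloat kMinScale = 1.0;
    UIView *view = [recognizer view];
    CGFloat currentScale = [[view.layer valueForKeyPath:@"transform.scale"] floatValue];
    // Snap the scale back into the allowed range; the spring itself provides
    // the overshoot ("Überschwingen") on the way there.
    CGFloat targetScale = MAX(kMinScale, MIN(kMaxScale, currentScale));
    [UIView animateWithDuration:0.4
                          delay:0
         usingSpringWithDamping:0.5                        // < 1.0 lets the animation overshoot
          initialSpringVelocity:fabs([recognizer velocity]) // rough mapping of the pinch velocity, tune to taste
                        options:UIViewAnimationOptionAllowUserInteraction
                     animations:^{
                         // Sketch only: this resets any translation applied during the gesture.
                         view.transform = CGAffineTransformMakeScale(targetScale, targetScale);
                     }
                     completion:nil];
}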
You should use a UIScrollView to achieve the pinch-to-zoom effect, since UIScrollView already has zooming and the bounce animation built in. Just add your UIView inside a UIScrollView.
Here is a great tutorial on UIScrollView. It uses a UIImageView, but a UIView will behave in a similar way.
https://www.raywenderlich.com/122139/uiscrollview-tutorial
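For illustration, a minimal sketch of that setup (scrollView and zoomableView are placeholder property names, not from the question; bouncesZoom gives the rubber-band effect past the zoom limits):

- (void)viewDidLoad {
    [super viewDidLoad];
    // Assumed outlets/properties: scrollView and zoomableView (placeholder names).
    self.scrollView.minimumZoomScale = 1.0;
    self.scrollView.maximumZoomScale = 4.0;
    self.scrollView.bouncesZoom = YES;   // allows over-/under-zoom past the limits with a spring back
    self.scrollView.delegate = self;     // the controller adopts UIScrollViewDelegate
    [self.scrollView addSubview:self.zoomableView];
}

- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView {
    return self.zoomableView;
}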
Related
I have a UIImageView (and a UITextView) and I am changing the height and width of both using plus and minus buttons. However, I want to do it like in most programs, where a box appears around my views and the user can drag a corner to resize. Only one corner needs to be draggable; the opposite corner stays fixed. How do you do this?
Another way is with a gesture recognizer. In one task I resized an image like this:
- (void)resizeImage:(UIPinchGestureRecognizer *)recognizer
{
    if ([recognizer state] == UIGestureRecognizerStateBegan)
        previousScale = [recognizer scale];

    UIView *viewToResize = recognizer.view;
    if ([recognizer state] == UIGestureRecognizerStateChanged)
    {
        CGFloat currentScale = [[viewToResize.layer valueForKeyPath:@"transform.scale"] floatValue];
        CGFloat newScale = 1 - (previousScale - [recognizer scale]);
        newScale = MIN(newScale, MAX_SCALE / currentScale);
        newScale = MAX(newScale, MIN_SCALE / currentScale);
        CGAffineTransform transform = CGAffineTransformScale([viewToResize transform], newScale, newScale);
        viewToResize.transform = transform;
        previousScale = [recognizer scale];
    }
}
I've never done this before, but I think you would have to override the touchesBegan and touchesMoved methods of the view. In touchesBegan, make sure you are touching the correct view (the image view or the UITextView) and set a boolean there stating imTouchingOneOrTheOther. Then, when touchesMoved fires, you can adjust the size of the frame accordingly. I would try to adjust the frame first with UIView block-based animations, and if that doesn't look right I would play around with Core Animation. Let me know how it works out.
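A rough sketch of that idea, assuming a resizableView property, an imTouchingOneOrTheOther BOOL property, and a fixed top-left corner; the 30-point corner hit area and 40-point minimum size are arbitrary choices:

// In the view controller that contains the resizable view.
- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    CGPoint location = [[touches anyObject] locationInView:self.view];
    // Only start resizing when the touch lands near the bottom-right corner of the view.
    CGPoint corner = CGPointMake(CGRectGetMaxX(self.resizableView.frame),
                                 CGRectGetMaxY(self.resizableView.frame));
    self.imTouchingOneOrTheOther = (fabs(location.x - corner.x) < 30.0 &&
                                    fabs(location.y - corner.y) < 30.0);
}

- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    if (!self.imTouchingOneOrTheOther) return;
    CGPoint location = [[touches anyObject] locationInView:self.view];
    CGRect frame = self.resizableView.frame;
    // Keep the top-left corner fixed and move the bottom-right corner with the finger.
    frame.size.width  = MAX(40.0, location.x - frame.origin.x);
    frame.size.height = MAX(40.0, location.y - frame.origin.y);
    self.resizableView.frame = frame;
}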
I am trying to zoom a quad in my iOS application. It needs to zoom not based on the center of the quad, but based on the centroid of the pinch.
I am able to do this correctly - however only for the first pinch gesture. On subsequent pinch gestures, it works, but it drifts a little bit and doesn't quite seem accurate. I am unable to figure out what to do.
There are a few SO questions around this, and I've been through most, if not all of them. None of them accurately address my problem.
Also note that I'm scaling and translating a quad (which is rendered into a GLKView), and not the view itself. Most solutions I've seen deal with transforming the views directly.
Here's the code for the pinch gesture and handling:
First in viewDidLoad:
UIPinchGestureRecognizer *pinchRecognizer = [[UIPinchGestureRecognizer alloc]
    initWithTarget:self action:@selector(respondToPinchGesture:)];
pinchRecognizer.cancelsTouchesInView = YES;
pinchRecognizer.delaysTouchesEnded = NO;
[glView addGestureRecognizer:pinchRecognizer];
Where glView is a GLKView object.
And the handler:
- (IBAction)respondToPinchGesture:(UIPinchGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateEnded || [recognizer numberOfTouches] < 2) return;

    if (recognizer.state == UIGestureRecognizerStateBegan) {
        point = [recognizer locationInView:glView];
        point.x *= glView.contentScaleFactor;
        point.y *= glView.contentScaleFactor;
        point.y = height - point.y;
        anchor = GLKVector3Make(point.x, point.y, 0);
        lastScale = 1.0;
    }

    if (fabs(recognizer.scale - lastScale) > 0.01) {
        GLfloat scale = 1.0 - (lastScale - recognizer.scale);
        lastScale = recognizer.scale;

        new_anchor_point = anchor;
        new_anchor_point = GLKVector3MultiplyScalar(new_anchor_point, scale);
        GLKVector3 translate = GLKVector3Subtract(anchor, new_anchor_point);

        path.transform = GLKMatrix4TranslateWithVector3(path.transform, translate);
        path.transform = GLKMatrix4Scale(path.transform, scale, scale, 0);

        cumulative_translate = GLKVector3Add(cumulative_translate, translate);
    }
}
Any pointers appreciated. I am 2 days into this and even a vague suggestion might be helpful.
You have to:
1. remember the previous transformation matrix upon UIGestureRecognizerStateBegan, and
2. construct your new transformation matrix for the pinch zoom as if the view or object had not been transformed before.
Then concatenate the two matrices together. This will be your final matrix for transforming your object or view.
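A sketch of that recipe using the questioner's GLKit names (path.transform, anchor); savedTransform is an assumed ivar, and anchor is assumed to be the pinch centroid converted into the quad's own (model) coordinates:

// On UIGestureRecognizerStateBegan: remember the transform as it was before this pinch.
savedTransform = path.transform;

// On UIGestureRecognizerStateChanged: build a fresh pinch transform around the anchor,
// as if the quad had never been transformed, then concatenate it with the saved one.
GLKMatrix4 pinch = GLKMatrix4Identity;
pinch = GLKMatrix4TranslateWithVector3(pinch, anchor);
pinch = GLKMatrix4Scale(pinch, recognizer.scale, recognizer.scale, 1.0);
pinch = GLKMatrix4TranslateWithVector3(pinch, GLKVector3Negate(anchor));
path.transform = GLKMatrix4Multiply(savedTransform, pinch);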
I managed to solve this using:
- (GLKVector3)get_touch_point_on_view:(UIGestureRecognizer *)recognizer {
    CGRect bounds = [glView bounds];
    CGPoint point = [recognizer locationInView:glView];
    point.y = bounds.size.height - point.y;
    return GLKVector3Make((point.x * glView.contentScaleFactor - total_translation.x) / total_scale,
                          (point.y * glView.contentScaleFactor - total_translation.y) / total_scale,
                          0);
}

- (void)respondToPinchGesture:(UIPinchGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        lastScale = 1.0;
    }

    [self get_touch_point_on_view:recognizer];

    if (fabs(recognizer.scale - lastScale) > 0.01) {
        GLfloat scale = 1.0 - (lastScale - recognizer.scale);
        lastScale = recognizer.scale;
        total_scale *= scale;

        path.transform = GLKMatrix4TranslateWithVector3(path.transform, centroid);
        path.transform = GLKMatrix4Scale(path.transform, scale, scale, 0);
        path.transform = GLKMatrix4TranslateWithVector3(path.transform, GLKVector3Negate(centroid));

        total_translation = [self get_total_translation];
    }
}
I'm getting images from a remote server and displaying them in a UIImageView, then adding a pinch gesture to this image view. But when I pinch the image it stretches, losing its original resolution and quality.
mmageView = [[UIImageView alloc] initWithFrame:CGRectMake(50, 50, 150, 150)];
[self.view addSubview:mmageView];

UIPinchGestureRecognizer *dbpinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(dbhandlePinch:)];
[mmageView addGestureRecognizer:dbpinchGesture];
UIPinchGesture:
- (void)dbhandlePinch:(UIPinchGestureRecognizer *)recognizer {
    if ([recognizer state] == UIGestureRecognizerStateBegan) {
        // Reset the last scale, necessary if there are multiple objects with different scales
        LastScale = [recognizer scale];
    }
    if ([recognizer state] == UIGestureRecognizerStateBegan ||
        [recognizer state] == UIGestureRecognizerStateChanged) {
        CGFloat currentScale = [[[recognizer view].layer valueForKeyPath:@"transform.scale"] floatValue];
        // Constants to adjust the max/min values of zoom
        // const CGFloat kMaxScale = 2.0;
        const CGFloat kMinScale = 0.8;
        CGFloat newScale = 1 - (LastScale - [recognizer scale]);
        // newScale = MIN(newScale, kMaxScale / currentScale);
        newScale = MAX(newScale, kMinScale / currentScale);
        CGAffineTransform transform = CGAffineTransformScale([[recognizer view] transform], newScale, newScale);
        [recognizer view].transform = transform;
        LastScale = [recognizer scale]; // Store the previous scale factor for the next pinch gesture call
    }
}
For pinch zoom, add your imageView to a scrollView and adopt UIScrollViewDelegate:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // For the pinch gesture
    _scrollView.minimumZoomScale = 0.5;
    _scrollView.maximumZoomScale = 6.0;
    _scrollView.contentSize = CGSizeMake(_imageView.frame.size.width, _imageView.frame.size.height);
    _scrollView.delegate = self;
}

- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return _imageView;
}
UIScrollView makes supporting pinch gestures for zooming easy. A more complete solution is in the Apple documentation.
I currently have a map (tile map) within a layer that I would like to pan/zoom using the following code:
- (void)pinchGestureUpdated:(UIPinchGestureRecognizer *)recognizer {
    if ([recognizer state] == UIGestureRecognizerStateBegan) {
        _lastScale = [recognizer scale];
        CGPoint touchLocation1 = [recognizer locationOfTouch:0 inView:recognizer.view];
        CGPoint touchLocation2 = [recognizer locationOfTouch:1 inView:recognizer.view];
        CGPoint centerGL = [[CCDirector sharedDirector] convertToGL:ccpMidpoint(touchLocation1, touchLocation2)];
        _pinchCenter = [self convertToNodeSpace:centerGL];
    }
    else if ([recognizer state] == UIGestureRecognizerStateChanged) {
        // NSLog(@"%d", recognizer.scale);
        CGFloat newDeltaScale = 1 - (_lastScale - [recognizer scale]);
        newDeltaScale = MIN(newDeltaScale, kMaxScale / self.scale);
        newDeltaScale = MAX(newDeltaScale, kMinScale / self.scale);
        CGFloat newScale = self.scale * newDeltaScale;
        //self.scale = newScale;
        [self scale:newScale atCenter:_pinchCenter];
        _lastScale = [recognizer scale];
    }
}
- (void)scale:(CGFloat)newScale atCenter:(CGPoint)center {
    CGPoint oldCenterPoint = ccp(center.x * self.scale, center.y * self.scale);
    // Set the scale.
    self.scale = newScale;
    // Get the new center point.
    CGPoint newCenterPoint = ccp(center.x * self.scale, center.y * self.scale);
    // Then calculate the delta.
    CGPoint centerPointDelta = ccpSub(oldCenterPoint, newCenterPoint);
    // Now adjust your layer by the delta.
    self.position = ccpAdd(self.position, centerPointDelta);
}
My issue is that the zoom is not taking effect at the center of the pinch, even though I am trying to adjust the position at the same time as I zoom via the scale:atCenter: method above. Is there any reason this might not be happening properly? Also, how do I convert the center location of the pinch into the coordinate system of my scene/layer?
Everything was actually fine in my approach. The problem I was having, though, is that the layer's anchor point was different from the one I defined for the map, which was introducing an offset during scaling. I had to set both anchor points to ccp(0,0).
The conversion from the screen coordinates of the pinch gesture's center to the layer is correct and can be achieved with the following instructions when using UIKit gesture recognizers:
CGPoint centerGL = [[CCDirector sharedDirector] convertToGL: ccpMidpoint(touchLocation1, touchLocation2)];
_pinchCenter = [self convertToNodeSpace:centerGL];
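In code, the anchor-point fix described above amounts to something like this (tileMap is an assumed reference to the tile map node; the layer itself is self):

// Make the layer and the tile map scale around the same origin.
self.anchorPoint = ccp(0, 0);
tileMap.anchorPoint = ccp(0, 0);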
First of all, a note on checking the state: UIGestureRecognizerState is a plain enum, not a bitmask, so testing it with == (as in [recognizer state] == UIGestureRecognizerStateBegan) is correct; you do not need to mask it with &.
The center location of your pinch is going to be in screen coordinates. To convert that into your own coordinate system, you need to figure out what the bounds on the device's screen are of the part of your scene/layer that is shown at the time the gesture starts. That will be something like 10,10 x 200,200, representing the pixel grid of the screen. Then you have to figure out, in the coordinate system of your own scene/layer, what 10,10 maps to and what 200,200 maps to. From there you can derive a factor to apply to the pinch gesture's screen coordinates that translates them into scene/layer coordinates.
What you're trying to do is tricky, because as you scale the scene/layer, you're centering that scaling around a point that is not the center of the view. If you look through some of Apple's sample code in one of the map-related apps, you can probably find examples of methods that do this kind of pinch zooming.
I hope this helps.
I'm using UIPinchGestureRecognizer in my app to zoom in on a view (and yes, there's a reason I'm not using UIScrollView). When I pinch outwards with my fingers, the view zooms in as expected, and if I then reverse the pinch without taking my fingers off the screen, it also zooms out correctly. However, if I initiate the zoom by pinching inwards, the rate at which the view zooms is dramatically slower. I'm guessing this is because of how UIPinchGestureRecognizer works: its scale is >1 when pinching outwards and <1 when pinching inwards. Unfortunately, I do not know how to accurately account for this in my code.
- (IBAction)didDetectPinchGesture:(id)sender {
    UIPinchGestureRecognizer *gestureRecognizer = (UIPinchGestureRecognizer *)sender;
    CGFloat scale = [gestureRecognizer scale];

    switch ([gestureRecognizer state]) {
        case UIGestureRecognizerStateBegan:
            _lastScale = [gestureRecognizer scale];
            break;
        case UIGestureRecognizerStateChanged: {
            CGFloat currentScale = [[self.imageView.layer valueForKeyPath:@"transform.scale"] floatValue];
            // Constants to adjust the max/min values of zoom
            const CGFloat kMaxScale = 5.0;
            const CGFloat kMinScale = 1.0;
            CGFloat newScale = 1 - (_lastScale - scale); // incremental scale change since the last callback
            newScale = MIN(newScale, kMaxScale / currentScale);
            newScale = MAX(newScale, kMinScale / currentScale);
            NSLog(@"%f", newScale);
            CGAffineTransform transform = CGAffineTransformScale([self.imageView transform], newScale, newScale);
            self.imageView.transform = transform;
            _lastScale = scale; // Store the previous scale factor for the next pinch gesture call
            break;
        }
        default:
            _lastScale = [gestureRecognizer scale];
            break;
    }
}
A very simple solution to this is to reset the gestureRecognizer scale back to 1 when you're finished:
...
        default:
            _lastScale = [gestureRecognizer scale];
            // Add this:
            [gestureRecognizer setScale:1];
            break;
    }
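As an alternative sketch (not part of the answer above): apply the recognizer's scale as a relative delta in the Changed case and reset it every time, which avoids tracking _lastScale at all and treats pinching in and out symmetrically; the min/max clamping is omitted here for brevity:

case UIGestureRecognizerStateChanged: {
    // Apply the scale reported since the last callback, then reset it so the
    // next callback delivers a fresh relative delta.
    self.imageView.transform = CGAffineTransformScale(self.imageView.transform,
                                                      gestureRecognizer.scale,
                                                      gestureRecognizer.scale);
    [gestureRecognizer setScale:1.0];
    break;
}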