UIPinchGestureRecognizer in UIView is not working properly - iOS

I am trying to zoom in and out on a UIView using UIPinchGestureRecognizer. But when I pinch on my trackpad, the gesture is not recognised and control never reaches my twoFingerPinch: method. I am using the following code.
- (void)viewDidLoad {
    //.......
    UIPinchGestureRecognizer *twoFingerPinch = [[UIPinchGestureRecognizer alloc]
                                                initWithTarget:self
                                                        action:@selector(twoFingerPinch:)];
    [myview addGestureRecognizer:twoFingerPinch];
    //.....
}

- (void)twoFingerPinch:(UIPinchGestureRecognizer *)recognizer
{
    NSLog(@"Pinch scale: %f", recognizer.scale);
    if (recognizer.scale > 1.0f && recognizer.scale < 2.5f) {
        CGAffineTransform transform = CGAffineTransformMakeScale(recognizer.scale, recognizer.scale);
        myview.transform = transform;
    }
}
Why is the pinch from the trackpad not being recognised? Is there another way to achieve this?

In the iOS Simulator, hold down the Option key; you will get two gray spots which you can move using the mouse or trackpad to simulate a two-finger pinch. In older versions you need to press Shift+Option.
For more details, check this.

Make sure that userInteractionEnabled is set to YES for your myview:
myview.userInteractionEnabled = YES;
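For completeness, a minimal sketch of the original viewDidLoad with that flag set (assuming myview is an existing outlet or ivar) might look like this:

- (void)viewDidLoad {
    [super viewDidLoad];

    // A gesture recognizer attached to a view only fires if that view accepts touches.
    myview.userInteractionEnabled = YES;

    UIPinchGestureRecognizer *twoFingerPinch =
        [[UIPinchGestureRecognizer alloc] initWithTarget:self
                                                  action:@selector(twoFingerPinch:)];
    [myview addGestureRecognizer:twoFingerPinch];
}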

Related

UIButton on camera preview layer jumps back to its initial location after being dragged (UIPanGestureRecognizer)

I have tried many things but could not resolve this issue. I have buttons on top of an AVCaptureVideoPreviewLayer, defined as follows:
AVCaptureVideoPreviewLayer *previewLayer = (AVCaptureVideoPreviewLayer *)self.previewView.layer;
For the buttons, I defined a gesture recognizer for dragging:
UIPanGestureRecognizer *focPanRecognizer;
focPanRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(wasDragged:)];
[focPanRecognizer setMinimumNumberOfTouches:1];
[focPanRecognizer setMaximumNumberOfTouches:1];
focPanRecognizer.cancelsTouchesInView = YES;
[self.focusButton addGestureRecognizer:focPanRecognizer];
[self.focusButton setUserInteractionEnabled:YES];
The dragging is handled by the following method:
- (void)wasDragged:(UIPanGestureRecognizer *)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateChanged ||
        recognizer.state == UIGestureRecognizerStateEnded) {
        UIButton *draggedButton = (UIButton *)recognizer.view;
        CGPoint translation = [recognizer translationInView:self.previewView];
        CGRect newButtonFrame = draggedButton.frame;
        newButtonFrame.origin.x += translation.x;
        newButtonFrame.origin.y += translation.y;
        draggedButton.frame = newButtonFrame;
        [recognizer setTranslation:CGPointZero inView:self.previewView];
    }
}
When I drag the button, it drags properly if I close the camera lens. If the camera lens is open, the button immediately jumps back to its original location.
What could be the problem? Help is much appreciated.
The problem was the Auto Layout constraints used in the layout. Updating the constraints instead of the frame fixed it, as explained in jkanter's answer at iOS >> Dragged View is Jumping Back to Original Position >> Auto Layout Combined with UIPanGestureRecognizer Issue.
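A rough sketch of that approach, assuming the button's position is driven by leading/top constraints exposed as outlets (focusButtonLeadingConstraint and focusButtonTopConstraint are hypothetical names): the pan handler adds the translation to the constraint constants rather than mutating the frame, so the next layout pass keeps the button where it was dragged instead of snapping it back.

- (void)wasDragged:(UIPanGestureRecognizer *)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateChanged ||
        recognizer.state == UIGestureRecognizerStateEnded) {
        CGPoint translation = [recognizer translationInView:self.previewView];

        // Hypothetical NSLayoutConstraint outlets that position focusButton.
        // Moving the button by updating the constants that Auto Layout owns
        // prevents the preview layer's layout pass from resetting the frame.
        self.focusButtonLeadingConstraint.constant += translation.x;
        self.focusButtonTopConstraint.constant += translation.y;

        [recognizer setTranslation:CGPointZero inView:self.previewView];
    }
}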

Crop UIImageView after it has been manipulated through gesture recognizers

I'm having issues figuring this out! All of the examples I have seen have to do with a scrollView and I am not using one. I need to crop an image within a predetermined CGRect area after the image has been manipulated through pinch, pan, and rotate. The code I have below crops the unmanipulated image in the upper left hand corner.
To clarify: pendantCanvasView is the container view and pendantImageView is a subview of it. pendantFrame is just a CGRect holding the coordinates of the rect within pendantImageView that I want to crop.
Can someone help me?
Here is the code I have so far:
- (void)addMoveImageToolbox {
    UIPinchGestureRecognizer *pinchRec = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];
    [self.pendantCanvasView addGestureRecognizer:pinchRec];

    UIRotationGestureRecognizer *rotateRec = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(handleRotate:)];
    [self.pendantCanvasView addGestureRecognizer:rotateRec];

    UIPanGestureRecognizer *panRec = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
    [self.pendantCanvasView addGestureRecognizer:panRec];
}
- (void)handlePinch:(UIPinchGestureRecognizer *)pinch {
    self.pendantImageView.transform = CGAffineTransformScale(self.pendantImageView.transform, pinch.scale, pinch.scale);
    self.zoomScale = pinch.scale;
    pinch.scale = 1;
}

- (void)handleRotate:(UIRotationGestureRecognizer *)rotate {
    self.pendantImageView.transform = CGAffineTransformRotate(self.pendantImageView.transform, rotate.rotation);
    rotate.rotation = 0;
}

- (void)handlePan:(UIPanGestureRecognizer *)pan {
    CGPoint translation = [pan translationInView:self.pendantCanvasView];
    self.pendantImageView.center = CGPointMake(self.pendantImageView.center.x + translation.x,
                                               self.pendantImageView.center.y + translation.y);
    [pan setTranslation:CGPointMake(0, 0) inView:self.pendantImageView];
}
Crop Method:
- (UIImage *)captureScreenInRect {
    UIGraphicsBeginImageContextWithOptions(pendantFrame.size, NO, [UIScreen mainScreen].scale);
    [self.pendantCanvasView drawViewHierarchyInRect:pendantFrame afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
I believe drawViewHierarchyInRect: re-renders the view hierarchy, so it may be losing your alterations to the image view.
You could try using snapshotViewAfterScreenUpdates: instead.
I'm not sure what the relationships between pendantCanvasView, pendantImageView, and pendantFrame are, but it may be possible to use the renderInContext: method of the layer property of one or the other to get the cropped image.
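As a rough sketch of the renderInContext: idea, assuming pendantFrame is expressed in pendantCanvasView's coordinate space, the context can be translated so only the desired rect ends up in the image; this is an illustration of the suggestion, not the asker's final code:

- (UIImage *)captureScreenInRect {
    UIGraphicsBeginImageContextWithOptions(pendantFrame.size, NO, [UIScreen mainScreen].scale);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Shift the context so the crop rect's origin lands at (0, 0), then render
    // the canvas layer, which should include the 2D transforms the pinch,
    // rotate, and pan handlers applied to pendantImageView.
    CGContextTranslateCTM(context, -pendantFrame.origin.x, -pendantFrame.origin.y);
    [self.pendantCanvasView.layer renderInContext:context];

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}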

Resetting the Zoom on a UIPageViewController

I am building up a simple application that is made up of a UITableViewController with languages and when a specific cell is clicked, a UIPageViewController is brought up to represent the images for that selected language. The user can scroll through the images and everything works as desired. The next step was to build a zooming capability into the UIPageViewController so the user could zoom into the images with a pinch gesture.
I have achieved this with the following code:
- (void)viewDidLoad
{
    [super viewDidLoad];

    self.leafletImages = [NSMutableArray arrayWithObjects:[[ImageModel alloc] initWithImageName:@"3facts-chinese-page1.jpg"], [[ImageModel alloc] initWithImageName:@"3facts-chinese-page2.jpg"], [[ImageModel alloc] initWithImageName:@"3facts-chinese-page3.jpg"], [[ImageModel alloc] initWithImageName:@"3facts-chinese-page4.jpg"], [[ImageModel alloc] initWithImageName:@"3facts-chinese-page5.jpg"], [[ImageModel alloc] initWithImageName:@"3facts-chinese-page6.jpg"], nil];

    // Lots of code for the building up of the UIPageViewController

    LeafletImageSizeViewController *imageViewController = [[LeafletImageSizeViewController alloc] init];
    imageViewController.model = [_modelArray objectAtIndex:0];
    NSArray *viewControllers = [NSArray arrayWithObject:imageViewController];
    [self.pageViewController setViewControllers:viewControllers
                                      direction:UIPageViewControllerNavigationDirectionForward
                                       animated:NO
                                     completion:nil];

    // Gesture
    UIPinchGestureRecognizer *pinchRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinchDetected:)];
    [self.view addGestureRecognizer:pinchRecognizer];
    pinchRecognizer.delegate = self;
}
The method that creates the image view and sets its size is:
- (void)useThreeFactsSize
{
    CGRect insetFrame;
    insetFrame = CGRectMake(310, 70, self.view.frame.size.width-615, self.view.frame.size.height-85);
    _imageView = [[UIImageView alloc] initWithFrame:insetFrame];
    [_imageView setAutoresizingMask:UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight];
    [_imageView setImage:[UIImage imageNamed:_model.imageName]];
    [[self view] addSubview:_imageView];
    [self.view setBackgroundColor:[UIColor clearColor]];
}
The pinchDetection method is:
- (void)pinchDetected:(UIPinchGestureRecognizer *)pinchRecognizer
{
    CGFloat scale = pinchRecognizer.scale;
    self.pageViewController.view.transform = CGAffineTransformScale(self.pageViewController.view.transform, scale, scale);
    pinchRecognizer.scale = 1.0;
}
Now, I can zoom into the images of the UIPageViewController without any issues and it works really well.
What I want to do however is two things:
Not allow the image to be zoomed out beyond the original scale
Create a double-tap gesture to bring the image back to its original scale
With feature 1, the user can zoom into the image, but can also completely zoom out of it, which shrinks the image and the UIPageViewController page indicators. There's no reason the user should be able to zoom out of the image, so I'd like to allow the user to zoom in to any scale, but not to zoom out beyond the original size of the image on screen in the UIPageViewController.
With feature 2, I'd like to implement a gesture to double tap the screen and have the zoomed image go back to its original scale (like the Photos.app).
Update
With reference to the answer, I have updated the question to reflect how I'm handling the images. With point 2 and the double-tap gesture, the following code almost works:
- (void)scrollViewTwoFingerTapped:(UITapGestureRecognizer *)recognizer
{
    NSLog(@"Double Tap");
    self.pageViewController.view.transform = CGAffineTransformIdentity;
}
What it currently does: if I zoom in with a pinch, pan around, and then double tap, it centres the image on the point where I tapped, so sometimes the borders are shown, rather than re-centring the image where it's supposed to be.
For point 1:
if (pinchRecognizer.scale > 1) {
    CGFloat scale = pinchRecognizer.scale;
    self.pageViewController.view.transform = CGAffineTransformScale(self.pageViewController.view.transform, scale, scale);
    pinchRecognizer.scale = 1.0;
}
If I use self.imageView, it doesn't work because it's nil, and even if I call into the class that sets the size, it's nil there as well.
I suspect I have a number of things wrong with my code!
For reference, I have panning working with:
- (void)panGestureDetected:(UIPanGestureRecognizer *)recognizer
{
    UIGestureRecognizerState state = [recognizer state];
    if (state == UIGestureRecognizerStateBegan || state == UIGestureRecognizerStateChanged)
    {
        CGPoint translation = [recognizer translationInView:recognizer.view];
        [recognizer.view setTransform:CGAffineTransformTranslate(recognizer.view.transform, translation.x, translation.y)];
        [recognizer setTranslation:CGPointZero inView:recognizer.view];
    }
    else if (state == UIGestureRecognizerStateEnded) {
        UIView *imageView = recognizer.view;
        UIView *container = imageView.superview;
        CGFloat targetX = CGRectGetMinX(imageView.frame);
        CGFloat targetY = CGRectGetMinY(imageView.frame);
        if (targetX > 0) {
            // targetX = 0;
        } else if (CGRectGetMaxX(imageView.frame) < CGRectGetWidth(container.bounds)) {
            targetX = CGRectGetWidth(container.bounds) - CGRectGetWidth(imageView.frame);
        }
        if (targetY > 0) {
            // targetY = 0;
        } else if (CGRectGetMaxY(imageView.frame) < CGRectGetHeight(container.bounds)) {
            // targetY = CGRectGetHeight(container.bounds) - CGRectGetHeight(imageView.frame);
        }
        // imageView.frame = CGRectMake(targetX, targetY, CGRectGetWidth(imageView.frame), CGRectGetHeight(imageView.frame));
        [UIView animateWithDuration:0.3 animations:^{
            imageView.frame = CGRectMake(targetX, targetY, CGRectGetWidth(imageView.frame), CGRectGetHeight(imageView.frame));
        }];
    }
}
That's working very well at the moment, but there's definitely a conflict with everything else.
I'd really appreciate any guidance in the right direction on this.
1.) To not allow the image to be zoomed out beyond its original scale, you first just need to check whether the scale you're about to apply is greater than 1 or not. If it's less than one, you don't want to rescale your image, as that would mean it gets smaller. So...
@IBAction func doPinch(sender: UIPinchGestureRecognizer) {
    if sender.scale > 1 {
        let transform = CGAffineTransformMakeScale(sender.scale, sender.scale)
        imageView.transform = transform
    }
}
2.) You seem to be changing the view's frame to try to change its scale, but you never adjusted the view's frame yourself in the first place. You're adjusting the view's transform. That means in order to return it to its original size you must remove whatever transform you put on it. To do this you set it back to the identity transform. So...
@IBAction func doDoubleTap(sender: UITapGestureRecognizer) {
    imageView.transform = CGAffineTransformIdentity
}
My code samples are in Swift, but you should be able to adjust them to Objective-C yourself. Also, you seem to be adjusting the transform/scale of the entire page view itself. I would suggest you change the scale of only the image view. That makes more sense, as that's actually what you're trying to zoom into; a rough Objective-C version is sketched below.
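For reference, a rough Objective-C equivalent of the two handlers above, applied to the image view rather than the page view controller's view (imageView here stands for whichever view actually displays the zoomed image, e.g. the image view owned by LeafletImageSizeViewController):

- (void)doPinch:(UIPinchGestureRecognizer *)sender {
    // Only apply the transform when it would enlarge the view,
    // so the image can never shrink below its original size.
    if (sender.scale > 1.0) {
        self.imageView.transform = CGAffineTransformMakeScale(sender.scale, sender.scale);
    }
}

- (void)doDoubleTap:(UITapGestureRecognizer *)sender {
    // Removing the accumulated transform restores the original size.
    self.imageView.transform = CGAffineTransformIdentity;
}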

@selector in UIPinchGesture is not called

I detect a two-finger touch on my image view, and the condition is that if two fingers are touching, the pinch gesture should call the selector for the image view.
if ([[event allTouches] count] == 2)
{
    imageView.multipleTouchEnabled = YES;
    imageView.userInteractionEnabled = YES;
    twoFingerPinch = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(twoFingerPinch:)];
}
- (void)twoFingerPinch:(UIPinchGestureRecognizer *)recognizer
{
    CGFloat scale = recognizer.scale;
    imageView.transform = CGAffineTransformScale(imageView.transform, scale, scale);
    recognizer.scale = 1.0;
}
But my twoFingerPinch: method is never called. Can anybody help me? Thanks in advance.
From your code it is hard to say which view you want to apply the pinch gesture to. Assuming you are applying it to imageView, here is how you should do it:
- (void)viewDidLoad
{
    UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(twoFingerPinch:)];
    [imageView addGestureRecognizer:pinch];
    [imageView setUserInteractionEnabled:YES];
    pinch.delegate = self;
}
- (void)twoFingerPinch:(UIPinchGestureRecognizer *)recognizer
{
    CGFloat scale = recognizer.scale;
    imageView.transform = CGAffineTransformScale(imageView.transform, scale, scale);
    recognizer.scale = 1.0;
}

- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    return NO;
}
Not sure what you are trying to do, but if you want to pinch, just add the recognizer directly to the view. The pinch recognizer's default behaviour is to use two fingers.

UIScrollView detect user taps

I'm using a UIScrollView with pagingEnabled; inside the UIScrollView I added three UIImageViews. It works fine.
I'm wondering how I can detect if the user taps between two squares in an image. For example, in the attached image, how can I detect whether the user taps between squares 1 and 2, or between squares 2 and 3?
Any ideas?
Thanks.
Add gestures to the image view:
imageView.userInteractionEnabled = YES;
UIPinchGestureRecognizer *pgr = [[UIPinchGestureRecognizer alloc]
                                 initWithTarget:self action:@selector(handlePinch:)];
pgr.delegate = self;
[imageView addGestureRecognizer:pgr];
[pgr release];
:
:
- (void)handlePinch:(UIPinchGestureRecognizer *)pinchGestureRecognizer
{
//handle pinch...
}
For detecting single or multiple taps use UITapGestureRecognizer; it's a subclass of UIGestureRecognizer. Don't forget to set the userInteractionEnabled property to YES, because the UIImageView class changes the default value to NO.
self.imageView.userInteractionEnabled = YES;
UITapGestureRecognizer *tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
// Set the number of touches required, if needed
[tapRecognizer setNumberOfTouchesRequired:1];
// and add the recognizer to our imageView
[imageView addGestureRecognizer:tapRecognizer];
- (void)handleTap:(UITapGestureRecognizer *)sender {
    if (sender.state == UIGestureRecognizerStateEnded) {
        // if you want to know whether the user tapped between two objects,
        // you need to get the coordinates of the tap
        CGPoint point = [sender locationInView:self.imageView];
        // use the point
        NSLog(@"Tap detected, point: x = %f y = %f", point.x, point.y);
        // then you can do something like
        // assuming first square's coordinates: x: 20.f y: 20.f width: 10.f height: 10.f
        // Construct the frames manually
        CGRect firstSquareRect = CGRectMake(20.f, 20.f, 10.f, 10.f);
        CGRect secondSquareRect = CGRectMake(60.f, 10.f, 10.f, 10.f);
        if (CGRectContainsPoint(firstSquareRect, point) == NO &&
            CGRectContainsPoint(secondSquareRect, point) == NO &&
            point.y < (firstSquareRect.origin.y + firstSquareRect.size.height) /* the tap position is above the second square */ ) {
            // User tapped between the two objects
        }
    }
}
