I have an image as a background and I want to make certain parts of this image clickable, with zooming in and out. Is there any way to do something like that?
Don't create a new view for your gesture recognizer. The recognizer implements a locationInView: method. Set it up on the view that contains the sensitive region. In handleGesture:, hit-test the region you care about like this:
0) Do all this on the view that contains the region you care about. Don't add a special view just for the gesture recognizer.
1) Set up mySensitiveRect:
@property (assign, nonatomic) CGRect mySensitiveRect;
@synthesize mySensitiveRect = _mySensitiveRect;
self.mySensitiveRect = CGRectMake(0.0, 240.0, 320.0, 240.0);
2) Create your gestureRecognizer:
UIPinchGestureRecognizer *gr = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handleGesture:)];
[self.view addGestureRecognizer:gr];
// if not using ARC, you should [gr release];
// mySensitiveRect coords are in the coordinate system of self.view
- (void)handleGesture:(UIGestureRecognizer *)gestureRecognizer {
    CGPoint p = [gestureRecognizer locationInView:self.view];
    if (CGRectContainsPoint(self.mySensitiveRect, p)) {
        // add your zooming code here
    } else {
        NSLog(@"got a tap, but not where I need it");
    }
}
The sensitive rect should be initialized in myView's coordinate system, the same view to which you attach the recognizer.
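If the recognizer is the pinch recognizer from step 2, the zoom placeholder can be filled in by tracking the pinch scale. A minimal sketch, assuming a self.imageView outlet for the background image (a name not in the original code); this goes in the CGRectContainsPoint branch:

UIPinchGestureRecognizer *pinch = (UIPinchGestureRecognizer *)gestureRecognizer;
// Compose the pinch delta onto the current transform, then reset the scale
// so the deltas don't compound across repeated callbacks.
self.imageView.transform = CGAffineTransformScale(self.imageView.transform,
                                                  pinch.scale, pinch.scale);
pinch.scale = 1.0;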
Apple has a demo app called PhotoScroller that implements a zoomable, scrollable set of images (in a page view controller, but you don't need that.) That would be a good starting point for what you need.
Their sample apps used to be built into the Xcode docs. Since Xcode 6 I haven't seen them linked in the docs any more.
You can download PhotoScroller from Apple's online iOS Developer Library. (link)
I have a UIImageView composed of two images with the transparency channel activated.
The view looks something like this:
[image: two concentric circles]
I would like to be able to detect precisely the touches within the center circle and distinguish them from the ones in the outer circle.
I am thinking of a collision-detection algorithm based on the difference between the two circles: first test the outer layer to see if there is a collision at all, then the inner layer. If the touch is in the inner layer, activate the inner button; otherwise activate the outer button.
Any help or suggestions on this?
Shall I create a GitHub repo so everyone can contribute to it?
Here is something that can help you:
UIImageView *myImageView;

// In viewDidLoad, or wherever you create your UIImageView, place this:
myImageView.userInteractionEnabled = YES;
UITapGestureRecognizer *tapInView = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapInImageView:)];
[myImageView addGestureRecognizer:tapInView];

- (void)tapInImageView:(UITapGestureRecognizer *)tap
{
    CGPoint tapPoint = [tap locationInView:tap.view];
    // The view's center in its own coordinate space (tap.view.center is in
    // the superview's space, which doesn't match tapPoint's space).
    CGPoint centerView = CGPointMake(CGRectGetMidX(tap.view.bounds), CGRectGetMidY(tap.view.bounds));
    double distanceToCenter = sqrt((tapPoint.x - centerView.x) * (tapPoint.x - centerView.x)
                                 + (tapPoint.y - centerView.y) * (tapPoint.y - centerView.y));
    if (distanceToCenter < RADIUS) { // RADIUS: a constant you define for the circle
        // It's in the center
    } else {
        // Touch outside
    }
}
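To distinguish the asker's two circles, the same distance test works with two radii. A sketch, assuming hypothetical INNER_RADIUS and OUTER_RADIUS constants that you measure from your artwork, in the image view's coordinate space:

if (distanceToCenter < INNER_RADIUS) {
    // inside the center circle: activate the inner button
} else if (distanceToCenter < OUTER_RADIUS) {
    // inside the outer ring: activate the outer button
} else {
    // outside both circles: ignore the tap
}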
I have a main view in my program with a draggable view in it. This view can be dragged around with pan gestures. Currently, though, it uses a lot of code, which I want to move into a subclass to reduce complexity. (I eventually want to extend the functionality by allowing the user to expand the view with further pan gestures, which means a lot more code clogging up my view controller if I can't sort this out first.)
Is it possible to have the code for a gesture recogniser in a subclass and still interact with views in the parent class?
This is the current code I am using to enable the pan gesture in the parent class:
- (void)viewDidLoad {
    ...
    UIView *draggableView = [[UIView alloc] initWithFrame:CGRectMake(highlightedSectionXCoordinateStart, highlightedSectionYCoordinateStart, highlightedSectionWidth, highlightedSectionHeight)];
    draggableView.backgroundColor = [UIColor colorWithRed:121.0/255.0 green:227.0/255.0 blue:16.0/255.0 alpha:0.5];
    draggableView.userInteractionEnabled = YES;
    [graphView addSubview:draggableView];

    UIPanGestureRecognizer *panner = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panWasRecognized:)];
    [draggableView addGestureRecognizer:panner];
}
- (void)panWasRecognized:(UIPanGestureRecognizer *)panner {
    UIView *draggedView = panner.view;
    CGPoint offset = [panner translationInView:draggedView.superview];
    CGPoint center = draggedView.center;
    // We want to make it so the square won't go past the axis on the left;
    // if the center plus the offset crosses it, the x value could be clamped here.
    CGFloat xValue = center.x + offset.x;
    draggedView.center = CGPointMake(xValue, center.y);
    // Reset translation to zero so on the next `panWasRecognized:` message, the
    // translation will just be the additional movement of the touch since now.
    [panner setTranslation:CGPointZero inView:draggedView.superview];
}
(Thanks to Rob Mayoff for getting me this far)
I have now added a subclass of the view but can't figure out how or where I need to create the gesture recogniser, as the view is now being created in the subclass and added to the parent class.
I really want the target for the gesture recogniser to be in this subclass, but when I try to code it, nothing happens.
I have tried putting all the code in the subclass and adding the pan gesture to the view, but then I get a bad-access crash when I try to drag it.
I am currently trying to use
[graphView addSubview:[[BDraggableView alloc] getDraggableView]];
to add it as a subview, and then setting up the view (adding the pan gesture, etc.) in the method getDraggableView in the subclass.
There must be a more straightforward way of doing this that I haven't conceptualised yet - I am still pretty new to dealing with subclasses and so am still learning how they all fit together.
Thanks for any help you can give.
I think I might have figured this one out.
In the parent class I created the child class variable:
BDraggableView * draggableViewSubClass;
draggableViewSubClass = [[BDraggableView alloc] initWithView:graphView andRangeChart: [rangeSelector getRangeChart]];
This allowed me to initialise the child class with the view I wanted to have the draggable view on: graphView
Then in the child view I set up the pan gesture as I normally would, but added it to the view passed through:
- (id)initWithView:(UIView *)view andRangeChart:(ShinobiChart *)chart {
    self = [super initWithNibName:nil bundle:nil];
    if (self) {
        // Custom initialization
        parentView = view;
        [self setUpViewsAndPans];
    }
    return self;
}
- (void)setUpViewsAndPans {
    draggableView = [[UIView alloc] initWithFrame:CGRectMake(highlightedSectionXCoordinateStart, highlightedSectionYCoordinateStart, highlightedSectionWidth, highlightedSectionHeight)];
    draggableView.backgroundColor = [UIColor colorWithRed:121.0/255.0 green:227.0/255.0 blue:16.0/255.0 alpha:0.5];
    draggableView.userInteractionEnabled = YES;

    // Add the newly made draggable view to our parent view
    [parentView addSubview:draggableView];
    [parentView bringSubviewToFront:draggableView];

    // Add the pan gesture
    UIPanGestureRecognizer *panner = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panWasRecognized:)];
    [draggableView addGestureRecognizer:panner];
}
- (void)panWasRecognized:(UIPanGestureRecognizer *)panner {
    UIView *draggedView = panner.view;
    CGPoint offset = [panner translationInView:draggedView.superview];
    CGPoint center = draggedView.center;
    CGFloat xValue = center.x + offset.x;
    draggedView.center = CGPointMake(xValue, center.y);
    // Reset translation to zero so on the next `panWasRecognized:` message, the
    // translation will just be the additional movement of the touch since now.
    [panner setTranslation:CGPointZero inView:draggedView.superview];
}
It took me a while to get it straight in my head that we want to do all the setting up in the subclass and then add that view, with its characteristics, to the parent view.
Thanks for all the answers provided; they got me thinking along the right lines to solve it.
I think you want to subclass UIView and make your own DraggableView class. Here you can add swipe and pan gesture recognizers. This would go in the implementation of your UIView subclass:
- (id)initWithFrame:(CGRect)frame
{
    if (self = [super initWithFrame:frame]) {
        // Use a concrete subclass (e.g. UIPanGestureRecognizer); a plain
        // UIGestureRecognizer never recognizes anything on its own.
        UIPanGestureRecognizer *gestRec = [[UIPanGestureRecognizer alloc] initWithTarget:self
                                                                                  action:@selector(detectMyMotion:)];
        [self addGestureRecognizer:gestRec];
    }
    return self;
}
- (void)detectMyMotion:(UIGestureRecognizer *)gestRec
{
    NSLog(@"Gesture Recognized");
    // maybe even, if you wanted to alert your VC of a gesture...
    // (self.delegate here is a custom property you declare yourself)
    [self.delegate alertOfGesture:gestRec];
    // your VC would be alerted by delegation of this action.
}
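The self.delegate call above assumes a delegate property you declare yourself; a possible wiring for it (all names here are illustrative, not from UIKit):

@class DraggableView;

@protocol DraggableViewDelegate <NSObject>
- (void)alertOfGesture:(UIGestureRecognizer *)gestureRecognizer;
@end

@interface DraggableView : UIView
@property (nonatomic, weak) id<DraggableViewDelegate> delegate;
@end

Your view controller would then adopt DraggableViewDelegate and set itself as the view's delegate after creating it.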
While I was playing on my phone, I noticed that my UISegmentedControl was not very responsive. It would take 2 or more tries to make my taps register. So I decided to run my app in Simulator to more precisely probe what was wrong. By clicking dozens of times with my mouse, I determined that the top 25% of the UISegmentedControl does not respond (the portion is highlighted in red with Photoshop in the screenshot below). I am not aware of any invisible UIView that could be blocking it. Do you know how to make the entire control tappable?
self.segmentedControl = [[UISegmentedControl alloc] initWithItems:[NSArray arrayWithObjects:@"Uno", @"Dos", nil]];
self.segmentedControl.selectedSegmentIndex = 0;
[self.segmentedControl addTarget:self action:@selector(segmentedControlChanged:) forControlEvents:UIControlEventValueChanged];
self.segmentedControl.height = 32.0;  // height/width/centerInParent come from a category extension
self.segmentedControl.width = 310.0;
self.segmentedControl.segmentedControlStyle = UISegmentedControlStyleBar;
self.segmentedControl.tintColor = [UIColor colorWithWhite:0.9 alpha:1.0];
self.segmentedControl.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin | UIViewAutoresizingFlexibleRightMargin;
UIView* toolbar = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.view.width, HEADER_HEIGHT)];
toolbar.autoresizingMask = UIViewAutoresizingFlexibleWidth;
CAGradientLayer *gradient = [CAGradientLayer layer];
gradient.frame = CGRectMake(toolbar.bounds.origin.x,
                            toolbar.bounds.origin.y,
                            // * 2 for enough slack when iPad rotates
                            toolbar.bounds.size.width * 2,
                            toolbar.bounds.size.height);
gradient.colors = [NSArray arrayWithObjects:
                   (id)[[UIColor whiteColor] CGColor],
                   (id)[[UIColor colorWithWhite:0.8 alpha:1.0] CGColor],
                   nil];
[toolbar.layer insertSublayer:gradient atIndex:0];
toolbar.backgroundColor = [UIColor navigationBarShadowColor];
[toolbar addSubview:self.segmentedControl];
UIView* border = [[UIView alloc] initWithFrame:CGRectMake(0, HEADER_HEIGHT - 1, toolbar.width, 1)];
border.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleTopMargin;
border.backgroundColor = [UIColor colorWithWhite:0.7 alpha:1.0];
border.autoresizingMask = UIViewAutoresizingFlexibleWidth;
[toolbar addSubview:border];
[self.segmentedControl centerInParent];
self.tableView.tableHeaderView = toolbar;
http://scs.veetle.com/soget/session-thumbnails/5363e222d2e10/86a8dd984fcaddee339dd881544ecac7/5363e222d2e10_86a8dd984fcaddee339dd881544ecac7_20140509171623_536d6fd78f503_68_896x672.jpg
As already written in other answers, UINavigationBar grabs touches made near the nav bar itself, but not because it has subviews extended over its edges: that is not the reason.
If you log the whole view hierarchy, you will see that the UINavigationBar doesn't extend beyond its defined edges.
The reason it receives the touches is something else:
UIKit has many "special cases", and this is one of them.
When you tap the screen, a process called "hit testing" starts. Starting from the first UIWindow, all views are asked to answer two "questions": is the tapped point inside your bounds? And which subview should receive the touch event?
These questions are answered by these two methods:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event;
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event;
Ok, now we can continue.
After the tap, UIApplicationMain starts the hit-testing process. The hit test starts from the main UIWindow (and is executed even on the status bar window and the alert view window, for example) and goes through all the subviews.
This process is executed 3 times:
twice starting from UIWindow
once starting from _UIApplicationHandleEvent
If you tap on the Navigation Bar, you will see that hitTest on UIWindow returns the UINavigationBar (all three times).
If you tap on the area below the Navigation Bar, however, you will see something strange:
the first two hitTest calls return your UISegmentedControl
the last hitTest returns UINavigationBar
Why is this?
If you swizzle or subclass UIView, overriding hitTest, you will see that the first two times the tapped point is correct. The third time, something changes the point, doing something like point - 15 (or a similar number).
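That observation can be reproduced with a tiny override (a sketch; put it in a UIView or UIWindow subclass purely for debugging):

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    // Log each hit-test pass so you can watch the point shift on the third call.
    NSLog(@"hitTest point: %@", NSStringFromCGPoint(point));
    return [super hitTest:point withEvent:event];
}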
After a lot of searching, I found where this happens:
UIWindow has a (private) method called
-(CGPoint)warpPoint:(CGPoint)point;
Debugging it, I saw that this method changes the tapped point if it is immediately below the status bar.
Debugging further, I saw that the stack of calls that makes this possible involves only 3 methods:
[UINavigationBar _isChargeEnabled]
[UINavigationBar isEnabled]
[UINavigationBar _isAlphaHittableAndHasAlphaHittableAncestors]
So, in the end, this warpPoint method checks whether the UINavigationBar is enabled and hittable; if yes, it "warps" the point. The point is warped by a number of pixels between 0 and 15, and the "warp" increases as you get closer to the Navigation Bar.
Now that you know what happens behind the scenes, you need to know how to avoid it (if you want to).
You can't simply override warpPoint: if the application must go on the App Store: it's a private method and your app will be rejected.
You have to find another approach (for example, as suggested, overriding sendEvent, but I'm not sure whether it will work).
Because this question is interesting, I will think about a legal solution tomorrow and update this answer. (One good starting point could be subclassing UINavigationBar, overriding hitTest and pointInside, and returning nil/NO if, given the same event over multiple calls, the point changes. But I must test whether that works tomorrow.)
EDIT
Ok, I've tried many solutions, and it's not simple to find one that is legal and stable.
I've described the actual behavior of the system, which could vary across versions (hitTest called more or fewer than 3 times, warpPoint warping the point by about 15px that could change, etc.).
The most stable is obviously the illegal override of warpPoint: in a UIWindow subclass:

-(CGPoint)warpPoint:(CGPoint)point
{
    return point;
}
However, I've found that a method like this (in a UIWindow subclass) is stable enough and does the trick:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    // Note: this is not safe if you tap the screen twice at the same x position
    // with y positions 16px apart, because it moves the point.
    // (self.lastPoint is a CGPoint property you add to the subclass.)
    if (self.lastPoint.x == point.x)
    {
        // the points are on the same vertical line
        if ((0 < (self.lastPoint.y - point.y)) && ((self.lastPoint.y - point.y) < 16))
        {
            // there is a difference of ~15px in the y position,
            // so the point has been changed
            point.y = self.lastPoint.y;
        }
    }
    self.lastPoint = point;
    return [super hitTest:point withEvent:event];
}
This method records the last point tapped, and if a subsequent tap is at the same x with a y that differs by at most 16px, it uses the previous point.
I've tested it a lot and it seems stable.
If you want, you can add more checks to enable this behavior only in particular controllers, or only on a defined portion of the window, etc.
If I find another solution, I'll update the post.
I believe the problem is that the buttons in the UINavigationBar have a larger-than-normal touch area. See this SO post. You can also find plenty of discussion on this with a 'UINavigationBar touch area' Google search.
As a possible solution, you could put the segmented control IN the navigation bar, but you would know better than I if that fits your use cases or not.
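If that fits your design, the standard way is the navigation item's titleView; a one-line sketch using the segmented control from the question:

self.navigationItem.titleView = self.segmentedControl;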
I've come up with an alternate solution that seems safer to me than LombaX's. It uses the fact that both events come in with the same timestamp to reject the subsequent event.
@interface RFNavigationBar ()
@property (nonatomic, assign) NSTimeInterval lastOutOfBoundsEventTimestamp;
@end

@implementation RFNavigationBar
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    // [rfillion 2014-03-28]
    // UIApplication/UIWindow/UINavigationBar conspire against us. There's a band under the UINavigationBar for which the bar will return
    // subviews instead of nil (to make those tap targets larger, one would assume). We don't want that. To do this, it seems to end up
    // calling -hitTest twice. Once with a value out of bounds which is easy to check for. But then it calls it again with an altered point
    // value that is actually within bounds. The UIEvent it passes to both seem to be the same. However, we can't just compare UIEvent pointers
    // because it looks like these get reused and you end up rejecting valid touches if you just keep around the last bad touch UIEvent. So
    // instead we keep around the timestamp of the last bad event, and try to avoid processing any events whose timestamp isn't larger.
    if (point.y > self.bounds.size.height)
    {
        self.lastOutOfBoundsEventTimestamp = event.timestamp;
        return nil;
    }
    if (event.timestamp <= self.lastOutOfBoundsEventTimestamp + 0.001)
    {
        return nil;
    }
    return [super hitTest:point withEvent:event];
}

@end
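To get UIKit to use a bar subclass like this, you can pass the class when you create the navigation controller; initWithNavigationBarClass:toolbarClass: is a standard UINavigationController initializer (rootViewController below stands in for whatever controller you start with):

UINavigationController *nav =
    [[UINavigationController alloc] initWithNavigationBarClass:[RFNavigationBar class]
                                                  toolbarClass:nil];
[nav setViewControllers:@[rootViewController] animated:NO];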
You might want to check which view is receiving the touches. Try this method:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:self.view]; // tap location, if you need it
    if ([touch.view isKindOfClass:[UISegmentedControl class]])
    {
        NSLog(@"This is UISegment");
    }
    else if ([touch.view isKindOfClass:[UITabBar class]])
    {
        NSLog(@"This is UITabBar");
    } else if (...other views...) {
        ...
    }
}
Once you figure that out, you may be able to narrow down your problem.
It looks as if you're using a category extension to set width/height on views, as well as to center them in their parent. Perhaps there is a hidden issue here - can you refactor to do your layout without this category?
I copied your code into a clean project and ran it in a UITableViewController's viewDidLoad method - it works fine and I have no dead spots like you report. I had to change your code slightly since I don't have the same category extension that you're using.
Also, if you're running this code in viewDidLoad, you should verify that your view has a defined size (you access your view.width). If you're creating your UITableViewController programmatically (vs from a nib/storyboard) then the frame may be CGRectZero. Mine was loaded from a nib so the frame was preset.
I'd also try temporarily removing your border view to see if it's the culprit.
I recommend that you avoid having touch-sensitive UI in such close proximity to the nav bar or toolbar. These areas are typically known as "slop factors", making it easier for users to hit buttons without having to perform precision touches. This is also the case for UIButtons, for example.
But if you want to capture the touch event before the navigation bar or toolbar receives it, you can subclass UIWindow and override -(void)sendEvent:(UIEvent *)event;
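A minimal sketch of that sendEvent approach, untested for this particular problem (the subclass name is made up; you'd install it as your app's window, e.g. by returning it from the app delegate's window property):

@interface InterceptingWindow : UIWindow
@end

@implementation InterceptingWindow

- (void)sendEvent:(UIEvent *)event
{
    // Every touch passes through here before the navigation bar sees it.
    // Inspect event.allTouches and decide whether to swallow or forward
    // touches that land in the band below the nav bar.
    [super sendEvent:event];
}

@end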
An easy way to debug this is to try using DCIntrospect in your project. It's a very easy to use/implement library that makes finding out what views are where when in the simulator a breeze.
Install the library and configure it
Run the application in the simulator and navigate to the screen with the issue
Press spacebar on the keyboard (the computer keyboard, not the simulator's keyboard)
Click on the 25% area and see what gets highlighted.
If what's highlighted isn't the segmented control, that view could be what's swallowing the touch event.
Create a category on UINavigationBar (add a new file and paste in the code below):

/******** file: UINavigationBar+BelowSpace.h *******/

#import <Foundation/Foundation.h>

@interface UINavigationBar (BelowSpace)
@end

/******** file: UINavigationBar+BelowSpace.m *******/

#import "UINavigationBar+BelowSpace.h"

@implementation UINavigationBar (BelowSpace)

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    int errorMargin = 5; // space left to decrease the click event area
    CGRect smallerFrame = CGRectMake(0, 0 - errorMargin, self.frame.size.width, self.frame.size.height);
    BOOL isTouchAllowed = CGRectContainsPoint(smallerFrame, point);
    if (isTouchAllowed) {
        self.userInteractionEnabled = YES;
    } else {
        self.userInteractionEnabled = NO;
    }
    return [super hitTest:point withEvent:event];
}

@end

Hope this helps ^ ^
Try this
self.navigationController!.navigationBar.userInteractionEnabled = false;
I've been trying to figure this out for hours, completely at a loss here. I'm trying to implement a UIPinchGestureRecognizer for some of the custom UIImageViews in my game, but it doesn't work. Everything I've researched says it should work, yet it doesn't. Pinch works fine if I add it to my view controller, or to a custom UIView, but not to the UIImageViews. I've tried all the common fixes and tweaks, with no success. I have userInteractionEnabled and multipleTouchEnabled set to YES. I have the delegate and selectors set up properly. I have shouldRecognizeSimultaneouslyWithGestureRecognizer set to return YES.
The gesture recognizer is getting added to the UIImageView; I've been able to access its properties later in my update loop, but the NSLog in the selector never gets called for the UIImageView when I try to pinch. I've adjusted the z-position of the views to ensure they are on top, but no dice.
My UIImageViews are stored in an NSMutableDictionary and are updated by looping through it during each update loop of the game. Could this have an effect on the UIPinchGestureRecognizer not getting called?... I can't think of anything else, and posting the code probably won't help, because the exact same code works when it's used for the UIView or the view controller.
I do have touch-handling code in the view controller's touchesBegan and touchesMoved events... but I've turned that off and the problem still persists, and the pinch worked for other elements with it on anyway.
Any ideas what could prevent a gesture selector from firing on a UIImageView? The dictionary? Something to do with being constantly updated in the game loop? Any ideas would be welcome; this seems like it should be so simple to implement...
Edit: Here's the code for the UIImageView and what I'm doing with it... not sure if this helps.
Extended UIImageView class Paper.m (prp is a struct of properties used to initialize my custom variables):
NSString *tName = [NSString stringWithUTF8String:prp.imagePath];
UIImage *tImage = [UIImage imageNamed:[NSString stringWithFormat:@"%@.png", tName]];
self = [self initWithImage:tImage];
self.userInteractionEnabled = YES;
self.multipleTouchEnabled = YES;
self.center = CGPointMake(prp.spawnX, prp.spawnY);
if (prp.zPos != 0) { self.layer.zPosition = prp.zPos; }
// other initialization excised
Then I have a custom class called ObjManager that holds the NSMutableDictionary and initializes all UIImageView objects like so, where addObj is called in a loop to add each object:
- (ObjManager *)initWithBlank {
    self = [super init];
    if (self) {
        // create a dictionary for our objects
        objects = [[NSMutableDictionary alloc] init];
        spawnID = 100; // start of counter for dynamically spawned object IDs
    }
    return self;
}

- (void)addObj:(Paper *)paperPiece wasSpawned:(BOOL)spawned {
    // add each paper piece, assigning a spawnID if dynamically spawned
    NSNumber *newID;
    if (spawned) { newID = [NSNumber numberWithInt:spawnID]; spawnID++; }
    else { newID = [NSNumber numberWithInt:paperPiece.objID]; }
    [objects setObject:paperPiece forKey:newID];
}
My view controller calls the initialization of the ObjManager (called _world in my VC). Then it loops through _world like so:
// Populate additional object managers and add all subviews
for (NSNumber *key in _world.objects) {
    _eachPiece = [_world.objects objectForKey:key];
    // Populate collision object manager
    if (_eachPiece.collision) {
        [_world_collisions addObj:_eachPiece wasSpawned:NO];
    }
    // only add pinch gesture if the object flag is set
    if (_eachPiece.pinch) {
        UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinchPaper:)];
        pinchGesture.delegate = self;
        [_eachPiece addGestureRecognizer:pinchGesture];
        NSLog(@"Added pinch recognizer to: %@", pinchGesture.view.description);
    }
    // Add each object as a subview
    [self.view addSubview:_eachPiece];
}
_eachPiece is an object in my view controller, declared in the .h file (as is _world):
@property (nonatomic, strong) ObjManager *world;
@property (nonatomic, strong) Paper *eachPiece;
Then I have an NSTimer object that updates all moveable Paper objects (the UIImageViews) in _world (ObjManager) every frame like so:
// loop through each piece and update
for (NSNumber *key in _world.objects) {
    eachPiece = [_world.objects objectForKey:key];
    // only update moveable pieces
    if ((eachPiece.moveType == Move_Touch) || (eachPiece.moveType == Move_Auto)) {
        CGPoint paperCenter;
        paperCenter = eachPiece.center;
        // a bunch of code to update paperCenter x & y for the object's new position based on velocity and user input
        // determine image direction and transformation matrix
        [_world updateDirection:eachPiece];
        CGAffineTransform transformPiece = [_world imageTransform:eachPiece];
        if (transformEnabled) {
            eachPiece.transform = transformPiece;
        }
        // finally move it
        [eachPiece setCenter:paperCenter];
    }
}
And the pinch selector:
- (void)pinchPaper:(UIPinchGestureRecognizer *)recognizer {
    NSLog(@"Pinch scale: %f", recognizer.scale);
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    recognizer.scale = 1;
}
As far as I can tell, the pinch should work. If I take the same pinch gesture code and set it to add to the view controller, it works for the entire view. I also have a custom UIView class that acts as a border (simply a rectangle drawn around the view), and moving the pinch gesture code to that allows me to pinch the border only.
Alright, so apparently gesture recognizers don't fire on views whose position is being animated. So to make it work, I had to put the recognizer on the view controller, then perform a hit test and apply the pinch/zoom to the touched view if it's one I want to pinch/zoom. Info on that here:
http://iphonedevsdk.com/forum/iphone-sdk-tutorials/100982-caanimation-tutorial.html
For my particular case, I kept track of which animated views I wanted to pinch in a variable/array at the view-controller level. Then I used this code in the selector (essentially from the link above; all credit to them):
- (void)pinchPaper:(UIPinchGestureRecognizer *)recognizer {
    CALayer *pinchLayer;
    id layerDelegate;
    CGPoint touchPoint = [recognizer locationInView:self.view];
    pinchLayer = [self.view.layer.presentationLayer hitTest:touchPoint];
    layerDelegate = [pinchLayer delegate];
    // _pinchView is the UIView I want to pinch
    if (layerDelegate == _pinchView) {
        _pinchView.transform = CGAffineTransformScale(_pinchView.transform, recognizer.scale, recognizer.scale);
        recognizer.scale = 1;
    }
}
The only tricky part: if you have other scale transforms going on as part of the existing UIView animation (like the direction changes in mine), you have to account for them by using the current transform during each update loop.
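For that last point, one possible approach (a sketch, untested) is to read the presentation layer's in-flight transform instead of the view's model transform before applying the pinch, so the scale composes with whatever the animation is currently showing; this assumes the animated transform stays 2D-affine:

CALayer *presentation = _pinchView.layer.presentationLayer;
// Fall back to the model transform if no animation is in flight.
CGAffineTransform onScreen = presentation
    ? CATransform3DGetAffineTransform(presentation.transform)
    : _pinchView.transform;
_pinchView.transform = CGAffineTransformScale(onScreen, recognizer.scale, recognizer.scale);
recognizer.scale = 1;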
For any gesture recognizer to work on image views, user interaction must be enabled on them.
So it should be:
yourImageView.userInteractionEnabled = YES;
Or, if you are using storyboards, you can check that option in storyboard's inspector window too.
Hope it helps. :)