I have an image as a background, and I want to make certain parts of this image clickable, with zooming in and out. Is there any way to do something like that?
Don't create a new view just for your gesture recognizer. The recognizer implements a locationInView: method, so set it up on the view that contains the sensitive region. In your handleGesture: method, hit-test the region you care about like this:
0) Do all this on the view that contains the region you care about. Don't add a special view just for the gesture recognizer.
1) Setup mySensitiveRect
@property (assign, nonatomic) CGRect mySensitiveRect;
@synthesize mySensitiveRect = _mySensitiveRect;
self.mySensitiveRect = CGRectMake(0.0, 240.0, 320.0, 240.0);
2) Create your gestureRecognizer:
gr = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handleGesture:)];
[self.view addGestureRecognizer:gr];
// if not using ARC, you should [gr release];
// mySensitiveRect coords are in the coordinate system of self.view
- (void)handleGesture:(UIGestureRecognizer *)gestureRecognizer {
    CGPoint p = [gestureRecognizer locationInView:self.view];
    if (CGRectContainsPoint(self.mySensitiveRect, p)) {
        // Add your zooming code here
    } else {
        NSLog(@"got a tap, but not where i need it");
    }
}
The sensitive rect should be initialized in the coordinate system of the view to which you attach the recognizer (self.view here).
Apple has a demo app called PhotoScroller that implements a zoomable, scrollable set of images (in a page view controller, but you don't need that.) That would be a good starting point for what you need.
Their sample apps used to be built into the Xcode docs. Since Xcode 6 I haven't seen them linked in the docs any more.
You can download PhotoScroller from Apple's online iOS Developer Library. (link)
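If you end up hosting the image in a UIScrollView (as PhotoScroller does), the zooming placeholder in the handler above could be filled in roughly like this. This is only a sketch: imageScrollView and its zoom configuration are assumptions, not part of the original code.
// Inside handleGesture:, in place of "// Add your zooming code here".
// imageScrollView is an assumed UIScrollView that hosts the image and has
// min/max zoom scales plus viewForZoomingInScrollView: configured.
if (self.imageScrollView.zoomScale > self.imageScrollView.minimumZoomScale) {
    // already zoomed in: zoom back out
    [self.imageScrollView setZoomScale:self.imageScrollView.minimumZoomScale animated:YES];
} else {
    // zoom in so the tapped region fills the scroll view
    // (convert the rect to the zooming view's coordinates if they differ)
    [self.imageScrollView zoomToRect:self.mySensitiveRect animated:YES];
}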
Related
I have multiple controllers in my PageViewController, and in one controller I have a few sliders. The problem is that the user must touch the slider's thumb (the moving part) exactly, and I would like to increase the area the slider reacts to without affecting the whole PageViewController. I tried these solutions, but they don't help:
thumbRectForBounds:
- (CGRect)thumbRectForBounds:(CGRect)bounds trackRect:(CGRect)rect value:(float)value
{
    return CGRectInset([super thumbRectForBounds:bounds trackRect:rect value:value], 15, 15);
}
Increase hitTest area:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    if (CGRectContainsPoint(CGRectInset(self.frame, 200, 200), point) || CGRectContainsPoint(CGRectInset(self.frame, 200, 200), point)) {
        return self;
    }
    return [super hitTest:point withEvent:event];
}
I have these methods in my custom slider class because I would like to reuse this. The last thing I found, but have not tried yet, is to create some object layer over the slider which "takes" the gesture and disables the PageViewController, but I am not sure how to do it, or whether it's the best solution.
I am not a big fan of the UISlider component because, as you noticed, it is not trivial to increase the hit area of the actual slider. I would urge you to replicate the UISlider instead, using a pan gesture, for a much better user experience:
i. Create a slider background with a separate UIImageView holding a slider image.
ii. Create the pan gesture:
imageView.userInteractionEnabled = YES; // UIImageView ignores touches by default
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
[imageView addGestureRecognizer:pan];
iii. Implement the handlePan: method:
- (IBAction)handlePan:(UIPanGestureRecognizer *)recognizer {
    // pan (slide) begins
    CGPoint translation = [recognizer locationInView:self.view];
    translation.y = self.slideImage.center.y;
    self.slideImage.center = translation;

    if (recognizer.state == UIGestureRecognizerStateEnded) {
        LTDebugLog(@"\n\n PAN, with spot: %f\n\n", self.slideImage.center.x);
        // do something after user is done sliding
    }
}
The big benefit of this method is that you will have a much better user experience as you can make the responsive UIImageView as big as you want.
Alternatively, you could subclass a UISlider and increase the hit space there, although in my experience this gives mixed results.
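If you do go the subclassing route, one way to sketch it is to expand the slider's tappable area by overriding pointInside:withEvent:. The class name and the 30-point padding below are arbitrary example values, not something from the original answer.
@interface BigThumbSlider : UISlider
@end

@implementation BigThumbSlider
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    // Negative insets grow the hit area beyond the slider's bounds.
    CGRect expandedBounds = CGRectInset(self.bounds, -30.0, -30.0);
    return CGRectContainsPoint(expandedBounds, point);
}
@end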
Hope this helps
In your CustomSlider class, override the thumbRectForBounds: method and simply return the rect size you require:
- (CGRect)thumbRectForBounds:(CGRect)bounds trackRect:(CGRect)rect value:(float)value
{
    return CGRectMake(bounds.origin.x, bounds.origin.y, yourWidthValue, yourHeightValue);
}
Change yourWidthValue and yourHeightValue as per your requirements. Then, where you use the slider, create the object like below:
CustomSlider *slider = [[CustomSlider alloc] initWithFrame:CGRectMake(0, 0, 300, 20)];
[slider thumbRectForBounds: slider.bounds trackRect:slider.frame value:15.f]; // change values as per your requirements.
Hope this helps.
Create a custom thumb image which has a large empty margin and set that on your slider, like this:
[theSlider setThumbImage:[UIImage imageNamed:@"slider_thumb_with_margins"] forState:UIControlStateNormal];
To make the image, get a copy of the system thumb image using any one of a number of UIKit artwork extractors (just search the web for one). Open the thumb image in Photoshop and increase the canvas size by twice the margin you want to add. Make sure you change the canvas size and not the image size, as the image size will stretch the image to fill the new size. This will put empty space around the thumb which will be part of the hit-test area but since it is all transparent it won't change the look of the slider.
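If you would rather not round-trip through Photoshop, roughly the same padded thumb can be built in code. This is a sketch; the image name "slider_thumb" and the 20-point margin are placeholders.
UIImage *thumb = [UIImage imageNamed:@"slider_thumb"];
CGFloat margin = 20.0;
CGSize paddedSize = CGSizeMake(thumb.size.width + 2.0 * margin, thumb.size.height + 2.0 * margin);
UIGraphicsBeginImageContextWithOptions(paddedSize, NO, 0.0);
[thumb drawAtPoint:CGPointMake(margin, margin)]; // original thumb centered, surrounded by clear pixels
UIImage *paddedThumb = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[theSlider setThumbImage:paddedThumb forState:UIControlStateNormal];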
On the iPad, when you put your finger outside the top or bottom edge of the screen and then drag it onto the screen, a menu is revealed. How can I implement that?
There is specifically a Gesture Recogniser class for this, introduced in iOS 7. It's the UIScreenEdgePanGestureRecognizer. The documentation for it is here. Check it out.
To test this in the simulator, just start the drag from near the edge (~15 points).
Also, you will have to create a gestureRecognizer for each edge. You can't OR edges together, so UIRectEdgeAll won't work.
There is a simple example here. Hope this helps!
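A minimal sketch of the setup follows; the handleEdgePan: method name and the chosen edge are placeholders, and remember you need one recognizer per edge.
UIScreenEdgePanGestureRecognizer *edgePan =
    [[UIScreenEdgePanGestureRecognizer alloc] initWithTarget:self
                                                      action:@selector(handleEdgePan:)];
edgePan.edges = UIRectEdgeLeft; // pick the single edge you need; UIRectEdgeAll does not work
[self.view addGestureRecognizer:edgePan];

- (void)handleEdgePan:(UIScreenEdgePanGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        // start revealing your menu here
    }
}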
You can do something like this. This example covers the case where you want your pan gesture to work only when the user starts the swipe within 20 points of the right-hand side of the screen.
First of all, add the gesture to your view:
- (void)addGestures {
    if (!_panGesture) {
        _panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanGesture:)];
        [_panGesture setDelegate:self];
        [self.view addGestureRecognizer:_panGesture];
    }
}
After adding it, check whether the touch you received belongs to the pan gesture, and then perform your action accordingly:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldReceiveTouch:(UITouch *)touch {
    CGPoint point = [touch locationInView:self.view];
    if (gestureRecognizer == _panGesture) {
        return [self slideMenuForGestureRecognizer:gestureRecognizer withTouchPoint:point];
    }
    return YES;
}
Here is how you can check whether your touch is contained in the region where you want it to be
- (BOOL)isPointContainedWithinBezelRect:(CGPoint)point {
    CGRect leftBezelRect;
    CGRect tempRect;
    // this will be the width between CGRectMaxXEdge and the screen offset, thus identifying the region
    CGFloat bezelWidth = 20.0;
    CGRectDivide(self.view.bounds, &leftBezelRect, &tempRect, bezelWidth, CGRectMaxXEdge);
    return CGRectContainsPoint(leftBezelRect, point);
}
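The slideMenuForGestureRecognizer:withTouchPoint: helper called from the delegate method above isn't shown in the original; a minimal version that simply ties it to the bezel check might look like this (a sketch, not the original implementation):
- (BOOL)slideMenuForGestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
                       withTouchPoint:(CGPoint)point {
    // Only let the pan begin when the touch starts inside the bezel region.
    return [self isPointContainedWithinBezelRect:point];
}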
It seems you cannot rotate the map view using the user's two-finger gesture anymore. It has been a while since I did any iOS development, but pre-iOS 6 it was automatically enabled.
Is this the case, or is it me being ridiculous? It seems to me that it's a very basic requirement for developers to be able to allow their users to rotate the map.
Any links to documentation that specifically says we can't rotate, or some clarification, would be much appreciated.
Try a UIRotationGestureRecognizer to rotate the map. The following code will help you:
UIRotationGestureRecognizer *rgrr = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotateMap:)];
[mapView addGestureRecognizer:rgrr]; // mapView --> your map view
rgrr.delegate = self;

- (void)rotateMap:(UIRotationGestureRecognizer *)gestureRecognizer {
    gestureRecognizer.view.transform = CGAffineTransformRotate(gestureRecognizer.view.transform, gestureRecognizer.rotation);
    gestureRecognizer.rotation = 0;
}
I have an iPad app (Xcode 4.6, Storyboards, iOS 6.2). I have a scene made up like this:
1) UIView (subViewData - covers the right quadrant of #2, below and to the right, not covering the times and names of #2; contains appointment info (customer name), and the duration is shaded)
2) UIView (subViewGrid - covers the bottom half; in the image, it contains times on the left margin and names across the top margin)
3) UIScrollView (covers the bottom-half view)
4) UIView ------- UIView (one on the top half of the window, the other on the bottom half)
5) UIViewController (named CalendarViewController)
This is the code to initialize the UITapGestureRecognizer found in -viewDidLoad of the CalendarViewController:
// setup for tap recognizer
UITapGestureRecognizer *fingerTap = [[UITapGestureRecognizer alloc] initWithTarget:self
                                                                            action:@selector(singleFingerTap:)];
fingerTap.numberOfTapsRequired = 1;
fingerTap.numberOfTouchesRequired = 1;
[subViewData addGestureRecognizer:fingerTap];
This is the code in subViewData (#1) that is NOT getting executed:
- (void)singleFingerTap:(UITapGestureRecognizer*)gesture {
    CGPoint pt = [gesture locationInView:self];
    UIView *v = [self hitTest:pt withEvent:nil];
    if (v.tag == 100)  // if this is for calendar, return
        return;

    CGRect dataRect = CGRectMake(110.0, 48.0, 670.0, 1450.0);
    CGPoint dataPoint = CGPointMake(pt.x, pt.y);
    // check to see if point is within the rectangle
    if (!CGRectContainsPoint(dataRect, dataPoint)) {
        NSLog(@"\n\nNOT within subViewData");
        return;
    }
    else {
        NSLog(@"\n\nIS within subViewData");
    }
}
The question is: why is it not capturing the taps? Is the recognizer code supposed to be in the controller, or in the view that is supposedly getting the taps? I have read almost everything I can find on the subject, but can't find anything covering this specific scenario. Help is greatly appreciated.
You say:
This is the code to initialize the UITapGestureRecognizer found in -viewDidLoad of the CalendarViewController
And that code says:
[[UITapGestureRecognizer alloc] initWithTarget:self
                                        action:@selector(singleFingerTap:)];
Then you must put singleFingerTap: in CalendarViewController, because you have told the gesture recognizer that the target is self, and self is an instance of CalendarViewController.
You also say:
This is the code in subViewData (#1) that is NOT getting executed
But I do not know what "in subViewData" means; you did not mention anything called "subViewData" in your explanation of the interface. Anyway it doesn't matter. You've specified self so the code needs to be in self.
However, it sounds to me as if the view in question is never receiving the touch at all. Perhaps it is not exposed (i.e. it's covered by another view). Perhaps its userInteractionEnabled is NO. There could be lots of reasons.
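For reference, a minimal sketch of the setup this answer describes, assuming subViewData is a subview property of CalendarViewController and is not covered by another view:
// In CalendarViewController's viewDidLoad:
self.subViewData.userInteractionEnabled = YES;
UITapGestureRecognizer *fingerTap =
    [[UITapGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(singleFingerTap:)];
[self.subViewData addGestureRecognizer:fingerTap];

// The action lives in CalendarViewController (the target), not in the view:
- (void)singleFingerTap:(UITapGestureRecognizer *)gesture {
    CGPoint pt = [gesture locationInView:self.subViewData];
    NSLog(@"tapped subViewData at %@", NSStringFromCGPoint(pt));
}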
I have an image that I would like to set up to respond to several different gesture responders. So for example, if one part of the picture is touched I would like one selector to be called, and another selector for a different part of the picture.
I looked at the UIGestureRecognizer and UITapGestureRecognizer classes, but I couldn't find a way to specify the image zones to be associated with them. Is this at all possible in iOS? And if so what classes should I look into using?
The easiest solution is to lay invisible views over the image and put the gesture recognizers on them.
If that's not feasible you'll have to look at the locationInView in the gesture recognizer's tap handler and figure out what you want to do based on where the user tapped.
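A sketch of the invisible-overlay approach follows; the frames, the imageView name, and the two action selectors are placeholders, not part of the original answer.
imageView.userInteractionEnabled = YES; // UIImageView ignores touches by default
UIView *firstHotspot = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 100.0, 100.0)];
firstHotspot.backgroundColor = [UIColor clearColor]; // invisible, but still hit-testable
[firstHotspot addGestureRecognizer:
    [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(firstAreaTapped:)]];
[imageView addSubview:firstHotspot];

UIView *secondHotspot = [[UIView alloc] initWithFrame:CGRectMake(120.0, 0.0, 100.0, 100.0)];
secondHotspot.backgroundColor = [UIColor clearColor];
[secondHotspot addGestureRecognizer:
    [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(secondAreaTapped:)]];
[imageView addSubview:secondHotspot];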
Use the locationInView: method to determine where your tap occurred, and then conditionally invoke a method. You can do this by setting up some CGRects that correspond to your hit areas, then using the CGRectContainsPoint() function to determine whether the tap landed in one of them.
Your tap gesture recognizer action may look something like this:
- (void)tapGestureRecognized:(UIGestureRecognizer *)recognizer
{
    // Specify some CGRects that will be hit areas
    CGRect firstHitArea = CGRectMake(10.0f, 10.0f, 44.0f, 44.0f);
    CGRect secondHitArea = CGRectMake(64.0f, 10.0f, 44.0f, 44.0f);

    // Get the location of the touch in the view's coordinate space
    CGPoint touchLocation = [recognizer locationInView:recognizer.view];

    if (CGRectContainsPoint(firstHitArea, touchLocation))
    {
        [self firstMethod];
    }
    else if (CGRectContainsPoint(secondHitArea, touchLocation))
    {
        [self secondMethod];
    }
}
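For completeness, attaching the recognizer might look like this (a sketch; imageView is assumed to be the view showing your picture):
imageView.userInteractionEnabled = YES; // UIImageView ignores touches by default
UITapGestureRecognizer *tap =
    [[UITapGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(tapGestureRecognized:)];
[imageView addGestureRecognizer:tap];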