It seems you can no longer rotate the map view with the user's two-finger gesture. It has been a while since I did any iOS development, but pre-iOS 6 it was enabled automatically.
Is this the case, or is it just me? It seems to me that letting users rotate the map is a very basic requirement for developers.
Any links to documentation that specifically says we can't rotate, or some clarification, would be much appreciated.
Try a UIRotationGestureRecognizer to rotate the map. The following code should help:
UIRotationGestureRecognizer *rgrr = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotateMap:)];
[mapView addGestureRecognizer:rgrr]; // mapView --> your map view
rgrr.delegate = self;
////////
- (void)rotateMap:(UIRotationGestureRecognizer *)gestureRecognizer {
    // Apply the incremental rotation to the view, then reset so the next callback is relative again
    gestureRecognizer.view.transform = CGAffineTransformRotate(gestureRecognizer.view.transform, gestureRecognizer.rotation);
    gestureRecognizer.rotation = 0;
}
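Since the code above sets rgrr.delegate = self, it may also be worth adopting UIGestureRecognizerDelegate and allowing simultaneous recognition, so the rotation does not block the map's built-in pan and pinch gestures. This is my own addition, not part of the original answer:
// Assumes the class declares <UIGestureRecognizerDelegate>
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    // Let the rotation run alongside the map view's own gestures
    return YES;
}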
I have an image as a background and I want to make certain parts of this image clickable, with zooming in and out. Is there any way to do something like that?
Don't create a new view for your gesture recognizer. The recognizer implements a locationInView: method. Set it up on the view that contains the sensitive region. In handleGesture:, hit-test the region you care about like this:
0) Do all this on the view that contains the region you care about. Don't add a special view just for the gesture recognizer.
1) Set up mySensitiveRect:
@property (assign, nonatomic) CGRect mySensitiveRect;
@synthesize mySensitiveRect = _mySensitiveRect;
self.mySensitiveRect = CGRectMake(0.0, 240.0, 320.0, 240.0);
2) Create your gestureRecognizer:
UIPinchGestureRecognizer *gr = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handleGesture:)];
[self.view addGestureRecognizer:gr];
// if not using ARC, you should [gr release];
// mySensitiveRect coords are in the coordinate system of self.view
- (void)handleGesture:(UIGestureRecognizer *)gestureRecognizer {
    CGPoint p = [gestureRecognizer locationInView:self.view];
    if (CGRectContainsPoint(self.mySensitiveRect, p)) {
        // Add your zooming code here
    } else {
        NSLog(@"got a tap, but not where I need it");
    }
}
The sensitive rect should be initialized in myView's coordinate system, the same view to which you attach the recognizer.
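If it helps, the zooming itself can be a simple incremental scale driven by the pinch recognizer. A minimal sketch of what could replace the "// Add your zooming code here" placeholder; zoomTargetView is just a placeholder name for whatever view you actually want to scale:
UIPinchGestureRecognizer *pinch = (UIPinchGestureRecognizer *)gestureRecognizer;
// Apply the pinch scale to the target view, then reset so each callback applies only the increment
self.zoomTargetView.transform = CGAffineTransformScale(self.zoomTargetView.transform, pinch.scale, pinch.scale);
pinch.scale = 1.0;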
Apple has a demo app called PhotoScroller that implements a zoomable, scrollable set of images (in a page view controller, but you don't need that.) That would be a good starting point for what you need.
Their sample apps used to be built into the Xcode docs. Since Xcode 6 I haven't seen them linked in the docs any more.
You can download PhotoScroller from Apple's online iOS Developer Library. (link)
I created an iOS app with Xcode. There is a WebView containing a PDF file. I'm able to zoom in, zoom out, and move the file on the screen.
Now I want to make the PDF file rotatable. It is a street map, and it would be useful if I could turn it, like in Apple's own iOS Maps app, where I can turn the map in all directions.
One solution is to use a UIRotationGestureRecognizer:
UIRotationGestureRecognizer *rotateGesture = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(handleRotate:)];
[yourRotatingView addGestureRecognizer:rotateGesture];
And its handler method:
- (IBAction)handleRotate:(UIRotationGestureRecognizer *)recognizer {
    recognizer.view.transform = CGAffineTransformRotate(recognizer.view.transform, recognizer.rotation);
    recognizer.rotation = 0;
}
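In your case, yourRotatingView could be the web view itself, or a plain container view wrapping it. A sketch, assuming self.webView is the outlet showing the PDF:
[self.webView addGestureRecognizer:rotateGesture]; // self.webView assumed to be the outlet holding the PDF
If the web view's own pinch and scroll gestures then stop responding, setting the recognizer's delegate and returning YES from gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer: (as shown in the map answer above) usually helps.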
I have a problem with the UIPanGestureRecognizer behavior.
Everything works great unless I slide in from the top of my iPad. "Top" here means the side where the camera is located, regardless of the current device orientation.
I debug the UIPanGestureRecognizer behavior with the following code:
- (void)viewDidLoad
{
[super viewDidLoad];
_pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panGesture:)];
[self.view addGestureRecognizer:_pan];
}
- (void)panGesture:(UIPanGestureRecognizer*)gesture
{
    if (gesture.state == UIGestureRecognizerStateBegan) {
        NSLog(@"BEGIN");
    } else {
        NSLog(@"GO");
    }
}
So when I slide in from the top, nothing happens.
It seems that iOS is catching that gesture itself; perhaps it is related to Notification Center?
In principle it seems possible to get that gesture, because I've seen it done in other apps.
What am I missing here?
Apple says in its transition guide that, because of Notification Center, touches at the very bottom and very top of the screen may be canceled. I think this is likely an example of that.
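As far as I know there was no supported way to opt out of this at the time. On much later systems (iOS 11 and up) a view controller can at least ask the system to defer its edge gestures, so the first swipe shows a grabber instead of immediately taking your touches. A minimal sketch, assuming you only care about the top edge:
// iOS 11+ only; the system then requires a second swipe before Notification Center takes over
- (UIRectEdge)preferredScreenEdgesDeferringSystemGestures {
    return UIRectEdgeTop;
}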
I have an iPad app (Xcode 4.6, storyboards, iOS 6.2). I have a scene made up like this:
1) UIView (subViewData - covers the right quadrant of #2, below and to the right, not covering the times and names of #2; contains appointment info (customer name, with the duration shaded))
2) UIView (subViewGrid - covers the bottom half; in the image it contains times down the left margin and names across the top margin)
3) UIScrollView (covers the bottom-half view)
4) UIView and UIView (one on the top half of the window, the other on the bottom half)
5) UIViewController (named CalendarViewController)
This is the code to initialize the UITapGestureRecognizer found in -viewDidLoad of the CalendarViewController:
// setup for tap recognizer
UITapGestureRecognizer *fingerTap = [[UITapGestureRecognizer alloc] initWithTarget:self
                                                                             action:@selector(singleFingerTap:)];
fingerTap.numberOfTapsRequired = 1;
fingerTap.numberOfTouchesRequired = 1;
[subViewData addGestureRecognizer:fingerTap];
This is the code in subViewData (#1) that is NOT getting executed:
- (void)singleFingerTap:(UITapGestureRecognizer *)gesture {
    CGPoint pt = [gesture locationInView:self];
    UIView *v = [self hitTest:pt withEvent:nil];
    if (v.tag == 100) // if this is for the calendar, return
        return;

    CGRect dataRect = CGRectMake(110.0, 48.0, 670.0, 1450.0);
    CGPoint dataPoint = CGPointMake(pt.x, pt.y);
    // check to see if the point is within the rectangle
    if (!CGRectContainsPoint(dataRect, dataPoint)) {
        NSLog(@"\n\nNOT within subViewData");
        return;
    }
    else {
        NSLog(@"\n\nIS within subViewData");
    }
}
The question is: why is it not capturing the taps? Is the recognizer code supposed to be in the controller or in the view that is supposedly getting the taps? I have read almost everything I can find on the subject but can't find anything about this specific scenario. Help is greatly appreciated.
You say:
This is the code to initialize the UITapGestureRecognizer found in -viewDidLoad of the CalendarViewController
And that code says:
[[UITapGestureRecognizer alloc] initWithTarget:self
                                        action:@selector(singleFingerTap:)];
Then you must put singleFingerTap: in CalendarViewController, because you have told the gesture recognizer that the target is self, and self is an instance of CalendarViewController.
You also say:
This is the code in subViewData (#1) that is NOT getting executed
But I do not know what "in subViewData" means; you did not mention anything called "subViewData" in your explanation of the interface. Anyway, it doesn't matter: you've specified self as the target, so the code needs to be in self.
However, it sounds to me as if the view in question is never receiving the touch at all. Perhaps it is not exposed (i.e. it's covered by another view). Perhaps its userInteractionEnabled is NO. There could be lots of reasons.
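To make that concrete, here is a minimal sketch with the action method living in CalendarViewController (subViewData is assumed to be a property or outlet on the controller, with userInteractionEnabled left at YES and nothing covering the view):
// In CalendarViewController.m
- (void)singleFingerTap:(UITapGestureRecognizer *)gesture {
    // The recognizer was added to subViewData, so ask for the location in that view
    CGPoint pt = [gesture locationInView:self.subViewData];
    NSLog(@"tap at %@ in subViewData", NSStringFromCGPoint(pt));
}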
I'm going to try to describe with words something that might only be describable with video.
I have created a simple iOS app with a storyboard containing a single image view. I have added two gesture recognizers: a UIPanGestureRecognizer and a UIRotationGestureRecognizer along with their corresponding IBActions.
When I first start the application in the simulator, the image view pans correctly. The image view also rotates correctly. After a rotation, however, any subsequent pan fails. When I try to pan after a rotation, regardless of the direction of the pan, the image rapidly scales to zero and disappears, i.e., it collapses or implodes to a point that disappears.
The gesture recognizers are created using the following code. myImageView is set up as an IBOutlet UIImageView.
UIPanGestureRecognizer *panRec = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(processPan:)];
[myImageView addGestureRecognizer:panRec];
UIRotationGestureRecognizer *rotRec = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(processRotation:)];
[myImageView addGestureRecognizer:rotRec];
I've written the associated actions as best I know how. They are basically slight modifications of the methods I found in the iOS documentation. These are shown below.
-(IBAction)processPan:(UIPanGestureRecognizer *)sender
{
    if (sender.state == UIGestureRecognizerStateChanged)
    {
        CGPoint translation = [sender translationInView:self.view];
        CGRect newFrame = myImageView.frame;
        newFrame.origin.x += translation.x;
        newFrame.origin.y += translation.y;
        myImageView.frame = newFrame;
        [sender setTranslation:CGPointMake(0, 0) inView:self.view];
    }
}
-(IBAction)processRotation:(UIRotationGestureRecognizer *)sender
{
    if (sender.state == UIGestureRecognizerStateChanged)
    {
        myImageView.transform = CGAffineTransformRotate(myImageView.transform, sender.rotation);
        [sender setRotation:0];
    }
}
So what am I missing? I am new at this, so hopefully my ignorance will be tolerated.
I am running Xcode version 4.2.1 on OS X version 10.7.3 on a MacBook if that helps. Thank you so much for taking the time to read my question. Stack Overflow is an unbelievable resource!
-Dave
Well, I don't know if I've come up with a solution or if I've come up with a kludge. Basically, the pan code wasn't working for me. Any time the view was rotated or scaled, the panning code would seriously distort or collapse the view being translated. I stared at transform matrices and frame coordinate systems until I just about went blind.
The translation code I listed in my first post was basically copied from Listing 3-2, "Handling pinch, pan, and double-tap gestures" from the Gesture Recognizers section out of Apple's Event Handling Guide for iOS, so I figured it would do the trick for me. Well, I ended up writing my own code for it using the UIImageView center and not messing with the frame at all. Here is what worked for me.
CGPoint translation = [sender translationInView:self.superview];
CGPoint newCenter = CGPointMake(self.myImageView.center.x + translation.x, self.myImageView.center.y + translation.y);
[self.myImageView setCenter:newCenter];
[sender setTranslation:CGPointMake(0, 0) inView:self.superview];
I used the superview as a reference for the translation in case it was rotated. It seems to work now.
This effort probably reveals something about how my understanding of frames isn't correct. If someone can tell me how to correct my understanding, I'd appreciate it.
-Dave
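For what it's worth, the behavior described above matches what UIKit documents: once a view's transform is no longer the identity transform, its frame is undefined, so panning by adjusting the frame breaks after a rotation, while center and bounds remain valid. Folding the center-based fix back into a full handler might look roughly like this (my sketch, assuming the same myImageView outlet and the handler living in the view controller):
-(IBAction)processPan:(UIPanGestureRecognizer *)sender
{
    if (sender.state == UIGestureRecognizerStateChanged)
    {
        // Translate in the (unrotated) superview's coordinate space and move the center;
        // unlike frame, center stays meaningful while a rotation transform is applied.
        UIView *reference = self.myImageView.superview;
        CGPoint translation = [sender translationInView:reference];
        self.myImageView.center = CGPointMake(self.myImageView.center.x + translation.x,
                                              self.myImageView.center.y + translation.y);
        [sender setTranslation:CGPointZero inView:reference];
    }
}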