I have an MKMapView with a polygon overlay that I need to convert into a UIView. I have references to the MKMapView and the MKPolygon, but I can't find a way to pull the coordinates out of the MKPolygon and convert them back into screen coordinates for the UIView.
You can convert polygon points to view coordinates (if that is what you need) like this:
iPhone SDK: Convert MKMapPoint to CGPoint
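Conceptually, that conversion is a linear mapping from the visible MKMapRect onto the view's bounds; in real code you would use MKMapView's convert(_:toPointTo:) for coordinates or MKOverlayRenderer's point(for:) for map points. A minimal sketch of the underlying math, with plain-Double stand-ins for the MapKit types:

```swift
// Plain-Double stand-ins for MKMapPoint / MKMapRect.
struct MapPoint { var x: Double, y: Double }
struct MapRect { var x: Double, y: Double, width: Double, height: Double }

// Map a point in map space to view coordinates, assuming `visible` is the
// map rect currently shown in a view of the given size (in points).
func viewPoint(for p: MapPoint, visible: MapRect,
               viewWidth: Double, viewHeight: Double) -> (x: Double, y: Double) {
    ((p.x - visible.x) / visible.width * viewWidth,
     (p.y - visible.y) / visible.height * viewHeight)
}

let visible = MapRect(x: 100, y: 100, width: 200, height: 200)
let p = viewPoint(for: MapPoint(x: 150, y: 150), visible: visible,
                  viewWidth: 400, viewHeight: 400)
// p == (100.0, 100.0): a quarter of the way across the map rect lands a
// quarter of the way across the view.
```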
I have an MKMapView with a lot of polygons added as overlays. To optimize memory, I need to know, before adding an overlay, whether the polygon is inside the visible area of the MKMapView.
I can even create a polygon for the whole visible area from the map view's corner coordinates; for example, topLeft as below:
func topLeftCoordinate() -> CLLocationCoordinate2D {
    return convert(bounds.origin, toCoordinateFrom: self)
}
With all four corner coordinates, I can create a current_visible_area_polygon, and I want to check whether each polygon I add is inside this current_visible_area_polygon.
So it comes down to two questions:
Is it possible to check if a polygon is inside another polygon, or at least intersects it? OR
Is it possible to check if a polygon is inside the visible map rect?
I found the answer to be the following:
let mapView: MKMapView
let mkPolygon: MKPolygon
// true when the polygon's bounding rect overlaps the visible area
mapView.visibleMapRect.intersects(mkPolygon.boundingMapRect)
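Behind that call, the visibility test is just an axis-aligned rectangle intersection between the visible map rect and the overlay's boundingMapRect. A minimal sketch of that test, with Rect as a plain-Double stand-in for MKMapRect:

```swift
// Plain-Double stand-in for MKMapRect.
struct Rect {
    var x: Double, y: Double, width: Double, height: Double

    // Axis-aligned intersection test, mirroring MKMapRectIntersectsRect.
    func intersects(_ other: Rect) -> Bool {
        x < other.x + other.width && other.x < x + width &&
        y < other.y + other.height && other.y < y + height
    }
}

let visibleArea = Rect(x: 0, y: 0, width: 100, height: 100)
let polygonBounds = Rect(x: 90, y: 90, width: 50, height: 50)
// The rects share a corner, so the polygon should be added/drawn.
let shouldAdd = visibleArea.intersects(polygonBounds)
```

Note that a bounding-rect test can report true for a concave polygon whose bounds overlap the screen even though no vertex is visible; it is a cheap over-approximation, which is usually what you want for culling.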
I want to draw a bounding-box shape for RMProjectedRect structs.
I need to convert RMProjectedRect to CGRect, and then create a shape and add it to the superview.
Does anyone know how to do this conversion?
You will want to use the RMMapView routines under Converting Map Coordinates here:
https://www.mapbox.com/mapbox-ios-sdk/api/Classes/RMMapView.html
You can convert the corners of the RMProjectedRects into pixel coordinates for a given map viewport.
Thanks for your hint. Here is what I did:
CGRect frameRect = {[self.mapView projectedPointToPixel:gridMapRect.origin], [self.mapView projectedSizeToViewSize:gridMapRect.size]};
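For reference, those two RMMapView calls perform a linear viewport mapping: the origin goes through projectedPointToPixel: and the size is scaled by projectedSizeToViewSize:. A rough sketch of that math with plain doubles (PRect is a hypothetical stand-in for RMProjectedRect/CGRect; the real SDK also accounts for the flipped y axis of the projected coordinate space):

```swift
// Hypothetical stand-in for RMProjectedRect / CGRect.
struct PRect { var x: Double, y: Double, width: Double, height: Double }

// Convert a projected rect to view pixels, assuming `viewport` is the
// projected rect currently on screen and the mapping is a pure
// scale + translate.
func viewRect(for rect: PRect, viewport: PRect,
              viewWidth: Double, viewHeight: Double) -> PRect {
    let sx = viewWidth / viewport.width    // pixels per projected unit (x)
    let sy = viewHeight / viewport.height  // pixels per projected unit (y)
    return PRect(x: (rect.x - viewport.x) * sx,
                 y: (rect.y - viewport.y) * sy,
                 width: rect.width * sx,
                 height: rect.height * sy)
}
```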
I am a beginner programmer and this is my first app (I am still learning). I have overlaid a polygon onto a map view and set its fill color to an image, because I'm trying to match an image to a satellite picture. I want to rotate it so that the polygon's contents match the map. Is it possible to rotate the image? If not, is there an easier way to overlay an image onto a map view that I could use?
Here is my code:
- (MKOverlayView *)mapView:(MKMapView *)mapView viewForOverlay:(id<MKOverlay>)overlay {
    MKPolygonView *polyView = [[MKPolygonView alloc] initWithOverlay:overlay];
    polyView.strokeColor = [UIColor whiteColor];
    polyView.fillColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"Campus-map labels.jpg"]];
    return polyView;
}
Here's what I'm trying to do, if it helps:
http://i.stack.imgur.com/x53HU.jpg
The road which is circled in red should match up. I know that the polygon isn't in the right position -- this is to illustrate how the polygon needs to be rotated.
You can modify the transform property of the polyView object. For example:
polyView.transform = CGAffineTransformMakeRotation(M_PI_4);
will rotate the polygon by pi/4 radians (45 degrees), in a clockwise direction.
You might also need to change the polyView's center property to get the effect you want, since the center determines the point around which the transform's rotation takes place.
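For intuition, CGAffineTransformMakeRotation(angle) builds the standard 2×2 rotation matrix and applies it to every point of the view; in UIKit's flipped coordinate system a positive angle reads as clockwise on screen. A tiny sketch of what it does to a single point:

```swift
import Foundation  // for sin/cos

// The point mapping performed by CGAffineTransformMakeRotation(angle):
// rotate (x, y) about the origin by `angle` radians.
func rotated(x: Double, y: Double, by angle: Double) -> (x: Double, y: Double) {
    (x * cos(angle) - y * sin(angle),
     x * sin(angle) + y * cos(angle))
}

// Rotating (1, 0) by pi/2 lands (up to floating-point rounding) on (0, 1).
let p = rotated(x: 1, y: 0, by: .pi / 2)
```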
I'm using CoreGraphics in my UIView to draw a graph, and I want to be able to interact with the graph using touch input. Since touches are received in device coordinates, I need to transform them into user coordinates to relate them to the graph, but that has become an obstacle because CGContextConvertPointToUserSpace doesn't work outside of the drawing context.
Here's what I've tried.
In drawRect:
CGContextScaleCTM(ctx,...);
CGContextTranslateCTM(ctx,...); // transform graph to fit the view nicely
self.ctm = CGContextGetCTM(ctx); // save for later
// draw points using user coordinates
In my touch event handler:
CGPoint touchDevice = [gesture locationInView:self]; // touch point in device coords
CGPoint touchUser = CGPointApplyAffineTransform(touchDevice, self.ctm); // doesn't give me what I want
// CGContextConvertPointToUserSpace(touchDevice) <- what I want, but doesn't work here
Using the inverse of ctm doesn't work either. I'll admit I'm having trouble getting my head around the meaning and relationships between device coordinates, user coordinates, and the transformation matrix. I think it's not as simple as I want it to be.
EDIT: Some background from Apple's documentation (iOS Coordinate Systems and Drawing Model).
"A window is positioned and sized in screen coordinates, which are defined by the coordinate system for the display."
"Drawing commands make reference to a fixed-scale drawing space, known as the user coordinate space. The operating system maps coordinate units in this drawing space onto the actual pixels of the corresponding target device."
"You can change a view’s default coordinate system by modifying the current transformation matrix (CTM). The CTM maps points in a view’s coordinate system to points on the device’s screen."
I discovered that the CTM already included a transformation to map view coordinates (with origin at the top left) to screen coordinates (with origin at the bottom left). So (0,0) got transformed to (0,800), where the height of my view was 800, and (0,2) mapped to (0,798) etc. So I gather there are 3 coordinate systems we're talking about: screen coordinates, view/device coordinates, user coordinates. (Please correct me if I am wrong.)
The CGContext transform (CTM) maps from user coordinates all the way to screen coordinates. My solution was to maintain my own transform separately which maps from user coordinates to view coordinates. Then I could use it to go back to user coordinates from view coordinates.
My Solution:
In drawRect:
CGAffineTransform scale = CGAffineTransformMakeScale(...);
CGAffineTransform translate = CGAffineTransformMakeTranslation(...);
self.myTransform = CGAffineTransformConcat(translate, scale);
// draw points using user coordinates
In my touch event handler:
CGPoint touch = [gesture locationInView:self]; // touch point in view coords
CGPoint touchUser = CGPointApplyAffineTransform(touch, CGAffineTransformInvert(self.myTransform)); // this does the trick
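The reason the inverse works: myTransform maps user coordinates to view coordinates, so its inverse maps a touch in view coordinates back to user space. Per axis, any combination of scales and translations reduces to view = user * scale + translation, and inverting that is all CGAffineTransformInvert has to do for this kind of transform. A minimal one-axis sketch:

```swift
// One axis of an affine transform: view = user * scale + translation.
struct Axis {
    var scale: Double
    var translation: Double

    func apply(_ user: Double) -> Double { user * scale + translation }

    // What CGAffineTransformInvert computes for a scale + translate transform.
    func inverted() -> Axis {
        Axis(scale: 1 / scale, translation: -translation / scale)
    }
}

let x = Axis(scale: 2, translation: 100)   // user x -> view x
let viewX = x.apply(30)                    // 30 * 2 + 100 == 160
let userX = x.inverted().apply(viewX)      // back to 30
```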
Alternate Solution:
Another approach is to manually set up an identical context, but I think this is more of a hack.
In my touch event handler:
#import <QuartzCore/QuartzCore.h>
CGPoint touch = [gesture locationInView:self]; // view coords
CGSize layerSize = [self.layer frame].size;
UIGraphicsBeginImageContext(layerSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// as in drawRect:
CGContextScaleCTM(context, ...);
CGContextTranslateCTM(context, ...);
CGPoint touchUser = CGContextConvertPointToUserSpace(context, touch); // now it gives me what I want
UIGraphicsEndImageContext();
I have a few annotations added to an MKMapView. When the user taps one of them, it displays a callout with a right accessory button, which adds a UIView to the map displaying some information about that specific location. This UIView is centred in the map's superview, and to show that the information relates to the annotation, I would like to shift the visible map rect down on the y axis and centre it on the x axis so that the annotation sits directly under the view.
I am doing the following to centre the annotation; however, I don't know how to offset it on the y axis so that it sits under the added UIView. Can you tell me how?
[self.mapView setCenterCoordinate:[annotation coordinate] animated:YES];
If you want to centre the map on a particular coordinate but shift it down by, say, 40% so you have space for something above it, you could do something like the following:
CLLocationCoordinate2D center = coordinate;
center.latitude -= self.mapView.region.span.latitudeDelta * 0.40;
[self.mapView setCenterCoordinate:center animated:YES];
You can get the size of the information view, and then you know how much of the map you want to use (based on the difference between its size and the map view's size). Once you know the offset, you can calculate the point (in the view's coordinate system) that should be moved to the centre so that the annotation shifts down. Then you can use convertPoint:toCoordinateFromView: to find the coordinate for that point and pass it to setCenterCoordinate:animated:.
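As a sketch of the offset arithmetic in that approach: pointToCenter below is a hypothetical helper whose result would be fed through convertPoint:toCoordinateFromView: and then setCenterCoordinate:animated:, assuming the info view is centred in the map view.

```swift
// Given the annotation's current position in view coordinates and the info
// view's height, compute the view point that should become the map's new
// center so the annotation lands just below the (centred) info view.
func pointToCenter(annotationX: Double, annotationY: Double,
                   infoViewHeight: Double) -> (x: Double, y: Double) {
    // Centering on a point half the info view's height above the annotation
    // scrolls the map down, pushing the annotation below the info view's
    // bottom edge.
    (annotationX, annotationY - infoViewHeight / 2)
}
```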