Turn two CGPoints into a CGRect - ios

How can I, given two different CGPoints, turn them into a CGRect?
Example:
CGPoint p1 = CGPointMake(0,10);
CGPoint p2 = CGPointMake(10,0);
How can I turn this into a CGRect?

This will take two arbitrary points and give you the CGRect that has them as opposite corners.
CGRect r = CGRectMake(MIN(p1.x, p2.x),
                      MIN(p1.y, p2.y),
                      fabs(p1.x - p2.x),
                      fabs(p1.y - p2.y));
The smaller x value paired with the smaller y value will always be the origin of the rect (first two arguments). The absolute value of the difference between x values will be the width, and between y values the height.
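As a sanity check, the same computation in a language-neutral Python sketch (plain (x, y) tuples standing in for CGPoint):

```python
def rect_from_points(p1, p2):
    """Return (x, y, width, height) of the rect that has p1 and p2 as opposite corners."""
    x = min(p1[0], p2[0])
    y = min(p1[1], p2[1])
    width = abs(p1[0] - p2[0])
    height = abs(p1[1] - p2[1])
    return (x, y, width, height)

# The example from the question: (0, 10) and (10, 0)
print(rect_from_points((0, 10), (10, 0)))  # (0, 0, 10, 10)
```

Note that the result is the same regardless of which point comes first.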

A slight modification of Ken's answer. Let CGGeometry "standardize" the rect for you.
CGRect rect = CGRectStandardize(CGRectMake(p1.x, p1.y, p2.x - p1.x, p2.y - p1.y));

Swift extension:
extension CGRect {
    init(p1: CGPoint, p2: CGPoint) {
        self.init(x: min(p1.x, p2.x),
                  y: min(p1.y, p2.y),
                  width: abs(p1.x - p2.x),
                  height: abs(p1.y - p2.y))
    }
}

Assuming p1 is the origin and the other point is the opposite corner of a rectangle, you could do this:
CGRect rect = CGRectMake(p1.x, p1.y, fabs(p2.x-p1.x), fabs(p2.y-p1.y));

This function takes any number of CGPoints and gives you the smallest CGRect back.
CGRect CGRectSmallestWithCGPoints(CGPoint pointsArray[], int numberOfPoints)
{
    CGFloat greatestXValue = pointsArray[0].x;
    CGFloat greatestYValue = pointsArray[0].y;
    CGFloat smallestXValue = pointsArray[0].x;
    CGFloat smallestYValue = pointsArray[0].y;
    for (int i = 1; i < numberOfPoints; i++)
    {
        CGPoint point = pointsArray[i];
        greatestXValue = MAX(greatestXValue, point.x);
        greatestYValue = MAX(greatestYValue, point.y);
        smallestXValue = MIN(smallestXValue, point.x);
        smallestYValue = MIN(smallestYValue, point.y);
    }
    CGRect rect;
    rect.origin = CGPointMake(smallestXValue, smallestYValue);
    rect.size.width = greatestXValue - smallestXValue;
    rect.size.height = greatestYValue - smallestYValue;
    return rect;
}
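The same min/max scan, sketched in Python for clarity (tuples stand in for CGPoint; not tied to any Apple API):

```python
def smallest_rect(points):
    """Smallest (x, y, width, height) rect that contains every point."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))
```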

This will return a rect of width or height 0 if the two points are horizontally or vertically aligned:
float x, y, h, w;
if (p1.x > p2.x) {
    x = p2.x;
    w = p1.x - p2.x;
} else {
    x = p1.x;
    w = p2.x - p1.x;
}
if (p1.y > p2.y) {
    y = p2.y;
    h = p1.y - p2.y;
} else {
    y = p1.y;
    h = p2.y - p1.y;
}
CGRect newRect = CGRectMake(x, y, w, h);

let r0 = CGRect(origin: p0, size: .zero)
let r1 = CGRect(origin: p1, size: .zero)
let rect = r0.union(r1).standardized

Related

How to zoom from min to max coordinates positions in UIScrollView?

I have a UIScrollView which holds a UIImageView. I have a requirement to pan and zoom to a particular location in the UIImageView.
So, for example, I want to start from (x: 0, y: 0) and move to a specified location.
What's the best practice for doing that?
This is how I'm achieving it:
- (void)setInitialZoomLevelAndShowImageInCenter {
    CGRect scrollViewFrame = scrollViewMap.frame;
    CGFloat scaleWidth = scrollViewFrame.size.width / scrollViewMap.contentSize.width;
    CGFloat scaleHeight = scrollViewFrame.size.height / scrollViewMap.contentSize.height;
    CGFloat minScale = MIN(scaleWidth, scaleHeight);
    scrollViewMap.minimumZoomScale = minScale / 1.5;
    scrollViewMap.maximumZoomScale = 10.0f;
    scrollViewMap.zoomScale = minScale;
    CGFloat newZoomScale = scrollViewMap.zoomScale * 2.80f;
    CGFloat xPos = 0;
    CGFloat yPos = 0;
    BOOL isMatched = NO;
    while (!isMatched) {
        [self zoomToPoint:CGPointMake(xPos, yPos) withScale:newZoomScale animated:YES];
        BOOL isAnyChange = NO;
        if (xPos <= slotItem.xCord) {
            xPos += 1.0f;
            isAnyChange = YES;
        }
        if (yPos <= slotItem.yCord) {
            yPos += 1.0f;
            isAnyChange = YES;
        }
        if (!isAnyChange) {
            isMatched = YES;
        }
    }
}
- (void)zoomToPoint:(CGPoint)zoomPoint withScale:(CGFloat)scale animated:(BOOL)animated
{
    // Normalize current content size back to content scale of 1.0f
    CGSize contentSize;
    contentSize.width = (scrollViewMap.contentSize.width / scrollViewMap.zoomScale);
    contentSize.height = (scrollViewMap.contentSize.height / scrollViewMap.zoomScale);
    // Derive the size of the region to zoom to
    CGSize zoomSize;
    zoomSize.width = scrollViewMap.bounds.size.width / scale;
    zoomSize.height = scrollViewMap.bounds.size.height / scale;
    // Offset the zoom rect so the actual zoom point is in the middle of the rectangle
    CGRect zoomRect;
    zoomRect.origin.x = zoomPoint.x - zoomSize.width / 2.0f;
    zoomRect.origin.y = zoomPoint.y - zoomSize.height / 2.0f;
    zoomRect.size.width = zoomSize.width;
    zoomRect.size.height = zoomSize.height;
    // Apply the resize
    [scrollViewMap zoomToRect:zoomRect animated:animated];
}
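The core of zoomToPoint: is just arithmetic: divide the visible size by the target scale and center the resulting rect on the zoom point. A minimal Python sketch of that step (hypothetical helper, tuples instead of CGRect/CGSize):

```python
def zoom_rect(zoom_point, view_size, scale):
    """Rect (x, y, width, height) that centers zoom_point when zoomed to `scale`.

    view_size is the scroll view's bounds size; dividing by the scale gives
    the size of the content region that will fill the view.
    """
    width = view_size[0] / scale
    height = view_size[1] / scale
    return (zoom_point[0] - width / 2.0, zoom_point[1] - height / 2.0, width, height)
```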

How does CALayer convert point from and to its sublayers?

Let's assume we have a 2D space (to simplify the situation), and a layer S and a layer C, where C is a sublayer of S.
The conversion process must account for the bounds and position of C, the transform of C, the sublayerTransform of S, and the anchorPoint of C. My guess was the following:
CGAffineTransform transformToChild(CALayer *S, CALayer *C) {
    CGFloat txa = - C.bounds.origin.x - C.bounds.size.width * C.anchorPoint.x;
    CGFloat tya = - C.bounds.origin.y - C.bounds.size.height * C.anchorPoint.y;
    CGFloat txb = C.position.x;
    CGFloat tyb = C.position.y;
    CGAffineTransform sublayerTransform = CATransform3DGetAffineTransform(S.sublayerTransform);
    CGAffineTransform fromS = CGAffineTransformTranslate(sublayerTransform, txb, tyb);
    fromS = CGAffineTransformConcat(fromS, C.affineTransform);
    fromS = CGAffineTransformTranslate(fromS, txa, tya);
    return fromS;
}
But this does not work when the transform of the child layer is not the identity (e.g. a rotation by the angle M_PI_2).
Whole code with layers:
CALayer *l1 = [CALayer new];
l1.frame = CGRectMake(-40, -40, 80, 80);
l1.bounds = CGRectMake(40, 40, 80, 80);
CALayer *l2 = [CALayer new];
l2.frame = CGRectMake(50, 40, 20, 20);
l2.bounds = CGRectMake(40, 40, 20, 20);
CGAffineTransform t2 = CGAffineTransformMakeRotation(M_PI / 2);
l2.affineTransform = t2;
[l1 addSublayer:l2];
CGAffineTransform toL2 = transformToChild(l1, l2);
CGPoint p = CGPointApplyAffineTransform(CGPointMake(70, 50), toL2);
NSLog(@"Custom Point %@", [NSValue valueWithCGPoint:p]);
p = [l1 convertPoint:CGPointMake(70, 50) toLayer:l2];
NSLog(@"CoreAnimation Point %@", [NSValue valueWithCGPoint:p]);
Comparison to system results:
Custom Point NSPoint: {-50, 80}
CoreAnimation Point NSPoint: {50, 40}
There's an old mailing list thread with some details about this here:
http://lists.apple.com/archives/quartz-dev/2008/Mar/msg00086.html
http://lists.apple.com/archives/quartz-dev/2008/Mar/msg00087.html
Those messages are quite old, so they don't include the effects of the geometryFlipped property, which was added more recently, but that would just add another term to the merged matrix.
So, I found this way of converting points to and from a sublayer's coordinate space, which works correctly with a sublayer's non-identity transform. The sublayerTransform of the superlayer is not covered here, but I think it would not be hard to extend these functions to support it.
CGPoint pointToChild(CALayer *C, CGPoint p) {
    CGFloat txa = - C.bounds.origin.x - C.bounds.size.width * C.anchorPoint.x;
    CGFloat tya = - C.bounds.origin.y - C.bounds.size.height * C.anchorPoint.y;
    CGFloat txb = C.position.x;
    CGFloat tyb = C.position.y;
    p.x -= txb;
    p.y -= tyb;
    p = CGPointApplyAffineTransform(p, CGAffineTransformInvert(C.affineTransform));
    if (C.isGeometryFlipped) {
        CGAffineTransform flip = CGAffineTransformMakeScale(1.0f, -1.0f);
        flip = CGAffineTransformTranslate(flip, 0, C.bounds.size.height * (2.0f * C.anchorPoint.y - 1.0f));
        p = CGPointApplyAffineTransform(p, CGAffineTransformInvert(flip));
    }
    p.x -= txa;
    p.y -= tya;
    return p;
}
CGPoint pointFromChild(CALayer *C, CGPoint p) {
    CGFloat txb = - C.bounds.origin.x - C.bounds.size.width * C.anchorPoint.x;
    CGFloat tyb = - C.bounds.origin.y - C.bounds.size.height * C.anchorPoint.y;
    CGFloat txa = C.position.x;
    CGFloat tya = C.position.y;
    p.x += txb;
    p.y += tyb;
    if (C.isGeometryFlipped) {
        CGAffineTransform flip = CGAffineTransformMakeScale(1.0f, -1.0f);
        flip = CGAffineTransformTranslate(flip, 0, C.bounds.size.height * (2.0f * C.anchorPoint.y - 1.0f));
        p = CGPointApplyAffineTransform(p, flip);
    }
    p = CGPointApplyAffineTransform(p, C.affineTransform);
    p.x += txa;
    p.y += tya;
    return p;
}
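To see that the two functions really are inverses, here is a Python model of the same math (rotation-only affine transform, geometryFlipped ignored; positions and sizes are plain tuples). With the layer setup from the question, it reproduces the CoreAnimation Point {50, 40} from the log above:

```python
import math

def point_to_child(p, position, bounds_origin, size, anchor, angle):
    """Parent -> child: untranslate by position, undo the child's rotation,
    then shift by the anchor/bounds offset (mirrors pointToChild above)."""
    tx = -bounds_origin[0] - size[0] * anchor[0]
    ty = -bounds_origin[1] - size[1] * anchor[1]
    x, y = p[0] - position[0], p[1] - position[1]
    c, s = math.cos(-angle), math.sin(-angle)   # inverse rotation
    x, y = x * c - y * s, x * s + y * c
    return (x - tx, y - ty)

def point_from_child(p, position, bounds_origin, size, anchor, angle):
    """Child -> parent: the exact inverse sequence (mirrors pointFromChild)."""
    tx = -bounds_origin[0] - size[0] * anchor[0]
    ty = -bounds_origin[1] - size[1] * anchor[1]
    x, y = p[0] + tx, p[1] + ty
    c, s = math.cos(angle), math.sin(angle)
    x, y = x * c - y * s, x * s + y * c
    return (x + position[0], y + position[1])

# l2 from the example: frame (50, 40, 20, 20) gives position (60, 50);
# bounds origin (40, 40), size (20, 20), default anchor (0.5, 0.5), rotated pi/2.
p = point_to_child((70, 50), (60, 50), (40, 40), (20, 20), (0.5, 0.5), math.pi / 2)
print(p)  # approximately (50.0, 40.0), matching [l1 convertPoint:toLayer:]
```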

Draw shape of land on view by taking latitude and longitude in iOS?

I want to draw the shape of a small plot of land on a view, using the latitude and longitude of each corner of the land.
I wrote the following code. For now I used hard-coded values.
- (void)drawRect:(CGRect)rect {
    CGSize screenSize = [UIScreen mainScreen].applicationFrame.size;
    SCALE = MIN(screenSize.width, screenSize.height) / (2.0 * EARTH_RADIUS);
    OFFSET = MIN(screenSize.width, screenSize.height) / 2.0;
    CGPoint latLong1 = {18.626103, 73.805023};
    CGPoint latLong2 = {18.626444, 73.804884};
    CGPoint latLong3 = {18.626226, 73.804969};
    CGPoint latLong4 = {18.626103, 73.805023};
    NSMutableArray *points = [NSMutableArray arrayWithObjects:
        [NSValue valueWithCGPoint:[self convertLatLongCoord:latLong1]],
        [NSValue valueWithCGPoint:[self convertLatLongCoord:latLong2]],
        [NSValue valueWithCGPoint:[self convertLatLongCoord:latLong3]],
        [NSValue valueWithCGPoint:[self convertLatLongCoord:latLong4]], nil];
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    for (int i = 0; i < points.count; i++)
    {
        // CGPoint newCoord = [self convertLatLongCoord:latLong];
        NSValue *val = [points objectAtIndex:i];
        CGPoint newCoord = [val CGPointValue];
        if (i == 0)
        {
            // Move to the first point
            CGContextMoveToPoint(ctx, newCoord.x, newCoord.y);
        }
        else
        {
            CGContextAddLineToPoint(ctx, newCoord.x, newCoord.y);
            CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 1);
            CGContextSetStrokeColorWithColor(ctx, [[UIColor redColor] CGColor]);
        }
    }
    CGContextStrokePath(ctx);
}
Below is the method which converts lat/long into x, y coordinates.
- (CGPoint)convertLatLongCoord:(CGPoint)latLong
{
    CGFloat x = EARTH_RADIUS * cos(latLong.x) * cos(latLong.y) * SCALE + OFFSET;
    CGFloat y = EARTH_RADIUS * cos(latLong.x) * sin(latLong.y) * SCALE + OFFSET;
    return CGPointMake(x, y);
}
My problem is that for a small plot of land (e.g. a house lot) the shape is not visible on the view after drawing. How can I scale the shape up so it fills the view?
Thanks in advance.

Objective-C check if subviews of rotated UIViews intersect?

I don't know where to start with this one. Obviously CGRectIntersectsRect will not work in this case, and you'll see why.
I have a subclass of UIView that has a UIImageView inside it that is placed in the exact center of the UIView:
I then rotate the custom UIView to maintain the frame of the inner UIImageView while still being able to perform a CGAffineRotation. The resulting frame looks something like this:
I need to prevent users from making these UIImageViews intersect, but I have no idea how to check intersection between the two UIImageViews, since not only do their frames not apply to the parent UIView, but also, they are rotated without it affecting their frames.
All of my attempts so far have been unsuccessful.
Any ideas?
The following algorithm can be used to check if two (rotated or otherwise transformed) views overlap:
Use [view convertPoint:point toView:nil] to convert the 4 boundary points of both views
to a common coordinate system (the window coordinates).
The converted points form two convex quadrilaterals.
Use the SAT (Separating Axis Theorem) to check if the quadrilaterals intersect.
This: http://www.geometrictools.com/Documentation/MethodOfSeparatingAxes.pdf is another description of the algorithm containing pseudo-code, more can be found by googling for "Separating Axis Theorem".
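The theorem itself is compact enough to sketch in a few lines of Python (vertices as (x, y) tuples; both polygons must be convex):

```python
def projection(poly, axis):
    """Min and max of the dot products of every vertex with the axis."""
    dots = [x * axis[0] + y * axis[1] for (x, y) in poly]
    return min(dots), max(dots)

def convex_intersects(poly1, poly2):
    """Separating Axis Theorem: two convex polygons are disjoint iff the
    perpendicular of some edge separates their projection intervals."""
    for poly in (poly1, poly2):
        n = len(poly)
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            axis = (-(y2 - y1), x2 - x1)        # perpendicular to this edge
            min1, max1 = projection(poly1, axis)
            min2, max2 = projection(poly2, axis)
            if max1 < min2 or max2 < min1:      # separating axis found
                return False
    return True
```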
Update: I have tried to create an Objective-C method for the "Separating Axis Theorem", and this is what I got. So far I have only run a few tests, so I hope there are not too many errors.
- (BOOL)convexPolygon:(CGPoint *)poly1 count:(int)count1 intersectsWith:(CGPoint *)poly2 count:(int)count2;
tests if 2 convex polygons intersect. Both polygons are given as a CGPoint array of the vertices.
- (BOOL)view:(UIView *)view1 intersectsWith:(UIView *)view2
tests (as described above) if two arbitrary views intersect.
Implementation:
- (void)projectionOfPolygon:(CGPoint *)poly count:(int)count onto:(CGPoint)perp min:(CGFloat *)minp max:(CGFloat *)maxp
{
    CGFloat minproj = MAXFLOAT;
    CGFloat maxproj = -MAXFLOAT;
    for (int j = 0; j < count; j++) {
        CGFloat proj = poly[j].x * perp.x + poly[j].y * perp.y;
        if (proj > maxproj)
            maxproj = proj;
        if (proj < minproj)
            minproj = proj;
    }
    *minp = minproj;
    *maxp = maxproj;
}
- (BOOL)convexPolygon:(CGPoint *)poly1 count:(int)count1 intersectsWith:(CGPoint *)poly2 count:(int)count2
{
    for (int i = 0; i < count1; i++) {
        // Perpendicular vector for one edge of poly1:
        CGPoint p1 = poly1[i];
        CGPoint p2 = poly1[(i+1) % count1];
        CGPoint perp = CGPointMake(- (p2.y - p1.y), p2.x - p1.x);
        // Projection intervals of poly1, poly2 onto perpendicular vector:
        CGFloat minp1, maxp1, minp2, maxp2;
        [self projectionOfPolygon:poly1 count:count1 onto:perp min:&minp1 max:&maxp1];
        [self projectionOfPolygon:poly2 count:count2 onto:perp min:&minp2 max:&maxp2];
        // If projections do not overlap then we have a "separating axis",
        // which means that the polygons do not intersect:
        if (maxp1 < minp2 || maxp2 < minp1)
            return NO;
    }
    // And now the other way around with edges from poly2:
    for (int i = 0; i < count2; i++) {
        CGPoint p1 = poly2[i];
        CGPoint p2 = poly2[(i+1) % count2];
        CGPoint perp = CGPointMake(- (p2.y - p1.y), p2.x - p1.x);
        CGFloat minp1, maxp1, minp2, maxp2;
        [self projectionOfPolygon:poly1 count:count1 onto:perp min:&minp1 max:&maxp1];
        [self projectionOfPolygon:poly2 count:count2 onto:perp min:&minp2 max:&maxp2];
        if (maxp1 < minp2 || maxp2 < minp1)
            return NO;
    }
    // No separating axis found, so the polygons must intersect:
    return YES;
}
- (BOOL)view:(UIView *)view1 intersectsWith:(UIView *)view2
{
    CGPoint poly1[4];
    CGRect bounds1 = view1.bounds;
    poly1[0] = [view1 convertPoint:bounds1.origin toView:nil];
    poly1[1] = [view1 convertPoint:CGPointMake(bounds1.origin.x + bounds1.size.width, bounds1.origin.y) toView:nil];
    poly1[2] = [view1 convertPoint:CGPointMake(bounds1.origin.x + bounds1.size.width, bounds1.origin.y + bounds1.size.height) toView:nil];
    poly1[3] = [view1 convertPoint:CGPointMake(bounds1.origin.x, bounds1.origin.y + bounds1.size.height) toView:nil];
    CGPoint poly2[4];
    CGRect bounds2 = view2.bounds;
    poly2[0] = [view2 convertPoint:bounds2.origin toView:nil];
    poly2[1] = [view2 convertPoint:CGPointMake(bounds2.origin.x + bounds2.size.width, bounds2.origin.y) toView:nil];
    poly2[2] = [view2 convertPoint:CGPointMake(bounds2.origin.x + bounds2.size.width, bounds2.origin.y + bounds2.size.height) toView:nil];
    poly2[3] = [view2 convertPoint:CGPointMake(bounds2.origin.x, bounds2.origin.y + bounds2.size.height) toView:nil];
    return [self convexPolygon:poly1 count:4 intersectsWith:poly2 count:4];
}
Swift version. (Added this behaviour to UIView via an extension)
extension UIView {
    func projection(of polygon: [CGPoint], perpendicularVector: CGPoint) -> (CGFloat, CGFloat) {
        var minproj = CGFloat.greatestFiniteMagnitude
        var maxproj = -CGFloat.greatestFiniteMagnitude
        for j in 0..<polygon.count {
            let proj = polygon[j].x * perpendicularVector.x + polygon[j].y * perpendicularVector.y
            if proj > maxproj {
                maxproj = proj
            }
            if proj < minproj {
                minproj = proj
            }
        }
        return (minproj, maxproj)
    }
    func convex(polygon: [CGPoint], intersectsWith polygon2: [CGPoint]) -> Bool {
        // Edges of the first polygon:
        let count1 = polygon.count
        for i in 0..<count1 {
            let p1 = polygon[i]
            let p2 = polygon[(i+1) % count1]
            let perpendicularVector = CGPoint(x: -(p2.y - p1.y), y: p2.x - p1.x)
            let (minp1, maxp1) = projection(of: polygon, perpendicularVector: perpendicularVector)
            let (minp2, maxp2) = projection(of: polygon2, perpendicularVector: perpendicularVector)
            if maxp1 < minp2 || maxp2 < minp1 {
                return false
            }
        }
        // And the edges of the second polygon:
        let count2 = polygon2.count
        for i in 0..<count2 {
            let p1 = polygon2[i]
            let p2 = polygon2[(i+1) % count2]
            let perpendicularVector = CGPoint(x: -(p2.y - p1.y), y: p2.x - p1.x)
            let (minp1, maxp1) = projection(of: polygon, perpendicularVector: perpendicularVector)
            // Note: the original post read `let maxp2 = m1.0` here, which is a
            // bug; the max must come from the second polygon's projection.
            let (minp2, maxp2) = projection(of: polygon2, perpendicularVector: perpendicularVector)
            if maxp1 < minp2 || maxp2 < minp1 {
                return false
            }
        }
        // No separating axis found, so the polygons intersect:
        return true
    }
    func intersects(with someView: UIView) -> Bool {
        var points1 = [CGPoint]()
        let bounds1 = bounds
        let p11 = convert(bounds1.origin, to: nil)
        let p21 = convert(CGPoint(x: bounds1.origin.x + bounds1.size.width, y: bounds1.origin.y), to: nil)
        let p31 = convert(CGPoint(x: bounds1.origin.x + bounds1.size.width, y: bounds1.origin.y + bounds1.size.height), to: nil)
        let p41 = convert(CGPoint(x: bounds1.origin.x, y: bounds1.origin.y + bounds1.size.height), to: nil)
        points1.append(p11)
        points1.append(p21)
        points1.append(p31)
        points1.append(p41)

        var points2 = [CGPoint]()
        let bounds2 = someView.bounds
        let p12 = someView.convert(bounds2.origin, to: nil)
        let p22 = someView.convert(CGPoint(x: bounds2.origin.x + bounds2.size.width, y: bounds2.origin.y), to: nil)
        let p32 = someView.convert(CGPoint(x: bounds2.origin.x + bounds2.size.width, y: bounds2.origin.y + bounds2.size.height), to: nil)
        let p42 = someView.convert(CGPoint(x: bounds2.origin.x, y: bounds2.origin.y + bounds2.size.height), to: nil)
        points2.append(p12)
        points2.append(p22)
        points2.append(p32)
        points2.append(p42)

        return convex(polygon: points1, intersectsWith: points2)
    }
}

Calculating tiles to display in a MapRect when "over-zoomed" beyond the overlay tile set

I am working on an app that uses MKOverlay views to layer my own custom maps on top of the Google base map. I have been using Apple's excellent TileMap sample code (from WWDC 2010) as a guide.
My problem - when "overzoomed" to a level of detail deeper than my generated tile set, the code displays nothing because there are no tiles available at the calculated Z level.
The behavior I want - when "overzoomed" the app should just keep magnifying the deepest level of tiles. It is a good user experience for the overlay to become blurrier - it is a very bad experience to have the overlay vanish.
Here is the code which returns the tiles to draw - I need to figure out how to modify this to cap the Z-depth without breaking the scaling of the frame being calculated for the overlay tile. Any thoughts???
- (NSArray *)tilesInMapRect:(MKMapRect)rect zoomScale:(MKZoomScale)scale
{
    NSInteger z = zoomScaleToZoomLevel(scale);
    // PROBLEM: I need to find a way to cap z at my maximum tile directory depth.
    // Number of tiles wide or high (but not wide * high)
    NSInteger tilesAtZ = pow(2, z);
    NSInteger minX = floor((MKMapRectGetMinX(rect) * scale) / TILE_SIZE);
    NSInteger maxX = floor((MKMapRectGetMaxX(rect) * scale) / TILE_SIZE);
    NSInteger minY = floor((MKMapRectGetMinY(rect) * scale) / TILE_SIZE);
    NSInteger maxY = floor((MKMapRectGetMaxY(rect) * scale) / TILE_SIZE);
    NSMutableArray *tiles = nil;
    for (NSInteger x = minX; x <= maxX; x++) {
        for (NSInteger y = minY; y <= maxY; y++) {
            // As in initWithTilePath, need to flip y index
            // to match the gdal2tiles.py convention.
            NSInteger flippedY = abs(y + 1 - tilesAtZ);
            NSString *tileKey = [[NSString alloc]
                initWithFormat:@"%d/%d/%d", z, x, flippedY];
            if ([tilePaths containsObject:tileKey]) {
                if (!tiles) {
                    tiles = [NSMutableArray array];
                }
                MKMapRect frame = MKMapRectMake((double)(x * TILE_SIZE) / scale,
                                                (double)(y * TILE_SIZE) / scale,
                                                TILE_SIZE / scale,
                                                TILE_SIZE / scale);
                NSString *path = [[NSString alloc] initWithFormat:@"%@/%@.png",
                                  tileBase, tileKey];
                ImageTile *tile = [[ImageTile alloc] initWithFrame:frame path:path];
                [path release];
                [tiles addObject:tile];
                [tile release];
            }
            [tileKey release];
        }
    }
    return tiles;
}
FYI, here is the zoomScaleToZoomLevel helper function that someone asked about:
// Convert an MKZoomScale to a zoom level where level 0 contains 4 256px square tiles,
// which is the convention used by gdal2tiles.py.
static NSInteger zoomScaleToZoomLevel(MKZoomScale scale) {
    double numTilesAt1_0 = MKMapSizeWorld.width / TILE_SIZE;
    NSInteger zoomLevelAt1_0 = log2(numTilesAt1_0); // add 1 because the convention skips a virtual level with 1 tile.
    NSInteger zoomLevel = MAX(0, zoomLevelAt1_0 + floor(log2f(scale) + 0.5));
    return zoomLevel;
}
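The helper boils down to: take the zoom level that corresponds to zoomScale 1.0 (where one tile covers TILE_SIZE map points) and add the rounded log2 of the scale. A Python sketch, assuming MapKit's world size of 2^28 map points and 256-pixel tiles:

```python
import math

TILE_SIZE = 256.0
WORLD_WIDTH = 268435456.0  # MKMapSizeWorld.width: 2**28 map points

def zoom_scale_to_zoom_level(scale):
    """Level at scale 1.0 (log2 of the number of tiles across the world),
    plus log2(scale) rounded to the nearest integer, floored at 0."""
    level_at_1 = math.log2(WORLD_WIDTH / TILE_SIZE)  # 20 with these constants
    return max(0, int(level_at_1 + math.floor(math.log2(scale) + 0.5)))
```

Halving the scale drops one zoom level; doubling it adds one.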
Imagine that the overlay is cloud cover - or in our case, cellular signal coverage. It might not "look good" while zoomed in deep, but the overlay is still conveying essential information to the user.
I've worked around the problem by adding an OverZoom mode to enhance Apple's TileMap sample code.
Here is the new tilesInMapRect function in TileOverlay.m:
- (NSArray *)tilesInMapRect:(MKMapRect)rect zoomScale:(MKZoomScale)scale
{
    NSInteger z = zoomScaleToZoomLevel(scale);
    // OverZoom Mode - Detect when we are zoomed beyond the tile set.
    NSInteger overZoom = 1;
    NSInteger zoomCap = MAX_ZOOM; // A constant set to the max tile set depth.
    if (z > zoomCap) {
        // overZoom progression: 1, 2, 4, 8, etc...
        overZoom = pow(2, (z - zoomCap));
        z = zoomCap;
    }
    // When we are zoomed in beyond the tile set, use the tiles
    // from the maximum z-depth, but render them larger.
    NSInteger adjustedTileSize = overZoom * TILE_SIZE;
    // Number of tiles wide or high (but not wide * high)
    NSInteger tilesAtZ = pow(2, z);
    NSInteger minX = floor((MKMapRectGetMinX(rect) * scale) / adjustedTileSize);
    NSInteger maxX = floor((MKMapRectGetMaxX(rect) * scale) / adjustedTileSize);
    NSInteger minY = floor((MKMapRectGetMinY(rect) * scale) / adjustedTileSize);
    NSInteger maxY = floor((MKMapRectGetMaxY(rect) * scale) / adjustedTileSize);
    NSMutableArray *tiles = nil;
    for (NSInteger x = minX; x <= maxX; x++) {
        for (NSInteger y = minY; y <= maxY; y++) {
            // As in initWithTilePath, need to flip y index to match the gdal2tiles.py convention.
            NSInteger flippedY = abs(y + 1 - tilesAtZ);
            NSString *tileKey = [[NSString alloc] initWithFormat:@"%d/%d/%d", z, x, flippedY];
            if ([tilePaths containsObject:tileKey]) {
                if (!tiles) {
                    tiles = [NSMutableArray array];
                }
                MKMapRect frame = MKMapRectMake((double)(x * adjustedTileSize) / scale,
                                                (double)(y * adjustedTileSize) / scale,
                                                adjustedTileSize / scale,
                                                adjustedTileSize / scale);
                NSString *path = [[NSString alloc] initWithFormat:@"%@/%@.png", tileBase, tileKey];
                ImageTile *tile = [[ImageTile alloc] initWithFrame:frame path:path];
                [path release];
                [tiles addObject:tile];
                [tile release];
            }
            [tileKey release];
        }
    }
    return tiles;
}
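The clamp at the heart of OverZoom is tiny and worth isolating: cap z at the deepest tile level and keep the leftover power of two as a magnification factor. A Python sketch of just that logic:

```python
def clamp_zoom(z, zoom_cap):
    """Return (effective_z, over_zoom): z capped at zoom_cap, plus the
    1, 2, 4, 8, ... factor by which the deepest tiles must be magnified."""
    if z <= zoom_cap:
        return z, 1
    return zoom_cap, 2 ** (z - zoom_cap)
```

With MAX_ZOOM = 13, a request at z = 15 fetches level-13 tiles drawn 4x larger.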
And here is the new drawMapRect in TileOverlayView.m:
- (void)drawMapRect:(MKMapRect)mapRect
          zoomScale:(MKZoomScale)zoomScale
          inContext:(CGContextRef)context
{
    // OverZoom Mode - Detect when we are zoomed beyond the tile set.
    NSInteger z = zoomScaleToZoomLevel(zoomScale);
    NSInteger overZoom = 1;
    NSInteger zoomCap = MAX_ZOOM;
    if (z > zoomCap) {
        // overZoom progression: 1, 2, 4, 8, etc...
        overZoom = pow(2, (z - zoomCap));
    }
    TileOverlay *tileOverlay = (TileOverlay *)self.overlay;
    // Get the list of tile images from the model object for this mapRect. The
    // list may be 1 or more images (but not 0 because canDrawMapRect would have
    // returned NO in that case).
    NSArray *tilesInRect = [tileOverlay tilesInMapRect:mapRect zoomScale:zoomScale];
    CGContextSetAlpha(context, tileAlpha);
    for (ImageTile *tile in tilesInRect) {
        // For each image tile, draw it in its corresponding MKMapRect frame
        CGRect rect = [self rectForMapRect:tile.frame];
        UIImage *image = [[UIImage alloc] initWithContentsOfFile:tile.imagePath];
        CGContextSaveGState(context);
        CGContextTranslateCTM(context, CGRectGetMinX(rect), CGRectGetMinY(rect));
        // OverZoom mode - 1 when using tiles as is, 2, 4, 8 etc. when overzoomed.
        CGContextScaleCTM(context, overZoom/zoomScale, overZoom/zoomScale);
        CGContextTranslateCTM(context, 0, image.size.height);
        CGContextScaleCTM(context, 1, -1);
        CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), [image CGImage]);
        CGContextRestoreGState(context);
        // Added release here because "Analyze" was reporting a potential leak. Bug in Apple's sample code?
        [image release];
    }
}
Seems to be working great now.
BTW - I think the TileMap sample code is missing an [image release] and was leaking memory. Note where I added it in the code above.
I hope that this helps some others with the same problem.
Cheers,
Chris
This algorithm seems to produce a lot of map tiles outside of the MapRect. Adding the following inside the loop to skip tiles outside the boundaries helps a lot:
if (!MKMapRectIntersectsRect(rect, tileMapRect))
    continue;
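The culling test is a plain axis-aligned overlap check; in Python terms (rects as (x, y, width, height) tuples, similar in spirit to MKMapRectIntersectsRect, with edge-touching treated as non-overlapping here):

```python
def rects_intersect(a, b):
    """True if two (x, y, width, height) rects overlap."""
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
            a[1] < b[1] + b[3] and b[1] < a[1] + a[3])
```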
Here's the Swift conversion so no one else has to do this work again. Thanks @radven, this works wonderfully.
class TileOverlay: MKTileOverlay {
    // The method header was lost in the original post; this is the standard
    // MKTileOverlay URL override the body belongs to.
    override func url(forTilePath path: MKTileOverlayPath) -> URL {
        return directoryUrl?.appendingPathComponent("TopoMaps/\(path.z)/\(path.x)/\(path.y)_\(path.x)_\(path.z).png")
            ?? Bundle.main.url(
                forResource: "default",
                withExtension: "png")!
    }
    func tiles(in rect: MKMapRect, zoomScale scale: MKZoomScale) -> [ImageTile]? {
        var z = zoomScaleToZoomLevel(scale)
        // OverZoom Mode - Detect when we are zoomed beyond the tile set.
        var overZoom = 1
        let zoomCap = MAX_ZOOM // A constant set to the max tile set depth.
        if z > zoomCap {
            // overZoom progression: 1, 2, 4, 8, etc...
            overZoom = Int(pow(2, Double(z - zoomCap)))
            z = zoomCap
        }
        // When we are zoomed in beyond the tile set, use the tiles
        // from the maximum z-depth, but render them larger.
        let adjustedTileSize = overZoom * Int(TILE_SIZE)
        // Number of tiles wide or high (but not wide * high)
        let tilesAtZ = Int(pow(2, Double(z)))
        let minX = Int(floor((rect.minX * Double(scale)) / Double(adjustedTileSize)))
        let maxX = Int(floor((rect.maxX * Double(scale)) / Double(adjustedTileSize)))
        let minY = Int(floor((rect.minY * Double(scale)) / Double(adjustedTileSize)))
        let maxY = Int(floor((rect.maxY * Double(scale)) / Double(adjustedTileSize)))
        var tiles: [ImageTile]? = nil
        for x in minX...maxX {
            for y in minY...maxY {
                if let url = directoryUrl?.appendingPathComponent("TopoMaps/\(z)/\(x)/\(y)_\(x)_\(z).png").relativePath,
                   FileManager.default.fileExists(atPath: url) {
                    if tiles == nil {
                        tiles = []
                    }
                    let frame = MKMapRect(
                        x: Double(x * adjustedTileSize) / Double(scale),
                        y: Double(y * adjustedTileSize) / Double(scale),
                        width: Double(CGFloat(adjustedTileSize) / scale),
                        height: Double(CGFloat(adjustedTileSize) / scale))
                    let tile = ImageTile(frame: frame, path: url)
                    tiles?.append(tile)
                }
            }
        }
        return tiles
    }
}
struct ImageTile {
    let frame: MKMapRect
    let path: String
}
class TileOverlayRenderer: MKOverlayRenderer {
    override func draw(
        _ mapRect: MKMapRect,
        zoomScale: MKZoomScale,
        in context: CGContext
    ) {
        // OverZoom Mode - Detect when we are zoomed beyond the tile set.
        let z = zoomScaleToZoomLevel(zoomScale)
        var overZoom = 1
        let zoomCap = MAX_ZOOM
        if z > zoomCap {
            // overZoom progression: 1, 2, 4, 8, etc...
            overZoom = Int(pow(2, Double(z - zoomCap)))
        }
        let tileOverlay = overlay as? TileOverlay
        // Get the list of tile images from the model object for this mapRect. The
        // list may be 1 or more images (but not 0 because canDrawMapRect would have
        // returned NO in that case).
        let tilesInRect = tileOverlay?.tiles(in: mapRect, zoomScale: zoomScale)
        let tileAlpha: CGFloat = 1
        context.setAlpha(tileAlpha)
        for tile in tilesInRect ?? [] {
            // For each image tile, draw it in its corresponding MKMapRect frame
            let rect = self.rect(for: tile.frame)
            let image = UIImage(contentsOfFile: tile.path)
            context.saveGState()
            context.translateBy(x: rect.minX, y: rect.minY)
            if let cgImage = image?.cgImage, let width = image?.size.width, let height = image?.size.height {
                // OverZoom mode - 1 when using tiles as is, 2, 4, 8 etc. when overzoomed.
                context.scaleBy(x: CGFloat(overZoom) / zoomScale, y: CGFloat(overZoom) / zoomScale)
                context.translateBy(x: 0, y: height)
                context.scaleBy(x: 1, y: -1)
                context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
            }
            // Restore outside the if-let so every saveGState is balanced,
            // even when the tile image fails to load.
            context.restoreGState()
        }
    }
}
let MAX_ZOOM = 13
let TILE_SIZE: Double = 256

func zoomScaleToZoomLevel(_ scale: MKZoomScale) -> Int {
    let numTilesAt1_0 = MKMapSize.world.width / TILE_SIZE
    let zoomLevelAt1_0 = log2(numTilesAt1_0) // add 1 because the convention skips a virtual level with 1 tile.
    let zoomLevel = Int(max(0, zoomLevelAt1_0 + floor(Double(log2f(Float(scale))) + 0.5)))
    return zoomLevel
}
A bit late to the party, but... under iOS 7.0 and later, you can use the maximumZ property on MKTileOverlay. From the docs:
If you use different overlay objects to represent different tiles at different zoom levels, use this property to specify the maximum zoom level supported by this overlay’s tiles. At zoom level 0, tiles cover the entire world map; at zoom level 1, tiles cover 1/4 of the world; at zoom level 2, tiles cover 1/16 of the world, and so on. The map never tries to load tiles for a zoom level greater than the value specified by this property.
- (MKOverlayRenderer *)mapView:(MKMapView *)mapView rendererForOverlay:(id<MKOverlay>)overlay {
    if ([overlay isKindOfClass:[MKTileOverlay class]]) {
        MKTileOverlay *ovrly = (MKTileOverlay *)overlay;
        ovrly.maximumZ = 9; // Set your maximum zoom level here
        MKTileOverlayRenderer *rndr = [[MKTileOverlayRenderer alloc] initWithTileOverlay:ovrly];
        return rndr;
    }
    return nil;
}
