I'm trying to find a reliable method for determining the MKTileOverlayPath of the tiles currently visible in a MKMapView at any given instant (as the user zooms in/out and pans). My application displays dynamic content from a server which is queried by MKTileOverlayPath.
Is there a way to determine this directly from the MKMapView or MKTileOverlay classes?
So far I've been trying to calculate the tile paths using the code below. It works OK when the user pans or double-taps to zoom in. However, when pinching to zoom in/out (where the zoom level can be continuous) the tiles calculated using this method are sometimes not correct - specifically, the y coordinates seem to be off by one.
I'm struggling to figure out why the code doesn't work correctly at the pinch zoom levels, so I'm wondering if there is an alternative way to determine the currently visible tiles. I've tried implementing a dummy MKTileOverlay to intercept the requested tiles, but it's not clear from the requested tiles which ones are actually in view at any given time.
Any tips would be greatly appreciated.
- (NSMutableArray *)tilesInMapRect:(MKMapRect)rect zoomScale:(MKZoomScale)scale
{
    NSInteger z = [self zoomLevel];
    NSInteger minX = floor((MKMapRectGetMinX(rect) * scale) / TILE_SIZE);
    NSInteger maxX = floor((MKMapRectGetMaxX(rect) * scale) / TILE_SIZE);
    NSInteger minY = floor((MKMapRectGetMinY(rect) * scale) / TILE_SIZE);
    NSInteger maxY = floor((MKMapRectGetMaxY(rect) * scale) / TILE_SIZE);
    NSMutableArray *tiles = nil;
    for (NSInteger x = minX; x <= maxX; x++) {
        for (NSInteger y = minY; y <= maxY; y++) {
            NSString *tileString = [NSString stringWithFormat:@"z%ldx%ldy%ld", (long)z, (long)x, (long)y];
            if (!tiles) {
                tiles = [NSMutableArray array];
            }
            [tiles addObject:tileString];
        }
    }
    return tiles;
}

- (NSUInteger)zoomLevel {
    return (21 - round(log2(self.mapView.region.span.longitudeDelta * MERCATOR_RADIUS * M_PI / (180.0 * self.mapView.bounds.size.width))));
}
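For what it's worth, a minimal sketch of driving the method above from the map view's delegate (assuming TILE_SIZE is 256 and that the helper lives on the same controller as the mapView outlet):

- (void)mapView:(MKMapView *)mapView regionDidChangeAnimated:(BOOL)animated
{
    // Screen points per map point for the current visible rect -- the same
    // zoom scale MapKit hands to overlay renderers.
    MKZoomScale scale = mapView.bounds.size.width / mapView.visibleMapRect.size.width;
    NSArray *visibleTiles = [self tilesInMapRect:mapView.visibleMapRect zoomScale:scale];
    NSLog(@"visible tiles: %@", visibleTiles);
}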
I am trying to pan to seek forwards and backwards in my AVPlayer. It kind of works, but the basic math that translates the pan distance into a position within the length of the asset is wrong. Can anyone offer assistance?
- (void)handlePanGesture:(UIPanGestureRecognizer *)pan {
    CGPoint translate = [pan translationInView:self.view];
    CGFloat xCoord = translate.x;
    double diff = (xCoord);
    //NSLog(@"%F", diff);
    CMTime duration = self.avPlayer.currentItem.asset.duration;
    float seconds = CMTimeGetSeconds(duration);
    NSLog(@"duration: %.2f", seconds);
    CGFloat gh = 0;
    if (diff >= 0) {
        // If the difference is positive
        NSLog(@"%f", diff);
        gh = diff;
    } else {
        // If the difference is negative
        NSLog(@"%f", diff * -1);
        gh = diff * -1;
    }
    float minValue = 0;
    float maxValue = 1024;
    float value = gh;
    double time = seconds * (value - minValue) / (maxValue - minValue);
    [_avPlayer seekToTime:CMTimeMakeWithSeconds(time, 10) toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
    //[_avPlayer seekToTime:CMTimeMakeWithSeconds(seconds*(Float64)diff, 1024) toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
}
You are not normalizing the touch location against the asset's duration; the code assumes a 1:1 relationship between pan distance in points and seconds of video, and that won't hold in general.
Take the minimum and maximum touch location values of the pan gesture and the minimum and maximum values of the asset's duration (obviously, from zero to the length of the video), and then apply the following formula to translate the touch location to the seek time:
// Map
#define map(x, in_min, in_max, out_min, out_max) ((x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min)
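Applied to the seek code from the question, the macro might be used like this (a sketch only, keeping the question's assumption that the pan spans 0–1024 points):

CGPoint translate = [pan translationInView:self.view];
float seconds = CMTimeGetSeconds(self.avPlayer.currentItem.asset.duration);
// Map the absolute pan distance (0–1024 points) onto the asset's duration (0–seconds).
double time = map(fabs(translate.x), 0.0, 1024.0, 0.0, (double)seconds);
[self.avPlayer seekToTime:CMTimeMakeWithSeconds(time, NSEC_PER_SEC)
          toleranceBefore:kCMTimeZero
           toleranceAfter:kCMTimeZero];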
Here's the code I wrote that uses that formula:
- (IBAction)handlePanGesture:(UIPanGestureRecognizer *)sender {
    if (sender.state == UIGestureRecognizerStateChanged) {
        CGPoint location = [sender locationInView:self];
        float nlx = ((location.x / ((CGRectGetMidX(self.frame) / (self.frame.size.width / 2.0)))) / (self.frame.size.width / 2.0)) - 1.0;
        //float nly = ((location.y / ((CGRectGetMidY(self.view.frame) / (self.view.frame.size.width / 2.0)))) / (self.view.frame.size.width / 2.0)) - 1.0;
        nlx = nlx * 2.0;
        [self.delegate setRate:nlx];
    }
}
I culled the label that displays the rate, and the Play icon that appears while you're scrubbing and changes size depending on how fast or slow you pan the video. You didn't ask for those, but if you want them, just ask.
Oh, the "times-two" factor is intended to add an acceleration curve to the pan gesture value sent to the delegate's setRate method. You can use any formula for that, even an actual curve, like pow(nlx, 2.0) or whatever...
If you want to make it more precise and useful, you should implement different "levels of sensitivity".
Apple does that with their slider: if you drag away from the slider and then to the sides, the pace at which the video scrubs changes. The farther you are from the slider, the more precise it gets and the less of the video you can reach.
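One rough way to approximate that (a sketch, not the code from this answer; it reuses location and nlx from the handler above, and the 100-point falloff constant is arbitrary):

// The farther the finger drifts vertically from the scrubber view,
// the gentler the rate that gets sent to the delegate.
CGFloat verticalDistance = fabs(location.y - CGRectGetMidY(self.bounds));
CGFloat sensitivity = 1.0 / (1.0 + verticalDistance / 100.0); // hypothetical falloff
[self.delegate setRate:nlx * sensitivity];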
With the following shape:
I was wondering how you get it to curve like this:
Also similarly:
I'm assuming that all of the circles / lines are packed into one CGMutablePath, and then some kind of curve, arc, or quad curve is applied to it, though I'm having trouble coming even close to replicating it. Does anyone know how to do this?
In your first example, you start with a path that has several closed subpaths. Apparently you want to warp the centers of the subpaths, but leave the individual subpaths unwarped relative to their (new) centers. I'm going to ignore that, because the solution even without that is already terribly complex.
So, let's consider how to define the “warp field”. We'll use three control points:
The warp leaves fixedPoint unchanged. It moves startPoint to endPoint by rotation and scaling, not by simply interpolating the coordinates.
Furthermore, it applies the rotation and scaling based on distance from fixedPoint. And not just based on the simple Euclidean distance. Notice that we don't want to apply any rotation or scaling to the top endpoints of the “V” shape in the picture, even though those endpoints are a measurable Euclidean distance from fixedPoint. We want to measure distance along the fixedPoint->startPoint vector, and apply more rotation/scaling as that distance increases.
This all requires some pretty heavy trigonometry. I'm not going to try to explain the details. I'm just going to dump code on you, as a category on UIBezierPath:
UIBezierPath+Rob_warp.h
#import <UIKit/UIKit.h>
@interface UIBezierPath (Rob_warp)
- (UIBezierPath *)Rob_warpedWithFixedPoint:(CGPoint)fixedPoint startPoint:(CGPoint)startPoint endPoint:(CGPoint)endPoint;
@end
UIBezierPath+Rob_warp.m
Note that you'll need the Rob_forEach category from this answer.
#import "UIBezierPath+Rob_warp.h"
#import "UIBezierPath+Rob_forEach.h"
#import <tgmath.h>
static CGPoint minus(CGPoint a, CGPoint b) {
    return CGPointMake(a.x - b.x, a.y - b.y);
}

static CGFloat length(CGPoint vector) {
    return hypot(vector.x, vector.y);
}

static CGFloat dotProduct(CGPoint a, CGPoint b) {
    return a.x * b.x + a.y * b.y;
}

static CGFloat crossProductMagnitude(CGPoint a, CGPoint b) {
    return a.x * b.y - a.y * b.x;
}
@implementation UIBezierPath (Rob_warp)

- (UIBezierPath *)Rob_warpedWithFixedPoint:(CGPoint)fixedPoint startPoint:(CGPoint)startPoint endPoint:(CGPoint)endPoint {
    CGPoint startVector = minus(startPoint, fixedPoint);
    CGFloat startLength = length(startVector);
    CGPoint endVector = minus(endPoint, fixedPoint);
    CGFloat endLength = length(endVector);
    CGFloat scale = endLength / startLength;
    CGFloat dx = dotProduct(startVector, endVector);
    CGFloat dy = crossProductMagnitude(startVector, endVector);
    CGFloat radians = atan2(dy, dx);

    CGPoint (^warp)(CGPoint) = ^(CGPoint input){
        CGAffineTransform t = CGAffineTransformMakeTranslation(-fixedPoint.x, -fixedPoint.y);
        CGPoint inputVector = minus(input, fixedPoint);
        CGFloat factor = dotProduct(inputVector, startVector) / (startLength * startLength);
        CGAffineTransform w = CGAffineTransformMakeRotation(radians * factor);
        t = CGAffineTransformConcat(t, w);
        CGFloat factoredScale = pow(scale, factor);
        t = CGAffineTransformConcat(t, CGAffineTransformMakeScale(factoredScale, factoredScale));
        // Note: next line is not the same as CGAffineTransformTranslate!
        t = CGAffineTransformConcat(t, CGAffineTransformMakeTranslation(fixedPoint.x, fixedPoint.y));
        return CGPointApplyAffineTransform(input, t);
    };

    UIBezierPath *copy = [self.class bezierPath];
    [self Rob_forEachMove:^(CGPoint destination) {
        [copy moveToPoint:warp(destination)];
    } line:^(CGPoint destination) {
        [copy addLineToPoint:warp(destination)];
    } quad:^(CGPoint control, CGPoint destination) {
        [copy addQuadCurveToPoint:warp(destination) controlPoint:warp(control)];
    } cubic:^(CGPoint control0, CGPoint control1, CGPoint destination) {
        [copy addCurveToPoint:warp(destination) controlPoint1:warp(control0) controlPoint2:warp(control1)];
    } close:^{
        [copy closePath];
    }];
    return copy;
}

@end
Ok, so how do you use this crazy thing? In the case of a path like the “V” in the example, you could do it like this:
CGRect rect = path.bounds;
CGPoint fixedPoint = CGPointMake(CGRectGetMidX(rect), CGRectGetMinY(rect));
CGPoint startPoint = CGPointMake(fixedPoint.x, CGRectGetMaxY(rect));
path = [path Rob_warpedWithFixedPoint:fixedPoint startPoint:startPoint endPoint:endAnchor];
I'm computing fixedPoint as the center of the top edge of the path's bounding box, and startPoint as the center of the bottom edge. The endAnchor is under user control in my test program. It looks like this in the simulator:
A bubble-type path looks like this:
You can find my test project here: https://github.com/mayoff/path-warp
I've been trying to get my MKMapView to detect whether or not a tap was on a tile with alpha > 0. I'm quite new at Objective-C and Xcode as well, so this functionality is a bit over my head. All help will be greatly appreciated!
So far I've tried many different strategies but always come up short. We have custom classes that implement MKOverlay and MKOverlayView respectively, so I've been trying to grab the tiles when they're created and save them to an array that I can reference later in the MKMapViewController when the map is touched.
- (NSArray *)tilesInMapRect:(MKMapRect)rect zoomScale:(MKZoomScale)scale
{
    NSInteger z = zoomScaleToZoomLevel(scale);
    // Number of tiles wide or high (but not wide * high)
    NSInteger tilesAtZ = pow(2, z);
    NSInteger minX = floor((MKMapRectGetMinX(rect) * scale) / TILE_SIZE);
    NSInteger maxX = floor((MKMapRectGetMaxX(rect) * scale) / TILE_SIZE);
    NSInteger minY = floor((MKMapRectGetMinY(rect) * scale) / TILE_SIZE);
    NSInteger maxY = floor((MKMapRectGetMaxY(rect) * scale) / TILE_SIZE);
    NSMutableArray *tiles = nil;
    for (NSInteger x = minX; x <= maxX; x++) {
        for (NSInteger y = minY; y <= maxY; y++) {
            // As in initWithTilePath, need to flip y index to match the gdal2tiles.py convention.
            NSInteger flippedY = labs(y + 1 - tilesAtZ);
            NSString *tileKey = [[NSString alloc] initWithFormat:@"%ld/%ld/%ld", (long)z, (long)x, (long)flippedY];
            if ([tilePaths containsObject:tileKey]) {
                if (!tiles) {
                    tiles = [NSMutableArray array];
                }
                MKMapRect frame = MKMapRectMake((double)(x * TILE_SIZE) / scale,
                                                (double)(y * TILE_SIZE) / scale,
                                                TILE_SIZE / scale,
                                                TILE_SIZE / scale);
                NSString *path = [[NSString alloc] initWithFormat:@"%@/%@.png", tileBase, tileKey];
                ImageTile *tile = [[ImageTile alloc] initWithFrame:frame path:path];
                [tiles addObject:tile];
                [myTiles addObject:tile];
                [path release];
                [tile release];
            }
            [tileKey release];
        }
    }
    return tiles;
}
That's where I populate the array, which is a "class variable". If I comment out the [tiles addObject:tile]; line, I get the background of the map drawn but no buildings, so I think adding specifically those tiles is correct.
Then, in the map view controller's gesture handler, I check whether the touch is inside tile.frame. That's true for 8 out of 32 tiles (it can be 0 if you tap far from the buildings, and the total changes as you zoom around, but it always gets bigger), which seems like an odd number. But pretending that works correctly, I check the alpha at that point using a modified version of this answerer's function: how to get the RGBA value of UIImage in the specific clicked point
but I don't know if that works for map views like it would for image views. I think I might need to translate the context, but I've never worked with contexts before...
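For reference, the usual pattern for sampling a single pixel's alpha from a UIImage is to draw the image into a 1×1 bitmap context translated so the point of interest lands on that pixel. This is only a sketch: it assumes point is already expressed in the tile image's own coordinate space, and Core Graphics' y-axis is flipped relative to UIKit's, so you may need to flip point.y first.

- (CGFloat)alphaAtPoint:(CGPoint)point inImage:(UIImage *)image {
    unsigned char pixel[4] = {0, 0, 0, 0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    // Shift the context so the requested point is drawn into the single pixel at (0, 0).
    CGContextTranslateCTM(context, -point.x, -point.y);
    CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
    CGContextRelease(context);
    return pixel[3] / 255.0;
}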
Sorry for so much text! Maybe this isn't even possible? I'll add more code if clarification is needed. Any input would help!
Alright, here we go. I have a cocos2d app, and there are targets that move toward the player. When the player moves, I would like for them to slowly change their destination toward the player again, so they aren't just moving into empty space. Is it possible to change the destination of a sprite mid-runAction?
edit:
This is the code in - (void)changeTargetDest
- (void)changeTargetDest {
    NSMutableArray *deleteArray = [[NSMutableArray alloc] init];
    for (CCSprite *s in _targets) {
        float offX = s.position.x - player.position.x;
        float offY = s.position.y - player.position.y;
        float adjustX;
        float adjustY;
        float offDistance = sqrt(powf(offX, 2.0f) + powf(offY, 2.0f));
        if (offDistance < 15) {
            [deleteArray addObject:s];
            deaths++;
            [deathLabel setString:[NSString stringWithFormat:@"Deaths: %ld", deaths]];
            if (deaths == 0)
                [kdLabel setString:[NSString stringWithFormat:@"K/D ratio: %ld.00", score]];
            else
                [kdLabel setString:[NSString stringWithFormat:@"K/D ratio: %.2f", ((float)score / (float)deaths)]];
        }
        else {
            adjustX = offX * .99;
            adjustY = offY * .99;
            CGPoint point = CGPointMake(player.position.x + adjustX, player.position.y + adjustY);
            [s setPosition:point];
        } //else
    } //for
    for (CCSprite *target in deleteArray) {
        [_targets removeObject:target];
        [self removeChild:target cleanup:YES];
    }
}
This works well, except for one problem. Because the new position is calculated by just taking .99 of the previous offset, the closer the target gets to the player, the more slowly it moves. How can I make its speed constant?
You can stop the action and run a new action every few frames in a scheduled method.
But the better way is to compute the position of the targets according to the player's position and use setPosition to manually change their positions each frame in your update method.
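For the constant-speed part, one option is a fixed-length step each frame along the normalized direction toward the player. A sketch, assuming cocos2d's scheduled update: method (enabled with [self scheduleUpdate]) and a hypothetical speed constant:

- (void)update:(ccTime)dt {
    const float speed = 60.0f; // hypothetical speed in points per second
    for (CCSprite *s in _targets) {
        CGPoint toPlayer = ccpSub(player.position, s.position);
        float distance = ccpLength(toPlayer);
        if (distance > 0) {
            // Step a constant distance per frame, clamped so we never overshoot the player.
            CGPoint step = ccpMult(ccpNormalize(toPlayer), MIN(speed * dt, distance));
            s.position = ccpAdd(s.position, step);
        }
    }
}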
I've successfully implemented the custom map annotation callout code from the asynchrony blog post.
(When user taps a map pin, I show a customized image instead of the standard callout view).
The only remaining problem is that the callout occupies the entire width of the view, and the app would look much better if the width corresponded to the image I'm using.
I have subclassed MKAnnotationView, and when I set its contentWidth to the width of the image, the triangle does not always point back to the pin, or the image is not even inside its wrapper view.
Any help or suggestions would be great.
Thanks.
I ran into a similar problem when implementing the CalloutMapAnnotationView for the iPad. Basically I didn't want the iPad version to take the full width of the mapView.
In the prepareFrameSize method set your width:
- (void)prepareFrameSize {
    // ...
    // changing frame x/y origins here does nothing
    frame.size = CGSizeMake(320.0f, height);
    self.frame = frame;
}
Next you'll have to calculate the xOffset based off the parentAnnotationView:
- (void)prepareOffset {
    // Base x calculations from center of parent view
    CGPoint parentOrigin = [self.mapView convertPoint:self.parentAnnotationView.center
                                             fromView:self.parentAnnotationView.superview];
    CGFloat xOffset = 0;
    CGFloat mapWidth = self.mapView.bounds.size.width;
    CGFloat halfWidth = mapWidth / 2;
    CGFloat x = parentOrigin.x + (320.0f / 2);
    if (parentOrigin.x < halfWidth && x < 0) // left half of map
        xOffset = -x;
    else if (parentOrigin.x > halfWidth && x > mapWidth) // right half of map
        xOffset = -(x - mapWidth);
    // yOffset calculation ...
}
Now in drawRect:(CGRect)rect before the callout bubble is drawn:
- (void)drawRect:(CGRect)rect {
    // ...
    // Calculate the caret location in the frame
    if (self.centerOffset.x == 0.0f)
        parentX = 320.0f / 2;
    else if (self.centerOffset.x < 0.0f)
        parentX = (320.0f / 2) + -self.centerOffset.x;
    // ...
}
Hope this helps put you on the right track.
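Since you want the callout width to match your image rather than the full map width, you could presumably swap the hardcoded 320.0f for a width derived from the image; something like this, where calloutImage is a hypothetical property standing in for whatever image you display:

// Hypothetical: size the callout to the image instead of the 320-point map width.
CGFloat calloutWidth = self.calloutImage.size.width; // assumed property on your subclass
frame.size = CGSizeMake(calloutWidth, height);
self.frame = frame;

The same width would then also need to replace the 320.0f constants in prepareOffset and drawRect:.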