I am trying to detect whether two MKOverlays (which might be circles or polygons) intersect.
I tried using boundingMapRect, but it just draws a rectangle around my overlay, which gives me inaccurate results.
if polygon != nil {
    intersectionOverlay = polygon!
} else {
    intersectionOverlay = circle!
}

for overlay: MKOverlay in mapView.overlays {
    let rect = overlay.boundingMapRect
    if rect.intersects(intersectionOverlay.boundingMapRect) {
        print("Intersects \(overlay.title)")
    }
}
Using that piece of code returns true for the situation in the images below. Is there a better way to achieve the desired result? Thanks
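For what it's worth, for the circle-versus-circle case a precise check can skip bounding rects entirely; a minimal sketch (the helper name is assumed, not from the question):
```
// Sketch only: two MKCircle overlays intersect when the distance between
// their centers is no more than the sum of their radii (both in meters).
func circlesIntersect(_ a: MKCircle, _ b: MKCircle) -> Bool {
    let centerA = CLLocation(latitude: a.coordinate.latitude, longitude: a.coordinate.longitude)
    let centerB = CLLocation(latitude: b.coordinate.latitude, longitude: b.coordinate.longitude)
    return centerA.distance(from: centerB) <= a.radius + b.radius
}
```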
I have 2 SKSpriteNode:
a simple square (A)
the same square with a rotation (-45°) (B)
I need to check, at any time, if the center of another SKSpriteNode (a ball) is inside one of these squares.
The ball and the squares have the same parent (the main scene).
override func update(_ currentTime: TimeInterval) {
    let spriteArray = self.nodes(at: ball.position)
    let arr = spriteArray.filter { $0.name == "square" }
    for square in arr {
        print(square.letter)
        if square.contains(self.puck.position) {
            print("INSIDE")
        }
    }
}
With the simple square (A), my code works correctly. The data are right. I know, at any time, if the CGPoint center is inside or outside the square.
But with the rotated square (B), the data aren't as desired: the CGPoint is detected as inside as soon as it enters the bounding square that contains the diamond shape.
The SKSpriteNode squares are created via the level editor.
How can I do to have the correct result for the diamond-shape?
EDIT 1
Using
view.showsPhysics = true
I can see the bounds of all the SKSpriteNodes that have a physicsBody. The bounds of my diamond square are the diamond shape itself and not the grey square area.
square.frame.size -> returns the grey area
square.size -> returns the diamond square
In the Apple documentation, func nodes(at p: CGPoint) -> [SKNode] is about the node and not its frame, so why doesn't it work?
There are many ways to do it; I usually like to work with paths. Since you have a perfect diamond as you describe, I'd like to offer a different way from the comments: you could create a path that matches your diamond exactly with UIBezierPath, because it has a contains(_:) method:
let f = square.frame
// the diamond's corners are the midpoints of the bounding frame's four edges
let diamondPath = UIBezierPath()
diamondPath.move(to: CGPoint(x: f.midX, y: f.minY))
diamondPath.addLine(to: CGPoint(x: f.maxX, y: f.midY))
diamondPath.addLine(to: CGPoint(x: f.midX, y: f.maxY))
diamondPath.addLine(to: CGPoint(x: f.minX, y: f.midY))
diamondPath.close()
if diamondPath.contains(<#T##point: CGPoint##CGPoint#>) {
    // point is inside the diamond
}
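Another option, offered only as a rough sketch (it assumes the square's anchorPoint is the default (0.5, 0.5), no scaling, and that the ball and the squares share the scene as their parent, as described in the question): convert the ball's position into the rotated square's own coordinate space, where the square is axis-aligned, and test it against the unrotated size.
```
// Sketch of an alternative: test the point in the square's local space,
// where the rotation disappears. Assumes anchorPoint == (0.5, 0.5) and no scaling.
func diamond(_ square: SKSpriteNode, contains point: CGPoint, in scene: SKScene) -> Bool {
    let local = square.convert(point, from: scene)
    return abs(local.x) <= square.size.width / 2 && abs(local.y) <= square.size.height / 2
}
```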
I have been trying to work out if a tap gesture is in an overlay polygon, to no avail.
I am trying to make a map of country overlays; when clicking on an overlay I want to be able to tell which country the overlay represents.
First I found this: Detecting touches on MKOverlay in iOS7 (MKOverlayRenderer) and this: detect if a point is inside a MKPolygon overlay which suggest either:
make a tiny rectangle around your touch point and see if it intersects any overlays.
```
//point clicked
let point = MKMapPointForCoordinate(newCoordinates)
//make a rectangle around this click
let mapRect = MKMapRectMake(point.x, point.y, 0, 0)
//loop through the polygons on the map
for polygon in worldMap.overlays as! [MKPolygon] {
    if polygon.intersectsMapRect(mapRect) {
        print("found intersection")
    }
}
```
Or use viewForOverlay with a promising-sounding function, CGPathContainsPoint; however, viewForOverlay is now deprecated.
This led me to find Detecting a point in a MKPolygon broke with iOS7 (CGPathContainsPoint) which suggests the following method:
Make a mutable path from the points of each overlay (instead of using the deprecated viewForOverlay) and then use CGPathContainsPoint to determine whether the clicked point is inside the overlay.
However I am unable to make this code work.
```
func overlaySelected(gestureRecognizer: UIGestureRecognizer) {
    let pointTapped = gestureRecognizer.locationInView(worldMap)
    let newCoordinates = worldMap.convertPoint(pointTapped, toCoordinateFromView: worldMap)
    let mapPointAsCGP = CGPointMake(CGFloat(newCoordinates.latitude), CGFloat(newCoordinates.longitude))
    print(mapPointAsCGP.x, mapPointAsCGP.y)

    for overlay: MKOverlay in worldMap.overlays {
        if overlay is MKPolygon {
            let polygon: MKPolygon = overlay as! MKPolygon
            let mpr: CGMutablePathRef = CGPathCreateMutable()

            for p in 0..<polygon.pointCount {
                let mp = polygon.points()[p]
                print(polygon.coordinate)
                if p == 0 {
                    CGPathMoveToPoint(mpr, nil, CGFloat(mp.x), CGFloat(mp.y))
                }
                else {
                    CGPathAddLineToPoint(mpr, nil, CGFloat(mp.x), CGFloat(mp.y))
                }
            }

            if CGPathContainsPoint(mpr, nil, mapPointAsCGP, false) {
                print("------ is inside! ------")
            }
        }
    }
}
```
The first method works, but no matter how small I make the height and width of the rectangle around the click point, e.g. let mapRect = MKMapRectMake(point.x, point.y, 0.00000000001, 0.00000000001), the accuracy of the tap is not reliable, so you can end up clicking on several polygons at once.
Currently I am deciding which country is nearer to the tap by using the MKPolygon property coordinate, which gives the central point of the polygon. I can then measure the distance from each polygon's center to the tapped point to find the closest one. But this is not ideal, as the user may never be able to tap on the country they intend.
So, to sum up my questions:
Is there something that I am not implementing correctly in the second method above (one using CGPathContainsPoint)?
Is there a more accurate way to register a click event with the rectangle method?
Any other suggestions or pointers on how to achieve my goal of clicking the map and seeing if the click is on an overlay would be appreciated.
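For reference, a minimal sketch of the CGPath-based approach with everything kept in MKMapPoint space, rather than feeding raw latitude/longitude values into the path test, might look like this in modern Swift (the helper name is assumed; MKMapPoint(_:) replaces MKMapPointForCoordinate in newer SDKs, and CGPath.contains requires iOS 11+):
```
// Sketch only: build a CGPath from the polygon's map points and test the
// tapped coordinate in the same MKMapPoint coordinate space.
func polygon(_ polygon: MKPolygon, contains coordinate: CLLocationCoordinate2D) -> Bool {
    let path = CGMutablePath()
    let points = polygon.points()
    for i in 0..<polygon.pointCount {
        let point = points[i]
        if i == 0 {
            path.move(to: CGPoint(x: point.x, y: point.y))
        } else {
            path.addLine(to: CGPoint(x: point.x, y: point.y))
        }
    }
    path.closeSubpath()
    let mapPoint = MKMapPoint(coordinate)
    return path.contains(CGPoint(x: mapPoint.x, y: mapPoint.y))
}
```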
I have several sprite nodes in my scene that are casting shadows. I also have a sprite that is in one of the shadows. I want to be able to tell if the user moves a sprite out of a shadow into the light. Is there any way to do this in Swift? Thanks.
Unfortunately this functionality still isn't included in SpriteKit; however, it is possible to implement a decent solution with some caveats.
To determine if a sprite casts a shadow, its shadowCastBitMask property "is tested against the light's categoryBitMask property by performing a logical AND operation." SpriteKit appears to generate the exact same mask data for lighting and physics body calculations, based on the description given in the documentation for the shadowColor property defined on SKLightNode:
When lighting is calculated, shadows are created as if a ray was cast
out from the light node's position. If a sprite casts a shadow, the
rays are blocked when they intersect with the sprite's physics body.
Otherwise, the sprite's texture is used to generate a mask, and any
pixel in the sprite node's texture that has an alpha value that is
nonzero blocks the light.
SKPhysicsWorld has a method, enumerateBodies(alongRayStart:end:using:), for performing this kind of ray intersection test performantly. That means we can test if a sprite is shadowed by any sprite with a physics body. So we can write a method like this to extend SKSpriteNode:
func isLit(by light: SKLightNode) -> Bool {
    guard light.isEnabled else {
        return false
    }
    var shadowed = false
    scene?.physicsWorld.enumerateBodies(alongRayStart: light.position, end: position) { (body, _, _, stop) in
        if let sprite = body.node as? SKSpriteNode, light.categoryBitMask & sprite.shadowCastBitMask != 0 {
            shadowed = true
            stop.pointee = true
        }
    }
    return !shadowed
}
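A hypothetical usage sketch (the player sprite, the wasLit flag, and the light node's name are all assumptions, not part of the answer):
```
// Inside an SKScene subclass: each frame, check whether the player sprite
// has just moved out of shadow into the light.
override func update(_ currentTime: TimeInterval) {
    guard let light = childNode(withName: "light") as? SKLightNode else { return }
    let isLitNow = player.isLit(by: light)
    if isLitNow && !wasLit {
        print("player moved into the light")
    }
    wasLit = isLitNow
}
```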
We can also retrieve which lights in the scene are lighting a particular sprite:
func lights(affecting sprite: SKSpriteNode) -> [SKLightNode] {
    let lights = sprite.scene?.children.compactMap { $0 as? SKLightNode } ?? []
    return lights.filter { sprite.isLit(by: $0) }
}
It'd be great if SpriteKit provided a way to retrieve this information without coupling it to the physics API, or else requiring developers to roll their own ray cast implementations.
I have integrated Google Maps in my application and I am also using the Google Places API. After I get all the results from the Places API (around 60), I display them with the help of custom markers. The custom marker I am making comprises a "Place Image" and a "Place Name", so I have to first draw it in a UIView and then render it as a UIImage with the help of the following function:
- (UIImage *)imageFromView:(UIView *)view
{
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(view.frame.size, NO, [[UIScreen mainScreen] scale]);
    } else {
        UIGraphicsBeginImageContext(view.frame.size);
    }
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
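As an aside, a modern Swift sketch of the same view-to-image rendering step (assuming iOS 10+, where UIGraphicsImageRenderer handles the screen scale automatically) could be:
```
// Sketch only: render a UIView into a UIImage at the screen's scale.
func image(from view: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { context in
        view.layer.render(in: context.cgContext)
    }
}
```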
At first, all the markers are rendered and drawn without issue.
Now I have a slider ranging from 100 m to 5 km, which acts as a search-radius optimiser. As the slider is moved (say, to a value of 2 km), all the markers are removed and only those markers whose distance from the user location is less than the slider value are drawn again. While testing the slider functionality, the application crashes, saying:
((null)) was false: Reached the max number of texture atlases, can not allocate more.
I am uploading screenshots for a clearer understanding of the situation. Please help.
Also, in the screenshots you will see green markers as well as blue markers. Blue markers are those which are closer to the user location, while green ones are farther away by a particular distance. As the user location changes there are 2 cases:
If it is approaching a green marker, then it will turn to a blue marker
If it is going far from a blue marker, then it will turn to a green marker.
I am working on an app which can have several thousand avatars moving around on the map, and I encountered this bug as well. Clustering is a potential solution, but with this many avatars all moving, I suspect the calculations would be too CPU intensive.
The solution I use is to keep a reference to the base avatar image and use it whenever more than 50 avatars are on the screen. Only when there are fewer than 50 avatars on the screen do I generate unique images for each avatar with their names.
// GMSMarker
static var avatarDic: [String: UIImage] = Dictionary()

func removeName() {
    // use a single image reference here so that Google Maps does not crash
    if let image = avatarDic[avatarBase] {
        self.icon = image
    }
    else {
        avatarDic[avatarBase] = UIImage(named: avatarBase)
        self.icon = avatarDic[avatarBase]
    }
}

func addName() {
    // draw the name on the base image and assign the result to self.icon
}

// GMSMapView
var userIcons: [String: MyMarker] = Dictionary()
var iconWithNames: Set<MyMarker> = Set()

func mapView(mapView: GMSMapView!, didChangeCameraPosition position: GMSCameraPosition!) {
    // find visible avatars up to the limit
    let bottomLeft = self.mapView.projection.visibleRegion().nearLeft
    let topRight = self.mapView.projection.visibleRegion().farRight
    var visibleMarkerSet: Set<MyMarker> = Set()
    for (_, marker) in self.userIcons {
        if (marker.position.latitude > bottomLeft.latitude && marker.position.latitude < topRight.latitude && marker.position.longitude > bottomLeft.longitude && marker.position.longitude < topRight.longitude) {
            visibleMarkerSet.insert(marker)
        }
        // don't show names if more than 50 are visible
        if (visibleMarkerSet.count > 50) {
            visibleMarkerSet = Set()
            break
        }
    }
    // remove names
    for markerWithName in self.iconWithNames {
        if (visibleMarkerSet.contains(markerWithName) == false) {
            markerWithName.removeName()
        }
    }
    // add names
    for visibleMarker in visibleMarkerSet {
        visibleMarker.addName()
    }
    self.iconWithNames = visibleMarkerSet
}
Instead of setting the marker's iconView, set the marker's icon. Also, initialize the image outside of the for loop, as below:
func displayMarkers() {
    let iconImage = UIImage(named: "locationgreen")
    for partner in partners {
        let lat: Double = Double(partner.location?.coordinates![1] ?? 0)
        let lng: Double = Double(partner.location?.coordinates![0] ?? 0)
        let position = CLLocationCoordinate2D(latitude: lat, longitude: lng)
        let marker = GMSMarker(position: position)
        marker.title = partner.name
        marker.icon = iconImage
        marker.map = mapView // assumed: attach the marker to your GMSMapView so it appears
    }
}
Like in a Mario game where if you jump and land on top of a monster, the monster gets knocked out.
I'm using CGRectIntersectsRect between the two objects (player and monster); however, the monster will get knocked out from any direction.
How do I intersect two objects at specific points of the objects?
I actually created separate blank objects in each direction for this to work. Is there a more efficient solution?
Instead of CGRectIntersectsRect you can use CGRectIntersection to get a new CGRect of the area of intersection. If the player hit from the side, this rectangle will be taller than it is wide. If the player hit from the top or bottom, the rectangle will be wider than it is tall, and you then need to distinguish top from bottom. To do that, compare the minimum Y values of the intersection and the enemy rectangle, keeping in mind which way the Y axis points: in SpriteKit's y-up coordinates, equal minimum Y values mean the player hit from the bottom, while in UIKit's y-down coordinates they mean the player hit from the top. The code below assumes y-up coordinates.
typedef NS_ENUM(NSUInteger, IntersectFrom) {
    IntersectFromNotIntersect,
    IntersectFromTop,
    IntersectFromBottom,
    IntersectFromLeft,
    IntersectFromRight
};

IntersectFrom CGRectGetIntersectFrom(CGRect rectPlayer, CGRect rectMonster) {
    CGRect rectIntersect = CGRectIntersection(rectPlayer, rectMonster);
    if (!CGRectIsNull(rectIntersect)) {
        if (CGRectGetWidth(rectIntersect) < CGRectGetHeight(rectIntersect)) {
            // LEFT or RIGHT
            if (CGRectGetMinX(rectIntersect) == CGRectGetMinX(rectMonster)) {
                return IntersectFromLeft;
            }
            else {
                return IntersectFromRight;
            }
        }
        else {
            // TOP or BOTTOM
            if (CGRectGetMinY(rectIntersect) == CGRectGetMinY(rectMonster)) {
                return IntersectFromBottom;
            }
            else {
                return IntersectFromTop;
            }
        }
    }
    else {
        return IntersectFromNotIntersect;
    }
}
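A hypothetical usage sketch in Swift of the same idea (the frame parameters are assumptions, and SpriteKit-style y-up coordinates are assumed): knock the monster out only when the player lands on it from above.
```
// Sketch only: a top hit means the intersection is wider than it is tall
// and does not hug the monster's bottom edge (y-up coordinates assumed).
func playerLandedOnMonster(playerFrame: CGRect, monsterFrame: CGRect) -> Bool {
    let intersection = playerFrame.intersection(monsterFrame)
    guard !intersection.isNull else { return false }
    return intersection.width >= intersection.height
        && intersection.minY != monsterFrame.minY
}
```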