Swift: How to convert CGPoints from SpriteKit to CGRect?

In SpriteKit, I can use touch locations to record "hits" on a target, where the center of the target, the bullseye, has the coordinates (0, 0). After plenty of shooting, I fetch all the hits as an array of CGPoints. Since the target is 500 x 500 points (SKScene, .sks file), every hit can have an x position from -250 to +250, and likewise for the y position.
In the attached photo, the hits are registered as points at around (150, 150).
The problem arises when I try to use the famous LFHeatMap https://github.com/gpolak/LFHeatMap:
+ (UIImage *)heatMapWithRect:(CGRect)rect
                       boost:(float)boost
                      points:(NSArray *)points
                     weights:(NSArray *)weights;
LFHeatMap generates a UIImage from the points array, which I add to a UIImageView. The problem is that a UIView's coordinate system has its y-axis flipped compared to an SKScene's:
func setHeatMap() {
    let points = getPointsFromCoreData()
    let weights = getWeightsFromCoreData()
    var rect = CGRectMake(0, 0, 500, 500)
    rect.origin = CGPointMake(-250, -250)
    let image = LFHeatMap.heatMapWithRect(rect, boost: 1, points: points, weights: weights)
    heatMapView.contentMode = UIViewContentMode.ScaleAspectFit
    heatMapView.image = image
}
Shots that hit low on the target show up high on the heat map, and vice versa.
How can I solve this? Either all the points have to be converted to the other coordinate system, or the coordinates of the CGRect used to build the heat map must be changed. How can this be done?

This was embarrassingly easy once the solution occurred to me.
Run a loop through the points array and multiply each point.y by -1.
Then all the values on the y-axis are correct.
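In code, that flip is just one pass over the array before it goes to LFHeatMap (a minimal sketch; getPointsFromCoreData() is the helper from the question, and I'm assuming it hands back plain CGPoints before they get wrapped up for the NSArray):
// Negate y so SpriteKit points (y grows upward) line up with UIKit points (y grows downward).
let skPoints = getPointsFromCoreData()
let flippedPoints = skPoints.map { CGPointMake($0.x, $0.y * -1) }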

Related

SCNBox – Map a texture onto five of six sides

I'm trying to create something like a canvas in SceneKit using an SCNBox, with a UIImage "wrapped" around from one surface onto the four surfaces adjacent to it.
The only way I can currently think to do this would be to chop up the UIImage into five separate images and put those onto the sides as materials, but I'm sure there must be an easier way.
Can anyone steer me in the right direction here? The box will have a separate texture/material on the side opposite the "front".
The easiest way would probably be to create a custom geometry with matching texture coordinates using +geometryWithSources:elements:
You can use the contentsTransform property of SCNMaterialProperty to map the needed texture coordinates from your image onto the SCNBox.
Some explanation with a simplified example:
Let's suppose you are using a cube and you have a texture like this
By dividing it into rectangles, you will have
You want to skip rectangles 1, 3, 7, 9 and cover your cube with this texture.
For this, just normalize the size of a side of your SCNBox to the range 0 to 1, and use it to set the scale and translation of the contentsTransform matrix.
The cube in my example has equal sides, so each side covers a third of the whole texture. To take rectangle 5 from the texture:
let normalizedWidth: Float = 1.0 / 3.0   // note: 1/3 with Ints would be 0
let normalizedHeight: Float = 1.0 / 3.0
let xOffset: Float = 1 // skip the 1, 4, 7 column
let yOffset: Float = 1 // skip the 1, 2, 3 row
let sideMaterial = SCNMaterial()
sideMaterial.diffuse.contents = textureImage
let scaleMatrix = SCNMatrix4MakeScale(normalizedWidth, normalizedHeight, 1.0)
sideMaterial.diffuse.contentsTransform = SCNMatrix4Translate(scaleMatrix,
    normalizedWidth * xOffset, normalizedHeight * yOffset, 0.0)
You can fill five sides with configured materials, give the last one (on the back) just a color, and set them all on the materials property of your SCNBox.
In the result you will have
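To wire it up, something like the following should work (a rough sketch: textureImage and the makeSideMaterial helper are assumed names, and SCNBox applies its materials in the order front, right, back, left, top, bottom):
// Builds one side material that shows a single rectangle of the 3 x 3 texture grid.
func makeSideMaterial(xOffset xOffset: Float, yOffset: Float) -> SCNMaterial {
    let material = SCNMaterial()
    material.diffuse.contents = textureImage
    let scale = SCNMatrix4MakeScale(1.0 / 3.0, 1.0 / 3.0, 1.0)
    material.diffuse.contentsTransform = SCNMatrix4Translate(scale,
        (1.0 / 3.0) * xOffset, (1.0 / 3.0) * yOffset, 0.0)
    return material
}

let backMaterial = SCNMaterial()
backMaterial.diffuse.contents = UIColor.grayColor()

let box = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
box.materials = [
    makeSideMaterial(xOffset: 1, yOffset: 1),  // front
    makeSideMaterial(xOffset: 2, yOffset: 1),  // right
    backMaterial,                              // back: plain color
    makeSideMaterial(xOffset: 0, yOffset: 1),  // left
    makeSideMaterial(xOffset: 1, yOffset: 0),  // top
    makeSideMaterial(xOffset: 1, yOffset: 2)   // bottom
]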

UIViews with subviews: calculating position when scaling

I have a view that I draw using Core Graphics, which in this example is a segmented circle. The user can touch the circle to create a point along its circumference; this creates a subview on the UIView that contains the circle graphic.
Then I've implemented a pinch-zoom gesture which causes the circle to redraw to its new size. I've seen most implementations of pinch zoom use transform properties, but I've chosen to redraw because it's all vectors and gives a clean result.
My problem is repositioning the point views. I calculate the required position of those points based on the scale of the parent view: as it changes, I update the x/y coordinates of the point views. However, it seems there are some precision issues: as the circle grows, the points drift so they aren't right on the line anymore. Here are a couple of examples:
This is where the circle is at 100% scale. Note the perfect positioning of that black point. But when you zoom in...
The point drifts off-line.
And here's some code. I derive the new size of the circle from the pinch gesture's scale (I modify it a bit to constrain and slow it down for UI purposes, so that's deltaScale) and then draw it like so:
let currentSize = self.shape!.bounds.size
let newSize = CGSize(width: self.originalSize.width * deltaScale, height: self.originalSize.height * deltaScale)
self.shape?.frame.size = newSize
self.shape?.center = self.originalCentre!
self.shape?.shapeSize = newSize
self.shape?.setNeedsDisplay()
As the pinch-zoom gesture completes, I calculate the factor:
let xScale = Double(newSize.width) / Double(currentSize.width)
let yScale = Double(newSize.height) / Double(currentSize.height)
self.points = self.points.map { (thisPoint) -> UIView in
    thisPoint.center = CGPoint(x: Double(thisPoint.center.x) * xScale, y: Double(thisPoint.center.y) * yScale)
    return thisPoint
}
(I was using CGFloats, but switched to Doubles in the hope that it would give me the precision I needed. Alas.)
You're accumulating roundoff errors. This is getting executed repeatedly:
thisPoint.center = CGPoint(x: Double(thisPoint.center.x) * xScale, y: Double(thisPoint.center.y) * yScale)
Repeating any calculation of the form 'x=f(x)' with anything less than unlimited precision will result in drift.
The trick is to not have 'thisPoint.center' on both sides of the equals sign. The best way to do that is to make thisPoint.center a pure function of some other state. A commenter suggested storing the desired angle; that would work well. Then you could do:
thisPoint.center = f(thisPoint.someRadians), where 'f' converts from polar to rectangular coordinates, factoring in the current scale of the circle.
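A minimal sketch of that idea (someRadians is an assumed property on a custom point-view subclass, and the circle's centre and new radius come from the resizing code above):
// Recompute each point's centre from scratch every time, so rounding
// errors never accumulate across repeated zooms.
func centreForAngle(angle: CGFloat, circleCentre: CGPoint, circleRadius: CGFloat) -> CGPoint {
    return CGPointMake(circleCentre.x + circleRadius * cos(angle),
                       circleCentre.y + circleRadius * sin(angle))
}

self.points = self.points.map { thisPoint in
    thisPoint.center = centreForAngle(thisPoint.someRadians,
                                      circleCentre: self.shape!.center,
                                      circleRadius: newSize.width / 2)
    return thisPoint
}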

How to attach sprites that collide?

I essentially want the "sprites" to collide when they stick together. However, I don't want the "joint" to be rigid; I essentially want the sprites to be able to move around as long as they are in contact with each other. Imagine two circles connected, and you can move one circle around the other, as long as it remains in contact.
I found this question: How to make one body stick to another moving object in SpriteKit and a lot of other resources that explain how to make sprites stick upon collision, but they all use SKJoints, which are rigid are not really flexible.
I guess another way to phrase it would be to say that I want the sprites to stick, but I want them to be able to "slide" on each other.
Well, I can think of one workaround, but it wouldn't work with irregular polygons.
Sticking (pun unintended) with your circles example, what if you lock the distance between the two circles?
circle1: the center circle
circle2: the movable circle
Knowing the width of both circles, you can enforce in the update function that the distance between their positions is always exactly:
((circle1.frame.width / 2) + (circle2.frame.width / 2))
If you're up to it, here's some code to help you on your way.
override func update(currentTime: CFTimeInterval) {
    // current distance between the two circle centres
    let distance = CGFloat(hypotf(Float(circle1.position.x - circle2.position.x),
                                  Float(circle1.position.y - circle2.position.y)))
    // the distance they should be apart: the sum of the two radii
    let radius = (circle1.frame.width / 2) + (circle2.frame.width / 2)
    if distance != radius {
        // angle from the centre circle towards the movable circle, using atan2
        let angle = atan2(circle2.position.y - circle1.position.y,
                          circle2.position.x - circle1.position.x)
        // convert the angle into a unit vector
        let vectorx = cos(angle)
        let vectory = sin(angle)
        // new coordinates from the vector, the radius and the centre circle position
        let x = circle1.position.x + radius * vectorx
        let y = circle1.position.y + radius * vectory
        // snap the movable circle back onto the ring around the centre circle
        circle2.position = CGPointMake(x, y)
    }
}
You'll still need to write the code that makes the movable circle, well, movable.
But this should work.
I haven't tested it, though, and I haven't even taken geometry, let alone trig, in school yet.
If I'm reading your question as you intended it, you can still use joints; just create actions with inverse kinematics constraints that allow rotation and translation around the contacting circles' joint.
https://developer.apple.com/library/prerelease/ios/documentation/SpriteKit/Reference/SKAction_Ref/index.html#//apple_ref/doc/uid/TP40013017-CH1-SW72
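If physics joints feel too rigid, another route worth sketching is an SKConstraint that pins the distance between the two nodes. This isn't from the linked reference, just a rough sketch assuming circle1 and circle2 are SKNodes in the same scene:
// Keep circle2 exactly one combined radius away from circle1, whichever
// direction it is dragged; SpriteKit re-applies the constraint every frame.
let stickDistance = (circle1.frame.width / 2) + (circle2.frame.width / 2)
let stick = SKConstraint.distance(SKRange(constantValue: stickDistance), toNode: circle1)
circle2.constraints = [stick]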

Passing UIBezierPath to view class for drawing

I am making a level based game with many different objects, all different. In each level, there will be different amounts of each type of object. Thus, I have been trying to make the drawing part as generic as possible so that all I have to do is pass in the coords and it will automatically draw. To do this, I have made a protocol that forces each object class to implement the method getBP(), which returns the UIBezierPath to draw for each. Then, the view class just has to say
Object.getBP().fill()
However, this has been leading to some strange problems. The object does not draw at the correct coordinates. The y coordinate is correct, but the x coordinate puts it always at the left of the screen. I think it may be the fact that the Bezier Path is not being created in the view class. Here is my code in Surface.swift (this is meant to draw a surface in the game):
func getBP() -> UIBezierPath {
var rect:CGRect
var length:Double = getSurfaceVector().getMagnitude()//length of the surface
var cx = points.1.x+(points.0.x-points.1.x)//center coords of the surface
var cy = points.1.y+(points.0.y-points.1.y)
var bp = UIBezierPath(roundedRect: CGRectMake(CGFloat(cx - length/2), CGFloat(cy-RECT_HEIGHT/2), CGFloat(length), CGFloat(RECT_HEIGHT)), cornerRadius: CGFloat(5))
let transform:CGAffineTransform = CGAffineTransformMakeRotation(CGFloat(Double(angle)*(Double(M_PI)/Double(180))))
bp.applyTransform(transform)
return bp
}
points is just a tuple with the start and end points of the surface. RECT_HEIGHT is the height of the rectangle that is drawn to represent the surface. angle is the angle from horizontal of the surface.
Creating the surface in View.swift, I do this:
Surface(fixed: true, points: (Vector(x: 50, y:100), Vector(x: Double(UIScreen.mainScreen().bounds.width), y: 100)))
I add that surface to the array of objects in the game. I draw it in the View.swift file by saying
surface.stroke()
The surface draws on the screen with a y value of 100, but it is centered at x = 0 so that it is half on and half off of the screen. Also, it doesn't draw at the angle - it is always horizontal. Is there some better way of doing this? What is happening?
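One thing worth checking: CGAffineTransformMakeRotation rotates around the origin (0, 0) of the path's coordinate system, not around the rectangle's own centre, and the cx/cy calculation above is missing a division by two (as written, cx is just points.0.x). A rough sketch of getBP() with those two changes, reusing the names from the question:
func getBP() -> UIBezierPath {
    let length = getSurfaceVector().getMagnitude()   // length of the surface
    // midpoint of the segment, not just one endpoint
    let cx = points.1.x + (points.0.x - points.1.x) / 2
    let cy = points.1.y + (points.0.y - points.1.y) / 2
    let rect = CGRectMake(CGFloat(cx - length / 2), CGFloat(cy - RECT_HEIGHT / 2),
                          CGFloat(length), CGFloat(RECT_HEIGHT))
    let bp = UIBezierPath(roundedRect: rect, cornerRadius: 5)
    // rotate about the rectangle's centre: move the centre to the origin,
    // rotate, then move it back
    var transform = CGAffineTransformMakeTranslation(CGFloat(cx), CGFloat(cy))
    transform = CGAffineTransformRotate(transform, CGFloat(Double(angle) * M_PI / 180.0))
    transform = CGAffineTransformTranslate(transform, CGFloat(-cx), CGFloat(-cy))
    bp.applyTransform(transform)
    return bp
}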

Total height of SCNNode childNodes

I'm currently using the following to get the total height of all of the child nodes in a SCNNode. Is there a more efficient/better/shorter/more swift-like way to do this?
CGFloat(columnNode.childNodes.reduce(CGFloat()) {
    let geometry = $1.geometry! as! SCNBox
    return $0 + geometry.height
})
Yes, and a way that'll get you a more correct answer, too. Summing the height of all the child nodes' geometries...
only works if the geometry is an SCNBox
doesn't account for the child nodes' transforms (what if they're moved, rotated or scaled?)
doesn't account for the parent node's transform (what if you want height in scene space?)
What you want is the SCNBoundingVolume protocol, which applies to all nodes and geometries, and describes the smallest rectangular or spherical space containing all of a node's (and its subnodes') content.
In Swift 3, this is super easy:
let (min, max) = columnNode.boundingBox
After this, min and max are the coordinates of the lower-rear-left and upper-front-right corners of the smallest possible box containing everything inside columnNode, no matter how that content is arranged (and what kind of geometry is involved). These coordinates are expressed in the same system as columnNode.position, so if the "height" you're looking for is in the y-axis direction of that space, just subtract the y-coordinates of those two vectors:
let height = max.y - min.y
In Swift 2, the syntax for it is a little weird, but it works well enough:
var min = SCNVector3Zero
var max = SCNVector3Zero
columnNode.getBoundingBoxMin(&min, max: &max)
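After the call fills in min and max, the height falls out the same way as in the Swift 3 version:
let height = max.y - min.y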
