Swift / SceneKit - Change a custom SCNGeometry's pivot - ios

I have created a custom SCNGeometry object with some help from this question. However, when applying the geometry to a SCNNode, the pivot seems to not quite be in the correct location. When rotating the node, I want to rotate the node around the center of the geometry, but instead it rotates around another point. I can fix this problem by changing the node's pivot using node.pivot = SCNMatrix4MakeTranslation(ARROW_WIDTH / 2, 0, ARROW_LENGTH / 2), where the ARROW_WIDTH refers to the width of the geometry and the ARROW_LENGTH refers to the length of the geometry. This is however not ideal as every time I create a new node with the geometry, I have to manually fix the pivot of the node. Is there a way to define the "pivot" of a geometry somehow?
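One possible workaround (a sketch, not part of the original post): since SCNGeometry has no pivot of its own, the vertex data itself can be shifted so the geometry is centered on the origin; every node that uses it then rotates around its center without a per-node pivot fix.
import SceneKit

// Sketch only: shift the arrow's vertices so the geometry is centered on the origin.
// `width` and `length` are the same parameters used by the constructor below.
func centered(_ vertices: [SCNVector3], width: Float, length: Float) -> [SCNVector3] {
    return vertices.map { SCNVector3($0.x - width / 2, $0.y, $0.z - length / 2) }
}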
Current code that creates the custom SCNGeometry:
/**
 Default constructor of an arrow geometry. This constructor takes in the needed parameters to construct the geometry at the specified size.
 - parameters:
    - length: The length of the arrow, which is the dimension that the arrow is pointing in.
    - height: The height of the arrow, which is the thickness of the arrow.
    - width: The width of the arrow.
    - indent: The indent of the arrow, which is the point and gap on the front and back of the arrow.
 */
init(length: Float, height: Float, width: Float, indent: Float) {
    self.length = length
    self.height = height
    self.width = width
    self.indent = indent > length ? length : indent

    // Vertices
    let v0 = SCNVector3(0, height / 2, 0)
    let v1 = SCNVector3(width / 2, height / 2, indent)
    ... more vertices
    let h4 = SCNVector3(width, -height / 2, indent)
    let h5 = SCNVector3(width / 2, -height / 2, length - indent)
    let vertices = [
        // Top layer bottom triangles
        v0, v1, h0,
        v1, v2, h1,
        ... more vertices
        v4, v10, v5,
        v10, v11, v5
    ]

    // Normals
    let pX = SCNVector3(1, 0, 0)
    ... more normals
    let topRight = calculateNormal(v1: v3, v2: v9, v3: v4)
    let normals = [
        // Top layer bottom triangles
        pY, pY, pY,
        ... more normals
        topLeft, topLeft, topLeft
    ]

    // Indices (one index per vertex, in order)
    let indices: [Int32] = vertices.enumerated().map({ Int32($0.0) })

    // Sources (note: the normal source must use the normals: initializer,
    // otherwise SceneKit treats it as a second vertex source)
    let vertexSource = SCNGeometrySource(vertices: vertices)
    let normalSource = SCNGeometrySource(normals: normals)

    // Create the geometry; Data(bytes:count:) copies the index buffer safely
    let indexData = Data(bytes: indices, count: MemoryLayout<Int32>.size * indices.count)
    let element = SCNGeometryElement(data: indexData,
                                     primitiveType: .triangles,
                                     primitiveCount: indices.count / 3,
                                     bytesPerIndex: MemoryLayout<Int32>.size)
    self._geometry = SCNGeometry(sources: [vertexSource, normalSource], elements: [element])
}
Without applying the manual pivot fix on the node, the arrow renders like this: (Note that the red point is the scene origin (0, 0, 0) and the arrow is positioned in the root node of the scene at that same position)
no manual pivot fix
When applying the manual pivot fix on the node, the arrow renders like this:
manual pivot fix

Related

How to show 3D path between two SCNVector3 points

I want to show a 3D path between two SCNVector3 points, like in the screenshot below, in iOS with Swift. The code below only draws a simple straight line between the points:
let indices: [Int32] = [0, 1]
let source = SCNGeometrySource(vertices: [vector1, vector2])
let element = SCNGeometryElement(indices: indices, primitiveType: .line)
SCNGeometry(sources: [source], elements: [element])
You can use a lerp (linear interpolation) to get points between the start and end nodes. You can factor in a percentage, like 10, 20, 30, to get those 3D positions along the line between your start and end node.
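A minimal sketch of that lerp idea (assuming iOS, where SCNVector3 components are Float):
import SceneKit

// Linear interpolation between two SCNVector3 points:
// t = 0 returns `start`, t = 1 returns `end`, values in between lie on the line.
func lerp(_ start: SCNVector3, _ end: SCNVector3, t: Float) -> SCNVector3 {
    return SCNVector3(start.x + (end.x - start.x) * t,
                      start.y + (end.y - start.y) * t,
                      start.z + (end.z - start.z) * t)
}

// e.g. the positions at 10%, 20% and 30% of the way between the question's two vectors:
// let samples = [0.1, 0.2, 0.3].map { lerp(vector1, vector2, t: Float($0)) }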

Get 3D model's height and its transformed screen coordinates

I'm rendering a Collada (*.dae) file with ARKit. As an overlay of my ARSCNView I'm adding an SKScene that simply shows a message bubble (without text yet).
Currently, I know how to modify the position of the bubble so that it looks like it's always at the feet of my 3D model. I'm doing it like this:
func renderer(_ renderer: SCNSceneRenderer, didRenderScene scene: SCNScene, atTime time: TimeInterval) {
    if let overlay = sceneView.overlaySKScene as? BubbleMessageScene {
        guard let borisNode = sceneView.scene.rootNode.childNode(withName: "boris", recursively: true) else { return }
        let boxWorldCoordinates = sceneView.scene.rootNode.convertPosition(borisNode.position, from: sceneView.scene.rootNode.parent)
        let screenCoordinates = self.sceneView.projectPoint(boxWorldCoordinates)
        let boxY = overlay.size.height - CGFloat(screenCoordinates.y)
        overlay.bubbleNode?.position.x = CGFloat(screenCoordinates.x) - (overlay.bubbleNode?.size.width)! / 2
        overlay.bubbleNode?.position.y = boxY
    }
}
However my bubble is always at the feet of the 3D model because I can only get the SCNNode position of my model, where it is anchored. I would like it to be at the head of my model.
Is there a way I can get the height of my 3D model, and then its transformed screen coordinates, so no matter where I am with my phone it looks like the bubble message is always next to the head?
Each SCNNode has a boundingBox property, which is:
"The minimum and maximum corner points of the object’s bounding box."
What this means is that:
"Scene Kit defines a bounding box in the local coordinate space using two points identifying its corners, which implicitly determine six axis-aligned planes marking its limits. For example, if a geometry’s bounding box has the minimum corner {-1, 0, 2} and the maximum corner {3, 4, 5}, all points in the geometry’s vertex data have an x-coordinate value between -1.0 and 3.0, inclusive."
If you look in SceneKit Editor you will also be able to see the size of your model in meters (I am saying this simply as a point you can refer to in order to check the calculations):
In my example I am using a Pokemon model with the size above.
I scaled the model (which you likely did as well) e.g:
pokemonModel.scale = SCNVector3(0.01, 0.01, 0.01)
So in order to get the boundingBox of the SCNNode we can do this:
/// Prints The Original Width & Height Of An SCNNode
///
/// - Parameter node: SCNNode
func getSizeOfModel(_ node: SCNNode) {
    //1. Get The Size Of The Node Without Scale
    let (minVec, maxVec) = node.boundingBox
    let unScaledHeight = maxVec.y - minVec.y
    let unScaledWidth = maxVec.x - minVec.x
    print("""
        UnScaled Height = \(unScaledHeight)
        UnScaled Width = \(unScaledWidth)
        """)
}
Calling it like so:
getSizeOfModel(pokemonModel)
Of course, since our SCNNode has been scaled, this doesn't help much, so we need to take the scale into account by rewriting the function:
/// Prints The Original & Scaled Width & Height Of An SCNNode
///
/// - Parameters:
///   - node: SCNNode
///   - scalar: Float
func getOriginalAndScaledSizeOfNode(_ node: SCNNode, scalar: Float) {
    //1. Get The Size Of The Node Without Scale
    let (minVec, maxVec) = node.boundingBox
    let unScaledHeight = maxVec.y - minVec.y
    let unScaledWidth = maxVec.x - minVec.x
    print("""
        UnScaled Height = \(unScaledHeight)
        UnScaled Width = \(unScaledWidth)
        """)

    //2. Get The Size Of The Node With Scale
    let max = node.boundingBox.max
    let maxScale = SCNVector3(max.x * scalar, max.y * scalar, max.z * scalar)
    let min = node.boundingBox.min
    let minScale = SCNVector3(min.x * scalar, min.y * scalar, min.z * scalar)
    let heightOfNodeScaled = maxScale.y - minScale.y
    let widthOfNodeScaled = maxScale.x - minScale.x
    print("""
        Scaled Height = \(heightOfNodeScaled)
        Scaled Width = \(widthOfNodeScaled)
        """)
}
Which would be called like so:
getOriginalAndScaledSizeOfNode(pokemonModel, scalar: 0.01)
Having done this you say you want to position a 'bubble' above your model, which could then be done like so:
func getSizeOfNodeAndPositionBubble(_ node: SCNNode, scalar: Float) {
    //1. Get The Size Of The Node Without Scale
    let (minVec, maxVec) = node.boundingBox
    let unScaledHeight = maxVec.y - minVec.y
    let unScaledWidth = maxVec.x - minVec.x
    print("""
        UnScaled Height = \(unScaledHeight)
        UnScaled Width = \(unScaledWidth)
        """)

    //2. Get The Size Of The Node With Scale
    let max = node.boundingBox.max
    let maxScale = SCNVector3(max.x * scalar, max.y * scalar, max.z * scalar)
    let min = node.boundingBox.min
    let minScale = SCNVector3(min.x * scalar, min.y * scalar, min.z * scalar)
    let heightOfNodeScaled = maxScale.y - minScale.y
    let widthOfNodeScaled = maxScale.x - minScale.x
    print("""
        Scaled Height = \(heightOfNodeScaled)
        Scaled Width = \(widthOfNodeScaled)
        """)

    //3. Create A Bubble
    let pointNodeHolder = SCNNode()
    let pointGeometry = SCNSphere(radius: 0.04)
    pointGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    pointNodeHolder.geometry = pointGeometry

    //4. Place The Bubble At The Model's Origin + Its Height, Keeping The Z Position
    pointNodeHolder.position = SCNVector3(node.position.x, node.position.y + heightOfNodeScaled, node.position.z)
    self.augmentedRealityView.scene.rootNode.addChildNode(pointNodeHolder)
}
This yields the following result (which I also tested on a few other unfortunate Pokemon as well):
You will probably want to add a bit of 'padding' as well to the calculation, so that the node is a bit higher up than the top of the model e.g:
pointNodeHolder.position = SCNVector3(node.position.x, node.position.y + heightOfNodeScaled + 0.1, node.position.z)
I am not great at Maths, and this uses an SCNNode for the bubble rather than an SKScene, but hopefully it will point you in the right direction...
You can get borisNode.boundingBox : (float3, float3) to calculate the size of the node. You get a tuple of two points; calculate the height by subtracting the y of one point from the y of the other. Finally, move your overlay's Y position by the number you get.
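A rough sketch combining both suggestions (the names sceneView, overlay and bubbleNode are assumed, not the poster's exact code): take the top of the node's bounding box, convert it to world space (which also applies the node's scale), project it to screen coordinates and move the overlay bubble there.
import SceneKit
import SpriteKit

func positionBubble(above node: SCNNode, in sceneView: SCNView,
                    overlay: SKScene, bubbleNode: SKSpriteNode) {
    // top of the model in the node's local (unscaled) space
    let (_, maxVec) = node.boundingBox
    let localHead = SCNVector3(0, maxVec.y, 0)
    // converting to world space (to: nil) applies the node's transform, including its scale
    let worldHead = node.convertPosition(localHead, to: nil)
    let screen = sceneView.projectPoint(worldHead)
    // SpriteKit's y axis points up, the projected point's y points down
    bubbleNode.position = CGPoint(x: CGFloat(screen.x) - bubbleNode.size.width / 2,
                                  y: overlay.size.height - CGFloat(screen.y))
}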

iOS revert camera projection

I'm trying to estimate my device position related to a QR code in space. I'm using ARKit and the Vision framework, both introduced in iOS11, but the answer to this question probably doesn't depend on them.
With the Vision framework, I'm able to get the rectangle that bounds a QR code in the camera frame. I'd like to match this rectangle to the device translation and rotation necessary to transform the QR code from a standard position.
For instance if I observe the frame:
* *
B
C
A
D
* *
while if I was 1m away from the QR code, centered on it, and assuming the QR code has a side of 10cm I'd see:
* *
A0 B0
D0 C0
* *
What was my device transformation between those two frames? I understand that an exact result might not be possible, because the observed QR code may be slightly non-planar and we're trying to estimate an affine transform on something that isn't perfectly one.
I guess sceneView.pointOfView?.camera?.projectionTransform is more helpful than the ARCamera's projectionMatrix, since the latter already takes into account the transform inferred by ARKit, which I'm not interested in for this problem.
How would I fill in
func getTransform(
    qrCodeRectangle: VNBarcodeObservation,
    cameraTransform: SCNMatrix4) {
    // qrCodeRectangle.topLeft etc. is the position in [0, 1] * [0, 1] of A0
    // expected real world position of the QR code in a reference coordinate system
    let a0 = SCNVector3(x: -0.05, y: 0.05, z: 1)
    let b0 = SCNVector3(x: 0.05, y: 0.05, z: 1)
    let c0 = SCNVector3(x: 0.05, y: -0.05, z: 1)
    let d0 = SCNVector3(x: -0.05, y: -0.05, z: 1)
    let A0, B0, C0, D0 = ?? // CGPoints representing position in
    // camera frame for camera in 0, 0, 0 facing Z+

    // then get transform from 0, 0, 0 to current position/rotation that sees
    // a0, b0, c0, d0 through the camera as qrCodeRectangle
}
==== Edit ====
After trying a number of things, I ended up going for camera pose estimation using OpenCV's projection and perspective solver, solvePnP. This gives me a rotation and translation that should represent the camera pose in the QR code reference frame. However, when using those values and placing objects corresponding to the inverse transformation, where the QR code should be in the camera space, I get inaccurate, shifted values, and I'm not able to get the rotation to work:
// some flavor of pseudo code below
func renderer(_ sender: SCNSceneRenderer, updateAtTime time: TimeInterval) {
guard let currentFrame = sceneView.session.currentFrame, let pov = sceneView.pointOfView else { return }
let intrisics = currentFrame.camera.intrinsics
let QRCornerCoordinatesInQRRef = [(-0.05, -0.05, 0), (0.05, -0.05, 0), (-0.05, 0.05, 0), (0.05, 0.05, 0)]
// uses VNDetectBarcodesRequest to find a QR code and returns a bounding rectangle
guard let qr = findQRCode(in: currentFrame) else { return }
let imageSize = CGSize(
width: CVPixelBufferGetWidth(currentFrame.capturedImage),
height: CVPixelBufferGetHeight(currentFrame.capturedImage)
)
let observations = [
qr.bottomLeft,
qr.bottomRight,
qr.topLeft,
qr.topRight,
].map({ (imageSize.height * (1 - $0.y), imageSize.width * $0.x) })
// image and SceneKit coordinated are not the same
// replacing this by:
// (imageSize.height * (1.35 - $0.y), imageSize.width * ($0.x - 0.2))
// weirdly fixes an issue, see below
let rotation, translation = openCV.solvePnP(QRCornerCoordinatesInQRRef, observations, intrisics)
// calls openCV solvePnP and get the results
let positionInCameraRef = -rotation.inverted * translation
let node = SCNNode(geometry: someGeometry)
pov.addChildNode(node)
node.position = translation
node.orientation = rotation.asQuaternion
}
Here is the output:
where A, B, C, D are the QR code corners in the order they are passed to the program.
The predicted origin stays in place when the phone rotates, but it's shifted from where it should be. Surprisingly, if I shift the observations values, I'm able to correct this:
// (imageSize.height * (1 - $0.y), imageSize.width * $0.x)
// replaced by:
(imageSize.height * (1.35 - $0.y), imageSize.width * ($0.x - 0.2))
and now the predicted origin stays robustly in place. However I don't understand where the shift values come from.
Finally, I've tried to get an orientation fixed relative to the QR code reference frame:
var n = SCNNode(geometry: redGeometry)
node.addChildNode(n)
n.position = SCNVector3(0.1, 0, 0)
n = SCNNode(geometry: blueGeometry)
node.addChildNode(n)
n.position = SCNVector3(0, 0.1, 0)
n = SCNNode(geometry: greenGeometry)
node.addChildNode(n)
n.position = SCNVector3(0, 0, 0.1)
The orientation is fine when I look at the QR code straight, but then it shifts by something that seems to be related to the phone rotation:
Outstanding questions I have are:
How do I solve the rotation?
Where do the position shift values come from?
What simple relationship do rotation, translation, QRCornerCoordinatesInQRRef, observations, and intrisics satisfy? Is it O ~ K^-1 * (R_3x2 | T) Q? Because if so, that's off by a few orders of magnitude.
If that's helpful, here are a few numerical values:
Intrinsics matrix
Mat 3x3
1090.318, 0.000, 618.661
0.000, 1090.318, 359.616
0.000, 0.000, 1.000
imageSize
1280.0, 720.0
screenSize
414.0, 736.0
==== Edit2 ====
I've noticed that the rotation works fine when the phone stays horizontally parallel to the QR code (i.e. the rotation matrix is [[a, 0, b], [0, 1, 0], [c, 0, d]]), no matter what the actual QR code orientation is:
Other rotations don't work.
Coordinate systems' correspondence
Take into consideration that the Vision/CoreML coordinate system doesn't correspond to the ARKit/SceneKit coordinate system. For details, look at this post.
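As a small illustration of that difference (a sketch, not from the answer): Vision returns normalized coordinates with the origin at the bottom-left and y pointing up, while image pixel coordinates have the origin at the top-left with y pointing down, hence the (1 - y) flip in the question's code.
import CoreGraphics

// Convert a Vision normalized point (bottom-left origin, y up)
// into pixel coordinates of the captured image (top-left origin, y down).
func imagePoint(fromVisionPoint p: CGPoint, imageSize: CGSize) -> CGPoint {
    return CGPoint(x: p.x * imageSize.width,
                   y: (1.0 - p.y) * imageSize.height)
}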
Rotation's direction
I suppose the problem is not in the matrix. It's in the vertex placement. For tracking 2D images you need to place the ABCD vertices counter-clockwise (the starting point is the A vertex, located at the imaginary origin x: 0, y: 0). I think Apple's documentation on the VNRectangleObservation class (info about projected rectangular regions detected by an image analysis request) is vague. You placed your vertices in the same order as in the official documentation:
var bottomLeft: CGPoint
var bottomRight: CGPoint
var topLeft: CGPoint
var topRight: CGPoint
But they need to be placed in the same direction as positive rotation (about the Z axis) occurs in a Cartesian coordinate system:
World Coordinate Space in ARKit (as well as in SceneKit and Vision) always follows a right-handed convention (the positive Y axis points upward, the positive Z axis points toward the viewer and the positive X axis points toward the viewer's right), but is oriented based on your session's configuration. Camera works in Local Coordinate Space.
Rotation about any axis is positive when counter-clockwise and negative when clockwise. For tracking in ARKit and Vision this is critically important.
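For example, a counter-clockwise ordering starting from the bottom-left "A" corner could be built like this (a sketch under that interpretation, not the answerer's code):
import CoreGraphics
import Vision

// Observed QR corners ordered counter-clockwise, starting at the bottom-left "A" vertex.
func counterClockwiseCorners(of qr: VNRectangleObservation) -> [CGPoint] {
    return [qr.bottomLeft, qr.bottomRight, qr.topRight, qr.topLeft]
}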
The order of rotation also matters. ARKit, as well as SceneKit, applies rotation relative to the node's pivot property in the reverse order of the components: first roll (about the Z axis), then yaw (about the Y axis), then pitch (about the X axis). So the rotation order is ZYX.
Math (trigonometry):
Notes on the figure: the bottom side is l (the QR code length), the left angle is k, and the top angle is i (the camera).

SceneKit - Draw 3D Parabola

I'm given three points and need to draw a smooth 3D parabola. The trouble is that the curved line is choppy and has some weird divots in it.
Here is my code...
func drawJump(jump: Jump) {
    let halfDistance = jump.distance.floatValue / 2 as Float
    let tup = CalcParabolaValues(0.0, y1: 0.0, x2: halfDistance, y2: jump.height.floatValue, x3: jump.distance.floatValue, y3: 0)
    println("tuple \(tup)")
    var currentX = 0 as Float
    var increment = jump.distance.floatValue / Float(50)
    while currentX < jump.distance.floatValue - increment {
        let x1 = Float(currentX)
        let x2 = Float((currentX + increment))
        let y1 = calcParabolaYVal(tup.a, b: tup.b, c: tup.c, x: x1)
        let y2 = calcParabolaYVal(tup.a, b: tup.b, c: tup.c, x: x2)
        drawLine(x1, y1: y1, x2: x2, y2: y2)
        currentX += increment
    }
}
func CalcParabolaValues(x1: Float, y1: Float, x2: Float, y2: Float, x3: Float, y3: Float) -> (a: Float, b: Float, c: Float) {
    println(x1, y1, x2, y2, x3, y3)
    let a = y1 / ((x1 - x2) * (x1 - x3)) + y2 / ((x2 - x1) * (x2 - x3)) + y3 / ((x3 - x1) * (x3 - x2))
    let b = (-y1 * (x2 + x3) / ((x1 - x2) * (x1 - x3)) - y2 * (x1 + x3) / ((x2 - x1) * (x2 - x3)) - y3 * (x1 + x2) / ((x3 - x1) * (x3 - x2)))
    let c = (y1 * x2 * x3 / ((x1 - x2) * (x1 - x3)) + y2 * x1 * x3 / ((x2 - x1) * (x2 - x3)) + y3 * x1 * x2 / ((x3 - x1) * (x3 - x2)))
    return (a, b, c)
}

func calcParabolaYVal(a: Float, b: Float, c: Float, x: Float) -> Float {
    return a * x * x + b * x + c
}
func drawLine(x1: Float, y1: Float, x2: Float, y2: Float) {
    println("drawLine \(x1) \(y1) \(x2) \(y2)")
    let positions: [Float32] = [
        x1, y1, 0,
        x2, y2, 0
    ]
    let positionData = NSData(bytes: positions, length: sizeof(Float32) * positions.count)
    let indices: [Int32] = [0, 1]
    let indexData = NSData(bytes: indices, length: sizeof(Int32) * indices.count)
    let source = SCNGeometrySource(data: positionData, semantic: SCNGeometrySourceSemanticVertex, vectorCount: indices.count, floatComponents: true, componentsPerVector: 3, bytesPerComponent: sizeof(Float32), dataOffset: 0, dataStride: sizeof(Float32) * 3)
    // each .Line primitive consumes two indices
    let element = SCNGeometryElement(data: indexData, primitiveType: SCNGeometryPrimitiveType.Line, primitiveCount: indices.count / 2, bytesPerIndex: sizeof(Int32))
    let line = SCNGeometry(sources: [source], elements: [element])
    self.rootNode.addChildNode(SCNNode(geometry: line))
}

func renderer(aRenderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: NSTimeInterval) {
    glLineWidth(20)
}
I also have to figure out how to animate the arc from left to right. Can someone help me out? Swift or Objective C is fine. Any help is appreciated. Thanks!
I'd recommend using SCNShape to create your parabola. To start, you'll need to represent your parabola as a Bézier curve. You can use UIBezierPath for that. For animation, I personally find shader modifiers are a nice fit for cases like this.
The Parabola
Watch out, though — you probably want a path that represents just the open stroke of the arc. If you do something like this:
let path = UIBezierPath()
path.moveToPoint(CGPointZero)
path.addQuadCurveToPoint(CGPoint(x: 100, y: 0), controlPoint: CGPoint(x: 50, y: 200))
You'll get a filled-in parabola, like this (seen in 2D in the debugger quick look, then extruded in 3D with SCNShape):
To create a closed shape that's just the arc, you'll need to trace back over the curve, a little bit away from the original:
let path = UIBezierPath()
path.moveToPoint(CGPointZero)
path.addQuadCurveToPoint(CGPoint(x: 100, y: 0), controlPoint: CGPoint(x: 50, y: 200))
path.addLineToPoint(CGPoint(x: 99, y: 0))
path.addQuadCurveToPoint(CGPoint(x: 1, y: 0), controlPoint: CGPoint(x: 50, y: 198))
That's better.
... in Three-Dee!
How to actually make it 3D? Just make an SCNShape with the extrusion depth you like:
let shape = SCNShape(path: path, extrusionDepth: 10)
And set it in your scene:
shape.firstMaterial?.diffuse.contents = SKColor.blueColor()
let shapeNode = SCNNode(geometry: shape)
shapeNode.pivot = SCNMatrix4MakeTranslation(50, 0, 0)
shapeNode.eulerAngles.y = Float(-M_PI_4)
root.addChildNode(shapeNode)
Here I'm using a pivot to make the shape rotate around the major axis of the parabola, instead of the y = 0 axis of the planar Bézier curve. And making it blue. Also, root is just a shortcut I made for the view's scene's root node.
Animating
The shape of the parabola doesn't really need to change through your animation — you just need a visual effect that progressively reveals it along its x-axis. Shader modifiers are a great fit for that, because you can make the animated effect per-pixel instead of per-vertex and do all the expensive work on the GPU.
Here's a shader snippet that uses a progress parameter, varying from 0 to 1, to set opacity based on x-position:
// declare a variable we can set from SceneKit code
uniform float progress;
// tell SceneKit this shader uses transparency so we get correct draw order
#pragma transparent
// get the position in model space
vec4 mPos = u_inverseModelViewTransform * vec4(_surface.position, 1.0);
// a bit of math to ramp the alpha based on a progress-adjusted position
_surface.transparent.a = clamp(1.0 - ((mPos.x + 50.0) - progress * 200.0) / 50.0, 0.0, 1.0);
Set that as a shader modifier for the Surface entry point, and then you can animate the progress variable:
let modifier = "uniform float progress;\n #pragma transparent\n vec4 mPos = u_inverseModelViewTransform * vec4(_surface.position, 1.0);\n _surface.transparent.a = clamp(1.0 - ((mPos.x + 50.0) - progress * 200.0) / 50.0, 0.0, 1.0);"
shape.shaderModifiers = [ SCNShaderModifierEntryPointSurface: modifier ]
shape.setValue(0.0, forKey: "progress")
SCNTransaction.begin()
SCNTransaction.setAnimationDuration(10)
shape.setValue(1.0, forKey: "progress")
SCNTransaction.commit()
Further Considerations
Here's the whole thing in a form you can paste into a (iOS) playground. A few things left as exercises to the reader, plus other notes:
Factor out the magic numbers and make a function or class so you can alter the size/shape of your parabola. (Remember that you can scale SceneKit nodes relative to other scene elements, so they don't have to use the same units.)
Position the parabola relative to other scene elements. If you take away my line that sets the pivot, the shapeNode.position is the left end of the parabola. Change the parabola's length (or scale it), then rotate it around its y-axis, and you can make the other end line up with some other node. (For you to fire ze missiles at?)
I threw this together with Swift 2 beta, but I don't think there's any Swift-2-specific syntax in there — porting back to 1.2 if you need to deploy soon should be straightforward.
If you also want to do this on OS X, it's a bit trickier — there, SCNShape uses NSBezierPath, which unlike UIBezierPath doesn't support quadratic curves. Probably an easy way out would be to fake it with an elliptical arc.
I don't think your table has enough points, assuming the renderer is connecting them with straight line segments. On top of this, the thickness and dashing of the line make it difficult to see that. Try getting a smooth curve with a thin solid line first.
If you want to animate the progression of the curve, as if it were showing the flight of a projectile, it will probably be easiest to just write your function for the motion: y = k*x^2, and just render from x=0 to x=T for increasing values of T.
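A small sketch of that last idea (assumed parameter names, not the answerer's code): sample the parabola y = a*x^2 + b*x + c from x = 0 up to a moving limit, and rebuild the line from those points each step to animate the arc left to right.
import SceneKit

// Sample the parabola from x = 0 up to maxX; dense sampling keeps the curve smooth.
func parabolaPoints(a: Float, b: Float, c: Float, upTo maxX: Float, samples: Int = 100) -> [SCNVector3] {
    return (0...samples).map { i -> SCNVector3 in
        let x = maxX * Float(i) / Float(samples)
        return SCNVector3(x, a * x * x + b * x + c, 0)
    }
}
// Increase maxX each frame (or on a timer) toward the full distance and rebuild
// the line geometry from these points to animate the arc.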

Calculate vertices for n sided regular polygon

I have tried to follow this answer
It works fine for creating the polygons; however, I can see that the polygon doesn't reach the edges of the containing rectangle.
The following gif shows what I mean. Especially for the 5 sided polygon it is clear that it doesn't "span" the rectangle which I would like it to do
This is the code I use for creating the vertices
func verticesForEdges(_edges: Int) -> [CGPoint] {
    let offset = 1.0
    var vertices: [CGPoint] = []
    for i in 0..._edges {
        let angle = M_PI + 2.0 * M_PI * Double(i) / Double(edges)
        let x = (frame.width / 2.0) * CGFloat(sin(angle)) + (bounds.width / 2.0)
        let y = (frame.height / 2.0) * CGFloat(cos(angle)) + (bounds.height / 2.0)
        vertices.append(CGPoint(x: x, y: y))
    }
    return vertices
}
And this is the code that uses the vertices:
override func layoutSublayers() {
    super.layoutSublayers()
    let polygonPath = UIBezierPath()
    let vertices = verticesForEdges(edges)
    polygonPath.moveToPoint(vertices[0])
    for v in vertices {
        polygonPath.addLineToPoint(v)
    }
    polygonPath.closePath()
    self.path = polygonPath.CGPath
}
So the question is: how do I make the polygons fill out the rectangle?
Update:
The rectangle is not necessarily a square. It can have a different height from its width. From the comments it seems that I am fitting the polygon in a circle, but what is intended is to fit it in a rectangle.
If the first (i = 0) vertex is fixed at the middle of the top rectangle edge, we can calculate the minimal width and height of the bounding rectangle:
The rightmost vertex index
ir = (N + 2) / 4 // N/4, rounded to the closest integer; not applicable to a triangle
MinWidth = 2 * R * Sin(ir * 2 * Pi / N)
The bottom vertex index
ib = (N + 1) / 2 // N/2, rounded to the closest integer
MinHeight = R * (1 + Abs(Cos(ib * 2 * Pi / N)))
So for given rectangle dimensions we can calculate the R parameter needed to inscribe the polygon properly.
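A hedged sketch of that approach in current Swift (the question's code uses older syntax): compute the polygon's own bounding size for R = 1 with the formulas above, then scale the x and y coordinates independently so that box exactly fills the target rectangle.
import CoreGraphics
import Foundation

// First vertex at the middle of the top edge, same angles as the question's loop.
func polygonVertices(edges n: Int, spanning rect: CGRect) -> [CGPoint] {
    let ir = (n + 2) / 4                                                  // rightmost vertex index (not valid for a triangle)
    let ib = (n + 1) / 2                                                  // bottom vertex index
    let unitWidth = 2 * sin(Double(ir) * 2 * Double.pi / Double(n))       // MinWidth for R = 1
    let unitHeight = 1 + abs(cos(Double(ib) * 2 * Double.pi / Double(n))) // MinHeight for R = 1
    return (0..<n).map { i -> CGPoint in
        let angle = Double.pi + 2 * Double.pi * Double(i) / Double(n)
        let ux = sin(angle)                                               // unit-radius x, symmetric around 0
        let uy = cos(angle)                                               // unit-radius y, -1 at the top vertex
        return CGPoint(x: rect.midX + CGFloat(ux / unitWidth) * rect.width,
                       y: rect.minY + CGFloat((uy + 1) / unitHeight) * rect.height)
    }
}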
