UIBezierPath Scaling - ios

I've written this function to attempt to scale UIBezierPaths that I have as they are being drawn...
func fit(into: CGRect) -> Self {
    let bounds = self.cgPath.boundingBox
    let sw = into.size.width / bounds.width
    let sh = into.size.height / bounds.height
    let factor = min(5, min(sw, max(sh, 0.0)))
    return scale(x: factor, y: factor, into: into)
}
It works OK, but there is still a problem: sometimes there are dashes (gaps) between points. If I'm not making sense please tell me and I'll try to rephrase my question.
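For reference, here is a minimal sketch of one way to fit a path into a rect: scale uniformly, then translate the scaled bounds onto the target rect's origin. This is only a sketch and does not use the custom scale(x:y:into:) helper above; fitted(into:) is a hypothetical name.
import UIKit

extension UIBezierPath {
    func fitted(into rect: CGRect) -> UIBezierPath {
        let pathBounds = cgPath.boundingBoxOfPath
        guard pathBounds.width > 0, pathBounds.height > 0 else { return self }
        // Uniform factor so the path fits both dimensions without distortion.
        let factor = min(rect.width / pathBounds.width, rect.height / pathBounds.height)
        let copy = UIBezierPath(cgPath: cgPath)
        copy.apply(CGAffineTransform(scaleX: factor, y: factor))
        // Translate the scaled path so its bounds start at the target rect's origin.
        let scaledBounds = copy.cgPath.boundingBoxOfPath
        copy.apply(CGAffineTransform(translationX: rect.minX - scaledBounds.minX,
                                     y: rect.minY - scaledBounds.minY))
        return copy
    }
}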

Related

Issues with scaling a node (.simdPivot vs. simdScale & .scale properties)

I don't understand how scaling works on nodes.
I'm trying to understand how the code on Apple's Creating Face Based AR Experiences sample project works. Specifically, I'm trying to understand the TransformVisualization.swift file and the transformations applied to its nodes.
In the method addEyeTransformNodes(), the left and right eye nodes are both scaled using the simdPivot property. That's the part I'm confused about.
I tried scaling the same node using .scale and .simdScale properties, but both of them did nothing.
What's even more confusing is that even though the values in the .simdPivot matrix are greater than 1, the node is scaled down. I expected the nodes to scale up.
Why would we need to set .simdPivot to scale the nodes but not .scale and .simdScale properties?
Here's the function I'm talking about.
func addEyeTransformNodes() {
    guard #available(iOS 12.0, *), let anchorNode = contentNode else { return }
    // Scale down the coordinate axis visualizations for eyes.
    rightEyeNode.simdPivot = float4x4(diagonal: float4(3, 3, 3, 1))
    leftEyeNode.simdPivot = float4x4(diagonal: float4(3, 3, 3, 1))
    anchorNode.addChildNode(rightEyeNode)
    anchorNode.addChildNode(leftEyeNode)
}
Here's what I tried:
func addEyeTransformNodes() {
    guard #available(iOS 12.0, *), let anchorNode = contentNode else { return }
    // Does nothing
    rightEyeNode.simdScale = float3(3, 3, 3)
    // Does nothing
    leftEyeNode.scale = SCNVector3(x: 3, y: 3, z: 3)
    anchorNode.addChildNode(rightEyeNode)
    anchorNode.addChildNode(leftEyeNode)
}
I expected the node to scale the way I did it, but nothing happened.
Looking forward to your answers and help.
If you need to offset a pivot point (before applying a rotation and/or scale), use the simdPivot instance property.
Use my test code to see how it works:
let sphereNode1 = SCNNode(geometry: SCNSphere(radius: 1))
sphereNode1.geometry?.firstMaterial?.diffuse.contents = UIColor.red
sphereNode1.position = SCNVector3(-5, 0, 0)
scene.rootNode.addChildNode(sphereNode1)
let sphereNode2 = SCNNode(geometry: SCNSphere(radius: 1))
sphereNode2.geometry?.firstMaterial?.diffuse.contents = UIColor.green
sphereNode2.simdPivot.columns.3.x = -1
sphereNode2.scale = SCNVector3(2, 2, 2) // WORKS FINE
//sphereNode2.simdScale = float3(2, 2, 2) // WORKS FINE
scene.rootNode.addChildNode(sphereNode2)
let sphereNode3 = SCNNode(geometry: SCNSphere(radius: 1))
sphereNode3.geometry?.firstMaterial?.diffuse.contents = UIColor.blue
sphereNode3.position = SCNVector3(5, 0, 0)
scene.rootNode.addChildNode(sphereNode3)
Pivot offset is x: -1:
Pivot offset is x: 0:
The init(diagonal:) initializer creates a new 4x4 matrix with the specified vector on the main diagonal. This method has an issue: it scales objects down when you assign diagonal values greater than 1, and vice versa. So, if you want to scale up the character's eyes, use the following approach as a workaround:
rightEyeNode.simdPivot = float4x4(diagonal: float4(1/3, 1/3, 1/3, 1))
leftEyeNode.simdPivot = float4x4(diagonal: float4(1/3, 1/3, 1/3, 1))
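For what it's worth, the pivot seems to act like an inverse transform on the node's contents: a uniform diagonal of 1/s makes the node appear s times larger while node.scale and node.simdScale stay untouched. A minimal sketch of that relationship (s and sphereNode are just placeholders):
let s: Float = 3
let sphereNode = SCNNode(geometry: SCNSphere(radius: 1))
// A pivot of 1/s on the diagonal makes the node render s times larger,
// without changing node.scale or node.simdScale.
sphereNode.simdPivot = float4x4(diagonal: float4(1/s, 1/s, 1/s, 1))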
I think Apple engineers will fix this issue in the future.
Hope this helps.

Create UIBezierPath shape in 3D world ARKit

I'm making an app where the user can create flat shapes by positioning points in 3D space with ARKit, but the part where I create the UIBezierPath from these points seems to be problematic.
In my app, the user starts by placing a virtual transparent wall in AR at the device's current position by pressing a button:
guard let currentFrame = sceneView.session.currentFrame else {
    return
}
let imagePlane = SCNPlane(width: sceneView.bounds.width, height: sceneView.bounds.height)
imagePlane.firstMaterial?.diffuse.contents = UIColor.black
imagePlane.firstMaterial?.lightingModel = .constant
let windowNode = SCNNode()
windowNode.geometry = imagePlane
sceneView.scene.rootNode.addChildNode(windowNode)
windowNode.simdTransform = currentFrame.camera.transform
windowNode.opacity = 0.1
Then the user places some points (sphere nodes) on that wall by pressing a button, to define the shape of the flat object he wants to create. If the user points back at the first sphere node created, I close the shape, create a node from it, and place it at the same position as the wall:
let hitTestResult = sceneView.hitTest(self.view.center, options: nil)
if let firstHit = hitTestResult.first {
    if firstHit.node == windowNode {
        let x = Double(firstHit.worldCoordinates.x)
        let y = Double(firstHit.worldCoordinates.y)
        let pointCoordinates = CGPoint(x: x, y: y)
        let sphere = SCNSphere(radius: 0.02)
        sphere.firstMaterial?.diffuse.contents = UIColor.white
        sphere.firstMaterial?.lightingModel = .constant
        let sphereNode = SCNNode(geometry: sphere)
        sceneView.scene.rootNode.addChildNode(sphereNode)
        sphereNode.worldPosition = firstHit.worldCoordinates
        if points.isEmpty {
            windowPath.move(to: pointCoordinates)
        } else {
            windowPath.addLine(to: pointCoordinates)
        }
        points.append(sphereNode)
        if undoButton.alpha == 0 {
            undoButton.alpha = 1
        }
    } else if firstHit.node == points.first {
        windowPath.close()
        let windowShape = SCNShape(path: windowPath, extrusionDepth: 0)
        windowShape.firstMaterial?.diffuse.contents = UIColor.white
        windowShape.firstMaterial?.lightingModel = .constant
        let tintedWindow = SCNNode(geometry: windowShape)
        let worldPosition = windowNode.worldPosition
        tintedWindow.worldPosition = worldPosition
        sceneView.scene.rootNode.addChildNode(tintedWindow)
        // removing all the sphere nodes from points and reinitializing the UIBezierPath windowPath
        removeAllPoints()
    }
}
That code works when I create the first invisible wall and the first shape, but when I create a second wall and finish drawing my shape, the shape comes out deformed and nowhere near the right place. So I think I'm missing something about the coordinates of my UIBezierPath points, but what?
EDIT
OK, so after several tests, it seems to depend on the orientation of the device when the AR session is launched. If, at launch, the device is facing the first wall the user will create, the shape is created and placed as expected. But if the user launches the app with the device pointed in one direction, then rotates 90 degrees, places the first wall, and creates the shape, the shape ends up deformed and in the wrong place.
So it seems to be a problem with the 3D coordinates, but I still can't figure it out.
OK, I just found the problem! I was simply using the wrong vectors and coordinates... I've never been a math/geometry guy, haha.
So instead of using:
let x = Double(firstHit.worldCoordinates.x)
let y = Double(firstHit.worldCoordinates.y)
I now use:
let x = Double(firstHit.localCoordinates.x)
let y = Double(firstHit.localCoordinates.y)
And instead of using:
let worldPosition = windowNode.worldPosition
I now use:
let worldPosition = windowNode.transform
That's why the position of my shape node depended on how the AR session was initialised: I was working in world coordinates. It seems obvious to me now.
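Putting the two changes together, a minimal sketch of the corrected hit-test handling might look like this (assuming the same windowNode, windowPath and points variables as above, and that the wall's transform is assigned to the shape node):
if firstHit.node == windowNode {
    // Build the 2D path in the wall's local space instead of world space.
    let x = Double(firstHit.localCoordinates.x)
    let y = Double(firstHit.localCoordinates.y)
    windowPath.addLine(to: CGPoint(x: x, y: y))
} else if firstHit.node == points.first {
    windowPath.close()
    let windowShape = SCNShape(path: windowPath, extrusionDepth: 0)
    let tintedWindow = SCNNode(geometry: windowShape)
    // Give the shape the wall's full transform (not just its position) so the
    // local-space points land correctly whatever the session's initial orientation.
    tintedWindow.transform = windowNode.transform
    sceneView.scene.rootNode.addChildNode(tintedWindow)
}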

How to apply SCNVector3 force/impulse from an orientation in scenekit?

In ARKit/SceneKit, when the user taps the button, I want to apply an impulse to my node. I want the impulse to come from the current user's perspective. This means the node would be moving away from the user's perspective. I'm able to get the current orientation/direction, thanks to this code:
func getUserVector() -> (SCNVector3, SCNVector3) { // (direction, position)
    if let frame = self.sceneView.session.currentFrame {
        let mat = SCNMatrix4(frame.camera.transform) // 4x4 transform matrix describing camera in world space
        let dir = SCNVector3(-1 * mat.m31, -1 * mat.m32, -1 * mat.m33) // orientation of camera in world space
        let pos = SCNVector3(mat.m41, mat.m42, mat.m43) // location of camera in world space
        return (dir, pos)
    }
    return (SCNVector3(0, 0, -1), SCNVector3(0, 0, -0.2))
}
via https://github.com/farice/ARShooter/blob/master/ARViewer/ViewController.swift#L191
I have an arbitrary SCNVector3 that I've created. It contains how high (Y axis), how far left or right, and how far forward to push the node.
I want to convert/translate my SCNVector3 to come from the orientation/direction of the camera.
Meaning, I have
let (direction, position) = self.getUserVector()
let force = SCNVector3(x: 1.67, y: 13.83, z: -18.3)
How do I apply the force from the location/origin of the direction?
Figured it out after lots of googling. To rotate the impulse vector into the camera's orientation, multiply it by the camera transform with a w component of 0, so only the camera's rotation (not its translation) is applied:
let original = SCNVector3(x: 1.67, y: 13.83, z: -18.3)
let force = simd_make_float4(original.x, original.y, original.z, 0)
let rotatedForce = simd_mul(currentFrame.camera.transform, force)
let vectorForce = SCNVector3(x:rotatedForce.x, y:rotatedForce.y, z:rotatedForce.z)
node.physicsBody?.applyForce(vectorForce, asImpulse: true)
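For context, a minimal usage sketch wrapping the same idea in a tap handler; sceneView and node here are placeholders for your own ARSCNView and physics-backed node:
@objc func handleTap() {
    guard let currentFrame = sceneView.session.currentFrame else { return }
    let original = SCNVector3(x: 1.67, y: 13.83, z: -18.3)
    // w = 0: rotate the vector by the camera transform without translating it.
    let force = simd_make_float4(original.x, original.y, original.z, 0)
    let rotatedForce = simd_mul(currentFrame.camera.transform, force)
    let vectorForce = SCNVector3(rotatedForce.x, rotatedForce.y, rotatedForce.z)
    node.physicsBody?.applyForce(vectorForce, asImpulse: true)
}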

UIBezierPath translation transform gives wrong answer

When I attempt to translate a path, to move it to an origin of {0, 0}, the resulting path bounds is in error. (Or, my assumptions are in error).
e.g. the path gives the following bounds info:
let bezier = UIBezierPath(cgPath: svgPath)
print(bezier.bounds)
// (0.0085, 0.7200, 68.5542, 41.1379)
print(bezier.cgPath.boundingBoxOfPath)
// (0.0085, 0.7200, 68.5542, 41.1379)
print(bezier.cgPath.boundingBox)
// (-1.25, -0.1070, 70.0360, 41.9650)
I (attempt to) move the path to the origin:
let origin = bezier.bounds.origin
bezier.apply(CGAffineTransform(translationX: -origin.x, y: -origin.y))
print(bezier.bounds)
// (0.0, -2.7755, 68.5542, 41.1379)
As you can see, the x origin component is correct at 0. But, the y component (-2.7755) has gone all kittywumpus. It should be 0, non?
The same thing happens when I perform the transform on the cgPath property.
Does anyone know what kind of circumstances could cause a UIBezierPath/CGPath to behave like this when translated? After reading the Apple docs, it seems that UIBezierPath/CGPath do not hold a transform state; the points are transformed immediately when the transform is called.
Thanks for any help.
Background:
The path data is from Font-Awesome SVGs, via PocketSVG. All files parse, and most draw OK. But a small subset exhibit the above translation issue. I'd like to know if I'm doing something fundamentally wrong or silly before I go ferreting through the SVG parsing, path-building code looking for defects.
BTW I am not drawing at this stage or otherwise dealing with a context; I am building paths prior to drawing.
[edit]
To check that PocketSVG was giving me properly formed data, I passed the same SVG to SwiftSVG, and got the same path data as PocketSVG, and the same result:
let svgURL = Bundle.main.url(forResource: "fa-mars-stroke-h", withExtension: "svg")!
var bezier = UIBezierPath.pathWithSVGURL(svgURL)!
print(bezier.bounds)
// (0.0085, 0.7200, 68.5542, 41.1379)
let origin = bezier.bounds.origin
let translation = CGAffineTransform(translationX: -origin.x, y: -origin.y)
bezier.apply(translation)
print(bezier.bounds)
// (0.0, -2.7755, 68.5542, 41.1379)
Once again, that y component should be 0, but is not. Very weird.
On a whim, I thought I'd try to apply a transformation again. And, it worked!
let translation2 = CGAffineTransform(translationX: -bezier.bounds.origin.x, y: -bezier.bounds.origin.y)
bezier.apply(translation2)
print(bezier.bounds)
// (0.0, 0.0, 68.5542491336633, 41.1379438254997)
Baffling! Am I overlooking something really basic here?
I have tried the same as you and it works for me in Xcode 8.3.2 / iOS 10.
I struggled with the same problem myself and managed to solve it using the following snippet (Swift 5). I tested it on an organic bezier shape and it works as expected:
extension CGRect {
    var center: CGPoint { return CGPoint(x: midX, y: midY) }
}

extension UIBezierPath {
    func center(inRect rect: CGRect) {
        let rectCenter = rect.center
        let bezierCenter = self.bounds.center
        let translation = CGAffineTransform(translationX: rectCenter.x - bezierCenter.x, y: rectCenter.y - bezierCenter.y)
        self.apply(translation)
    }
}
Usage example:
override func viewDidLoad() {
    super.viewDidLoad()
    let bezier = UIBezierPath() // replace this with your bezier object
    let shape = CAShapeLayer()
    shape.strokeColor = UIColor.black.cgColor
    shape.fillColor = UIColor.clear.cgColor
    shape.bounds = self.view.bounds
    shape.position = self.view.bounds.center
    bezier.center(inRect: shape.bounds)
    shape.path = bezier.cgPath
    self.view.layer.addSublayer(shape)
}
It will display the shape in the center of the screen.
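If the goal is specifically to move the path to an origin of {0, 0} rather than to center it, a variant of the same idea might look like this (a sketch only, using cgPath.boundingBoxOfPath rather than bounds as the reference; moveToOrigin is a hypothetical name):
import UIKit

extension UIBezierPath {
    func moveToOrigin() {
        // boundingBoxOfPath is the tight bounds of the path itself,
        // whereas boundingBox also includes Bezier control points.
        let origin = cgPath.boundingBoxOfPath.origin
        apply(CGAffineTransform(translationX: -origin.x, y: -origin.y))
    }
}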

Strange bug/artefacts on SCNNode rendering

On some iOS devices (iPhone 6s Plus), parts of objects partially and arbitrarily disappear.
How can I avoid this?
All sticks should be identical; they are clones of one SCNNode.
There are 16 compound SCNNodes, each assembled from 3 SCNNodes (a box, a ball, and a stick) and flattened into a single geometry with node.flattenedClone().
It should look like this:
Code fragment:
func initBox()
{
    var min: SCNVector3 = SCNVector3()
    var max: SCNVector3 = SCNVector3()

    let geom1 = SCNBox(width: boxW, height: boxH, length: boxL, chamferRadius: boxR)
    geom1.firstMaterial?.reflective.contents = UIImage(data: BoxData)
    geom1.firstMaterial?.reflective.intensity = 1.2
    geom1.firstMaterial?.fresnelExponent = 0.25
    geom1.firstMaterial?.locksAmbientWithDiffuse = true
    geom1.firstMaterial?.diffuse.wrapS = SCNWrapMode.Repeat

    let geom2 = SCNSphere(radius: 0.5 * boxH)
    geom2.firstMaterial?.reflective.contents = UIImage(data: BalData)
    geom2.firstMaterial?.reflective.intensity = 1.2
    geom2.firstMaterial?.fresnelExponent = 0.25
    geom2.firstMaterial?.locksAmbientWithDiffuse = true
    geom2.firstMaterial?.diffuse.wrapS = SCNWrapMode.Repeat

    let geom3 = SCNCapsule(capRadius: stickR, height: stickH)
    geom3.firstMaterial?.reflective.contents = UIImage(data: StickData)
    geom3.firstMaterial?.reflective.intensity = 1.2
    geom3.firstMaterial?.fresnelExponent = 0.25
    geom3.firstMaterial?.locksAmbientWithDiffuse = true
    geom3.firstMaterial?.diffuse.wrapS = SCNWrapMode.Repeat

    let box = SCNNode()
    box.castsShadow = false
    box.position = SCNVector3Zero
    box.geometry = geom1
    Material.setFirstMaterial(box, materialName: Materials[boxMatId])

    let bal = SCNNode()
    bal.castsShadow = false
    bal.position = SCNVector3(0, 0.15 * boxH, 0)
    bal.geometry = geom2
    Material.setFirstMaterial(bal, materialName: Materials[balMatId])

    let stick = SCNNode()
    stick.castsShadow = false
    stick.position = SCNVector3Zero
    stick.geometry = geom3
    stick.getBoundingBoxMin(&min, max: &max)
    stick.pivot = SCNMatrix4MakeTranslation(0, min.y, 0)
    Material.setFirstMaterial(stick, materialName: Materials[stickMatId])

    box.addChildNode(bal)
    box.addChildNode(stick)

    boxmain = box.flattenedClone()
    boxmain.name = "box"
}
Add nodes to the scene:
func Boxesset()
{
    let Boxes = SCNNode()
    Boxes.name = "Boxes"
    var z: Float = -4.5 * radius
    for _ in 0..<4
    {
        var x: Float = -4.5 * radius
        for _ in 0..<4
        {
            let B: SCNNode = boxmain.clone()
            B.position = SCNVector3(x: x, y: radius, z: z)
            Boxes.addChildNode(B)
            x += 3 * Float(radius)
        }
        z += 3 * Float(radius)
    }
    self.rootNode.addChildNode(Boxes)
}
This is tested and works fine on the simulator (all devices) and on physical devices (iPad Retina and iPhone 5).
The glitch is observed only on the iPhone 6s Plus (128 GB).
The problem is clearly visible in the video ->
The graphics problem can be solved by changing the default rendering API to OpenGL ES...
...but then you may run into unexpected problems in pure compute modules that have nothing to do with graphics on the iPhone 6s Plus (the iPhone 6 has no such problems).
What's wrong?
TL;DR
Add scnView.prepareObject(boxmain, shouldAbortBlock: nil) to the end of your initBox.
I had a quick look at your code running on my 6s Plus and saw similar results: one of the corner nodes was missing, and it was consistently missing on each run. But we're not running exactly the same code; mine is missing the materials data...
SceneKit is lazy; often things are not done until an object is added to a scene. I first came across this when extracting geometry from a SceneKit primitive (SCNSphere etc.); you're hitting it when you clone a clone of something via the following lines.
let B: SCNNode = boxmain.clone()
...
boxmain = box.flattenedClone()
I'd say SceneKit is simply not consistently completing the first clone before the second clone occurs, though I have no way of knowing this for sure.
Removing the first clone fixes the issue for me; for example, replace boxmain = box.flattenedClone() with boxmain = box. But I'd say what you've done is best practice: flattening these nodes reduces the number of draw calls and improves performance (probably not an issue on the 6s).
SceneKit also provides a method, prepareObject(_:shouldAbortBlock:), that performs the operations required before an object is added to a scene (in this case the .flattenedClone()).
Adding the following line to the end of your initBox function also fixes the problem and is a better solution.
scnView.prepareObject(boxmain, shouldAbortBlock: nil)
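For clarity, a minimal sketch of where that call would sit in the function above (scnView here is assumed to be the SCNView presenting your scene):
func initBox()
{
    // ... build box, bal and stick exactly as above ...
    boxmain = box.flattenedClone()
    boxmain.name = "box"
    // Force SceneKit to finish preparing the flattened clone now,
    // before Boxesset() clones it again.
    scnView.prepareObject(boxmain, shouldAbortBlock: nil)
}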
Just to be clear, I don't know the right answer to my question, but I found an acceptable solution for myself.
It turned out that it's all in the diffuse property of the SCNMaterial.
For whatever reason, Metal doesn't much like it when diffuse.contents = UIColor(...).
But if at least one element in a compound SCNNode (as in my case) has diffuse.contents = UIImage(...), then everything starts to work perfectly.
it works
diffuse=<SCNMaterialProperty: 0x7a6d50a0 | contents=<UIImage: 0x7a6d5b40> size {128, 128} orientation 0 scale 1.000000>
it doesn't work
diffuse=<SCNMaterialProperty: 0x7e611a50 | contents=UIDeviceRGBColorSpace 0.25 0.25 0.25 0.99>
The solution to the problem turned out to be simple:
I just added one small, inconspicuous element with diffuse.contents = UIImage(...) to the existing 3 elements with diffuse.contents = UIColor(...), and it worked great.
So, my recommendations:
be careful when working with Metal (I have had problems on 5s devices and above)
thoroughly test SceneKit applications on real devices; don't trust only the simulator
I hope these are temporary bugs and that they will be fixed in future releases of Xcode.
Have nice apps!
P.S. By the way, the finished app is now completely free on the App Store: Qubic: tic-tac-toe 4x4x4
