I have it set up so that the mesh I'm adding to my scene renders correctly (right side up, so that the text I add as a child is legible), but its rotation is always the same (globally). What I want is for the rotation (on the XZ plane) to face the camera, and I'm not exactly sure how to go about this.
My code looks like this:
@objc func handleTap() {
    guard let view = self.view else { return }
    guard let query = view.makeRaycastQuery(from: view.center, allowing: .estimatedPlane, alignment: .any) else { return }
    guard let raycastResult = view.session.raycast(query).first else { return }
    // Set a transform from the raycast result for an existing entity
    let transform = Transform(matrix: raycastResult.worldTransform)
    // Create a new anchor to add content to
    if oldAnchor != nil {
        view.scene.removeAnchor(oldAnchor!)
    }
    let anchor = AnchorEntity()
    oldAnchor = anchor
    view.scene.anchors.append(anchor)
    let material = SimpleMaterial(color: .lightGray, isMetallic: false)
    // Add a curve entity
    let curveEntity = try! ModelEntity.loadModel(named: "curve")
    curveEntity.transform = transform
    curveEntity.transform.rotation = simd_quatf(angle: 0, axis: SIMD3<Float>(1, 1, 1))
    curveEntity.scale = [0.0002, 0.0002, 0.0002]
    let curveRadians = 90.0 * Float.pi / 180.0
    curveEntity.setOrientation(simd_quatf(angle: curveRadians, axis: SIMD3<Float>(1, 0, 0)), relativeTo: curveEntity)
    // Adding text and children and materials etc.
    ...
    anchor.addChild(curveEntity)
}
Without the curveEntity.transform.rotation = simd_quatf(angle: 0, axis: SIMD3<Float>(1, 1, 1)) line, the rotation of the curve is relative to the normal of the surface the raycast hits, instead of constant.
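One way to get the face-the-camera behavior, as a minimal sketch (assuming view is the ARView from the handler above and the entity is already positioned at the raycast hit): compute the yaw from the entity to the camera on the XZ plane and apply only a rotation about the world Y axis, ignoring the surface normal entirely.

let cameraPosition = view.cameraTransform.translation
let entityPosition = curveEntity.position(relativeTo: nil)
let toCamera = cameraPosition - entityPosition
// Angle on the XZ plane; atan2(x, z) yaws the entity's +Z toward the camera
let yaw = atan2(toCamera.x, toCamera.z)
curveEntity.setOrientation(simd_quatf(angle: yaw, axis: [0, 1, 0]), relativeTo: nil)

Note this replaces the entity's world-space orientation, so the 90° tilt about X from the code above would need to be re-applied afterwards.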
I get ARFrames from the session delegate of an ARView, where I then perform inference with CoreML + Vision using a YOLOv5 model. I successfully get an array of [VNRecognizedObjectObservation]s.
I pass these observations to a function like this:
func add(inferenceResults: [VNRecognizedObjectObservation], from frame: ARFrame) {
    for inference in inferenceResults {
        //NOTE: 1
        let flippedNormalizedBoundingBox = inference.boundingBox.flipYCoordinateFromBottomLeftToUpperLeft
        let point = flippedNormalizedBoundingBox.center()
        let label = inference.labels.first?.identifier ?? "Unknown"
        //PROBLEM: 1
        guard arView.entity(at: point) == nil else {
            break
        }
        let estimatedPlane = ARRaycastQuery.Target.estimatedPlane
        let alignment = ARRaycastQuery.TargetAlignment.any
        //NOTE: 2
        let raycastQuery = frame.raycastQuery(from: point, allowing: estimatedPlane, alignment: alignment)
        guard let raycastResult = arView.session.raycast(raycastQuery).first else {
            print("No raycast results")
            break
        }
        let newAnchor = AnchorEntity(world: raycastResult.worldTransform)
        //PROBLEM: 2
        let squareMaterial = SimpleMaterial(color: .blue, isMetallic: true)
        let textMaterial = SimpleMaterial(color: .white, isMetallic: true)
        let squareEntity = ModelEntity(mesh: MeshResource.generatePlane(width: 0.1, height: 0.1, cornerRadius: 0), materials: [squareMaterial])
        let textMesh = MeshResource.generateText(label, extrusionDepth: 0.1, font: .systemFont(ofSize: 2), containerFrame: .zero, alignment: .center, lineBreakMode: .byCharWrapping)
        let textEntity = ModelEntity(mesh: textMesh, materials: [textMaterial])
        textEntity.scale = SIMD3<Float>(0.03, 0.03, 0.1)
        squareEntity.addChild(textEntity)
        newAnchor.name = label
        newAnchor.addChild(squareEntity)
        //PROBLEM 3
        self.arView.scene.addAnchor(newAnchor)
    }
}
Some extensions:
extension CGRect {
    /// This will change the Y origin from the lower left corner to the upper left corner
    public var flipYCoordinateFromBottomLeftToUpperLeft: CGRect {
        return CGRect(x: self.origin.x, y: (1 - self.origin.y - self.height), width: self.width, height: self.height)
    }

    /// Returns a `CGPoint` that represents the center of the `CGRect`
    /// - Returns: A `CGPoint` constructed by obtaining the `midX` and `midY` values
    public func center() -> CGPoint {
        let midY = self.midY
        let midX = self.midX
        let point = CGPoint(x: midX, y: midY)
        return point
    }
}
I end up getting results like this
NOTE 1: Bounding boxes from Vision are normalized and have an odd origin (lower-left).
PROBLEM 1: Because I can run inference quickly, I don't want to keep adding AnchorEntities at the same location. This guard is an attempt to stop further processing, but it never takes the else branch.
NOTE 2: I know there is a raycast function on the ARView, but it seems like I want the raycast function from the ARFrame. I speculate that after a few milliseconds of inference on a background thread, the results may differ depending on which object I raycast from, because the user moved?
PROBLEM 2: My AnchorEntities are always black.
PROBLEM 3: The text and bounding box are never aligned with the camera, "billboard style".
In general, I would like to place a square with a label in AR that reflects the size of the bounding box from Vision, but I need to get past these few problems before I refine to that level. Any help is appreciated! AR is fun.
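A few hedged sketches against these problems, assuming the arView and entities from the code above (none of this is confirmed by the original code).

For PROBLEM 1, ARView.entity(at:) only hit-tests entities that carry collision shapes, so generating them when the model is built is a likely fix; note also that entity(at:) expects a point in the view's coordinate system, while the point above is in normalized image coordinates.

// Hypothetical fix for PROBLEM 1: without a CollisionComponent,
// entity(at:) has nothing to hit-test against
squareEntity.generateCollisionShapes(recursive: true)

For PROBLEM 2, a metallic SimpleMaterial renders nearly black when there is nothing in the environment to reflect, so isMetallic: false is worth trying first.

For PROBLEM 3, RealityKit has no built-in billboard constraint, but the anchors can be re-aimed at the camera each frame from a scene subscription. A minimal sketch; cancellable is an assumed stored property so the subscription stays alive:

cancellable = arView.scene.subscribe(to: SceneEvents.Update.self) { [weak self] _ in
    guard let self = self else { return }
    let cameraPosition = self.arView.cameraTransform.translation
    for anchor in self.arView.scene.anchors {
        // look(at:) points the entity's forward (-Z) axis at the target;
        // if the plane ends up back-facing, flip it 180 degrees about Y once
        anchor.look(at: cameraPosition, from: anchor.position(relativeTo: nil), relativeTo: nil)
    }
}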
I am developing an ARKit application using 3D models, with gestures added to move, rotate & zoom the models.
Now I am facing one issue, but I am not sure whether it is caused by the 3D model itself or by something missing in my program.
The issue is that the 3D model I am using shows up very big and goes out of the screen. I am trying to scale it down, but it is still very big.
Here is my code:
@IBOutlet var mySceneView: ARSCNView!
var selectedNode = SCNNode()
var prevLoc = CGPoint()
var touchCount: Int = 0

override func viewDidLoad() {
    super.viewDidLoad()
    self.lblTitle.text = self.sceneTitle
    let mySCN = SCNScene(named: "art.scnassets/\(self.sceneImagename).scn")!
    self.mySceneView.scene = mySCN

    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    cameraNode.position = SCNVector3Make(0, 0, 0)
    self.mySceneView.scene.rootNode.addChildNode(cameraNode)
    self.mySceneView.allowsCameraControl = true
    self.mySceneView.autoenablesDefaultLighting = true

    let tapGesture = UITapGestureRecognizer(target: self, action: #selector(detailPage.doHandleTap(_:)))
    let panGesture = UIPanGestureRecognizer(target: self, action: #selector(detailPage.doHandlePan(_:)))
    let gesturesArray = NSMutableArray()
    gesturesArray.add(tapGesture)
    gesturesArray.add(panGesture)
    gesturesArray.addObjects(from: self.mySceneView.gestureRecognizers!)
    self.mySceneView.gestureRecognizers = (gesturesArray as! [UIGestureRecognizer])
}
// MARK: - Handle Gesture
@objc func doHandlePan(_ sender: UIPanGestureRecognizer) {
    var delta = sender.translation(in: self.view)
    let loc = sender.location(in: self.view)
    if sender.state == .began {
        self.prevLoc = loc
        self.touchCount = sender.numberOfTouches
    } else if sender.state == .changed {
        delta = CGPoint(x: loc.x - prevLoc.x, y: loc.y - prevLoc.y)
        prevLoc = loc
        if self.touchCount != sender.numberOfTouches {
            return
        }
        var rotMat = SCNMatrix4()
        if touchCount == 2 {
            rotMat = SCNMatrix4MakeTranslation(Float(delta.x * 0.025), Float(delta.y * -0.025), 0)
        } else {
            let rotMatX = SCNMatrix4Rotate(SCNMatrix4Identity, Float((1.0 / 100) * delta.y), 1, 0, 0)
            let rotMatY = SCNMatrix4Rotate(SCNMatrix4Identity, Float((1.0 / 100) * delta.x), 0, 1, 0)
            rotMat = SCNMatrix4Mult(rotMatX, rotMatY)
        }
        let transMat = SCNMatrix4MakeTranslation(selectedNode.position.x, selectedNode.position.y, selectedNode.position.z)
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, SCNMatrix4Invert(transMat))
        let parentNodeTransMat = SCNMatrix4MakeTranslation((selectedNode.parent?.worldPosition.x)!, (selectedNode.parent?.worldPosition.y)!, (selectedNode.parent?.worldPosition.z)!)
        let parentNodeMatWOTrans = SCNMatrix4Mult(selectedNode.parent!.worldTransform, SCNMatrix4Invert(parentNodeTransMat))
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, parentNodeMatWOTrans)
        let camorbitNodeTransMat = SCNMatrix4MakeTranslation((self.mySceneView.pointOfView?.worldPosition.x)!, (self.mySceneView.pointOfView?.worldPosition.y)!, (self.mySceneView.pointOfView?.worldPosition.z)!)
        let camorbitNodeMatWOTrans = SCNMatrix4Mult(self.mySceneView.pointOfView!.worldTransform, SCNMatrix4Invert(camorbitNodeTransMat))
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, SCNMatrix4Invert(camorbitNodeMatWOTrans))
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, rotMat)
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, camorbitNodeMatWOTrans)
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, SCNMatrix4Invert(parentNodeMatWOTrans))
        selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, transMat)
    }
}
@objc func doHandleTap(_ sender: UITapGestureRecognizer) {
    let p = sender.location(in: self.mySceneView)
    let hitResults = self.mySceneView.hitTest(p, options: nil)
    if p.x > self.mySceneView.frame.size.width - 100 || p.y < 100 {
        self.mySceneView.allowsCameraControl = !self.mySceneView.allowsCameraControl
    }
    if hitResults.count > 0 {
        let result = hitResults[0]
        let material = result.node.geometry?.firstMaterial
        selectedNode = result.node
        SCNTransaction.begin()
        SCNTransaction.animationDuration = 0.3
        SCNTransaction.completionBlock = {
            SCNTransaction.begin()
            SCNTransaction.animationDuration = 0.3
            SCNTransaction.commit()
        }
        material?.emission.contents = UIColor.white
        SCNTransaction.commit()
    }
}
My question is: can we make a 3D model of any size aspect-fit the screen, centered? Please suggest if there is a way to do this.
Any guidance or suggestions will be highly appreciated.
What you need to do is use the node's boundingSphere to get the bounding sphere radius, then project that to the screen. Or, alternatively, take the ratio of that radius over the distance between the SceneKit camera and the object's position. This way you will know how big the object will look on the screen. To scale it down, you simply set the scale property of your object.
For the second part, you can use projectPoint.
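A rough sketch of the ratio approach, under some assumptions: mySceneView is the view from the question, modelNode is a hypothetical node already in the scene, and the 0.25 target ratio is arbitrary.

let (localCenter, radius) = modelNode.boundingSphere
let worldCenter = modelNode.convertPosition(localCenter, to: nil)
let cameraPosition = mySceneView.pointOfView?.worldPosition ?? SCNVector3Zero
let dx = worldCenter.x - cameraPosition.x
let dy = worldCenter.y - cameraPosition.y
let dz = worldCenter.z - cameraPosition.z
let distance = sqrt(dx * dx + dy * dy + dz * dz)
// Radius over distance approximates how much of the view the model fills
let apparentSize = radius / max(distance, 0.001)
let scaleFactor = 0.25 / apparentSize   // aim for roughly a quarter of the view
modelNode.scale = SCNVector3(scaleFactor, scaleFactor, scaleFactor)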
The way I handled this is by making sure the 3D model always has a fixed size.
For example, whether the 3D model is a small cup or a large house, I ensure it always has a width of 25 cm in the scene's coordinate space (while maintaining the ratios between x, y and z).
You can calculate the width of the bounding box of the node like this:
let mySCN = SCNScene(named: "art.scnassets/\(self.sceneImagename).scn")!
let minX = mySCN.rootNode.boundingBox.min.x
let maxX = mySCN.rootNode.boundingBox.max.x
// change 0.25 to whatever you need
// this value is in meters
let scaleValue = 0.25 / abs(minX - maxX)
// scale all axes of the node using `scaleValue`
// this maintains ratios and does not stretch the model
mySCN.rootNode.scale = SCNVector3(scaleValue, scaleValue, scaleValue)
self.mySceneView.scene = mySCN
You can also calculate the scale value based on height or depth by using the y or z value of the bounding box.
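A small hedged variation on the same idea: fit the largest dimension rather than just the width, so tall or deep models also land at the target size.

let (minVec, maxVec) = mySCN.rootNode.boundingBox
let width = abs(maxVec.x - minVec.x)
let height = abs(maxVec.y - minVec.y)
let depth = abs(maxVec.z - minVec.z)
// Scale so the biggest side of the model measures 0.25 m
let scaleValue = 0.25 / max(width, max(height, depth))
mySCN.rootNode.scale = SCNVector3(scaleValue, scaleValue, scaleValue)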
My question pertains to how to mimic this carousel view (YouTube video) using only a UIView, not its layer or a CALayer, which means actually transforming the UIViews themselves.
I found a Stack Overflow answer, written by some genius, that converts a CATransform3D into a CGAffineTransform, but my problem is a little unique.
The animation you see below is created using CALayer. I need to create the same animation, but transforming the UIView instead of its layer.
What it's supposed to look like:
Code (creates the animation using layers):
This takes an image card, which is a CALayer() with an image attached to it, and transforms it, placing it in the carousel of images.
Note: turnCarousel() is also called when the user pans, which moves/animates the carousel.
let transformLayer = CATransformLayer()

func turnCarousel() {
    guard let transformSubLayers = transformLayer.sublayers else { return }
    let segmentForImageCard = CGFloat(360 / transformSubLayers.count)
    var angleOffset = currentAngle
    for layer in transformSubLayers {
        var transform = CATransform3DIdentity
        transform.m34 = -1 / 500
        transform = CATransform3DRotate(transform, degreeToRadians(deg: angleOffset), 0, 1, 0)
        transform = CATransform3DTranslate(transform, 0, 0, 175)
        CATransaction.setAnimationDuration(0)
        layer.transform = transform
        angleOffset += segmentForImageCard
    }
}
What it currently looks like:
So basically it's close, but it seems as though there is a scaling issue between the cards that are supposed to be seen at the front and the cards at the back of the carousel.
For this, I used a UIImageView as the base view for the carousel and then added more UIImageViews as cards to it. So now we are trying to do a transformation on a UIImageView/UIView.
Code:
var carouselTestView = UIImageView()

func turnCarouselTestCarousel() {
    let segmentForImageCard = CGFloat(360 / carouselTestView.subviews.count)
    var angleOffset = currentAngleTestView
    for subview in carouselTestView.subviews {
        var transform2 = CATransform3DIdentity
        transform2.m34 = -1 / 500
        transform2 = CATransform3DRotate(transform2, degreeToRadians(deg: angleOffset), 0, 1, 0)
        transform2 = CATransform3DTranslate(transform2, 0, 0, 175)
        CATransaction.setAnimationDuration(0)
        // m13, m23, m33, m43 are not important since the destination is a flat XY plane.
        // m31, m32 are not important since they would multiply with z = 0.
        // m34 is zeroed here, so that neutralizes foreshortening. We can't avoid that.
        // m44 is implicitly 1 as CGAffineTransform's m33.
        let fullTransform: CATransform3D = transform2
        let affine = CGAffineTransform(a: fullTransform.m11, b: fullTransform.m12, c: fullTransform.m21, d: fullTransform.m22, tx: fullTransform.m41, ty: fullTransform.m42)
        subview.transform = affine
        angleOffset += segmentForImageCard
    }
}
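For what it's worth, if the deployment target allows iOS 13+, UIView exposes transform3D directly, which sidesteps the flattening to CGAffineTransform (and the perspective loss that likely causes the front/back scaling mismatch). A hedged sketch of the same loop body, assuming the degreeToRadians helper and loop state from the function above:

for subview in carouselTestView.subviews {
    var transform3d = CATransform3DIdentity
    transform3d.m34 = -1 / 500
    transform3d = CATransform3DRotate(transform3d, degreeToRadians(deg: angleOffset), 0, 1, 0)
    transform3d = CATransform3DTranslate(transform3d, 0, 0, 175)
    // transform3D keeps the perspective terms that the affine conversion drops
    subview.transform3D = transform3d
    angleOffset += segmentForImageCard
}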
The sub-images that actually make up the carousel are added with this function, which simply loops through images named 1...6 in my assets folder.
Code:
func CreateCarousel() {
    carouselTestView.frame.size = CGSize(width: self.view.frame.width, height: self.view.frame.height / 2.9)
    carouselTestView.center = CGPoint(x: self.view.frame.width * 0.5, y: self.view.frame.height * 0.5)
    carouselTestView.alpha = 1.0
    carouselTestView.backgroundColor = UIColor.clear
    carouselTestView.isUserInteractionEnabled = true
    self.view.insertSubview(carouselTestView, at: 0)
    for i in 1 ... 6 {
        addImageCardTestCarousel(name: "\(i)")
    }
    // Set up the carousel for the first time, so that we can see it as an actual carousel animation
    turnCarouselTestCarousel()
    let panGestureRecognizerTestCarousel = UIPanGestureRecognizer(target: self, action: #selector(self.performPanActionTestCarousel(recognizer:)))
    panGestureRecognizerTestCarousel.delegate = self
    carouselTestView.addGestureRecognizer(panGestureRecognizerTestCarousel)
}
The addImageCardTestCarousel function is here:
Code:
func addImageCardTestCarousel(name: String) {
    let imageCardSize = CGSize(width: carouselTestView.frame.width / 2, height: carouselTestView.frame.height)
    let cardPanel = UIImageView()
    cardPanel.frame.size = CGSize(width: imageCardSize.width, height: imageCardSize.height)
    cardPanel.frame.origin = CGPoint(x: carouselTestView.frame.size.width / 2 - imageCardSize.width / 2, y: carouselTestView.frame.size.height / 2 - imageCardSize.height / 2)
    guard let imageCardImage = UIImage(named: name) else { return }
    cardPanel.image = imageCardImage
    cardPanel.contentMode = .scaleAspectFill
    cardPanel.layer.masksToBounds = true
    cardPanel.layer.borderColor = UIColor.white.cgColor
    cardPanel.layer.borderWidth = 1
    cardPanel.layer.cornerRadius = cardPanel.frame.height / 50
    carouselTestView.addSubview(cardPanel)
}
Purpose:
The purpose of this is that I want to build a UI that can place UIViews on the rotating cards you see, and a CALayer cannot add a UIView as a subview; it can only add the UIView's layer to its own layer. So to solve this problem, I need to achieve this animation with UIViews, not CALayers.
I solved it. The views that appear to be behind the front-most view were actually grabbing all the touches: even if you touched a card right at the front, a card at the back would prevent touches from reaching it. So I made a function that calculates which views are in front, then enables touches for those and disables them for the rest. It's as if, when two cards are stacked on top of each other, the card at the back would otherwise stop the card at the front from receiving user interaction.
Code:
func DetermineFrontViews(view subview: UIView, angle angleOffset: CGFloat) {
    let looped = Int(angleOffset / 360) // must round down to Int()
    let loopSubtractingReset = CGFloat(360 * looped) // multiply 360 by however many times we have looped
    let finalangle = angleOffset - loopSubtractingReset
    if (finalangle >= -70 && finalangle <= 70) || (finalangle >= 290) || (finalangle <= -260) {
        print("In front of view")
        if subview.isUserInteractionEnabled == false {
            subview.isUserInteractionEnabled = true
        }
    } else {
        print("Back of view")
        if subview.isUserInteractionEnabled == true {
            subview.isUserInteractionEnabled = false
        }
    }
}
I added this to the turn function to keep track of whether the first card is at the back or the front of the carousel:
if subview.layer.name == "1" {
    DetermineFrontViews(view: subview, angle: angleOffset)
}
I want my app to lay nodes on a surface, which can be vertical or horizontal. However, the node is always vertical. Here's a picture; these nodes aren't placed correctly.
@objc func didTapAddButton() {
    let screenCentre = CGPoint(x: self.sceneView.bounds.midX, y: self.sceneView.bounds.midY)
    let arHitTestResults: [ARHitTestResult] = sceneView.hitTest(screenCentre, types: [.featurePoint]) // Alternatively, we could use '.existingPlaneUsingExtent' for more grounded hit-test points.
    if let closestResult = arHitTestResults.first {
        let transform: matrix_float4x4 = closestResult.worldTransform
        let worldCoord: SCNVector3 = SCNVector3Make(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
        if let node = createNode() {
            sceneView.scene.rootNode.addChildNode(node)
            node.position = worldCoord
        }
    }
}
func createNode() -> SCNNode? {
    guard let theView = myView else {
        print("Failed to load view")
        return nil
    }
    let plane = SCNPlane(width: 0.06, height: 0.06)
    let imageMaterial = SCNMaterial()
    imageMaterial.isDoubleSided = true
    imageMaterial.diffuse.contents = theView.asImage()
    plane.materials = [imageMaterial]
    let node = SCNNode(geometry: plane)
    return node
}
The app is able to see the ground but the nodes are still parallel to us. How can I fix this?
Edit: I figured out I can use node.eulerAngles.x = -.pi / 2; this makes sure the plane is laid down horizontally, but then it's still horizontal on vertical surfaces as well.
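One hedged way to handle both orientations, assuming the sceneView, screenCentre and createNode() from the question: hit-test against detected planes and adopt the full transform of the result, not just its translation, so the node inherits the surface's orientation; then tilt the plane geometry once so its normal matches the surface normal.

let results = sceneView.hitTest(screenCentre, types: [.existingPlaneUsingExtent])
if let hit = results.first, let planeNode = createNode() {
    // The holder takes the surface's orientation: its local +Y is the surface normal
    let holder = SCNNode()
    holder.simdTransform = hit.worldTransform
    // SCNPlane lies in its local XY plane (normal along +Z), so tip it back
    // so its normal lines up with the holder's +Y; this works for both
    // horizontal and vertical plane hits
    planeNode.eulerAngles.x = -.pi / 2
    holder.addChildNode(planeNode)
    sceneView.scene.rootNode.addChildNode(holder)
}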
Solved! Here's how to make the view "parallel" to the camera at all times:
let yourNode = SCNNode()
let billboardConstraint = SCNBillboardConstraint()
// Free all three axes so the node fully faces the camera;
// use [.Y] instead to keep it upright and only swivel around the vertical axis
billboardConstraint.freeAxes = [.X, .Y, .Z]
yourNode.constraints = [billboardConstraint]
Or
guard let currentFrame = sceneView.session.currentFrame else { return nil }
let camera = currentFrame.camera
let transform = camera.transform
var translationMatrix = matrix_identity_float4x4
translationMatrix.columns.3.z = -0.1
let modifiedMatrix = simd_mul(transform, translationMatrix)
let node = SCNNode(geometry: plane)
node.simdTransform = modifiedMatrix
I'm playing around with Apple's CIDetector to detect a face in live video from the phone's front camera. I've been following this article and have nearly got it working. The problem I'm having is that a new red box is created on every frame rather than the same one being reused.
The tutorial I'm following is meant to have code to stop that from happening, but it doesn't seem to be working. I'm still very new to Swift and am struggling to work it out.
Here's the code I'm using:
func drawFaceMasksFor(features: [CIFaceFeature], bufferFrame: CGRect) {
    CATransaction.begin()
    CATransaction.setValue(kCFBooleanTrue, forKey: kCATransactionDisableActions)

    // Hide all current masks
    view.layer.sublayers?.filter({ $0.name == "MaskFace" }).forEach { $0.isHidden = true }

    // Do nothing if no face is detected
    guard !features.isEmpty else {
        CATransaction.commit()
        return
    }

    // We detect the faces at the video image's size, but the on-screen view
    // might be smaller or bigger than the video, so we need to re-calculate
    // the face bounds to fit the screen
    let xScale = view.frame.width / bufferFrame.width
    let yScale = view.frame.height / bufferFrame.height
    let transform = CGAffineTransform(rotationAngle: .pi).translatedBy(x: -bufferFrame.width,
                                                                       y: -bufferFrame.height)
    for feature in features {
        var faceRect = feature.bounds.applying(transform)
        faceRect = CGRect(x: faceRect.minX * xScale,
                          y: faceRect.minY * yScale,
                          width: faceRect.width * xScale,
                          height: faceRect.height * yScale)

        // Reuse the face's layer
        var faceLayer = view.layer.sublayers?
            .filter { $0.name == "MaskFace" && $0.isHidden == true }
            .first
        if faceLayer == nil {
            // Prepare a new layer
            faceLayer = CALayer()
            faceLayer?.backgroundColor = UIColor.clear.cgColor
            faceLayer?.borderColor = UIColor.red.cgColor
            faceLayer?.borderWidth = 3.0
            faceLayer?.frame = faceRect
            faceLayer?.masksToBounds = true
            faceLayer?.contentsGravity = .resizeAspectFill
            view.layer.addSublayer(faceLayer!)
        } else {
            faceLayer?.frame = faceRect
            faceLayer?.position = faceRect.origin
            faceLayer?.isHidden = false
        }
        // You can add masks for the left eye, right eye, mouth, etc.
    }
    CATransaction.commit()
}
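A hedged observation about the code above rather than a confirmed fix: the newly created layer is never given the "MaskFace" name, so neither the hide-all pass nor the reuse filter can ever match it again; each frame then adds a fresh unnamed box while the old ones stay visible. Setting the name at creation time should let the reuse branch kick in, and the position assignment in the reuse branch looks suspect too, since position is the layer's centre by default and overwrites what frame just set. A sketch of the two branches with those changes:

if faceLayer == nil {
    let newLayer = CALayer()
    newLayer.name = "MaskFace"   // without this, the "MaskFace" filters never match
    newLayer.backgroundColor = UIColor.clear.cgColor
    newLayer.borderColor = UIColor.red.cgColor
    newLayer.borderWidth = 3.0
    newLayer.frame = faceRect
    newLayer.masksToBounds = true
    newLayer.contentsGravity = .resizeAspectFill
    view.layer.addSublayer(newLayer)
} else {
    // Setting frame alone is enough; assigning `position` the rect's origin
    // would shift the reused layer by half its size
    faceLayer?.frame = faceRect
    faceLayer?.isHidden = false
}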