Question:
How do I create a new SKSpriteNode with the same height and width as the ARObjectAnchor.referenceObject?
Context:
I'm currently fiddling with ARKit's new object detection feature and have working code to detect an object that I scanned in. When ARKit detects the object it provides a new ARAnchor of type ARObjectAnchor.
I know ARSCNView provides a projectPoint method, but I can't find any equivalent function for ARSKView. How can I map the ARObjectAnchor dimensions to the new Sprite?
Here's how I'm processing the detected object:
func view(_ view: ARSKView, didAdd node: SKNode, for anchor: ARAnchor) {
    if let objectAnchor = anchor as? ARObjectAnchor {
        let width = objectAnchor.referenceObject.extent.x
        let height = objectAnchor.referenceObject.extent.y
        // How to translate the above height/width to the size below?
        let box = SKSpriteNode(color: .white, size: ???)
        node.addChild(box)
    }
}
...
let size = CGSize(width: CGFloat(width), height: CGFloat(height))
let box = SKSpriteNode(color: .white, size: size)
...
When I add an SKScene to my SCNPlane in my image-recognition app, I multiply the size of my SCNPlane by 10000 (don't ask me why it's 10000, I have no idea), as follows:
let toto = SCNNode(geometry: SCNPlane(width: 0.3, height: 0.1))
let sk = SKScene(size: CGSize(width: 3000, height: 1000))
and for me it comes out the same size. Try this out and tell me if it works for you!
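Applying the same idea to the object anchor case, a rough sketch; the points-per-metre factor below is an assumption you would tune, not something ARKit defines:

func view(_ view: ARSKView, didAdd node: SKNode, for anchor: ARAnchor) {
    guard let objectAnchor = anchor as? ARObjectAnchor else { return }

    // extent is in metres; SpriteKit sizes are in points, so apply a scale factor.
    let pointsPerMetre: CGFloat = 1000  // guessed value, tune to taste
    let size = CGSize(width: CGFloat(objectAnchor.referenceObject.extent.x) * pointsPerMetre,
                      height: CGFloat(objectAnchor.referenceObject.extent.y) * pointsPerMetre)

    let box = SKSpriteNode(color: .white, size: size)
    node.addChild(box)
}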
Related
This is the first time I am creating an ARKit app, and I am also not very familiar with iOS development; however, I am trying to achieve something relatively simple...
All the app needs to do is get the world position of the phone and send it continuously to a REST API.
I am using the default ARKit project in Xcode, and am able to get the phone's position with the following function in ViewController.swift:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let planeAnchor = anchor as? ARPlaneAnchor {
        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x), height: CGFloat(planeAnchor.extent.z))
        plane.firstMaterial?.diffuse.contents = UIColor(white: 1, alpha: 0.75)
        let planeNode = SCNNode(geometry: plane)
        planeNode.position = SCNVector3Make(planeAnchor.center.x, planeAnchor.center.y, planeAnchor.center.z)
        planeNode.eulerAngles.x = -.pi / 2
        node.addChildNode(planeNode)
    }

    guard let pointOfView = sceneView.pointOfView else { return }
    let transform = pointOfView.transform
    let orientation = SCNVector3(-transform.m31, -transform.m32, transform.m33)
    let location = SCNVector3(transform.m41, transform.m42, transform.m43)
    let currentPositionOfCamera = SCNVector3(orientation.x + location.x, orientation.y + location.y, orientation.z + location.z)
    print(currentPositionOfCamera)
}
This function works as expected: it renders a plane in the view and prints the position, but only once.
I need to get the phone's position as it moves. I tried adding:
updateAtTime time: NSTimeInterval
to the function definition, but after doing this even the plane was no longer rendered.
How can I continuously get the phone's position vector?
TIA
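The per-frame callback is a separate SCNSceneRendererDelegate method, not an extra parameter on renderer(_:didAdd:for:). A minimal sketch of polling the camera position there, assuming sceneView is the template's ARSCNView property:

// Called by SceneKit once per rendered frame; implement it alongside
// renderer(_:didAdd:for:) rather than changing that method's signature.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let pointOfView = sceneView.pointOfView else { return }
    let transform = pointOfView.transform
    let position = SCNVector3(transform.m41, transform.m42, transform.m43)
    print(position)  // send to the REST API from here, ideally throttled
}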
So I have a function, shown below, that sets the world origin to a node placed where an image is detected after scanning it. I load the saved nodes from a database, and a separate function adds them to the scene.
For some reason, the nodes will not show when I run the app. It works when setWorldOrigin is commented out.
I would like for the nodes to show relative to the image as the origin.
Am I missing something? Does setWorldOrigin change the session?
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    let nodeGeometry = SCNText(string: "Welcome!", extrusionDepth: 1)
    nodeGeometry.font = UIFont(name: "Helvetica", size: 30)
    nodeGeometry.firstMaterial?.diffuse.contents = UIColor.black
    anchorNode.geometry = nodeGeometry
    anchorNode.scale = SCNVector3(0.1, 0.1, 0.1)
    anchorNode.constraints = [SCNBillboardConstraint()]
    anchorNode.position = SCNVector3(imageAnchor.transform.columns.3.x,
                                     imageAnchor.transform.columns.3.y,
                                     imageAnchor.transform.columns.3.z)

    // Create a plane to visualize the initial position of the detected image.
    let plane = SCNPlane(width: referenceImage.physicalSize.width, height: referenceImage.physicalSize.height)
    let planeNode = SCNNode(geometry: plane)
    planeNode.opacity = 0.25

    /*
     * Plane is rotated to match the picture location
     */
    planeNode.eulerAngles.x = -.pi / 2

    /*
     * Scan runs as an action for a set amount of time
     */
    planeNode.runAction(self.imageHighlightAction)

    // Add the plane visualization to the scene.
    node.addChildNode(planeNode)

    sceneView.session.setWorldOrigin(relativeTransform: imageAnchor.transform)
    sceneView.scene.rootNode.addChildNode(node)

    /*
     * Populates the scene
     */
    handleData()
} // renderer
I figured it out. The size I had for my image was incorrect. The image I was using is 512x512 pixels; I got that by right-clicking on the image, selecting "Get Info", and looking at the dimensions.
I also had the measurements in meters. I changed them to centimeters and used a pixel-to-centimeters converter (for example, assuming a 72 dpi scan, 512 px works out to roughly 512 / 72 × 2.54 ≈ 18 cm).
So I understand that in order to track images, we need to create an AR Resources folder, place all the images we intend to track there, and configure their real-world size properties through the inspector.
Then we set the array of ARReferenceImages on the session's world-tracking configuration.
All good with that.
But HOW MANY can we track? 10? 100? 1,000,000? And would it be possible to download those images and create ARReferenceImages on the fly, instead of having them in the bundle from the very beginning?
Having a look at the Apple docs, they don't seem to specify a limit. As such, it is probably safe to assume it depends on memory management, etc.
Regarding creating images on the fly, this is definitely possible.
According to the docs this can be done one of two ways:
Creating a new reference image from a Core Graphics image object:
init(CGImage, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
Creating a new reference image from a Core Video pixel buffer:
init(CVPixelBuffer, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
Here is an example of creating a referenceImage on the fly using an image from the standard Assets Bundle, although this can easily be adapted for parsing an image from a URL etc:
// Create ARReference Images From Somewhere Other Than The Default Folder
func loadDynamicImageReferences() {
    //1. Get The Image From The Folder
    guard let imageFromBundle = UIImage(named: "moonTarget"),
        //2. Convert It To A CIImage
        let imageToCIImage = CIImage(image: imageFromBundle),
        //3. Then Convert The CIImage To A CGImage
        let cgImage = convertCIImageToCGImage(inputImage: imageToCIImage) else { return }

    //4. Create An ARReference Image (Remembering Physical Width Is In Metres)
    let arImage = ARReferenceImage(cgImage, orientation: CGImagePropertyOrientation.up, physicalWidth: 0.2)

    //5. Name The Image
    arImage.name = "CGImage Test"

    //6. Set The ARWorldTrackingConfiguration Detection Images (Assuming A Configuration Is Running)
    configuration.detectionImages = [arImage]
}
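For the new detection image to take effect, the running session typically needs to be re-run with the updated configuration. A rough sketch, assuming augmentedRealitySession is the active ARSession and configuration is the ARWorldTrackingConfiguration already in use:

// Re-run the session so the dynamically created detection image is picked up.
// Property names here are assumptions, not part of the function above.
configuration.detectionImages = [arImage]
augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])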
/// Converts A CIImage To A CGImage
///
/// - Parameter inputImage: CIImage
/// - Returns: CGImage
func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
    let context = CIContext(options: nil)
    if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
        return cgImage
    }
    return nil
}
We can then test this within ARSCNViewDelegate e.g.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    let x = currentImageAnchor.transform
    print(x.columns.3.x, x.columns.3.y, x.columns.3.z)

    //2. Get The Target's Name
    let name = currentImageAnchor.referenceImage.name!

    //3. Get The Target's Width & Height In Metres
    let width = currentImageAnchor.referenceImage.physicalSize.width
    let height = currentImageAnchor.referenceImage.physicalSize.height

    print("""
    Image Name = \(name)
    Image Width = \(width)
    Image Height = \(height)
    """)

    //4. Create A Plane Geometry To Cover The ARImageAnchor
    let planeNode = SCNNode()
    let planeGeometry = SCNPlane(width: width, height: height)
    planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
    planeNode.opacity = 0.25
    planeNode.geometry = planeGeometry

    //5. Rotate The PlaneNode To Horizontal
    planeNode.eulerAngles.x = -.pi / 2

    //The Node Is Centered In The Anchor (0,0,0)
    node.addChildNode(planeNode)

    //6. Create An SCNBox
    let boxNode = SCNNode()
    let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)

    //7. Create A Different Colour For Each Face
    let faceColours = [UIColor.red, UIColor.green, UIColor.blue, UIColor.cyan, UIColor.yellow, UIColor.gray]
    var faceMaterials = [SCNMaterial]()

    //8. Apply One To Each Of The Six Faces
    for face in 0 ..< 6 {
        let material = SCNMaterial()
        material.diffuse.contents = faceColours[face]
        faceMaterials.append(material)
    }
    boxGeometry.materials = faceMaterials
    boxNode.geometry = boxGeometry

    //9. Set The Box's Position So It Sits On The Plane (y = box.height / 2)
    boxNode.position = SCNVector3(0, 0.05, 0)

    //10. Add The Box To The Node
    node.addChildNode(boxNode)
}
As you can see, the process is fairly easy. So in your case you are probably most interested in the conversion function above, which uses this method to create the dynamic images:
init(CGImage, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
Paraphrasing the Human Interface Guidelines for AR... image detection performance/accuracy deteriorates as the number of images increases. So there’s no hard limit in the API, but if you try to put more than around 25 images in the current detection set, it’ll start getting too slow/inaccurate to be useful.
There are lots of other factors affecting performance/accuracy, too, so consider that a guideline, not a hard limit. Depending on scene conditions in the place where you’re running the app, how much you’re stressing the CPU with other tasks, how distinct your reference images are from one another, etc., you might manage a few more than 25... or start having detection problems with a few fewer than 25.
I am working on an AR project using ARKit.
I want to add a UIView to the ARKit scene. When I tap on an object, I want to get information as a "pop-up" next to the object. This information is in a UIView.
Is it possible to add this UIView to the ARKit scene?
I have set up this UIView in a storyboard scene; what can I do then?
Can I give it a node and then add it to the ARKit scene? If so, how does that work?
Or is there another way?
Thank you!
EDIT: Code of my SecondViewController
class InformationViewController: UIViewController {

    @IBOutlet weak var secondView: UIView!

    override func viewDidLoad() {
        super.viewDidLoad()
        self.view = secondView
    }
}
EDIT 2: Code in firstViewController
guard let secondViewController = storyboard?.instantiateViewController(withIdentifier: "SecondViewController") as? SecondViewController else {
    print("No secondController")
    return
}

let plane = SCNPlane(width: CGFloat(0.1), height: CGFloat(0.1))
plane.firstMaterial?.diffuse.contents = secondViewController.view
let node = SCNNode(geometry: plane)
I only get a white plane, not the view.
The simplest (although undocumented) way to achieve that is to set a UIView backed by a view controller as diffuse contents of a material on a SCNPlane (or any other geometry really, but it works best with planes for obvious reasons).
let plane = SCNPlane()
plane.firstMaterial?.diffuse.contents = someViewController.view
let planeNode = SCNNode(geometry: plane)
You will have to persist the view controller somewhere, otherwise it will be released and the plane will not be visible. Using just a UIView without any UIViewController will throw an error.
The best thing about it is that it keeps all of the gestures and practically works just as a simple view. For example, if you use UITableViewController's view you will be able to scroll it right inside a scene.
I haven't tested it on iOS 10 and lower, but it's been working on iOS 11 so far. Works both in plain SceneKit scenes and with ARKit.
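A minimal sketch of the persistence point; the property name and the "InfoViewController" storyboard identifier are assumptions:

class ViewController: UIViewController {

    // Keep a strong reference so the controller (and its view) isn't deallocated,
    // otherwise the plane renders blank.
    private var infoViewController: UIViewController?

    func makeInfoPlaneNode() -> SCNNode? {
        guard let infoVC = storyboard?.instantiateViewController(withIdentifier: "InfoViewController") else { return nil }
        infoViewController = infoVC

        let plane = SCNPlane(width: 0.1, height: 0.1)
        plane.firstMaterial?.diffuse.contents = infoVC.view
        return SCNNode(geometry: plane)
    }
}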
I cannot provide you with code right now, but this is how to do it:
1. Create an SCNPlane.
2. Create your UIView with all the elements you need.
3. Create an image context from the UIView.
4. Use this image as the material for the SCNPlane.
Or, even easier, make an SKScene with a label and add it as the material for the SCNPlane.
Example: https://stackoverflow.com/a/74380559/294884
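A rough sketch of steps 3 and 4, rendering a view into an image and using it as the plane's material (the helper name and the double-sided material are my own choices):

import SceneKit
import UIKit

// Render any UIView into a UIImage, then use that image as the plane's diffuse contents.
func planeNode(showing view: UIView, width: CGFloat, height: CGFloat) -> SCNNode {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    let image = renderer.image { context in
        view.layer.render(in: context.cgContext)
    }

    let plane = SCNPlane(width: width, height: height)
    plane.firstMaterial?.diffuse.contents = image
    plane.firstMaterial?.isDoubleSided = true
    return SCNNode(geometry: plane)
}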
To place a text label in the world, you draw it into an image and then attach that image to an SCNNode's geometry.
For example:
let text = "Hello, Stack Overflow."
let fontSize: CGFloat = 32  // example size
let font = UIFont(name: "Arial", size: fontSize) ?? UIFont.systemFont(ofSize: fontSize)
let width: CGFloat = 128
let height: CGFloat = 128
let fontAttrs: [NSAttributedStringKey: Any] = [NSAttributedStringKey.font: font]
let stringSize = (text as NSString).size(withAttributes: fontAttrs)
let rect = CGRect(x: (width / 2.0) - (stringSize.width / 2.0),
                  y: (height / 2.0) - (stringSize.height / 2.0),
                  width: stringSize.width,
                  height: stringSize.height)
let renderer = UIGraphicsImageRenderer(size: CGSize(width: width, height: height))
let image = renderer.image { context in
    let color = UIColor.blue.withAlphaComponent(0.5)
    color.setFill()
    context.fill(rect)
    (text as NSString).draw(with: rect, options: .usesLineFragmentOrigin, attributes: fontAttrs, context: nil)
}
let plane = SCNPlane(width: 0.1, height: 0.1)
plane.firstMaterial?.diffuse.contents = image
let node = SCNNode(geometry: plane)
EDIT:
I added these lines:
let color = UIColor.blue.withAlphaComponent(CGFloat(0.5))
color.setFill()
context.fill(rect)
This lets you set the background color and the opacity. There are other ways of doing this - which also let you draw complex shapes - but this is the easiest for basic color.
EDIT 2: Added reference to stringSize and rect
I created a new augmented reality app project in Xcode 9 with the content technology set to SpriteKit.
This generated a template project which places this little guy in the centre of your current view when you tap on the screen.
I wish to create something which looks a little like this instead of the placeholder emoji:
The idea behind this is to be able to set the Name, Type and Rating String values in code and create an SKNode object from that, to place in the centre of the screen.
I understand how to create a rectangle:
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    // Create and configure a node for the anchor added to the view's session.
    // More of a square than a rectangle...
    let rect = SKShapeNode(rectOf: CGSize(width: 10, height: 10))
    rect.fillColor = SKColor.white
    rect.position = CGPoint(x: 0.5, y: 0.5)
    return rect
}
^ which gives me a white square on the screen
But how can I place text on top of that, as part of the same 'object' as it were? Is that possible? I've not been able to come across a solution to this anywhere and I'm not sure how to implement it.
EDIT:
Here is what I created based on a suggestion in the comments:
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    // Create and configure a node for the anchor added to the view's session.
    let rect = SKShapeNode(rectOf: CGSize(width: 100, height: 50))
    rect.fillColor = SKColor.white
    rect.position = CGPoint(x: 0.5, y: 0.5)

    let nameNode = SKLabelNode(text: "Name")
    nameNode.fontSize = 10
    nameNode.fontColor = UIColor.black

    let typeNode = SKLabelNode(text: "Type")
    typeNode.fontSize = 10
    typeNode.fontColor = UIColor.black

    let ratingNode = SKLabelNode(text: "Rating")
    ratingNode.fontSize = 10
    ratingNode.fontColor = UIColor.black

    rect.addChild(nameNode)
    rect.addChild(typeNode)
    rect.addChild(ratingNode)

    return rect
}
The text is all bundled together in the centre of the rectangle, however. How can I align the labels like in my concept image above?
Also, the text appears very pixelated; any suggestions on particular settings I need to apply to combat this?
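One way to lay the labels out, as a rough sketch: SKLabelNode positions are relative to the parent shape node, so each label can be offset from the centre. The offsets, the alignment, and the bigger-font-then-scale-down idea for sharper text are values to tune, not anything from the code above:

// Offsets are in the parent rect's coordinate space (centre = (0, 0)).
nameNode.horizontalAlignmentMode = .left
nameNode.position = CGPoint(x: -45, y: 10)

typeNode.horizontalAlignmentMode = .left
typeNode.position = CGPoint(x: -45, y: -5)

ratingNode.horizontalAlignmentMode = .left
ratingNode.position = CGPoint(x: -45, y: -20)

// One common way to reduce pixelation: render the label with a larger font,
// then scale the node back down.
nameNode.fontSize = 40
nameNode.setScale(0.25)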