Show an image on an SCNPlane in SceneKit & ARKit in Swift (iOS)

I followed this raywenderlich tutorial to detect an image using ARKit and SceneKit. In that example they show how to display a video on top of the detected image.
https://www.raywenderlich.com/6957-building-a-museum-app-with-arkit-2
I want to display an image on top of the detected image instead. How can I add a child node to the SCNNode so that the image is displayed on top of it?
Here is the code I tried; it's not working and only shows a blank white screen.
DispatchQueue.main.async {
    let size = imageAnchor.referenceImage.physicalSize
    let imageView = UIImageView(image: UIImage(named: "BG.png")!)
    let imgMaterial = SCNMaterial()
    imgMaterial.diffuse.contents = imageView
    let imgPlane = SCNPlane(width: size.width, height: size.height)
    imgPlane.materials = [imgMaterial]
    let imgNode = SCNNode(geometry: imgPlane)
    imgNode.eulerAngles.x = -.pi / 2
    node.addChildNode(imgNode)
    node.opacity = 1
}
Please help me show an image on top of the detected image. Thanks in advance.
I'm now able to show an image on top of the detected image, but it takes a lot of memory. What is wrong with that code?
In the same way as the image, I'm displaying a GIF on top of the detected image with the code below.
let size = imageAnchor.referenceImage.physicalSize
let imgNameFromDoc = person["actionToTake"] as! String
let documentsPathURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first
let imgpath = documentsPathURL?.appendingPathComponent("/Packages/\(self.selectedDatasetName)/3DScenes/\(imgNameFromDoc)")
let imageData = try! Data(contentsOf: URL(fileURLWithPath: imgpath!.path))
let gifImage = UIImage.gifImageWithData(imageData)
self.gifImgView = UIImageView(image: gifImage)
let imgMaterial = SCNMaterial()
imgMaterial.diffuse.contents = self.gifImgView
let planeSize = scaleActionSize(wid: Float(size.width), hei: Float(size.height))
let imgPlane = SCNPlane(width: CGFloat(planeSize[0]), height: CGFloat(planeSize[1]))
imgPlane.materials = [imgMaterial]
let imgNode = SCNNode(geometry: imgPlane)
imgNode.eulerAngles.x = -.pi / 2
node.addChildNode(imgNode)
node.opacity = 1
With the help of an image extension I'm showing the GIF in an image view on top of the detected image:
https://github.com/kiritmodi2702/GIF-Swift/blob/master/GIF-Swift/iOSDevCenters%2BGIF.swift
(from "How to load GIF image in Swift?")
I dragged the iOSDevCenters+GIF.swift file into the app and it displays the GIF.
When I use this GIF image view to display the GIF, plus the image view to display the image, the app takes more than 800 MB of memory when running on an iPad from Xcode.
Can somebody help me find the memory leaks (I tried Instruments but wasn't able to fix it) or anything else wrong in this code?

I assume you already have image tracking working.
The following method is called when a new anchor is added, i.e. when the image you are tracking is detected. The node parameter is the node for the tracked image, so you can add a child to it as in the tutorial. Try the following (using ARSCNViewDelegate):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    DispatchQueue.main.async {
        let imageView = UIImageView(image: UIImage())
        let imgMaterial = SCNMaterial()
        imgMaterial.diffuse.contents = imageView
        let imgPlane = SCNPlane(width: 0.1, height: 0.1)
        imgPlane.materials = [imgMaterial]
        let imgNode = SCNNode(geometry: imgPlane)
        imgNode.eulerAngles.x = -.pi / 2
        node.addChildNode(imgNode)
        node.opacity = 1
    }
}
The imgNode is added to node, which is already part of the scene. But what happens if your node is not in the scene? Then you have to add it yourself. If you are not using the renderer method, you can do it like this:
self.imgNode?.position = SCNVector3(0, 0, 0)
sceneView.scene.rootNode.addChildNode(self.imgNode!)

Displaying UIKit views as material property contents is not supported.
Instead of a UIImageView, you should use the UIImage directly, or even better just a String or URL that points to the image on disk. That way the image won't be loaded into memory twice (once in SceneKit and once in UIKit).
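A minimal sketch of that suggestion, reusing size, node, and "BG.png" from the question's code:
let imgMaterial = SCNMaterial()
// Hand SceneKit the image itself rather than a UIImageView, so it is only decoded once.
imgMaterial.diffuse.contents = UIImage(named: "BG.png")
// Or, lighter still, a URL pointing at the file on disk:
// imgMaterial.diffuse.contents = Bundle.main.url(forResource: "BG", withExtension: "png")
let imgPlane = SCNPlane(width: size.width, height: size.height)
imgPlane.materials = [imgMaterial]
let imgNode = SCNNode(geometry: imgPlane)
imgNode.eulerAngles.x = -.pi / 2
node.addChildNode(imgNode)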

Related

Can I use SKScene for material on an Obj file loaded in SceneKit?

My goal is to be able to tap on a specific model and color its surface. I have managed to do this with the generic SCNSphere, SCNBox, etc.
I set it up like this, and it basically works for SCNSphere and the other built-in geometries:
let node = SCNNode(geometry: geometry)
let material = SCNMaterial()
material.specular.contents = SKColor(white: 0.1, alpha: 1)
material.shininess = 2.0
material.normal.contents = "wood-normal.png"
let skScene = SKScene(size: CGSize(width: SPRITE_SIZE, height: SPRITE_SIZE))
skScene.backgroundColor = SKColor.orange
skScene.scaleMode = .aspectFill
material.diffuse.contents = skScene
geometry.firstMaterial = material
scnScene.rootNode.addChildNode(node)
However, when I load a .obj file and try to set the material with an SKScene, I don't get anything. Here is how I set up the obj:
let bundle = Bundle.main
let path = bundle.path(forResource: name, ofType: "obj")
let url = URL(fileURLWithPath: path!)
let asset = MDLAsset(url: url)
guard let object = asset.object(at: 0) as? MDLMesh else {
    print("Failed to get mesh from obj asset")
    return
}
let material = SCNMaterial()
material.specular.contents = SKColor(white: 0.1, alpha: 1)
let skScene = SKScene(size: CGSize(width: SPRITE_SIZE, height: SPRITE_SIZE))
skScene.backgroundColor = SKColor.orange
skScene.scaleMode = .aspectFill
material.diffuse.contents = skScene
let geometry = SCNGeometry(mdlMesh: object)
let node = SCNNode(geometry: geometry)
node.geometry?.materials = [material]
scnScene.rootNode.addChildNode(node)
But as you can see, the color of the teapot is not orange.
My question is: am I doing something wrong in terms of using an SKScene as a material for an obj file, or is there some limitation that I'm not aware of? I've spent a few hours on this and am open to suggestions if anyone has any ideas.
Also, in this sort of situation, how do I decide what the size of the SKScene should be, given that I want to wrap the whole mesh with it?
Update 1
I sort of figured something out. My obj file doesn't have texture coordinates, only vertices. When I tried another obj file that does have texture coordinates, the skScene loaded correctly. So I guess my teapot has no UV coordinates and therefore can't map the texture. Is there a way to add UV coordinates if the obj file doesn't have them already?
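One possible direction, offered as an untested sketch: Model I/O can generate a UV unwrap for a mesh that lacks texture coordinates, via addUnwrappedTextureCoordinates(forAttributeNamed:), called before converting the mesh to SCNGeometry:
let asset = MDLAsset(url: url)
guard let mesh = asset.object(at: 0) as? MDLMesh else { return }
// Generate texture coordinates under the standard Model I/O attribute name.
// This can be slow for dense meshes, so consider running it off the main thread.
mesh.addUnwrappedTextureCoordinates(forAttributeNamed: MDLVertexAttributeTextureCoordinate)
let geometry = SCNGeometry(mdlMesh: mesh)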

Image texture in SCNMaterial is always grey. How to apply color?

I am writing an image recognition app by following this example from the Apple documentation: https://developer.apple.com/documentation/arkit/detecting_images_in_an_ar_experience
Everything works fine so far. Now I'd like to change the transparent plane overlay to an image. I set the material of the plane like this:
let image = "overlay"
let material = SCNMaterial()
material.locksAmbientWithDiffuse = true;
material.isDoubleSided = false
material.diffuse.contents = image
material.ambient.contents = UIColor.white
let planeNode = SCNNode(geometry: plane)
planeNode.geometry?.materials = [material]
The image is loaded and rendered, but it always lacks color: the original image is red, yet the overlay appears black and white.
I already tried many different settings for ambient and diffuse without success. What am I missing here?
Thanks
In your code, the following line is of String type, not UIImage type:
let image: String = "overlay"
Try the following code:
let plane = SCNPlane(width: 10, height: 10)
let image = UIImage(named: "art.scnassets/texture")
// let image = UIColor.red
// let image = UIColor(hue: 0.25, saturation: 0.5, brightness: 0.75, alpha: 1)
let material = SCNMaterial()
material.locksAmbientWithDiffuse = true
material.isDoubleSided = false
material.diffuse.contents = image
material.ambient.contents = UIColor.white
let planeNode = SCNNode(geometry: plane)
planeNode.geometry?.materials = [material]
scene.rootNode.addChildNode(planeNode)
Also, save the image you use for the diffuse material slot in PNG format.
For future reference, here is what I found out:
Images must not have the same name, even when saved in different groups or folders. I was using the name of the image as the reference to load an overlay. When I changed the name and got the reference manually (by switch/case), everything worked just fine.

ARKIT - how many tracking images can it track?

So I understand that in order to track images, we need to create an AR Resource folder and place all the images we intend to track there, as well as configure their real-world size properties through the inspector.
Then we set the array of ARReferenceImages on the session's world configuration.
All good with that.
But how many can we track? 10? 100? 1,000,000? And would it be possible to download those images and create ARReferenceImages on the fly, instead of having them in the bundle from the very beginning?
Looking at the Apple docs, there doesn't seem to be a specified limit, so it is reasonable to assume it depends on memory management, etc.
Regarding creating images on the fly, this is definitely possible.
According to the docs this can be done in one of two ways:
Creating a new reference image from a Core Graphics image object:
init(CGImage, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
Creating a new reference image from a Core Video pixel buffer:
init(CVPixelBuffer, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
Here is an example of creating an ARReferenceImage on the fly using an image from the standard assets bundle, although this can easily be adapted to parse an image from a URL, etc.:
// Create ARReferenceImages From Somewhere Other Than The Default Folder
func loadDynamicImageReferences() {
    //1. Get The Image From The Folder
    guard let imageFromBundle = UIImage(named: "moonTarget"),
        //2. Convert It To A CIImage
        let imageToCIImage = CIImage(image: imageFromBundle),
        //3. Then Convert The CIImage To A CGImage
        let cgImage = convertCIImageToCGImage(inputImage: imageToCIImage) else { return }
    //4. Create An ARReferenceImage (Remembering Physical Width Is In Metres)
    let arImage = ARReferenceImage(cgImage, orientation: CGImagePropertyOrientation.up, physicalWidth: 0.2)
    //5. Name The Image
    arImage.name = "CGImage Test"
    //6. Set The ARWorldTrackingConfiguration Detection Images, Assuming A Configuration Is Running
    configuration.detectionImages = [arImage]
}
/// Converts A CIImage To A CGImage
///
/// - Parameter inputImage: CIImage
/// - Returns: CGImage
func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
    let context = CIContext(options: nil)
    if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
        return cgImage
    }
    return nil
}
We can then test this within ARSCNViewDelegate, e.g.:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }
    let x = currentImageAnchor.transform
    print(x.columns.3.x, x.columns.3.y, x.columns.3.z)
    //2. Get The Target's Name
    let name = currentImageAnchor.referenceImage.name!
    //3. Get The Target's Width & Height In Meters
    let width = currentImageAnchor.referenceImage.physicalSize.width
    let height = currentImageAnchor.referenceImage.physicalSize.height
    print("""
    Image Name = \(name)
    Image Width = \(width)
    Image Height = \(height)
    """)
    //4. Create A Plane Geometry To Cover The ARImageAnchor
    let planeNode = SCNNode()
    let planeGeometry = SCNPlane(width: width, height: height)
    planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
    planeNode.opacity = 0.25
    planeNode.geometry = planeGeometry
    //5. Rotate The PlaneNode To Horizontal
    planeNode.eulerAngles.x = -.pi/2
    //The Node Is Centered In The Anchor (0,0,0)
    node.addChildNode(planeNode)
    //6. Create An SCNBox
    let boxNode = SCNNode()
    let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
    //7. Create A Different Colour For Each Face
    let faceColours = [UIColor.red, UIColor.green, UIColor.blue, UIColor.cyan, UIColor.yellow, UIColor.gray]
    var faceMaterials = [SCNMaterial]()
    //8. Apply It To Each Face (A Box Has Six Faces)
    for face in 0 ..< 6 {
        let material = SCNMaterial()
        material.diffuse.contents = faceColours[face]
        faceMaterials.append(material)
    }
    boxGeometry.materials = faceMaterials
    boxNode.geometry = boxGeometry
    //9. Set The Box's Position So It Sits On The Plane (node.y + box.height / 2)
    boxNode.position = SCNVector3(0, 0.05, 0)
    //10. Add The Box To The Node
    node.addChildNode(boxNode)
}
As you can see, the process is fairly easy. So in your case, you are probably more interested in the conversion function above, which uses this method to create the dynamic images:
init(CGImage, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
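As a rough sketch of the download case mentioned in the question (imageURL and sceneView are placeholder names, and error handling is omitted), you could fetch the data, build a CGImage, and hand it to that same initializer:
URLSession.shared.dataTask(with: imageURL) { data, _, _ in
    guard let data = data,
          let downloadedImage = UIImage(data: data),
          let cgImage = downloadedImage.cgImage else { return }
    // Physical Width Is Still In Metres
    let arImage = ARReferenceImage(cgImage, orientation: .up, physicalWidth: 0.2)
    arImage.name = "Downloaded Image"
    DispatchQueue.main.async {
        // Re-Run The Session With The New Detection Image
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages = [arImage]
        self.sceneView.session.run(configuration)
    }
}.resume()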
Paraphrasing the Human Interface Guidelines for AR... image detection performance/accuracy deteriorates as the number of images increases. So there’s no hard limit in the API, but if you try to put more than around 25 images in the current detection set, it’ll start getting to where it’s too slow/inaccurate to be useful.
There are lots of other factors affecting performance/accuracy, too, so consider that a guideline, not a hard limit. Depending on scene conditions in the place where you’re running the app, how much you’re stressing the CPU with other tasks, how distinct your reference images are from one another, etc, you might manage a few more than 25... or start having detection problems with a few less than 25.
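Note also that detection and simultaneous tracking are budgeted separately: since iOS 12 the number of images tracked at once is set explicitly via maximumNumberOfTrackedImages. A minimal sketch, where referenceImages stands for a set loaded as above:
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = referenceImages
// Images beyond this count can still be detected, but only this many
// are tracked frame-to-frame at the same time.
configuration.maximumNumberOfTrackedImages = 4
sceneView.session.run(configuration)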

ARKit: How can I add a UIView to ARKit Scene?

I am working on an AR project using ARKit.
I want to add a UIView to the ARKit scene. When I tap on an object, I want to get information as a "pop-up" next to the object. This information is in a UIView.
Is it possible to add this UIView to the ARKit scene?
I set up this UIView as a scene; what can I do with it then?
Can I give it a node and then add it to the ARKit scene? If so, how does that work?
Or is there another way?
Thank you!
EDIT: Code of my SecondViewController
class InformationViewController: UIViewController {
    @IBOutlet weak var secondView: UIView!

    override func viewDidLoad() {
        super.viewDidLoad()
        self.view = secondView
    }
}
EDIT 2: Code in firstViewController
guard let secondViewController = storyboard?.instantiateViewController(withIdentifier: "SecondViewController") as? SecondViewController else {
    print("No secondController")
    return
}
let plane = SCNPlane(width: CGFloat(0.1), height: CGFloat(0.1))
plane.firstMaterial?.diffuse.contents = secondViewController.view
let node = SCNNode(geometry: plane)
I only get a white screen of a plane, not the view.
The simplest (although undocumented) way to achieve that is to set a UIView backed by a view controller as the diffuse contents of a material on an SCNPlane (or any other geometry really, but it works best with planes for obvious reasons).
let plane = SCNPlane()
plane.firstMaterial?.diffuse.contents = someViewController.view
let planeNode = SCNNode(geometry: plane)
You will have to persist the view controller somewhere, otherwise it's going to be released and the plane will not be visible. Using just a UIView without any UIViewController will throw an error.
The best thing about it is that it keeps all of the gestures and practically works just like a simple view. For example, if you use a UITableViewController's view, you will be able to scroll it right inside the scene.
I haven't tested it on iOS 10 and lower, but it's been working on iOS 11 so far. It works both in plain SceneKit scenes and with ARKit.
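A minimal sketch of that persistence point (the outlet and storyboard identifier are illustrative): keep a strong reference to the controller in the hosting class so its view stays alive.
import ARKit
import SceneKit
import UIKit

class ViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!
    // Strong reference: without this the controller (and its view)
    // would be deallocated and the plane would render blank.
    var infoViewController: UIViewController?

    func attachInfoPlane(to node: SCNNode) {
        guard let infoVC = storyboard?.instantiateViewController(withIdentifier: "SecondViewController") else { return }
        infoViewController = infoVC
        let plane = SCNPlane(width: 0.1, height: 0.1)
        plane.firstMaterial?.diffuse.contents = infoVC.view
        node.addChildNode(SCNNode(geometry: plane))
    }
}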
I cannot provide you code right now, but this is how to do it:
1. Create an SCNPlane.
2. Create your UIView with all the elements you need.
3. Create an image context from the UIView.
4. Use this image as the material for the SCNPlane (steps 2-4 are sketched after the example link below).
Or, even easier, make an SKScene with a label and add it as the material for the SCNPlane.
Example: https://stackoverflow.com/a/74380559/294884
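A rough sketch of steps 2-4 above (popupView is a placeholder for whatever view you build):
let popupView = UIView(frame: CGRect(x: 0, y: 0, width: 256, height: 128))
// ... configure popupView with labels etc. ...
// Render the view hierarchy into a UIImage.
let renderer = UIGraphicsImageRenderer(bounds: popupView.bounds)
let snapshot = renderer.image { context in
    popupView.layer.render(in: context.cgContext)
}
// Use the snapshot as the plane's material.
let plane = SCNPlane(width: 0.1, height: 0.1)
plane.firstMaterial?.diffuse.contents = snapshot
let node = SCNNode(geometry: plane)
Note that unlike the live view-controller approach above, a snapshot is static: it won't receive gestures or reflect later changes to the view.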
To place text from a label in the world, you draw it into an image and then attach that image to an SCNNode.
For example:
let text = "Hello, Stack Overflow."
let font = UIFont(name: "Arial", size: CGFloat(size))
let width = 128
let height = 128
let fontAttrs: [NSAttributedStringKey: Any] =
[NSAttributedStringKey.font: font as UIFont]
let stringSize = self.text.size(withAttributes: fontAttrs)
let rect = CGRect(x: CGFloat((width / 2.0) - (stringSize.width/2.0)),
y: CGFloat((height / 2.0) - (stringSize.height/2.0)),
width: CGFloat(stringSize.width),
height: CGFloat(stringSize.height))
let renderer = UIGraphicsImageRenderer(size: CGSize(width: CGFloat(width), height: CGFloat(height)))
let image = renderer.image { context in
let color = UIColor.blue.withAlphaComponent(CGFloat(0.5))
color.setFill()
context.fill(rect)
text.draw(with: rect, options: .usesLineFragmentOrigin, attributes: fontAttrs, context: nil)
}
let plane = SCNPlane(width: CGFloat(0.1), height: CGFloat(0.1))
plane.firstMaterial?.diffuse.contents = image
let node = SCNNode(geometry: plane)
EDIT:
I added these lines:
let color = UIColor.blue.withAlphaComponent(0.5)
color.setFill()
context.fill(rect)
This lets you set the background color and the opacity. There are other ways of doing this, which also let you draw complex shapes, but this is the easiest way to get a basic color.
EDIT 2: Added reference to stringSize and rect

How do I render a jpg image using ARKit ARSKViewDelegate?

I'd like to render a JPG image as a 2D rectangle floating in space. Using the SpriteKit example, how do I return a JPG image from the ARSKViewDelegate?
The demo returns an SKLabelNode. Is there a node class that would be appropriate for a JPG image that I fetch from the network, maybe wrapping a UIImage?
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    // Create and configure a node for the anchor added to the view's session.
    let labelNode = SKLabelNode(text: "👾")
    labelNode.horizontalAlignmentMode = .center
    labelNode.verticalAlignmentMode = .center
    return labelNode
}
You can use an SCNPlane and assign a UIImage as the content of that plane:
let imagePlane = SCNPlane(width: sceneView.bounds.width/6000, height: sceneView.bounds.height/6000)
imagePlane.firstMaterial?.diffuse.contents = //<-- UIImage here
imagePlane.firstMaterial?.lightingModel = .constant
let planeNode = SCNNode(geometry: imagePlane)
UPDATE: Now I noticed that you're using SpriteKit. The code I shared uses SceneKit.
Looks like I can use an SKSpriteNode created from an SKTexture. The only issue is that I see warnings in the logs about degraded AR performance: [Technique] World tracking performance is being affected by resource constraints [1]
let url = URL(string: imageURL)
// Make sure the image at this URL exists; otherwise unwrap with an if-let check.
let data = try? Data(contentsOf: url!)
let theImage = UIImage(data: data!)
let texture = SKTexture(image: theImage!)
return SKSpriteNode(texture: texture)
